AVS Machine Room

This module provides preprocessing and analysis tools for the Active Visual Semantics (AVS) dataset. It focuses on MEG and eye-tracking data preprocessing workflows, with the core AVSComposer class for fusing multimodal neuroimaging data.

Overview

The AVS Machine Room package handles complex multimodal neuroimaging data preprocessing for the Active Visual Semantics experiment. The dataset combines eye-tracking (ET), MEG, and anatomical MRI data recorded during a semantic active-viewing task on natural scenes.

For detailed information, consult the AVS dataset user handbook: https://www.notion.so/097bf30670bf41309ab1b500a86fbae5?v=58f35a22527f4cda90822bd812be303a&pvs=4

Core Features

  • AVSComposer: Main class for fusing MEG and eye-tracking data to generate event-based epochs
  • MEG Preprocessing: Maxwell filtering, bad channel detection, and signal preprocessing
  • Eye Tracking Preprocessing: Fixation analysis, event detection, and metadata enrichment
  • Source Reconstruction: Complete pipeline for MEG source reconstruction
  • Scene Processing: Tools for handling natural scene stimuli and object detection

Installation

Install the package in editable mode from the repository root:

pip install -e .

Dependencies

The package requires the following main dependencies:

  • mne - For MEG data processing
  • google - For various utilities
  • pycocotools - For COCO dataset handling
  • scikit-image - For image processing
  • autoreject - For automated artifact rejection in MEG data

Optional dependencies:

  • deepspeech - For speech processing functionality
  • psychopy - For experimental paradigm support

Package Structure

avs_machine_room/
├── prepro/
│   ├── eye_tracking/               # Eye-tracking preprocessing
│   ├── meg/                        # MEG preprocessing and AVSComposer
│   ├── scenes/                     # Scene and image processing
│   └── source_reconstruction/      # Source reconstruction pipeline
└── dataloader/
    └── tools/                      # Directory and data management tools

Key Components

AVSComposer (prepro/meg/avs_composer.py)

The AVSComposer is the central class that fuses MEG and eye-tracking data to generate event-based epochs with associated metadata. It handles the complete preprocessing pipeline from raw data to analysis-ready epochs.

Key Features:

  • Multimodal data fusion: Combines MEG and eye-tracking data streams
  • Event-based epoching: Generates MEG epochs based on eye-tracking events (fixations, saccades)
  • Metadata enrichment: Adds scene information, fixation sequences, and object labels
  • Preprocessing pipeline: Handles Maxwell filtering, bad channel detection, and artifact rejection
  • Flexible configuration: Supports different servers, block selections, and preprocessing options

Usage Example:

from avs_machine_room.prepro.meg.avs_composer import AVSComposer

# Initialize AVSComposer
composer = AVSComposer(
    data_dir="/path/to/meg/data",
    output_dir="/path/to/output", 
    et_dir="/path/to/eyetracking/data",
    subject=1,
    session_num=1,
    diagnostics={},
    preprocessed=False,
    recompute_prepro=False,
    server="uos",
    verbose=True
)

# Read MEG data for a specific block
block_num, raw_data = composer.read_meg_block(block=1)

# Process eye-tracking events and create epochs
epochs = composer.create_epochs_from_events()

MEG Preprocessing (prepro/meg/)

  • avs_meg_prep.py: Core MEG preprocessing utilities

    • run_maxwell_on_block(): Apply Maxwell filtering to MEG blocks
    • get_avs_crosstalk_file(): Get crosstalk compensation file
    • get_avs_fine_cal_file(): Get fine calibration file
    • read_bad_chan_logbook(): Read bad channel information from logbook
  • avs_trigger_tools.py: MEG trigger and event handling (see the sketch after this list)

    • get_meg_trigger_dict(): Extract trigger events from MEG data
    • repair_meg_trigger_events(): Repair corrupted trigger events
    • add_fix_event_trigger(): Add fixation events to trigger stream
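
A minimal sketch of how these trigger utilities might be chained; the argument and variable names below are assumptions made for illustration, so check the actual signatures in avs_trigger_tools.py before use.

from avs_machine_room.prepro.meg.avs_trigger_tools import (
    get_meg_trigger_dict,
    repair_meg_trigger_events,
    add_fix_event_trigger,
)

# Extract trigger events from a raw MEG block (argument names are assumptions)
trigger_events = get_meg_trigger_dict(raw_data)

# Repair corrupted trigger events before epoching
trigger_events = repair_meg_trigger_events(trigger_events)

# Merge eye-tracking fixation onsets into the MEG trigger stream
trigger_events = add_fix_event_trigger(trigger_events, fixation_events)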

Eye Tracking Preprocessing (prepro/eye_tracking/)

  • avs_prep.py: Main eye-tracking preprocessing pipeline

    • avs_combine_events(): Combine eye-tracking events across blocks
    • add_fixation_sequence_position(): Add sequential fixation information
    • get_fix_object_labels(): Extract object labels for fixations
    • add_cross_event_information(): Enrich events with cross-modal information (see the sketch after this list)
  • avs_et_analysis_tools.py: Eye-tracking analysis functions

    • compute_fdm(): Compute fixation density maps with Gaussian smoothing
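
A heavily hedged sketch of chaining the avs_prep.py helpers listed above; the argument names are assumptions made for illustration, not the documented signatures.

from avs_machine_room.prepro.eye_tracking.avs_prep import (
    add_cross_event_information,
    get_fix_object_labels,
)

# Enrich combined fixation events with cross-modal information
# (argument names are assumptions; see avs_prep.py)
events = add_cross_event_information(combined_events)

# Extract the object labels under each fixation
fix_labels = get_fix_object_labels(events)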

Source Reconstruction (prepro/source_reconstruction/)

  • avs_compute_population_codes.py: Compute population codes from MEG data
  • avs_source_reconstruction_worker.py: Main source reconstruction pipeline (see the sketch after this list)
  • avs_make_forward.py: Forward model computation
  • avs_compute_cross_session_ncov.py: Cross-session noise covariance estimation
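
These scripts are part of the AVS pipeline and are not documented in detail here. As a rough orientation, the sketch below shows the standard MNE-Python forward/inverse steps they build on; the file names and parameter values are placeholders, not the actual AVS settings.

import mne

# Forward model from a coregistration, source space, and BEM solution
# (file names are placeholders)
fwd = mne.make_forward_solution(
    epochs.info,
    trans="as01a-trans.fif",
    src="as01a-src.fif",
    bem="as01a-bem-sol.fif",
)

# Noise covariance (here simply from the pre-stimulus interval of the epochs)
noise_cov = mne.compute_covariance(epochs, tmax=0.0)

# Inverse operator and source estimate
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
stc = mne.minimum_norm.apply_inverse(epochs.average(), inv, lambda2=1.0 / 9.0)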

Usage Examples

Basic AVSComposer Workflow

from avs_machine_room.prepro.meg.avs_composer import AVSComposer

# Initialize composer for subject 1, session 1
composer = AVSComposer(
    data_dir="/share/klab/datasets/avs/rawdir",
    output_dir="/share/klab/datasets/avs/results", 
    et_dir="/share/klab/datasets/avs/eyetracking",
    subject=1,
    session_num=1,
    diagnostics={'run_diagnostics': True},
    preprocessed=False,
    recompute_prepro=False,
    server="uos",
    interpolate_bad_channels=True,
    n_jobs=4
)

# Process all blocks for this session
for block in composer.blocks_this_session:
    block_num, raw_data = composer.read_meg_block(block=block)
    # Further processing...

Computing Fixation Density Maps

from avs_machine_room.prepro.eye_tracking.avs_et_analysis_tools import compute_fdm

# Compute fixation density map
fdm = compute_fdm(
    x=fixation_x,
    y=fixation_y,
    im_height=1080,
    im_width=1920,
    pix_per_degree=40,
    fwhm=2
)
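
One way to inspect the result; this assumes compute_fdm returns a 2D array of shape (im_height, im_width).

import matplotlib.pyplot as plt

# Visualize the fixation density map (assumes a 2D NumPy array)
plt.imshow(fdm, cmap="hot")
plt.colorbar(label="fixation density")
plt.show()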

MEG Preprocessing Pipeline

from avs_machine_room.prepro.meg.avs_meg_prep import run_maxwell_on_block, get_avs_crosstalk_file

# Apply Maxwell filtering to a MEG block
crosstalk_file = get_avs_crosstalk_file()
filtered_raw = run_maxwell_on_block(
    raw_data, 
    crosstalk_file, 
    calibration_file="path/to/cal.dat"
)
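
The fine calibration file can be resolved the same way, mirroring the crosstalk call above (the return value is assumed to be a file path):

from avs_machine_room.prepro.meg.avs_meg_prep import get_avs_fine_cal_file

# Use the package-provided fine calibration file instead of a hard-coded path
cal_file = get_avs_fine_cal_file()
filtered_raw = run_maxwell_on_block(raw_data, crosstalk_file, calibration_file=cal_file)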

Eye Tracking Event Processing

from avs_machine_room.prepro.eye_tracking.avs_prep import avs_combine_events, add_fixation_sequence_position

# Combine eye-tracking events across blocks
combined_events = avs_combine_events(
    subject=1,
    session=1, 
    data_dir="/path/to/et/data"
)

# Add fixation sequence information
enriched_events = add_fixation_sequence_position(combined_events)

Data Organization

The package expects data to be organized according to the AVS experiment structure:

  • Subject IDs are zero-padded (e.g., 01, 02, etc.)
  • Sessions are represented by letters (1→a, 2→b, etc.)
  • Subject-session identifiers follow the format as01a (subject 1, session 1), as illustrated below
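
A small helper illustrating this naming convention; subses_id is a hypothetical function written here for illustration, not one provided by the package.

# Hypothetical helper for the subject-session naming convention described above
def subses_id(subject: int, session: int) -> str:
    session_letter = "abcdefghij"[session - 1]  # sessions: 1→a, 2→b, ...
    return f"as{subject:02d}{session_letter}"

subses_id(1, 1)  # 'as01a'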

Server Configuration

The package supports multiple server configurations:

  • mpi: Max Planck Institute setup
  • ikw: IKW server configuration
  • uos: University of Osnabrück HPC

To add a new server configuration, modify the get_data_dirs() and get_input_dirs() functions in avs_machine_room/dataloader/tools/avs_directory_tools.py to include your custom data paths.
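
A hedged sketch of what such an addition might look like; the actual structure of get_data_dirs() in avs_directory_tools.py may differ, and my_cluster and its path are placeholders.

# Hypothetical sketch only; the real get_data_dirs() may be organised differently
def get_data_dirs(server):
    if server == "uos":
        data_dir = "/share/klab/datasets/avs/rawdir"  # existing configuration
    elif server == "my_cluster":                      # new server configuration
        data_dir = "/data/my_cluster/avs/rawdir"
    else:
        raise ValueError(f"Unknown server: {server}")
    return data_dir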

Authors

Version

Current version: 0.1.0

License

This package is part of the Active Visual Semantics project research infrastructure.
