This package provides preprocessing and analysis tools for the Active Visual Semantics (AVS) dataset. It focuses on MEG and eye-tracking preprocessing workflows, built around the core AVSComposer class for fusing multimodal neuroimaging data.
The AVS Machine Room package handles the complex multimodal preprocessing required for the Active Visual Semantics experiment, whose dataset combines eye-tracking (ET), MEG, and anatomical MRI data recorded during a semantic active-viewing task on natural scenes.
For detailed information, consult the AVS dataset user handbook: https://www.notion.so/097bf30670bf41309ab1b500a86fbae5?v=58f35a22527f4cda90822bd812be303a&pvs=4
- AVSComposer: Main class for fusing MEG and eye-tracking data to generate event-based epochs
- MEG Preprocessing: Maxwell filtering, bad channel detection, and signal preprocessing
- Eye Tracking Preprocessing: Fixation analysis, event detection, and metadata enrichment
- Source Reconstruction: Complete pipeline for MEG source reconstruction
- Scene Processing: Tools for handling natural scene stimuli and object detection
```bash
pip install -e .
```

The package requires the following main dependencies:

- `mne` - For MEG data processing
- `google` - For various utilities
- `pycocotools` - For COCO dataset handling
- `scikit-image` - For image processing
- `autoreject` - For automated artifact rejection in MEG data

Optional dependencies:

- `deepspeech` - For speech processing functionality
- `psychopy` - For experimental paradigm support
```
avs_machine_room/
├── prepro/
│   ├── eye_tracking/             # Eye-tracking preprocessing
│   ├── meg/                      # MEG preprocessing and AVSComposer
│   ├── scenes/                   # Scene and image processing
│   └── source_reconstruction/    # Source reconstruction pipeline
├── dataloader/
│   └── tools/                    # Directory and data management tools
```
The AVSComposer is the central class that fuses MEG and eye-tracking data to generate event-based epochs with associated metadata. It handles the complete preprocessing pipeline from raw data to analysis-ready epochs.
- Multimodal data fusion: Combines MEG and eye-tracking data streams
- Event-based epoching: Generates MEG epochs based on eye-tracking events (fixations, saccades)
- Metadata enrichment: Adds scene information, fixation sequences, and object labels
- Preprocessing pipeline: Handles Maxwell filtering, bad channel detection, and artifact rejection
- Flexible configuration: Supports different servers, block selections, and preprocessing options
```python
from avs_machine_room.prepro.meg.avs_composer import AVSComposer

# Initialize AVSComposer
composer = AVSComposer(
    data_dir="/path/to/meg/data",
    output_dir="/path/to/output",
    et_dir="/path/to/eyetracking/data",
    subject=1,
    session_num=1,
    diagnostics={},
    preprocessed=False,
    recompute_prepro=False,
    server="uos",
    verbose=True
)

# Read MEG data for a specific block
block_num, raw_data = composer.read_meg_block(block=1)

# Process eye-tracking events and create epochs
epochs = composer.create_epochs_from_events()
```
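If `create_epochs_from_events()` returns a standard `mne.Epochs` object (an assumption, not confirmed here), the result can be inspected and saved with the usual MNE-Python API:

```python
# Assumes `epochs` is an mne.Epochs instance with the eye-tracking
# metadata attached as a pandas DataFrame (may differ in practice).
print(epochs)                  # Summary: number of epochs, channels, time span
print(epochs.metadata.head())  # Per-epoch metadata (fixations, scene info, ...)

# Persist to disk in MNE's FIF format ("-epo.fif" is the MNE naming convention)
epochs.save("sub-01_ses-a_fix-epo.fif", overwrite=True)
```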
MEG preprocessing modules:

- `avs_meg_prep.py`: Core MEG preprocessing utilities
  - `run_maxwell_on_block()`: Apply Maxwell filtering to MEG blocks
  - `get_avs_crosstalk_file()`: Get crosstalk compensation file
  - `get_avs_fine_cal_file()`: Get fine calibration file
  - `read_bad_chan_logbook()`: Read bad channel information from logbook
- `avs_trigger_tools.py`: MEG trigger and event handling
  - `get_meg_trigger_dict()`: Extract trigger events from MEG data
  - `repair_meg_trigger_events()`: Repair corrupted trigger events
  - `add_fix_event_trigger()`: Add fixation events to trigger stream
Eye-tracking preprocessing modules:

- `avs_prep.py`: Main eye-tracking preprocessing pipeline
  - `avs_combine_events()`: Combine eye-tracking events across blocks
  - `add_fixation_sequence_position()`: Add sequential fixation information
  - `get_fix_object_labels()`: Extract object labels for fixations
  - `add_cross_event_information()`: Enrich events with cross-modal information
- `avs_et_analysis_tools.py`: Eye-tracking analysis functions
  - `compute_fdm()`: Compute fixation density maps with Gaussian smoothing
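For intuition, a fixation density map is essentially a 2D histogram of fixation locations smoothed with a Gaussian kernel whose width is given in degrees of visual angle. The sketch below illustrates that idea in a self-contained way; it is not the package's actual `compute_fdm()` implementation, though the parameter names mirror the usage example further down.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fdm_sketch(x, y, im_height, im_width, pix_per_degree, fwhm):
    """Minimal fixation-density-map sketch (not the package implementation)."""
    # Accumulate fixations into a 2D histogram over screen pixels
    hist, _, _ = np.histogram2d(
        y, x,
        bins=[im_height, im_width],
        range=[[0, im_height], [0, im_width]],
    )
    # Convert the kernel's FWHM (degrees of visual angle) to sigma in pixels:
    # sigma = FWHM / (2 * sqrt(2 * ln 2))
    sigma_pix = fwhm * pix_per_degree / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    fdm = gaussian_filter(hist, sigma=sigma_pix)
    # Normalize to a probability density that sums to 1
    return fdm / fdm.sum()
```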
Source reconstruction scripts (in `prepro/source_reconstruction/`):

- `avs_compute_population_codes.py`: Compute population codes from MEG data
- `avs_source_reconstruction_worker.py`: Main source reconstruction pipeline
- `avs_make_forward.py`: Forward model computation
- `avs_compute_cross_session_ncov.py`: Cross-session noise covariance estimation
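Forward model computation in MNE-Python typically looks like the following. This is a generic MNE sketch of the step that `avs_make_forward.py` presumably wraps; the file paths and FreeSurfer subject name are placeholders, and the actual script's interface may differ.

```python
import mne

# Placeholder paths; the real pipeline derives these from its directory
# tools and the subject/session naming scheme. Assumes SUBJECTS_DIR is set.
trans = "as01a-trans.fif"  # MEG/MRI coordinate transform
src = mne.setup_source_space("as01", spacing="oct6", add_dist=False)
model = mne.make_bem_model("as01", ico=4, conductivity=(0.3,))  # single-shell BEM for MEG
bem = mne.make_bem_solution(model)

info = mne.io.read_info("as01a_block1_raw.fif")
fwd = mne.make_forward_solution(info, trans=trans, src=src, bem=bem,
                                meg=True, eeg=False)
```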
```python
from avs_machine_room.prepro.meg.avs_composer import AVSComposer

# Initialize composer for subject 1, session 1
composer = AVSComposer(
    data_dir="/share/klab/datasets/avs/rawdir",
    output_dir="/share/klab/datasets/avs/results",
    et_dir="/share/klab/datasets/avs/eyetracking",
    subject=1,
    session_num=1,
    diagnostics={'run_diagnostics': True},
    preprocessed=False,
    recompute_prepro=False,
    server="uos",
    interpolate_bad_channels=True,
    n_jobs=4
)

# Process all blocks for this session
for block in composer.blocks_this_session:
    block_num, raw_data = composer.read_meg_block(block=block)
    # Further processing...
```

```python
from avs_machine_room.prepro.eye_tracking.avs_et_analysis_tools import compute_fdm

# Compute fixation density map
fdm = compute_fdm(
    x=fixation_x,
    y=fixation_y,
    im_height=1080,
    im_width=1920,
    pix_per_degree=40,
    fwhm=2
)
```

```python
from avs_machine_room.prepro.meg.avs_meg_prep import run_maxwell_on_block, get_avs_crosstalk_file

# Apply Maxwell filtering to a MEG block
crosstalk_file = get_avs_crosstalk_file()
filtered_raw = run_maxwell_on_block(
    raw_data,
    crosstalk_file,
    calibration_file="path/to/cal.dat"
)
```

```python
from avs_machine_room.prepro.eye_tracking.avs_prep import avs_combine_events, add_fixation_sequence_position

# Combine eye-tracking events across blocks
combined_events = avs_combine_events(
    subject=1,
    session=1,
    data_dir="/path/to/et/data"
)

# Add fixation sequence information
enriched_events = add_fixation_sequence_position(combined_events)
```

The package expects data to be organized according to the AVS experiment structure:
- Subject IDs are zero-padded (e.g., `01`, `02`, etc.)
- Sessions are represented by letters (1→a, 2→b, etc.)
- Subject-session identifiers follow the format `as01a` (subject 1, session 1)
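As a concrete illustration of this naming scheme, the hypothetical helper below builds a subject-session identifier from numeric subject and session indices; the package may construct these strings differently internally.

```python
import string

def make_subject_session_id(subject: int, session: int, prefix: str = "as") -> str:
    """Build an AVS subject-session identifier, e.g. (1, 1) -> 'as01a'.

    Hypothetical helper illustrating the naming convention; not part of
    the package's public API.
    """
    session_letter = string.ascii_lowercase[session - 1]  # 1 -> 'a', 2 -> 'b', ...
    return f"{prefix}{subject:02d}{session_letter}"

assert make_subject_session_id(1, 1) == "as01a"
```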
The package supports multiple server configurations:
- `mpi`: Max Planck Institute setup
- `ikw`: IKW server configuration
- `uos`: University of Osnabrück HPC
To add a new server configuration, modify the `get_data_dirs()` and `get_input_dirs()` functions in `avs_machine_room/dataloader/tools/avs_directory_tools.py` to include your custom data paths.
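The exact signatures of these functions are not documented here, so the following is only a schematic sketch of what adding a new server branch might look like; the `"mylab"` key and the returned path are placeholders.

```python
# Schematic sketch of a branch inside avs_directory_tools.py; the real
# function's signature and return structure may differ.
def get_data_dirs(server):
    if server == "uos":
        return "/share/klab/datasets/avs/rawdir"
    elif server == "mylab":  # hypothetical new server configuration
        return "/data/mylab/avs/rawdir"
    else:
        raise ValueError(f"Unknown server: {server}")
```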
- P. Sulewski ([email protected])
- C. Amme ([email protected])
Current version: 0.1.0
This package is part of the Active Visual Semantics project research infrastructure.