ICA components

Carpet plots

Adaptive mask

T2*

S0

T2* and S0 model fit (RMSE), scaled between the 2nd and 98th percentiles

External Regressors

Info

Tedana command used:

      
        tedana_workflow(data=['five-echo-dataset/p06.SBJ01_S09_Task11_e1.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e2.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e3.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e4.sm.nii.gz', 'five-echo-dataset/p06.SBJ01_S09_Task11_e5.sm.nii.gz'], tes=[15.4, 29.7, 44.0, 58.3, 72.6], out_dir=/Users/handwerkerd/code/meica/ohbm-2025-multiecho/five-echo-dataset/tedana_external_regress_processed, mask=None, convention=bids, prefix=, dummy_scans=0, masktype=['dropout'], fittype=loglin, combmode=t2s, n_independent_echos=None, tree=demo_external_regressors_motion_task_models, external_regressors=five-echo-dataset/external_regressors.tsv, ica_method=fastica, n_robust_runs=30, tedpca=aic, fixed_seed=42, maxit=500, maxrestart=10, tedort=False, gscontrol=None, no_reports=False, png_cmap=coolwarm, verbose=False, low_mem=False, debug=False, quiet=False, overwrite=False, t2smap=None, mixing_file=five-echo-dataset/tedana_processed/desc-ICA_mixing.tsv)
      
    

System: Darwin
Node: MH02276312MLI.local
Release: 23.6.0
System version: Darwin Kernel Version 23.6.0: Thu Apr 24 20:29:18 PDT 2025; root:xnu-10063.141.1.705.2~1/RELEASE_ARM64_T6000
Machine: arm64
Processor: arm
Python: 3.13.2 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 12:55:35) [Clang 14.0.6 ]
Tedana version: 25.0.1
Other library versions: {'bokeh': '3.6.2', 'mapca': '0.0.5', 'matplotlib': '3.10.0', 'nibabel': '5.3.2', 'nilearn': '0.11.1', 'numpy': '2.2.2', 'pandas': '2.2.3', 'robustica': '0.1.4', 'scikit-learn': '1.6.1', 'scipy': '1.15.1', 'threadpoolctl': '3.5.0', 'tqdm': '4.67.1'}

About tedana

This is based on the minimal criteria of the original MEICA decision tree (Kundu et al. 2013) without the more aggressive noise removal steps (DuPre et al. 2021). TE-dependence analysis was performed on input data using the tedana workflow (DuPre et al. 2021).

An initial mask was generated from the first echo using nilearn's compute_epi_mask function. An adaptive mask was then generated using the dropout method, in which each voxel's value reflects the number of echoes with 'good' data. A two-stage masking procedure was applied, in which a liberal mask (including voxels with good data in at least the first echo) was used for optimal combination, T2*/S0 estimation, and denoising, while a more conservative mask (restricted to voxels with good data in at least the first three echoes) was used for the component classification procedure.

A monoexponential model was fit to the data at each voxel using log-linear regression in order to estimate T2* and S0 maps. For each voxel, the value from the adaptive mask was used to determine which echoes would be used to estimate T2* and S0. Multi-echo data were then optimally combined using the T2* combination method (Posse et al. 1999).

The following metrics were calculated: R2stat nuisance model, R2stat task model, countsigFS0, countsigFT2, dice_FS0, dice_FT2, kappa, normalized variance explained, pval nuisance CSF partial model, pval nuisance Motion partial model, pval nuisance model, pval task model, rho, signal-noise_t, variance explained. Kappa (kappa) and Rho (rho) were calculated as measures of TE-dependence and TE-independence, respectively.
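As a single-voxel illustration of the log-linear fit and T2*-weighted combination described above (a minimal sketch, not tedana's implementation; all names and the synthetic signal are invented for this example):

```python
import numpy as np

# Echo times in ms, matching this dataset's tes=[15.4, ..., 72.6]
tes = np.array([15.4, 29.7, 44.0, 58.3, 72.6])

def loglin_fit(signal, tes):
    """Log-linear monoexponential fit: S(TE) = S0 * exp(-TE / T2*).

    Taking logs gives log S = log S0 - TE / T2*, a linear model in
    [log S0, 1/T2*] solved here with ordinary least squares.
    """
    X = np.column_stack([np.ones_like(tes), -tes])  # columns: log S0, 1/T2*
    beta, *_ = np.linalg.lstsq(X, np.log(signal), rcond=None)
    return 1.0 / beta[1], np.exp(beta[0])  # (T2*, S0)

def optimal_combine(signal, tes, t2s):
    """T2*-weighted optimal combination (Posse et al. 1999):
    weights w_i = TE_i * exp(-TE_i / T2*), normalized to sum to 1."""
    w = tes * np.exp(-tes / t2s)
    return np.sum(w * signal) / np.sum(w)

# Synthetic noiseless voxel with T2* = 40 ms, S0 = 1000
signal = 1000.0 * np.exp(-tes / 40.0)
t2s, s0 = loglin_fit(signal, tes)
combined = optimal_combine(signal, tes, t2s)
```

On noiseless data the log-linear fit recovers T2* and S0 exactly; on real data the fit is per voxel, restricted to the echoes the adaptive mask marks as 'good'.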
A t-test was performed between the distributions of T2*-model F-statistics associated with clusters (i.e., signal) and non-cluster voxels (i.e., noise) to generate a t-statistic (metric signal-noise_z) and p-value (metric signal-noise_p) measuring relative association of the component to signal over noise. External nuisance regressors that fit to components using a linear model were rejected. Task regressors that fit to components using a linear model and have some T2* weighting were accepted even if they would have been rejected based on other criteria. Next, component selection was performed to identify BOLD (TE-dependent) and non-BOLD (TE-independent) components using a decision tree. This workflow used numpy (Van Der Walt et al. 2011), scipy (Virtanen et al. 2020), pandas (McKinney et al. 2010, pandas development team et al. 2020), scikit-learn (Pedregosa et al. 2011), nilearn, bokeh (Team et al. 2018), matplotlib (Hunter et al. 2007), and nibabel (Brett et al. 2019). This workflow also used the Dice similarity index (Dice et al. 1945, Sorensen et al. 1948).
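The kappa/rho summaries and the signal-noise t-test can be sketched as follows. This is a simplified illustration, assuming kappa and rho are per-voxel F-statistic averages weighted by the squared ICA component map (Kundu et al. 2013), with random stand-in data and a stand-in cluster labeling; it is not tedana's exact code:

```python
import numpy as np
from scipy import stats

def weighted_metric(f_stats, comp_map):
    """Kappa/rho-style summary: F-statistics averaged across voxels with
    weights proportional to the squared ICA component map values."""
    w = comp_map ** 2
    return np.sum(w * f_stats) / np.sum(w)

rng = np.random.default_rng(42)
comp_map = rng.normal(size=1000)         # hypothetical component map
f_t2 = rng.chisquare(df=4, size=1000)    # hypothetical T2*-model F-stats
f_s0 = rng.chisquare(df=4, size=1000)    # hypothetical S0-model F-stats

kappa = weighted_metric(f_t2, comp_map)  # TE-dependence summary
rho = weighted_metric(f_s0, comp_map)    # TE-independence summary

# Signal-noise t-test: compare T2*-model F-stats in cluster (signal)
# vs. non-cluster (noise) voxels; labels here are a crude stand-in.
in_cluster = f_t2 > np.percentile(f_t2, 75)
t_stat, p_val = stats.ttest_ind(f_t2[in_cluster], f_t2[~in_cluster])
```

A component with high kappa and low rho is TE-dependent (BOLD-like); the decision tree combines these metrics, the t-test result, and the external-regressor model fits to accept or reject each component.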

References