Download Data

For the tutorials in this book, we will use partially preprocessed data from two open multi-echo datasets: Euskalibur and Cambridge. For more information about these datasets, see Open Multi-Echo Datasets.

import os
from pprint import pprint

from tedana import datasets

DATA_DIR = os.path.abspath("../data")

euskalibur_dataset = datasets.fetch_euskalibur(
    n_subjects=5,
    low_resolution=False,
    data_dir=DATA_DIR,
)
pprint(euskalibur_dataset)

cambridge_dataset = datasets.fetch_cambridge(
    n_subjects=5,
    low_resolution=False,
    data_dir=DATA_DIR,
)
pprint(cambridge_dataset)
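The fetchers above return objects describing the downloaded files. A minimal sketch of how one might confirm that each referenced path actually exists on disk — note that the `summarize_dataset` helper and the toy dict below are illustrative, not part of tedana, and assume the fetcher output is a mapping whose values are file paths or lists of file paths:

```python
import os


def summarize_dataset(dataset):
    """Split the file paths referenced in a dataset mapping into
    those that exist on disk and those that are missing."""
    paths = []
    for value in dataset.values():
        if isinstance(value, str):
            paths.append(value)
        elif isinstance(value, (list, tuple)):
            paths.extend(p for p in value if isinstance(p, str))
    found, missing = [], []
    for path in paths:
        (found if os.path.exists(path) else missing).append(path)
    return found, missing


# Toy example with a path that certainly does not exist
found, missing = summarize_dataset({"anat": "/no/such/file.nii.gz"})
print(f"{len(found)} found, {len(missing)} missing")
```

Running a check like this after a fetch makes it obvious when a partial download or a wrong `data_dir` has left gaps.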

For now, we will use repo2data to download some data we’re storing on Google Drive.

import os

from repo2data.repo2data import Repo2Data

# Install the data if running locally, or point to cached data if running on neurolibre
DATA_REQ_FILE = os.path.join("../binder/data_requirement.json")

# Download data
repo2data = Repo2Data(DATA_REQ_FILE)
data_path = repo2data.install()
data_path = os.path.abspath(data_path[0])
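repo2data is driven entirely by the JSON requirement file passed to `Repo2Data`. A hedged sketch of what such a file can look like — the `src`, `dst`, and `projectName` keys follow the repo2data documentation, and the values below are inferred from the log output that follows; check them against the actual `../binder/data_requirement.json` before relying on this:

```python
import json
import os
import tempfile

# Illustrative requirement: repo2data downloads `src` into `dst/projectName`.
requirement = {
    "src": "https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF",
    "dst": "./../data",
    "projectName": "multi-echo-data-analysis",
}

# Write the requirement to a temporary file, as ../binder/data_requirement.json would hold it
req_file = os.path.join(tempfile.mkdtemp(), "data_requirement.json")
with open(req_file, "w") as f:
    json.dump(requirement, f, indent=2)

# Repo2Data(req_file).install() would then read this file and download the data
with open(req_file) as f:
    print(json.load(f)["projectName"])
```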
---- repo2data starting ----
/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/repo2data
Config from file :
../binder/data_requirement.json
Destination:
./../data/multi-echo-data-analysis

Info : Starting to download from Google drive https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF ...
Downloading...
From (original): https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF
From (redirected): https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF&confirm=t&uuid=f7d31725-cc7b-4a3b-8f0a-73feb395bba4
To: /home/runner/work/multi-echo-data-analysis/multi-echo-data-analysis/data/multi-echo-data-analysis/sub-04570.zip

100%|██████████| 315M/315M [00:10<00:00, 29.6MB/s]
INFO patool: Extracting ./../data/multi-echo-data-analysis/sub-04570.zip ...
INFO patool: running /usr/bin/7z x -y -p- -aou -o./../data/multi-echo-data-analysis -- ./../data/multi-echo-data-analysis/sub-04570.zip
INFO patool: ... ./../data/multi-echo-data-analysis/sub-04570.zip extracted to `./../data/multi-echo-data-analysis'.
Info : sub-04570.zip Decompressed
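Once the archive is decompressed, the per-echo files can be enumerated with `glob` before loading them. The sketch below fakes the extracted layout in a temporary directory so it is self-contained; the BIDS-style filenames are illustrative assumptions, and the actual names under `sub-04570` may differ:

```python
import glob
import os
import tempfile

# Fake a minimal extracted layout: data_path/sub-04570/func/ with three echoes
data_path = tempfile.mkdtemp()
func_dir = os.path.join(data_path, "sub-04570", "func")
os.makedirs(func_dir)
for echo in (1, 2, 3):
    open(os.path.join(func_dir, f"sub-04570_echo-{echo}_bold.nii.gz"), "w").close()

# Collect the echo-wise BOLD files in acquisition order
echo_files = sorted(
    glob.glob(os.path.join(data_path, "sub-04570", "func", "*echo-*_bold.nii.gz"))
)
print(len(echo_files))
```

In the tutorials that follow, a sorted list like `echo_files` is the natural input shape for multi-echo tools, which expect one file per echo in ascending echo-time order.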