Download Data

For the tutorials in this book, we will use partially preprocessed data from two open multi-echo datasets: EuskalIBUR and Cambridge. For more information about these datasets, see Open Multi-Echo Datasets.

import os
from pprint import pprint

from tedana import datasets

DATA_DIR = os.path.abspath("../data")

euskalibur_dataset = datasets.fetch_euskalibur(
    n_subjects=5,
    low_resolution=False,
    data_dir=DATA_DIR,
)
pprint(euskalibur_dataset)

cambridge_dataset = datasets.fetch_cambridge(
    n_subjects=5,
    low_resolution=False,
    data_dir=DATA_DIR,
)
pprint(cambridge_dataset)

For now, we will use repo2data to download some data we’re storing on Google Drive.

import os

from repo2data.repo2data import Repo2Data

# Install the data if running locally, or point to cached data if running on neurolibre
DATA_REQ_FILE = os.path.join("..", "binder", "data_requirement.json")

# Download data
repo2data = Repo2Data(DATA_REQ_FILE)
data_path = repo2data.install()
data_path = os.path.abspath(data_path[0])
---- repo2data starting ----
/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/repo2data
Config from file :
../binder/data_requirement.json
Destination:
./../data/multi-echo-data-analysis

Info : Starting to download from Google drive https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF ...
Downloading...
From (original): https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF
From (redirected): https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF&confirm=t&uuid=be3ecd0a-fc84-4618-b5cf-53d7aff42baf
To: /home/runner/work/multi-echo-data-analysis/multi-echo-data-analysis/data/multi-echo-data-analysis/sub-04570.zip

100%|██████████| 315M/315M [00:02<00:00, 145MB/s]
INFO patool: Extracting ./../data/multi-echo-data-analysis/sub-04570.zip ...
INFO patool: running /usr/bin/7z x -y -p- -o./../data/multi-echo-data-analysis -- ./../data/multi-echo-data-analysis/sub-04570.zip
INFO patool:     with input=''
INFO patool: ... ./../data/multi-echo-data-analysis/sub-04570.zip extracted to `./../data/multi-echo-data-analysis'.
Info : sub-04570.zip Decompressed
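
Once the archive has been decompressed, it is worth confirming that the multi-echo files are where we expect them. The sketch below is only a sanity check: it assumes the extracted sub-04570 folder follows a BIDS-like layout, with one NIfTI file and one JSON sidecar per echo, so adjust the glob pattern if the archive is organized differently. It lists the echo-wise BOLD files and pulls their echo times from the sidecars, since later tutorials need those values.

import json
import os
from glob import glob
from pprint import pprint

# NOTE: this folder layout and file-naming pattern are assumptions for
# illustration; inspect data_path directly if nothing matches.
func_dir = os.path.join(data_path, "sub-04570", "func")
echo_files = sorted(glob(os.path.join(func_dir, "*echo-*_bold.nii.gz")))
pprint(echo_files)

# Collect the echo times (in seconds) from the BIDS JSON sidecars; the
# multi-echo decay-model tutorials rely on these values.
echo_times = []
for echo_file in echo_files:
    sidecar = echo_file.replace(".nii.gz", ".json")
    with open(sidecar, "r") as fobj:
        metadata = json.load(fobj)
    echo_times.append(metadata.get("EchoTime"))
print(echo_times)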