# Download Data
For the tutorials in this book, we will use partially preprocessed data from two open multi-echo datasets: Euskalibur and Cambridge. For more information about these datasets, see Open Multi-Echo Datasets.
```python
import os
from pprint import pprint

from tedana import datasets

DATA_DIR = os.path.abspath("../data")

euskalibur_dataset = datasets.fetch_euskalibur(
    n_subjects=5,
    low_resolution=False,
    data_dir=DATA_DIR,
)
pprint(euskalibur_dataset)

cambridge_dataset = datasets.fetch_cambridge(
    n_subjects=5,
    low_resolution=False,
    data_dir=DATA_DIR,
)
pprint(cambridge_dataset)
```
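The fetchers above return collections of local file paths. As a rough, hypothetical sketch of what those paths look like on disk (the `echo_files` helper, subject ID, and task name here are purely illustrative, not part of the tedana API), BIDS-style multi-echo filenames number the echoes with an `echo-<n>` entity:

```python
import os

# Illustrative only: build BIDS-style multi-echo bold filenames of the kind
# found in the downloaded datasets. Subject/task values are placeholders.
def echo_files(data_dir, sub, task, n_echoes):
    """Return one bold filename per echo, echo indices starting at 1."""
    return [
        os.path.join(
            data_dir,
            f"sub-{sub}",
            "func",
            f"sub-{sub}_task-{task}_echo-{i}_bold.nii.gz",
        )
        for i in range(1, n_echoes + 1)
    ]

paths = echo_files("../data", "04570", "rest", 4)
print(paths[0])
```

Knowing this layout makes it easy to glob for all echoes of a run later in the tutorials.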
For now, we will use repo2data to download some data we’re storing on Google Drive.
```python
import os

from repo2data.repo2data import Repo2Data

# Install the data if running locally, or point to cached data if running on NeuroLibre
DATA_REQ_FILE = os.path.join("../binder/data_requirement.json")

# Download the data
repo2data = Repo2Data(DATA_REQ_FILE)
data_path = repo2data.install()
data_path = os.path.abspath(data_path[0])
```
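Once `install()` returns, it is worth confirming that the destination directory actually contains the expected files before running any analyses. A minimal sketch, assuming only the standard library (the `summarize` helper and the temporary directory are illustrative, not part of repo2data):

```python
import tempfile
from pathlib import Path

def summarize(data_path, limit=5):
    """Return up to `limit` entry names under data_path, or None if missing."""
    p = Path(data_path)
    if not p.is_dir():
        return None
    return sorted(entry.name for entry in p.iterdir())[:limit]

# Demonstrate on a temporary directory standing in for data_path.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "sub-04570.zip").touch()
    print(summarize(d))
```

In a real session you would call `summarize(data_path)` on the path returned above and check that the subject archive (or its extracted directory) is listed.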
```text
---- repo2data starting ----
/opt/hostedtoolcache/Python/3.10.17/x64/lib/python3.10/site-packages/repo2data
Config from file :
../binder/data_requirement.json
Destination:
./../data/multi-echo-data-analysis
Info : Starting to download from Google drive https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF ...
Downloading...
From (original): https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF
From (redirected): https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF&confirm=t&uuid=ae479ec7-b728-4298-99d2-f5346b14ad36
To: /home/runner/work/multi-echo-data-analysis/multi-echo-data-analysis/data/multi-echo-data-analysis/sub-04570.zip
100%|██████████| 315M/315M [00:04<00:00, 74.6MB/s]
INFO patool: Extracting ./../data/multi-echo-data-analysis/sub-04570.zip ...
INFO patool: running /usr/bin/7z x -y -p- -aou -o./../data/multi-echo-data-analysis -- ./../data/multi-echo-data-analysis/sub-04570.zip
INFO patool: ... ./../data/multi-echo-data-analysis/sub-04570.zip extracted to `./../data/multi-echo-data-analysis'.
Info : sub-04570.zip Decompressed
```