Download Data
For the tutorials in this book, we will use partially preprocessed data from two open multi-echo datasets: EuskalIBUR and Cambridge. For more information about these datasets, see Open Multi-Echo Datasets.
import os
from pprint import pprint

from tedana import datasets

DATA_DIR = os.path.abspath("../data")

# Fetch five subjects from each dataset at full resolution
euskalibur_dataset = datasets.fetch_euskalibur(
    n_subjects=5,
    low_resolution=False,
    data_dir=DATA_DIR,
)
pprint(euskalibur_dataset)

cambridge_dataset = datasets.fetch_cambridge(
    n_subjects=5,
    low_resolution=False,
    data_dir=DATA_DIR,
)
pprint(cambridge_dataset)
For now, we will use repo2data to download some data we’re storing on Google Drive.
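The requirement file tells repo2data what to download and where to install it. As a rough sketch of what such a file contains (the field names src, dst, and projectName follow repo2data's conventions, but the values below are illustrative placeholders — the actual file in ../binder/ is the source of truth), one could be generated like this:

```python
import json
import os
import tempfile

# A minimal repo2data requirement file. The keys follow repo2data's
# conventions; the values are illustrative placeholders, not the actual
# ones used by this book.
requirement = {
    "src": "https://drive.google.com/uc?id=FILE_ID",  # hypothetical source URL
    "dst": "../data",                                 # install destination
    "projectName": "multi-echo-data-analysis",
}

req_file = os.path.join(tempfile.mkdtemp(), "data_requirement.json")
with open(req_file, "w") as f:
    json.dump(requirement, f, indent=2)

with open(req_file) as f:
    print(f.read())
```

Passing the path of such a file to `Repo2Data` is then enough for it to fetch and unpack the data, as shown in the cell below.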
import os

from repo2data.repo2data import Repo2Data

# Install the data if running locally, or point to cached data if running on NeuroLibre
DATA_REQ_FILE = os.path.join("..", "binder", "data_requirement.json")

# Download the data described in the requirement file
repo2data = Repo2Data(DATA_REQ_FILE)
data_path = repo2data.install()
data_path = os.path.abspath(data_path[0])
---- repo2data starting ----
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/repo2data
Config from file :
../binder/data_requirement.json
Destination:
./../data/multi-echo-data-analysis
Info : Starting to download from Google drive https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF ...
Downloading...
From (original): https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF
From (redirected): https://drive.google.com/uc?id=1SVPP1vd2UobKf1djztpi-DcYAFOFXQtF&confirm=t&uuid=6487c664-3231-4cc4-a776-d557456b8441
To: /home/runner/work/multi-echo-data-analysis/multi-echo-data-analysis/data/multi-echo-data-analysis/sub-04570.zip
100%|██████████| 315M/315M [00:05<00:00, 62.1MB/s]
INFO patool: Extracting ./../data/multi-echo-data-analysis/sub-04570.zip ...
INFO patool: running /usr/bin/7z x -y -p- -aou -o./../data/multi-echo-data-analysis -- ./../data/multi-echo-data-analysis/sub-04570.zip
INFO patool: ... ./../data/multi-echo-data-analysis/sub-04570.zip extracted to `./../data/multi-echo-data-analysis'.
Info : sub-04570.zip Decompressed
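After the archive is decompressed, it is worth sanity-checking that the expected echo-wise files are present before moving on. A minimal sketch, assuming a BIDS-style layout with `echo-<n>_bold` filenames — the helper function and the mock directory tree below are illustrative, not the real download:

```python
import os
import tempfile
from glob import glob


def list_echo_files(data_path, subject):
    """Return the sorted echo-wise BOLD files for one subject (BIDS-style naming assumed)."""
    pattern = os.path.join(
        data_path, subject, "func", f"{subject}_*echo-*_bold.nii.gz"
    )
    return sorted(glob(pattern))


# Build a mock directory tree standing in for the downloaded dataset
data_path = tempfile.mkdtemp()
func_dir = os.path.join(data_path, "sub-04570", "func")
os.makedirs(func_dir)
for echo in (1, 2, 3, 4):
    fname = f"sub-04570_task-rest_echo-{echo}_bold.nii.gz"
    open(os.path.join(func_dir, fname), "w").close()

echo_files = list_echo_files(data_path, "sub-04570")
print(f"Found {len(echo_files)} echo files")
```

Sorting the glob results matters here: downstream tools expect echoes in ascending order, and `glob` alone does not guarantee any particular ordering.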