
About vim-4

Description of dataset vim-4 (visual imaging 4).

Summary of the data

This dataset contains data from two tasks: a visual attention task, and a video game task. In both tasks, fMRI was used to record whole-brain BOLD activity with a 2.0045 s TR from healthy human subjects.

In the visual attention task, subjects watched naturalistic movies in different attentional states. Each subject watched 2 hours of naturalistic movies passively, 30 minutes while attending to the presence of humans, and 30 minutes while attending to the presence of vehicles. The movies used in the attend-human and attend-vehicle conditions were identical, and were subsampled from the 2 hours of movies used during passive viewing.

In the video game task, subjects played Counter-Strike: Source. The gameplay was open-ended. Because these data were originally collected as part of a pilot study, the subjects played for different amounts of time: one subject played for 90 minutes, and the other for 45 minutes.

Data from the visual attention task was used in the following publications:

  • Çukur, Tolga, Shinji Nishimoto, Alexander G. Huth, and Jack L. Gallant. 2013. “Attention during Natural Vision Warps Semantic Representation across the Human Brain.” Nature Neuroscience 16 (6): 763–70.
  • Zhang, Tianjiao, James S. Gao, Tolga Cukur, and Jack L. Gallant. 2020. Voxel-based state space modeling recovers task-related cognitive states in naturalistic fMRI experiments. Submitted to Frontiers in Neuroscience.

Data from the video game task was used in the following publication:

Zhang, Tianjiao, James S. Gao, Tolga Cukur, and Jack L. Gallant. 2020. Voxel-based state space modeling recovers task-related cognitive states in naturalistic fMRI experiments. Submitted to Frontiers in Neuroscience.

Data from subject S4 are not included, as that subject did not consent to the public release of their data.

Format of the data

The data are stored as numpy arrays in HDF5 containers. Details about the data files are given in the document linked at the bottom of this page. The total size of the data is about 15 GB.
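As a minimal sketch of working with HDF5 containers that hold numpy arrays, the snippet below writes and then reads a small file with h5py. The dataset name "bold" and the array shape are hypothetical placeholders, not the actual layout of the vim-4 files; see the data description document linked at the bottom of this page for the real structure.

```python
# Sketch of reading numpy arrays from an HDF5 container with h5py.
# NOTE: the dataset name "bold" and the shape below are hypothetical
# placeholders, not the actual vim-4 file layout.
import os
import tempfile

import h5py
import numpy as np

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "example.hdf5")

# Create a small stand-in file; in practice you would open a
# downloaded vim-4 data file directly.
with h5py.File(path, "w") as f:
    f.create_dataset("bold", data=np.zeros((10, 100), dtype=np.float32))

# List the datasets in the container and load one as a numpy array.
with h5py.File(path, "r") as f:
    names = list(f.keys())
    bold = f["bold"][:]

print(names)       # ['bold']
print(bold.shape)  # (10, 100)
```

Opening the file with a context manager (`with h5py.File(...)`) ensures the container is closed cleanly, and slicing a dataset with `[:]` loads it into memory as an ordinary numpy array.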

Conditions for usage of this data in publications

If you publish any work using data from the visual attention task, please cite Cukur et al. 2013. If you publish any work using data from the video game task, please cite Zhang et al. 2020. In either case, please also cite the data set using the following:

Zhang, Tianjiao, James S. Gao, Tolga Cukur, and Jack L. Gallant. 2020. Whole-brain BOLD activity recorded by fMRI during a visual attention task and a video game task. CRCNS.org.
http://dx.doi.org/10.6080/K0668BDF

The above citation uses a Digital Object Identifier (DOI) assigned to the data set. The DOI was created using DataCite (www.datacite.org) and the California Digital Library "EZID" system (ezid.cdlib.org).

How to download the data

Data may be downloaded from:
https://portal.nersc.gov/project/crcns/download/vim-4
A CRCNS.org account is required. See the download link for more instructions.

Getting help using the data

If you have questions about using the data, please post them on the forum for using data sets.

Documentation files

crcns_vim-4_data_description.pdf - main description of the data files.

attention_categories.txt - semantic attention categories.
