
About vim-1

Description of dataset vim-1 (visual imaging 1).

Summary of the data

This data set contains BOLD fMRI responses of human subjects viewing natural images. Analysis of these data showed that it was possible to determine which of many images a subject was looking at from the observed fMRI responses. These results received widespread media coverage because they demonstrated a kind of "mind reading". The data were originally published in these two papers:

Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008). Identifying natural images from human brain activity. Nature, 452(7185), 352-355.

Naselaris, T., Prenger, R. J., Kay, K. N., Oliver, M., & Gallant, J. L. (2009). Bayesian reconstruction of natural images from human brain activity. Neuron, 63(6), 902-915.

An example popular press account:
Mind reading closer to becoming a reality. Research could lead to brain-controlled prosthetic devices. Tom Randall, Bloomberg News. Monday, March 10, 2008

Format of the data

There were two subjects. For each subject, seven experimental runs were conducted on each of five separate days, for a total of 70 runs (35 per subject). Data from these experiments are provided in two formats. In the minimally processed format, the data for each run are stored in a separate gzipped NIfTI (.nii.gz) file; there are 70 such files, occupying about 10 GB in total. In the more processed format, the estimated BOLD responses to all the images are stored in a single file (EstimatedResponses.mat); this file is about 670 MB and can be loaded into both MATLAB and Python. Details of the data formats are given in crcns-vim-1-readme.pdf.
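As a starting point, the .mat file can be read in Python with SciPy, or with h5py if the file was saved in MATLAB's v7.3 (HDF5) format. The sketch below is a minimal, generic loader under those assumptions; the actual variable names inside EstimatedResponses.mat are documented in crcns-vim-1-readme.pdf and are not assumed here.

```python
# Minimal sketch for loading a .mat file such as EstimatedResponses.mat.
# Assumes scipy (and h5py, for MATLAB v7.3 files) are installed; see
# crcns-vim-1-readme.pdf for the variable names stored in the file.
import scipy.io


def load_mat(path):
    """Load a .mat file, falling back to h5py for v7.3 (HDF5) files."""
    try:
        return scipy.io.loadmat(path)
    except NotImplementedError:
        # MATLAB v7.3 files are HDF5 containers that loadmat cannot read.
        import h5py
        return h5py.File(path, "r")


# responses = load_mat("EstimatedResponses.mat")
# list(responses.keys())  # variables as described in the readme
```

The gzipped NIfTI files in the minimally processed format can likewise be opened with standard neuroimaging tools (for example, the nibabel package in Python or MATLAB's NIfTI readers).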

Legality of the data

The sharing of this data on CRCNS.org has been approved by the UC Berkeley Office for Protection of Human Subjects (cphs.berkeley.edu).

Conditions for usage of this data in publications

If you publish any work using the data, please cite the two articles listed above and the data set (see below). Additionally, please notify Kendrick Kay ([email protected]).

Conditions for usage of images

The images in this data set that are from the Berkeley Segmentation Data Set are copyright-free. Permission has been given by Kendrick Kay to re-use the images he took that are in this data set in any scientific publication.

How to download the data

Data may be downloaded from:
https://portal.nersc.gov/project/crcns/download/vim-1
A CRCNS.org account is required. See the download link for more instructions.

Getting help using the data

If you have questions about using the data, please post them on the forum for using data sets.

How to cite the data

In addition to citing the two articles listed above, publications created through usage of the data should cite the data set in the following recommended format:

Kay, K.N.; Naselaris, T.; Gallant, J. (2011): fMRI of human visual areas in response to natural images. CRCNS.org.
http://dx.doi.org/10.6080/K0QN64NG

The above citation uses a Digital Object Identifier (DOI) which is assigned to the data set.  The DOI was created using DataCite (www.datacite.org) and the California Digital Library, "EZID" system (n2t.net/ezid/).

Documentation file

crcns-vim-1-readme.pdf.