
About ac-4


Summary of the data.

This dataset consists of electrocorticography (ECoG) recordings from patients passively listening to auditory stimuli. Each stimulus is a sentence spoken by another person (the sentences are drawn from the TIMIT database). There are two types of sentences: filtered and unfiltered. The filtered sentences are hard to understand unless an unfiltered version has already been heard. The task is structured so that subjects hear sentences in groups of three: first a filtered sentence, then the unfiltered sentence, and finally a repetition of the filtered sentence. Subjects usually do not understand the filtered sentence the first time they hear it, but do understand it the second time, because they have heard the unfiltered version in between. For a subset of subjects, pink noise was played instead of the unfiltered speech. The task is meant to investigate a perceptual “pop-out” effect, whereby a stimulus that was once incomprehensible becomes understandable. Each subject heard roughly 40-50 sentences, each presented as a filtered/unfiltered/filtered triplet.
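The triplet structure described above can be sketched in a few lines of Python. This is an illustrative reconstruction of the trial ordering only; the sentence identifiers and the function name are hypothetical and not taken from the dataset files.

```python
# Illustrative sketch of the trial structure: each sentence is presented as a
# triplet (filtered, unfiltered, filtered). For the control subset of subjects,
# pink noise replaces the unfiltered presentation. Sentence IDs are made up.
def build_trials(sentence_ids, noise_control=False):
    middle = "noise" if noise_control else "unfiltered"
    trials = []
    for sid in sentence_ids:
        trials.append((sid, "filtered"))    # usually not understood
        trials.append((sid, middle))        # intact speech (or pink noise)
        trials.append((sid, "filtered"))    # usually understood ("pop-out")
    return trials

print(build_trials(["timit_001", "timit_002"]))
```

With two sentences this yields six trials; the control condition differs only in that the middle element of each triplet is noise rather than intact speech.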

The provided data are the raw recordings taken from a grid of electrodes, generally placed over auditory cortices. The electrode locations vary from patient to patient and can be determined from the 2D position of each channel on a picture of the subject’s brain. We have also included timing information for when the stimuli were presented, though we cannot provide the actual recorded audio due to patient privacy concerns.

We have included several sample Jupyter notebooks to get you started with loading and exploring the data. They can be run with free, open-source software written in Python. Results from an analysis of the change in neural activity from before to after hearing the unfiltered speech are described in:

Holdgraf, Christopher R., Wendy de Heer, Brian Pasley, Jochem Rieger, Nathan Crone, Jack J. Lin, Robert T. Knight, and Frédéric E. Theunissen. “Rapid Tuning Shifts in Human Auditory Cortex Enhance Speech Intelligibility.” Nature Communications 7, no. 1 (December 2016).

Format of the Data

The data files are organized using the Brain Imaging Data Structure (BIDS) specification. Raw data files are in the BrainVision data format. Links to scripts for loading the raw data files into Matlab and Python are provided. Details are in the document linked at the bottom of this page. The total size of the data is about 1.5 GB.
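A BrainVision recording is split across a text header (.vhdr), a marker file (.vmrk), and a binary data file (.eeg); the header is an INI-style file that standard tools can parse. The sketch below parses a minimal synthetic header with Python's `configparser`. The field names follow the BrainVision Core Data Format, but the file name and values here are made up for illustration; for real analyses, use the loading scripts linked from this page or a library such as MNE-Python.

```python
import configparser

# A minimal, synthetic BrainVision header (.vhdr). Field names follow the
# BrainVision Core Data Format; the values are illustrative, not from ac-4.
VHDR_TEXT = """Brain Vision Data Exchange Header File Version 1.0

[Common Infos]
DataFile=sub-01_task-listen_ieeg.eeg
MarkerFile=sub-01_task-listen_ieeg.vmrk
DataFormat=BINARY
DataOrientation=MULTIPLEXED
NumberOfChannels=64
; Sampling interval is given in microseconds
SamplingInterval=1000

[Binary Infos]
BinaryFormat=IEEE_FLOAT_32
"""

def parse_vhdr(text):
    """Parse a BrainVision header, skipping the non-INI banner line."""
    body = text.split("\n", 1)[1]          # drop the version banner
    cfg = configparser.ConfigParser()
    cfg.read_string(body)
    common = cfg["Common Infos"]
    return {
        "data_file": common["DataFile"],
        "n_channels": int(common["NumberOfChannels"]),
        "sfreq_hz": 1e6 / float(common["SamplingInterval"]),  # us -> Hz
    }

info = parse_vhdr(VHDR_TEXT)
print(info["n_channels"], info["sfreq_hz"])
```

The sampling interval is stored in microseconds, so a value of 1000 corresponds to a 1000 Hz sampling rate.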

How to download the data

The data must be downloaded from:

An account is required. The link allows bulk downloading of multiple files using the download script described in the "Alternative download method" section of the download page.

Getting help using the data

If you have questions about using the data, please post them on the forum for using data sets.

How to cite the data

If you publish any work using the data, please cite the publication above (Holdgraf et al., 2016) and also cite the data set using the following:

Holdgraf, C.R., de Heer, W., Pasley, B., Rieger, J., Crone, N., Lin, J.J., Knight, R.T., Theunissen, F.E. (2019). Electrocorticography recordings from patients during a passive listening task of degraded and intact English speech.

The above citation uses a Digital Object Identifier (DOI) assigned to the data set. The DOI was created using DataCite and the California Digital Library's "EZID" system.

Documentation file
