Data Analysis Competition

Biomag2012 analysis competition - distributed representations

Organizers: Ole Jensen and Ali Bahramisharif 

The decoding of mental states and neuronal representations from brain imaging data is a research field in rapid development (Spiers HJ, Maguire EA. Decoding human brain activity during real-world experiences. Trends Cogn Sci. 2007; Haynes JD, Rees G. Decoding mental states from brain activity in humans. Nat Rev Neurosci. 2006). These decoding approaches have great potential in MEG research, where data are recorded from hundreds of sensors with millisecond time resolution. In particular, cognitive neuroscience could benefit from the further development of decoding approaches to identify representation-specific brain activity.

The aims of the competition are to:

  • Promote the development and application of new multivariate analysis techniques for decoding of brain activity
  • Make the audience aware of novel approaches
  • Elucidate the pros and cons of the different techniques

    - Which assumptions are behind a given approach?
    - What are the limitations?
  • Attract signal-processing experts from outside the MEG field
  • Encourage discussion of the cognitive insights the techniques can bring

The deadline for submitting results is Aug 17, 2012.

The submission should contain:

  1. A MATLAB file with the labels of all the trials
  2. A PPT/PDF presentation briefly outlining:
    1. The approach
    2. Key findings and experiences
    3. Future perspectives
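The required MATLAB file can also be produced from Python. Below is a minimal, hedged sketch using scipy's savemat; the variable name "labels", the placeholder label values, and the in-memory buffer (in practice you would pass a filename such as "labels.mat") are illustrative assumptions, not prescribed by the organizers.

```python
import io
import numpy as np
from scipy.io import savemat, loadmat

# Placeholder predicted trial labels; in practice these come from the classifier.
labels = np.array([1, 0, 1, 1, 0])

# Write a MATLAB-readable .mat file. An in-memory buffer is used here for the
# round-trip check; passing a filename string works the same way.
buf = io.BytesIO()
savemat(buf, {"labels": labels})

# Verify that the file can be read back (in MATLAB or Python).
buf.seek(0)
back = loadmat(buf)["labels"].ravel()
print(back.tolist())  # -> [1, 0, 1, 1, 0]
```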

We encourage participants to send us an email with their group name and country to () by 1 August, with the subject line "Participating in the Biomag analysis competition".

The winners will be asked to present their findings at the meeting. There will be two prizes of 350 euros, one for each data set.

The data sets and PDF files are available on an FTP server as zip files:

Data set 1: Decoding word and category specific representations

 

Winners: Emanuele Olivetti(1, 2), Yuan Tao(2), Nathan Weisz(2) (Bruno Kessler Foundation(1) and University of Trento(2), Italy)

 

Please find the data on the FTP server:

  • S1
  • S2
  • parameters.txt

The data are from the paper:

Chan AM, Halgren E, Marinkovic K, Cash SS (2011) Decoding word and category-specific spatiotemporal representations from MEG and EEG. NeuroImage 54(4):3028-39.

Summary:

The organization and localization of lexico-semantic information in the brain has been debated for many years. Specifically, lesion and imaging studies have attempted to map the brain areas representing living versus nonliving objects; however, results remain variable. This may be due, in part, to the fact that the univariate statistical mapping analyses used to detect these brain areas are typically insensitive to subtle, but widespread, effects. Decoding techniques, on the other hand, allow for a powerful multivariate analysis of multichannel neural data. In this study, we utilize machine-learning algorithms to first demonstrate that semantic category, as well as individual words, can be decoded from EEG and MEG recordings of subjects performing a language task. Mean accuracies of 76% (chance=50%) and 83% (chance=20%) were obtained for the decoding of living vs. nonliving category and of individual words, respectively. Furthermore, we utilize this decoding analysis to demonstrate that the representations of words and semantic category are highly distributed both spatially and temporally. In particular, bilateral anterior temporal, bilateral inferior frontal, and left inferior temporal-occipital sensors are most important for discrimination. Successful intersubject and intermodality decoding shows that semantic representations between stimulus modalities and individuals are reasonably consistent. These results suggest that both word- and category-specific information are present in extracranially recorded neural activity and that these representations may be more distributed, both spatially and temporally, than previous studies suggest.

Specifics:

The contestants will be provided with ~80% of the trials, which are labelled. They will also be provided with ~20% unlabeled test trials. The goal is to identify the labels of the test trials using a decoding approach.

From the data of the two subjects, decode living versus non-living objects for presentations that are:

a) auditory

b) visual

- How high is the decoding accuracy for:

  • Auditory living versus auditory non-living
  • Visual living versus visual non-living
  • Living versus non-living 

- Which temporal features allow for optimal decoding (e.g. time and/or frequency)?

- Which spatial features allow for optimal decoding?
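The basic train-on-labeled, predict-the-unlabeled workflow can be sketched as follows; synthetic arrays stand in for the real MEG trials, and the array shapes, feature layout (flattened sensors x time), and the choice of a linear SVM are illustrative assumptions, not part of the competition rules.

```python
# A minimal decoding sketch: fit a classifier on the labeled trials and
# predict labels for the unlabeled test trials. Synthetic data only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_train, n_test = 80, 20
n_features = 306 * 50  # e.g. 306 sensors x 50 time samples, flattened per trial

X_train = rng.standard_normal((n_train, n_features))
y_train = rng.integers(0, 2, n_train)            # 0 = non-living, 1 = living
X_test = rng.standard_normal((n_test, n_features))

# Standardize features, then fit a linear classifier on the labeled trials.
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
clf.fit(X_train, y_train)

# Predicted labels for the unlabeled test trials form the submission.
labels = clf.predict(X_test)
print(labels.shape)  # -> (20,)
```

On real data, per-trial preprocessing (filtering, baseline correction, feature selection) would precede this step.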

Data set 2: Decoding long-term memory representations

 

 

Winners: Vassilis Tsiaras and Yannis Stylianou (University of Crete, Greece)

 

 

Please find the data on the FTP server (training and test data are combined):

  • ltmcla_S08
  • ltmcla_S09
  • parameters.txt

From the reports:

Alexander Backus, Ole Jensen, Esther Meeuwissen, Marcel van Gerven and Serge Dumoulin (2011) Investigating the temporal dynamics of long-term memory representation retrieval using multivariate pattern analyses on magnetoencephalography data.

 

  • MSc Thesis (Backus_AR_2011_decoding_long-term_memory_FULL_REPORT.pdf)
  • NIPS abstract (Backus_AR_2011_decoding_long-term_memory.pdf)

(both can be found on the FTP server)

Summary:

It is generally accepted that long-term memory (LTM) representations are synaptically encoded in the brain. The temporal dynamics of the retrieval of these LTM representations, however, remain to be elucidated. In the current study, we utilize multivariate pattern analyses (MVPA) on magnetoencephalography (MEG) data of a paired-associate recall task, where pictures of objects are randomly paired to different types of gratings. By training and testing a classifier algorithm on data segments of the recall interval, we attempt to predict, on a trial-by-trial basis, the orientation or color of the grating being recalled by the subject. We consistently observe an increase in classification accuracy at around 500 milliseconds after object cue presentation, albeit not statistically significant on the group level. Exploratory analyses of the topographic plots of classifier parameters reveal that data from posterior sensors contain the most informative features for classification. This observed importance of posterior regions may relate to a previously reported positive ERF component during a paired-associate recall task. If so, results from our study suggest that this ERF component concerns the processing of representational information and possibly reflects the macroscopic aggregate of so-called pair-coding neurons, earlier observed in monkey inferior temporal cortex. Moreover, clusters of frontal sensors also appear to be important for classification at some time intervals. The reported existence of pair-coding neurons in monkey prefrontal cortex may be related to this observed importance of frontal regions. Additionally, frontal regions may be involved in language processes as a result of learning strategies. In conclusion, our exploratory MVPA of MEG data provides some support for temporally specific neurophysiological processes underlying LTM retrieval during paired-associate recall.

The contestants will be provided with ~80% of the trials, which are labelled. They will also be provided with ~20% unlabeled test trials. The goal is to identify the labels of the test trials using a decoding approach.

From the data of subjects 8 and 9, decode:

a. Orientation of the recalled gratings

b. Color of the recalled gratings 

 

- How high is the decoding accuracy for:

  • Left versus right
  • Red versus green

- Which temporal features allow for optimal decoding (e.g. time and/or frequency)?

- Which spatial features allow for optimal decoding?
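The question of which temporal features carry the information is commonly addressed with time-resolved decoding: training and scoring a classifier separately at each time point. The sketch below uses synthetic data in place of the real epochs; the array shapes, binary labels, and the choice of logistic regression with 5-fold cross-validation are illustrative assumptions.

```python
# Time-resolved decoding sketch: cross-validated accuracy per time point,
# computed from the spatial pattern across sensors. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 100, 102, 60
X = rng.standard_normal((n_trials, n_sensors, n_times))  # epochs array
y = rng.integers(0, 2, n_trials)  # e.g. 0 = left, 1 = right orientation

# Decode at each time point; the resulting accuracy curve shows when
# decodable information emerges relative to cue onset.
scores = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
best_t = int(scores.argmax())
print(best_t)  # index of the most decodable time point
```

The same loop over sensors (instead of time points) gives a first answer to the spatial-features question.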

For inquiries, please contact

 



© Biomag 2012