Ghent University Academic Bibliography


Stability based testing for the analysis of fMRI data

Joke Durnez (UGent) and Beatrijs Moerkerke (UGent) (2011) 7th international conference on multiple comparison procedures, Abstracts.
abstract
Neurological imaging has become increasingly important in the field of psychological research. The leading technique is functional magnetic resonance imaging (fMRI), in which a correlate of the oxygen level in the blood is measured (the BOLD signal). In an fMRI experiment, a time series of brain images is taken while participants perform a certain task. By comparing different conditions, the task-related areas in the brain can be localised. An fMRI study leads to enormous amounts of data. To analyse the data adequately, the brain images are divided into a large number of volume units (or voxels). Subsequently, a time series of the measured signal is modelled voxelwise as a linear combination of different signal components, after which an indication of activation can be tested in each voxel. This encompasses an enormous number of simultaneous statistical tests (approximately 250,000 voxels). As a result, the multiple testing problem is a serious challenge for the analysis of fMRI data. In this context, classical multiple testing procedures such as Bonferroni and Benjamini-Hochberg (Benjamini & Hochberg, 1995) have been applied to control the family-wise error rate (FWER) and the false discovery rate (FDR), respectively (Genovese, Lazar, & Nichols, 2002). Random Field Theory (Worsley, Evans, Marrett, & Neelin, 1992) controls the FWER while accounting for the spatial character of the data. Because of the dramatic decrease in power when controlling the FWER, methods to control the topological false discovery rate (FDR) were developed (Chumbley & Friston, 2009; Heller, Stanley, Yekutieli, Rubin, & Benjamini, 2006). A general shortcoming of current procedures is the focus on detecting non-null activation, while a non-null effect is not necessarily biologically relevant. Moreover, failing to reject the hypothesis of no activation is not the same as confidently excluding important effects.
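The two classical corrections mentioned above can be sketched in a few lines. This is a minimal illustration, not code from the publication: the simulated p-values, voxel counts, and the `benjamini_hochberg` helper are all assumptions made for the example.

```python
# Sketch: Bonferroni (FWER) vs. Benjamini-Hochberg (FDR) on simulated
# voxelwise p-values. All quantities here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 10_000
n_active = 500

# Null voxels yield uniform p-values; "active" voxels concentrate near 0.
p = np.concatenate([
    rng.uniform(size=n_voxels - n_active),
    rng.beta(0.5, 25.0, size=n_active),
])

alpha = 0.05

# Bonferroni: control the FWER by testing each voxel at alpha / m.
bonferroni_reject = p < alpha / len(p)

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejections under BH step-up FDR control."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = q * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = below.nonzero()[0].max()  # largest i with p_(i) <= q * i / m
        reject[order[: k + 1]] = True
    return reject

bh_reject = benjamini_hochberg(p, q=alpha)
# BH always rejects at least as many voxels as Bonferroni at the same level,
# which is why FDR control is preferred when FWER control is too conservative.
print(bonferroni_reject.sum(), bh_reject.sum())
```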
Another aspect that remains largely unexplored is the stability of test results, which can be defined as the selection variability of individual voxels (Qiu, Xiao, Gordon, & Yakovlev, 2006). Given the need to control both false positives (type I errors) and false negatives (type II errors) in a direct manner (Lieberman & Cunningham, 2009), we approach the multiple testing problem from a different angle. Following the procedure of Gordon, Chen, Glazko, and Yakovlev (2009) in the context of gene selection, we present a statistical method to detect brain activation that includes information not only on false positives, but also on power and stability. The method uses bootstrap resampling to extract information on stability and uses this information to detect the most reliable voxels in relation to the experiment. The findings indicate that the method can improve the stability of procedures and allows a direct trade-off between type I and type II errors. In this particular setting, it is shown how the proposed method enables researchers to adapt classical procedures while improving their stability. The method is evaluated and illustrated using simulation studies and a real data example.
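The bootstrap-based notion of stability can be illustrated roughly as follows. This is a sketch of the general idea only, not the authors' procedure: the toy data, t-threshold, and selection rule are invented for the example.

```python
# Sketch: estimate per-voxel selection stability by resampling subjects
# with replacement and re-running a simple voxelwise test each time.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 200

# Toy data: one effect value per subject and voxel; the first 20 voxels
# carry a true signal, the rest are pure noise.
data = rng.normal(size=(n_subjects, n_voxels))
data[:, :20] += 0.8

def select_voxels(sample, threshold=3.0):
    """One-sample t-statistic per voxel; select voxels with |t| above a cutoff."""
    mean = sample.mean(axis=0)
    se = sample.std(axis=0, ddof=1) / np.sqrt(sample.shape[0])
    return np.abs(mean / se) > threshold

n_boot = 200
counts = np.zeros(n_voxels)
for _ in range(n_boot):
    idx = rng.integers(0, n_subjects, size=n_subjects)  # resample subjects
    counts += select_voxels(data[idx])

stability = counts / n_boot   # selection frequency per voxel, in [0, 1]
stable = stability > 0.8      # keep only voxels that are selected reliably
```

Voxels that are selected in nearly every bootstrap replicate are the "reliable" ones; thresholding the selection frequency gives a direct handle on the trade-off between admitting unstable detections (type I side) and discarding real but variable ones (type II side).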
Please use this url to cite or link to this publication: http://hdl.handle.net/1854/LU-1942353
author: Durnez, Joke (UGent) and Moerkerke, Beatrijs (UGent)
organization
year: 2011
type: conference
publication status: published
subject
keyword: fMRI, neuroimaging, imaging, stability, error rates, multiple testing problem
in: 7th international conference on multiple comparison procedures, Abstracts
conference name: 7th International Conference on Multiple Comparison Procedures (MCP - 2011)
conference location: Washington DC
conference start: 2011-08-29
conference end: 2011-09-01
language: English
UGent publication?: yes
classification: C3
copyright statement: I have retained and own the full copyright for this publication
id: 1942353
handle: http://hdl.handle.net/1854/LU-1942353
date created: 2011-11-11 20:27:07
date last changed: 2012-02-08 09:15:56
@inproceedings{1942353,
  abstract     = {Neurological imaging has become increasingly important in the field of psychological research. The leading technique is functional magnetic resonance imaging (fMRI), in which a correlate of the oxygen level in the blood is measured (the BOLD signal). In an fMRI experiment, a time series of brain images is taken while participants perform a certain task. By comparing different conditions, the task-related areas in the brain can be localised. An fMRI study leads to enormous amounts of data. To analyse the data adequately, the brain images are divided into a large number of volume units (or voxels). Subsequently, a time series of the measured signal is modelled voxelwise as a linear combination of different signal components, after which an indication of activation can be tested in each voxel. This encompasses an enormous number of simultaneous statistical tests (approximately 250,000 voxels). As a result, the multiple testing problem is a serious challenge for the analysis of fMRI data. In this context, classical multiple testing procedures such as Bonferroni and Benjamini-Hochberg (Benjamini \& Hochberg, 1995) have been applied to control the family-wise error rate (FWER) and the false discovery rate (FDR), respectively (Genovese, Lazar, \& Nichols, 2002). Random Field Theory (Worsley, Evans, Marrett, \& Neelin, 1992) controls the FWER while accounting for the spatial character of the data. Because of the dramatic decrease in power when controlling the FWER, methods to control the topological false discovery rate (FDR) were developed (Chumbley \& Friston, 2009; Heller, Stanley, Yekutieli, Rubin, \& Benjamini, 2006). A general shortcoming of current procedures is the focus on detecting non-null activation, while a non-null effect is not necessarily biologically relevant. Moreover, failing to reject the hypothesis of no activation is not the same as confidently excluding important effects.
Another aspect that remains largely unexplored is the stability of test results, which can be defined as the selection variability of individual voxels (Qiu, Xiao, Gordon, \& Yakovlev, 2006). Given the need to control both false positives (type I errors) and false negatives (type II errors) in a direct manner (Lieberman \& Cunningham, 2009), we approach the multiple testing problem from a different angle. Following the procedure of Gordon, Chen, Glazko, and Yakovlev (2009) in the context of gene selection, we present a statistical method to detect brain activation that includes information not only on false positives, but also on power and stability. The method uses bootstrap resampling to extract information on stability and uses this information to detect the most reliable voxels in relation to the experiment. The findings indicate that the method can improve the stability of procedures and allows a direct trade-off between type I and type II errors. In this particular setting, it is shown how the proposed method enables researchers to adapt classical procedures while improving their stability. The method is evaluated and illustrated using simulation studies and a real data example.},
  author       = {Durnez, Joke and Moerkerke, Beatrijs},
  booktitle    = {7th international conference on multiple comparison procedures, Abstracts},
  keyword      = {fMRI neuroimaging imaging stability error rates multiple testing problem},
  language     = {eng},
  location     = {Washington DC},
  title        = {Stability based testing for the analysis of fMRI data},
  year         = {2011},
}

Chicago
Durnez, Joke, and Beatrijs Moerkerke. 2011. “Stability Based Testing for the Analysis of fMRI Data.” In 7th International Conference on Multiple Comparison Procedures, Abstracts.
APA
Durnez, J., & Moerkerke, B. (2011). Stability based testing for the analysis of fMRI data. 7th international conference on multiple comparison procedures, Abstracts. Presented at the 7th International Conference on Multiple Comparison Procedures (MCP - 2011).
Vancouver
1. Durnez J, Moerkerke B. Stability based testing for the analysis of fMRI data. 7th international conference on multiple comparison procedures, Abstracts. 2011.
MLA
Durnez, Joke, and Beatrijs Moerkerke. “Stability Based Testing for the Analysis of fMRI Data.” 7th International Conference on Multiple Comparison Procedures, Abstracts. 2011. Print.