
The story of ‘the data’ : on validity of data and performativity of research participation in psychotherapy research

Femke Truijens (UGent)
(2019)
Promoter: (UGent) and (UGent)
Abstract
This dissertation focuses on the validity of “the data” that are collected in psychotherapy research for the purpose of evidencing treatment efficacy. In the ‘Evidence Based Treatment’ (EBT) paradigm, researchers rely on the so-called ‘gold standard methodology’ to gather sound and trustworthy evidence, which increasingly influences the organization of mental health care worldwide (Kazdin & Sternberg, 2006). In the gold standard, data are collected by quantified self-report measures to assess the presence and severity of symptoms before and after treatment. When the pre-post difference is larger for a group of people who received the treatment of interest than for a group who received no treatment or an alternative treatment (Chambless & Ollendick, 2001), the treatment of interest is called effective. In this methodological procedure, researchers tend to assume that when the gold standard methodology is conducted properly, “the data” will speak for themselves. However, when evidence is based on data, that evidence is principally limited by those data. In other words: output depends on input. This implies that when input is flawed, output will be flawed, even when the very best methods of analysis are used. So what if “the data” yield validity issues despite (or because of) being collected with validated measures? When “the data” do not straightforwardly evidence treatment effects, will the subsequent steps in the analysis of these data be enough to ensure that the final evidence does indeed evidence treatment efficacy? In this dissertation, the focus is turned to the validity of “the data” that are concretely provided by patient-participants who score their own experienced symptoms on self-report questionnaires for the purpose of evidencing treatment efficacy.
A series of empirical case studies was conducted to scrutinize how patient-participants in a randomized controlled psychotherapy study (‘The Ghent Psychotherapy Study’; Meganck et al., 2017) experienced the process of data collection, and how these experiences affected the data they provided. Each of the studied patient-participants experienced a substantial effect of the questionnaire administration on their complaints. This impacted the level (presence and severity) of complaints, but also changed the way in which the complaints were understood in the first place, and how patient-participants perceived themselves. Thus, rather than neutrally measuring symptoms, questionnaire administration changed the experienced complaints ‘performatively’ (Cavanaugh, 2015), which turned measurement into a clinical intervention in its own right. Consequently, what is measured cannot straightforwardly be called ‘treatment efficacy’, as it may be entangled with, enabled by, or even obstructed by effects of measurement and research. The act of measurement itself can therefore pose a fundamental threat to the validity of “the data”.

In this way, the empirical case studies showed that data can yield validity problems despite (or because of) the use of validated measures. Consequently, the validity of a measure as such is no guarantee for the validity of the data. Nonetheless, in gold standard research, the data are straightforwardly taken as input for analyses of general treatment efficacy. The question is what happens to these validity issues at the level of individual data as they pursue their journey towards becoming evidence. In this dissertation, it was argued that the validity issues are not sufficiently resolved in the methodological steps after data collection: when data are invalid at the outset, these validity issues simply become part of the data set that forms the input for analysis of the final evidence. This urges that validity issues be resolved at the level of individually provided data, as they will otherwise become inherent to “the data”. Put formally: valid data are a precondition for evidence in EBT.

In conclusion, a sound evidence base requires scrutinizing the validity of data in light of the overall goal and utility of the research. For this, it is important not to take “the data” as speaking for themselves, but to regard them as clinical narratives, framed in a specific format to be communicated between researcher and respondent in a research context. This emphasizes that the choice of a certain format determines what can be evidenced, so it is vital that these choices indeed allow for obtaining evidence that is useful and valid in serving the clinical goal of EBT.
Keywords
Validity, epistemic validity, validity of research, validity of data, psychotherapy research, evidence-based treatment, evidence, empirical case study, evidence-based case study, methodology, self-report questionnaires, symptom measures, philosophy of science, interdisciplinary

Downloads

  • (...).pdf — full text | UGent only | PDF | 125.28 MB

Citation

Please use this URL to cite or link to this publication:

Chicago
Truijens, Femke. 2019. “The Story of ‘the Data’ : on Validity of Data and Performativity of Research Participation in Psychotherapy Research”. Ghent, Belgium: Ghent University. Faculty of Psychology and Educational Sciences.
APA
Truijens, F. (2019). The story of “the data” : on validity of data and performativity of research participation in psychotherapy research. Ghent University. Faculty of Psychology and Educational Sciences, Ghent, Belgium.
Vancouver
1. Truijens F. The story of “the data” : on validity of data and performativity of research participation in psychotherapy research. [Ghent, Belgium]: Ghent University. Faculty of Psychology and Educational Sciences; 2019.
MLA
Truijens, Femke. “The Story of ‘the Data’ : on Validity of Data and Performativity of Research Participation in Psychotherapy Research.” 2019 : n. pag. Print.
@phdthesis{8627917,
  author       = {Truijens, Femke},
  isbn         = {9789090322261},
  keywords     = {Validity,epistemic validity,validity of research,validity of data,psychotherapy research,evidence-based treatment,evidence,empirical case study,evidence-based case study,methodology,self-report questionnaires,symptom measures,philosophy of science,interdisciplinary},
  language     = {eng},
  pages        = {329},
  publisher    = {Ghent University. Faculty of Psychology and Educational Sciences},
  school       = {Ghent University},
  title        = {The story of ‘the data’ : on validity of data and performativity of research participation in psychotherapy research},
  year         = {2019},
}