17.07.2019 - Causality and Plausibility
Various methodological approaches have been developed for causality analysis in evaluation. In practice, however, evaluations often differ from scientific studies: evaluators frequently regard restrictions on the implementation of recognized methods as unavoidable. Causal claims in evaluations are then justified by appeal to 'plausibility' or 'plausibility considerations'. Yet plausibility is not clearly defined and is therefore used differently from evaluator to evaluator; ultimately, the methodological content of 'plausibility' remains unclear.
The Spring Conference 2019 of DeGEval’s Working Group on Methods – co-organised by DEval – aimed to contribute to a methodological definition of the term 'plausibility' in evaluation. The discussion also sought to identify potential boundaries between science and evaluation, and ways to overcome them. Fifty participants from the Austrian and German evaluation community – researchers, evaluators and representatives of evaluation institutions – discussed these topics during the two-day conference.
The conference opened with a fishbowl discussion in which scientists, evaluators and development practitioners from the University of Vienna and the Austrian Development Agency discussed similarities and differences between academic studies and evaluations with regard to the production of knowledge. Although differences might seem to prevail at first sight, the discussants identified more commonalities than differences over the course of the discussion.

In the first keynote presentation, Conny Wunsch, Professor of Labour Economics at the University of Basel, defined plausibility as internal validity and presented ways to address threats to internal validity in experimental and quasi-experimental designs. Building on this presentation, Derek Beach, Professor of Political Science at the University of Aarhus, presented a similar definition of plausibility from a process-tracing perspective and argued for a second evidence hierarchy for theory-based evaluations. Both presenters emphasized the methodological similarities between evaluations and academic studies.

In the concluding presentation, Jos Vaessen, Methods Advisor at the Independent Evaluation Group of the World Bank, offered a contrasting point of view, highlighting differences between the two fields – due mainly to the structural embedding of evaluations in organisations and to the intended use of evaluations.
Further information can be found on the methods section’s homepage.