Journal of Data Science,
Statistics, and Visualisation

Should We Test the Model Assumptions Before Running a Model-based Test?

Authors

M. Iqbal Shamsudheen, Christian Hennig

DOI:

https://doi.org/10.52933/jdssv.v3i3.73

Keywords:

Misspecification testing, Goodness of fit, Combined procedure, Two-stage testing, Misspecification paradox

Abstract

Statistical methods are based on model assumptions, and it is statistical folklore that a method's model assumptions should be checked before applying it. This can be formalized by running one or more misspecification tests of the model assumptions before running a method that requires them; here we focus on model-based tests. A combined test procedure can be defined by specifying a protocol in which the model assumptions are tested first and then, conditionally on the outcome, a test is run that either requires or does not require the tested assumptions. Although such an approach is often taken in practice, much of the literature that has investigated it is surprisingly critical.
Our aim is to explore the conditions under which model checking is or is not advisable. To this end, we review results on such "combined procedures" in the literature, discuss controversial views on the role of model checking in statistics, and present a general setup in which preliminary model checking can be shown to be advantageous, which in turn implies conditions for making model checking worthwhile.
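The two-stage protocol described above can be sketched in code. The following is a minimal illustration, not the paper's specific setup: as a hypothetical instantiation it uses the Shapiro-Wilk test as the misspecification test for normality, a two-sample t-test as the model-based test, and the Mann-Whitney U test as the alternative that does not require the normality assumption. The significance level `alpha_misspec` for the preliminary check is likewise an assumed parameter.

```python
# Sketch of a "combined procedure": a preliminary misspecification test
# decides which main test is run. All concrete test choices here are
# illustrative assumptions, not taken from the paper.
import numpy as np
from scipy import stats

def combined_two_sample_test(x, y, alpha_misspec=0.05):
    """Two-stage protocol: check the model assumption, then pick the main test."""
    # Stage 1: misspecification test (normality) on each sample.
    normal_ok = (stats.shapiro(x).pvalue > alpha_misspec
                 and stats.shapiro(y).pvalue > alpha_misspec)
    # Stage 2: conditional on the outcome, run the model-based test
    # (t-test) or the assumption-free alternative (Mann-Whitney U).
    if normal_ok:
        used, result = "t-test", stats.ttest_ind(x, y)
    else:
        used, result = "Mann-Whitney U", stats.mannwhitneyu(
            x, y, alternative="two-sided")
    return used, result.pvalue

rng = np.random.default_rng(0)
used, p = combined_two_sample_test(rng.normal(0.0, 1.0, 50),
                                   rng.normal(0.5, 1.0, 50))
print(used, p)
```

Note that the combined procedure as a whole is itself a statistical procedure whose error probabilities differ from those of either main test run unconditionally; this is precisely the kind of effect the paper analyzes.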

Published

2023-11-07

How to Cite

Shamsudheen, I., & Hennig, C. (2023). Should We Test the Model Assumptions Before Running a Model-based Test? Journal of Data Science, Statistics, and Visualisation, 3(3). https://doi.org/10.52933/jdssv.v3i3.73