There’s no such thing as a ‘true’ model: The challenge of assessing face validity

Abstract:

To select among competing generative models of time-series data, it is necessary to balance goodness of fit (accuracy) against model complexity. Bayesian methods are a mathematically principled way to achieve this balance. However, when performing simulations to assess the identifiability of models (face validation), the best model identified by Bayesian model comparison may appear more complex than the model that actually generated the data. We illustrate this using dynamic causal models of human electrophysiological data, where models with multiple parameter modulations are selected as the best model even when the true modulations are sparse. We explain this by the form of the complexity penalty, which is equivalent to a weighted L2 norm. This phenomenon is an example of the implicit prior biases that any complexity penalty necessarily entails.
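The intuition behind the weighted-L2 complexity penalty can be sketched numerically. The example below is a hypothetical illustration (not the paper's actual DCM code, and the `complexity` function, zero prior means, and identity prior precision are all assumptions for simplicity): because the penalty is quadratic in the deviation of parameters from their prior means, spreading an effect over many small parameter modulations is penalised less than concentrating the same total effect in a single large modulation, which is why a dense model can win over the sparse model that generated the data.

```python
import numpy as np

def complexity(theta, mu, Pi):
    """Weighted L2 complexity penalty: 0.5 * (theta - mu)^T Pi (theta - mu),
    with Pi the prior precision matrix (a squared Mahalanobis distance)."""
    d = theta - mu
    return 0.5 * d @ Pi @ d

mu = np.zeros(4)   # assumed prior means (zero, i.e. no modulation a priori)
Pi = np.eye(4)     # assumed prior precisions (identity, for illustration)

# One large modulation vs. the same total effect spread over four parameters.
sparse = np.array([1.0, 0.0, 0.0, 0.0])
dense = np.full(4, 0.25)

print(complexity(sparse, mu, Pi))  # 0.5
print(complexity(dense, mu, Pi))   # 0.125 -- the denser model is penalised less
```

Under these toy assumptions, if both parameterisations achieve similar accuracy, the dense model attains higher model evidence despite the sparse model being the "true" one, mirroring the phenomenon described in the abstract.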