When observational data are used to calibrate models, mismatches between the temporal resolution of the observations and the scale of the simulation can introduce uncertainty into model outputs. This is especially true for large-area simulations in which extreme weather events unfold at fine spatial and temporal granularity. Using virtual series of observations generated by Latin hypercube sampling, Confalonieri et al. illustrate a procedure for quantifying how uncertainty from random errors in the observations used to calibrate model parameters propagates into uncertainty in model outputs. The study paves the way for conceptual and mathematical frameworks that account for multiple sources of uncertainty. Methodologies of this type can also help assess whether a dataset is suitable for calibration.
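The idea of propagating observation error through calibration can be sketched in a few lines. The toy model, the error magnitude, and the closed-form calibration below are illustrative assumptions, not the authors' actual procedure: a Latin hypercube sample generates many virtual observation series, each series is used to calibrate a parameter, and the spread of the resulting outputs measures the induced uncertainty.

```python
import numpy as np
from scipy.stats import norm, qmc

n_obs, n_samples = 10, 200
obs_sd = 0.3  # assumed standard deviation of random observation error

# Toy "true" process y = a * x with a = 2; y_obs plays the observations
x = np.linspace(1.0, 10.0, n_obs)
y_obs = 2.0 * x

# Latin hypercube sample on the unit cube, one dimension per observation,
# mapped to Gaussian errors N(0, obs_sd^2)
sampler = qmc.LatinHypercube(d=n_obs, seed=0)
u = sampler.random(n_samples)          # shape (n_samples, n_obs)
errors = norm.ppf(u) * obs_sd
virtual_series = y_obs + errors        # perturbed virtual observation sets

# Calibrate the single parameter a against each virtual series
# (closed-form least-squares slope)
a_hat = virtual_series @ x / (x @ x)

# Uncertainty in a model output (here, the prediction at x = 12)
# induced by the random errors in the calibration data
y_pred = a_hat * 12.0
print(f"a: mean={a_hat.mean():.3f}, sd={a_hat.std():.3f}")
print(f"y(12): mean={y_pred.mean():.3f}, sd={y_pred.std():.3f}")
```

In a real application the closed-form slope would be replaced by the crop model's calibration routine, and the standard deviation of `y_pred` across samples quantifies how much observation error alone contributes to output uncertainty.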
Posted in Science briefs.