How modelers model: The overlooked social and human dimensions in model intercomparison studies

journal contribution
posted on 2023-05-03, 20:27 authored by Fabrizio Albanito, David McBey, Matthew Harrison, Pete Smith, Fiona Ehrhardt, Arti Bhatia, Gianni Bellocchi, Lorenzo Brilli, Marco Carozzi, Karen Christie, Jordi Doltra, Christopher Dorich, Luca Doro, Peter Grace, Brian Grant, Joel Léonard, Mark Liebig, Cameron Ludemann, Raphael Martin, Elizabeth Meier, Rachelle Meyer, Massimiliano De Antoni Migliorati, Vasileios Myrgiotis, Sylvie Recous, Renata Sandor, Val SnowVal Snow, Jean-Francois Soussana, Ward Smith, Nuala Fitton
There is growing recognition that the uncertainty in model ensemble studies stems not only from the models themselves but also from the experience and approaches modelers use to calibrate and validate results. Here, we applied a multi-criteria decision-making method to investigate the rationale applied by modelers in a model ensemble study in which 12 different process-based biogeochemical models were compared across five successive calibration stages. The modelers shared a common level of agreement about the importance of the variables used to initialize their models for calibration. However, we found inconsistency among modelers when judging the importance of input variables across the calibration stages. The subjective weighting that modelers attributed to calibration data decreased sequentially as the extent and number of variables provided increased. In this context, the perceived importance of variables such as fertilization rate, irrigation regime, soil texture, pH, and initial soil organic carbon and nitrogen stocks differed statistically when classified by model type. The importance attributed to input variables such as experimental duration, gross primary production, and net ecosystem exchange varied significantly with the length of the modeler’s experience. We argue that the gradual release of input data across the five calibration stages reduced the consistency of the modelers’ interpretations, introducing cognitive bias into “trial-and-error” calibration routines. Our study highlights that human and social attributes, though often overlooked, are critical to the outcomes of modeling and model intercomparison studies.
While the complexity of the processes captured in model algorithms and parameterization is important, we contend that (1) the modeler’s assumptions about the extent to which parameters should be altered and (2) the modeler’s perceptions of the importance of model parameters are just as critical to obtaining a quality model calibration as numerical or analytical details.


Rights statement

© 2022 The Authors. Published by American Chemical Society


Language

  • English

Does this contain Māori information or data?

  • No


ACS Publication

Journal title

Environmental Science & Technology




Albanito, F., McBey, D., Harrison, M., Smith, P., Ehrhardt, F., Bhatia, A., Bellocchi, G., Brilli, L., Carozzi, M., Christie, K., Doltra, J., Dorich, C., Doro, L., Grace, P., Grant, B., Léonard, J., Liebig, M., Ludemann, C., Martin, R., Meier, E., Meyer, R., De Antoni Migliorati, M., Myrgiotis, V., Recous, S., Sandor, R., Snow, V., Soussana, J.-F., Smith, W. N., & Fitton, N. (2022). How modelers model: The overlooked social and human dimensions in model intercomparison studies. Environmental Science & Technology, 56(18), 13485–13498.