
What happened at the “Integrating Qualitative and Quantitative Evidence using Social Simulation” workshop in Leiden


The Lorentz workshop on “Integrating Qualitative and Quantitative Evidence using Social Simulation” took place in Leiden this week (8th–12th April 2019).

The work in this workshop was split up into different sub-projects:

  • QualML4ABM. Three different ways of using Machine Learning (ML): (1) to derive insights from large and complex data sets, (2) to allow greater flexibility in agent behaviours (more complex and data-derived than traditional rule-based approaches), and (3) to understand the simulations by analysing the output from many runs in many dimensions. They are writing a review paper on how and why ML might be used to help simulation modelling. Contact Timo <>.
  • Descriptive approaches (The ‘other group’). Doing a survey of the different qualitative (or mixed-methods) approaches used to inform the design of simulation models. Starting with a literature review of cases where this is done, then assessing these on a variety of dimensions – how they are used and what function they serve. If interested please contact Patrycia <>.
  • Animal Farm. Working on a common language for integrating qualitative and quantitative evidence (in the context of simulation models). Looked at how each person would approach a particular case study, revealing the variety of approaches. If interested please contact Nanda <>. They also talked about a grant application.
  • NetLogo Extensions. Nicolas Payette <> is considering what NetLogo extensions or other tools could be helpful for this kind of work. In particular, he is considering a tool to aid in rule provenance (the links between parts of the simulation code and the qualitative data and/or analysis it relates to). We plan to have a session at the Social Simulation conference in Mainz (2019) on this.
  • Are we done yet? Looked at how one decides when to stop adding new details to a simulation. They are planning a pilot study/experiment with students to assess how students decide when a model has enough data sources and ideas for processes. The hypothesis is that different students (and hence people) differ in how they formulate questions, decide the granularity, interpret a model, etc. If interested please contact Thomas <>.
  • RAT (Rigour and Transparency) group. Looking at standards and methods to improve the rigour and transparency of simulation development. Developing a protocol/framework that could be applied to a wide variety of kinds and purposes of modelling. They have developed a road map of the process, which will be developed into a protocol. If interested please contact Melania <>.
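To give a flavour of the QualML4ABM group's second use of ML (data-derived agent behaviours), here is a minimal Python sketch. All names, states, and the toy data are hypothetical illustrations, not anything produced at the workshop: the agent's action comes from a rule learned from observed (state, action) pairs (here a trivial nearest-neighbour look-up standing in for any trained ML model) rather than from a hand-coded if/else rule.

```python
# Hypothetical sketch: an agent behaviour rule derived from data rather
# than hand-coded. The observations, state variables and actions are
# invented for illustration only.

# Observed (wealth, num_neighbours) -> action pairs, e.g. coded from
# interviews or behavioural data.
OBSERVATIONS = [
    ((0.2, 1), "save"),
    ((0.3, 2), "save"),
    ((0.5, 3), "save"),
    ((0.8, 4), "trade"),
    ((0.9, 5), "trade"),
]

def learned_rule(state):
    """1-nearest-neighbour classifier: a stand-in for any ML model
    (decision tree, neural network, ...) trained on the observations."""
    def sq_dist(obs_state):
        return sum((a - b) ** 2 for a, b in zip(obs_state, state))
    nearest = min(OBSERVATIONS, key=lambda obs: sq_dist(obs[0]))
    return nearest[1]

class Agent:
    def __init__(self, wealth, num_neighbours):
        self.wealth = wealth
        self.num_neighbours = num_neighbours

    def step(self):
        # The behaviour comes from the learned rule, not a fixed rule base.
        return learned_rule((self.wealth, self.num_neighbours))

agents = [Agent(0.25, 1), Agent(0.85, 5)]
print([a.step() for a in agents])  # ['save', 'trade']
```

In a real model the look-up would be replaced by a properly trained and validated classifier, but the design point is the same: the qualitative or quantitative evidence enters the model through the learned rule instead of through hand-written conditions.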

There is a private blog where ideas, results and pictures from the workshop are published. If you want access to this, please email me <>.