In a previous post, I described the motivation for this site, this blog, and a new software solution I’ve been building since catching COVID-19.

Here, I’ll give a bit more detail about what I’ve been putting together. One view of my system looks like this:

What you’re seeing is a simulated downtown San Francisco; people are walking around town, and if you zoom in you can see their illness levels and COVID-19 particles. This view goes beyond the factors you can’t control to the ones you can. If you are, for instance, a city manager, you can experiment with how much to invest in a mask-wearing campaign and see the results (although note that the underlying epidemiological model is not correct, since this is a prototype). You can also see how that decision interacts with other interventions.
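The prototype’s actual epidemiological model isn’t shown here, but as a rough illustration of how an investment like a mask-wearing campaign can feed into a transmission model, here is a minimal sketch using a simple discrete-time SIR model. All parameter names and values below are assumptions for illustration, not the simulator’s:

```python
# Minimal SIR sketch: a mask-wearing campaign reduces the effective
# transmission rate. All parameter values are illustrative only.

def simulate_sir(days, population, beta=0.3, gamma=0.1,
                 mask_compliance=0.0, mask_effect=0.5, initial_infected=10):
    """Return daily (S, I, R) counts for a simple discrete-time SIR model."""
    effective_beta = beta * (1.0 - mask_effect * mask_compliance)
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for _ in range(days):
        new_infections = effective_beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Compare no campaign vs. a campaign that achieves 60% mask compliance.
baseline = simulate_sir(days=120, population=880_000)  # roughly San Francisco
with_masks = simulate_sir(days=120, population=880_000, mask_compliance=0.6)
print(max(i for _, i, _ in baseline), max(i for _, i, _ in with_masks))
```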

The simulator includes a second view for exploring the results of thousands of simulations. In this “immediate mode”, you can see the results not of just one simulation but of many, which together estimate the probable impact of multiple choices on the outcomes you care about:

Ultimately, the simulation also produces a ranked list of COVID-19 interventions by their expected value in your situation, as illustrated to the right:
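To make the idea of ranking interventions by expected value concrete, here is a toy sketch: run many simulations with and without each intervention, average the outcomes, and sort by the estimated benefit. The intervention names, effect sizes, and the stand-in single-run function are all hypothetical; the real system would use full simulation runs:

```python
import random
from statistics import mean

# Hypothetical effect sizes for illustration only.
INTERVENTION_EFFECTS = {"mask campaign": 0.35, "close venues": 0.25, "testing": 0.15}

def run_once(interventions, base_infections=10_000):
    """Stand-in for one simulation run: a noisy infection count that
    shrinks when interventions are active."""
    reduction = sum(INTERVENTION_EFFECTS[name] for name in interventions)
    noise = random.gauss(1.0, 0.1)
    return base_infections * max(0.05, 1.0 - reduction) * noise

def rank_interventions(n_runs=1_000):
    """Estimate expected infections averted per intervention, then rank."""
    baseline = mean(run_once([]) for _ in range(n_runs))
    scores = {}
    for name in INTERVENTION_EFFECTS:
        outcome = mean(run_once([name]) for _ in range(n_runs))
        scores[name] = baseline - outcome  # expected infections averted
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, averted in rank_interventions():
    print(f"{name}: ~{averted:,.0f} infections averted")
```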

Why risk analysis and mitigation need a facelift for situations like COVID-19

COVID-19, climate change, and several other “super wicked” unsolved problems share a few things in common. COVID-19 is invisible, and its effects play out through long chains of cause and effect across space and time. So determining which interventions along those chains will together produce the best outcome is very complicated. In addition, the chains are exponential and nonlinear, involving “virtuous” and “vicious” cycles, and the dynamics within them are emergent. The situation also changes rapidly as we learn more about how to reduce the likelihood of infection, how the virus spreads, and what the infection and incidence rates are in different geographical areas.

We need solutions that take all of these factors into account. But let’s face it: humans are not very good at navigating these kinds of situations; we need help. Those of us in data science and artificial intelligence know that we have a lot to offer. Yet there is a “missing chassis”: as in a car, we need powerful technology “under the hood”, an easy-to-use “driver’s seat”, and a connecting layer between the two, as shown here:

The good news is that, in the last 1,000 years, we’ve built the “engine parts” for a robust solution. We now have only to assemble them through this “chassis”, an integration framework that connects in two directions: it connects the “engine parts” to each other, and it connects the technology to non-technical decision makers and the general public (who have massive skin in the game, and so deserve to be included).

The picture above shows how this works. Multiple technologies live “under the hood”, where those who wish to use them don’t have to learn their innards. They are accessed from a “driver’s seat”, where the tech is translated into a form that is easy for decision makers to use. I think this should be an interactive, immersive, video game-like visual simulation that can run in a browser, in Augmented Reality (AR) and Virtual Reality (VR), and on a mobile device. Connecting these two is the decision intelligence layer, which acts both as a blueprint that all can understand and as an integration framework defining how the pieces connect.
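As a very rough sketch of what such a layering could look like in software, here are three hypothetical interfaces: the “engine parts”, the decision intelligence layer that chains them together, and the “driver’s seat” front end. None of these names or signatures come from the actual system; they are only meant to make the architecture concrete:

```python
from typing import Any, Protocol

class EngineComponent(Protocol):
    """A technology 'under the hood': an epidemiological model,
    an ML forecaster, an optimizer, and so on."""
    def run(self, inputs: dict[str, Any]) -> dict[str, Any]: ...

class DecisionLayer:
    """The connecting 'chassis': maps the levers a decision maker can pull
    to the outcomes they care about by routing through registered components."""
    def __init__(self) -> None:
        self.components: dict[str, EngineComponent] = {}

    def register(self, name: str, component: EngineComponent) -> None:
        self.components[name] = component

    def evaluate(self, levers: dict[str, Any]) -> dict[str, Any]:
        outputs: dict[str, Any] = dict(levers)
        for component in self.components.values():
            outputs.update(component.run(outputs))  # chain levers to outcomes
        return outputs

class DriversSeat:
    """The front end (browser, AR/VR, mobile) that presents levers and
    outcomes in decision-maker terms rather than model terms."""
    def __init__(self, layer: DecisionLayer) -> None:
        self.layer = layer

    def what_if(self, levers: dict[str, Any]) -> dict[str, Any]:
        return self.layer.evaluate(levers)
```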

Below is a video that I made showing how the simulator looks as of a few weeks ago (the updated version is on the front page of this web site).

My goal is to inspire an international effort to integrate these technologies so that we can minimize COVID-19 risk in all the places where we need to be in physical proximity: home, work, and more. We need many kinds of supporters; please drop me a line if you’d like to be part of this work.

A version of this post originally appeared on my Link blog.