Pandemic Control, Game Theory, and Machine Learning
Yao Xuan
Robert Balkin
Jiequn Han
Ruimeng Hu
Hector D. Ceniceros
Communicated by Notices Associate Editor Reza Malek-Madani
COVID-19 and Control Policies
The coronavirus disease 2019 (COVID-19) pandemic has had an enormous impact on our lives. Based on data from the World Health Organization, as of May 2022, there have been more than 520 million confirmed cases of infection and more than 6 million deaths globally; in the United States, there have been more than 83 million confirmed cases of infection and more than one million deaths. Needless to say, the economic impact has also been catastrophic, resulting in unprecedented unemployment and the bankruptcy of many restaurants, recreation centers, shopping malls, etc.
Control policies play a crucial role in the alleviation of the COVID-19 pandemic. For example, lockdown and work-from-home policies, together with mask requirements on public transport and in public areas, have proven effective in slowing the spread of COVID-19. On the other hand, policymakers must also weigh the loss of economic activity caused by these control policies. A thorough understanding of the evolution of COVID-19, and of the decision-making it provokes, will therefore be beneficial for future epidemics and for other interconnected systems around the world.
Epidemiology
Epidemiology is the science of analyzing the distribution and determinants of health-related states and events in specified populations, and of applying this analysis to the control of health problems. Infectious diseases, including the ongoing novel coronavirus disease (COVID-19), are one such health problem.
Since March 2020, when the World Health Organization declared the COVID-19 outbreak a global pandemic, epidemiologists have made tremendous efforts to understand how COVID-19 infections emerge and spread and how they may be prevented and controlled. Many epidemiological methods involve mathematical tools, e.g., using causal inference to identify causative agents and factors for its propagation, and molecular methods to simulate disease transmission dynamics.
The first mathematical model of epidemic spreading dates back to 1760, due to Daniel Bernoulli Ber60. Since then, many papers have been dedicated to this field and, later on, to epidemic control. Among control strategies, quarantine, first introduced in 1377 in Dubrovnik on Croatia's Dalmatian Coast GB97, has proven a powerful component of the public health response to emerging and reemerging infectious diseases. However, quarantine and other measures for controlling epidemic diseases have always been controversial due to the political, ethical, and socioeconomic issues they may raise. Such complications naturally call for the inclusion of decision-making in epidemic control, as it helps to answer how to take optimal actions to balance public interest and individual rights; yet only in recent years have research studies moved in this direction. Moreover, when multiple authorities are involved in the decision-making process, analyzing how they make decisions collectively or competitively is challenging, due to the difficulty of solving the resulting high-dimensional problem.
In this article, we focus on decision-making for the intervention of COVID-19. We aim to provide mathematical models and efficient numerical methods, to offer justifications for related policies that have been implemented in the past, and to explain, from a game-theoretic viewpoint, how the authorities' decisions affect neighboring regions.
Mathematical models
In a classic, compartmental epidemiological model, each individual in a geographical region is assigned a label, e.g., Susceptible, Exposed, Infectious, Removed, Vaccinated. Each label represents a different status: S, those who are not yet infected; E, those who have been infected but are not yet infectious themselves; I, those who have been infected and are capable of spreading the disease to those in the susceptible category; R, those who have been infected and then removed from the disease due to recovery or death; and V, those who have been vaccinated and are immune to the infection. As COVID-19 progressed, it was learned that spread from asymptomatic cases was an important driving force. More refined models may further split I into mild-symptomatic/asymptomatic individuals who remain at home for recovery and serious-symptomatic ones who need hospitalization. We point to AZM$^{+}$20, which considers a similar problem in the optimal control setting, including asymptomatic individuals and the effect of impulses.
Individuals transition between these compartments, and the order of the labels in a model's name indicates the flow patterns between the compartments. For instance, in a simple SEIR model LHL87 (see also Figure 1a), a susceptible individual becomes exposed after close contact with infected individuals; exposed individuals become infectious after a latency period; and infected individuals become removed afterward due to recovery or death. Let $S(t)$, $E(t)$, $I(t)$, and $R(t)$ be the proportion of the population in each compartment at time $t$. The following differential equations provide the mathematical model:
$$\frac{\mathrm{d}S}{\mathrm{d}t} = -\beta S I, \quad \frac{\mathrm{d}E}{\mathrm{d}t} = \beta S I - \gamma E, \quad \frac{\mathrm{d}I}{\mathrm{d}t} = \gamma E - \lambda I, \quad \frac{\mathrm{d}R}{\mathrm{d}t} = \lambda I, \tag{1}$$
where $\beta$ is the average number of contacts per person per unit time, $\gamma$ is the inverse of the average latent period, during which a person has been infected but is not yet infectious, and $\lambda$ represents the recovery rate, measuring the proportion of the infected population that recovers or dies per unit time.
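To make the dynamics concrete, the following minimal forward-Euler simulation integrates the SEIR system 1. It is an illustrative sketch: the parameter values and initial proportions are made up rather than calibrated to COVID-19 data.

```python
# Forward-Euler integration of the SEIR system (1).
# Illustrative (not calibrated) parameters:
beta, gamma, lam = 0.5, 1.0 / 5.0, 1.0 / 10.0  # contact, latency, recovery rates
S, E, I, R = 0.99, 0.0, 0.01, 0.0              # initial proportions (sum to 1)
dt, T = 0.1, 200.0                             # time step and horizon

for _ in range(int(T / dt)):
    new_exposed    = beta * S * I * dt   # S -> E flow over one step
    new_infectious = gamma * E * dt      # E -> I flow
    new_removed    = lam * I * dt        # I -> R flow
    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_removed
    R += new_removed

print(f"S={S:.3f}, E={E:.3f}, I={I:.3f}, R={R:.3f}")
```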
Many infections, such as measles and chickenpox, confer long-term, if not lifelong, immunity, while others, such as influenza, do not. As evidenced by numerous epidemiological and clinical studies analyzing possible factors for COVID reinfections, COVID-19 falls precisely into the second category NBN22. Mathematically, this can be taken into account by adding a transition $I \to S$.
Though deterministic models such as 1 have received more attention in the literature, mainly due to their tractability, stochastic models have some advantages. The epidemic-spreading process is by nature stochastic. Moreover, introducing stochasticity into the system can account for numerical and empirical uncertainties, and can also provide probabilistic predictions, i.e., a range of possible scenarios together with their likelihoods. This is crucial for understanding the uncertainties in the estimates.
One class of stochastic epidemic models uses continuous-time Markov chains, where the state process takes discrete values but evolves in continuous time and is Markovian. In a simple stochastic SIS (susceptible-infectious-susceptible) model KL89 with a population of $N$ individuals, let $X_t$ be the number of infected individuals at time $t$, $\beta$ the rate at which infected individuals infect those susceptible, and $\lambda$ the rate at which an infected individual recovers and becomes susceptible again. The transition probabilities among the states $n$, $n+1$, $n-1$ are
$$\begin{aligned}
\mathbb{P}(X_{t+\Delta t} = n+1 \,|\, X_t = n) &= \tfrac{\beta n (N-n)}{N}\, \Delta t + o(\Delta t),\\
\mathbb{P}(X_{t+\Delta t} = n-1 \,|\, X_t = n) &= \lambda n\, \Delta t + o(\Delta t),\\
\mathbb{P}(X_{t+\Delta t} = n \,|\, X_t = n) &= 1 - \Big (\tfrac{\beta n (N-n)}{N} + \lambda n\Big )\, \Delta t + o(\Delta t).
\end{aligned}$$
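Trajectories of this chain can be sampled exactly with a Gillespie-type simulation: wait an exponentially distributed time with rate equal to the total transition rate out of the current state, then choose which transition occurred in proportion to its rate. Below is a minimal sketch with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gillespie-type simulation of the stochastic SIS chain sketched above.
N, beta, lam = 1000, 0.3, 0.1   # population size, infection rate, recovery rate
X, t, T = 10, 0.0, 100.0        # initial number infected, clock, horizon

while t < T and 0 < X < N:
    up = beta * X * (N - X) / N          # rate of n -> n+1 (one more infection)
    down = lam * X                       # rate of n -> n-1 (one recovery)
    total = up + down
    t += rng.exponential(1.0 / total)    # waiting time until the next event
    X += 1 if rng.random() < up / total else -1  # which event occurred

print(f"infected at t={t:.1f}: {X}")
```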
Another way to construct a stochastic model is by introducing white noise $W_t$ into 1 TBV05, All08, which we shall mainly consider in this paper and describe in detail in a later section.
Control of disease spread
After modeling how diseases are transmitted through a population, epidemiologists then design corresponding control measures and recommend health-related policies to the region planner.
In general, there are two types of interventions: pharmaceutical interventions (PIs), such as getting vaccinated and taking medicines, and nonpharmaceutical interventions (NPIs), such as requiring mandatory social distancing, quarantining infected individuals, and deploying protective resources. For the ongoing COVID-19, intervention policies that have been implemented include, but are not limited to, issuing lockdown or work-from-home policies, developing vaccines, and later expanding equitable vaccine distribution, providing telehealth programs, deploying protective resources and distributing free testing kits, educating the public on how the virus transmits, and focusing on surface disinfection.
Mathematically, this can be formulated as a control problem: the planner chooses the level of each policy affecting the transitions in 1 such that the region's overall cost is minimized. Generally, NPIs help mitigate the spread by lowering the infection rate $\beta$; e.g., a lockdown or work-from-home policy $\ell (t)$ implemented at time $t$ modifies the transition to
$$\beta S(t) I(t) \big (1-\theta \ell (t)\big )^2,$$
meaning that only $(1-\theta \ell (t))$ of the original susceptible and infectious individuals can contact each other, where $\theta$ describes the effectiveness of $\ell$ AAL20 (see Figure 1b). For instance, with $\theta = 0.9$ and $\ell (t) = 0.5$, transmission is reduced to a factor $(1 - 0.45)^2 \approx 0.3$ of its uncontrolled level. PIs such as taking preventive medicines, if available, will also lower the infection rate $\beta$, while using antidotes will increase the recovery rate $\lambda$. The modeling of vaccinations is more complex. Depending on the target disease, a vaccine may reduce $\beta$ (less chance of being infected) or increase $\lambda$ (faster recovery). It may even create a new compartment, "Vaccinated," in which individuals cannot be infected and which is an absorbing state if lifelong immunity is gained.
A region planner, taking into account the interventions' effects on the dynamics 1, decides on policy by weighing different costs. These costs may include the economic loss due to the decrease in productivity during a lockdown, the economic value of lives lost to the disease, and other social-welfare costs due to the aforementioned measures.
Game-theoretic SEIR Model
Game theory studies the strategic interactions among rational players and has applications in all fields of social science, as well as in computer science, financial mathematics, and epidemiology. A game is noncooperative if players cannot form alliances or if all agreements need to be self-enforcing. The Nash equilibrium is the most common kind of self-enforcing agreement Nas51: a collective strategy emerges from all players in the game such that no one has an incentive to deviate unilaterally.
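As a toy illustration of this definition, the sketch below checks every cell of a two-player, two-action game for the no-unilateral-deviation property. The cost matrices are made up for the example; since the planners in this article minimize costs, a deviation is an improvement when it lowers a player's own cost.

```python
import numpy as np

# cost[n][i, j] is player n's cost when player 1 plays action i and
# player 2 plays action j (illustrative values).
cost = [np.array([[2, 5], [1, 3]]),   # player 1's costs
        np.array([[2, 1], [5, 3]])]   # player 2's costs

for i in range(2):
    for j in range(2):
        best1 = cost[0][i, j] <= cost[0][1 - i, j]  # player 1 cannot improve alone
        best2 = cost[1][i, j] <= cost[1][i, 1 - j]  # player 2 cannot improve alone
        if best1 and best2:
            print(f"pure Nash equilibrium at actions ({i}, {j})")
```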
Nowadays, as the world is more interconnected than ever before, one region's epidemic policy inevitably influences neighboring regions. For instance, in the US, decisions made by the governor of New York will affect the situation in New Jersey, as so many people travel daily between the two states. Imagine that both state governors make decisions in their own states' interest, take into account the others' rational decisions, and may even compete for scarce resources (e.g., frontline workers and personal protective equipment). These are precisely the features of a noncooperative game. Computing the Nash equilibrium of such a game will provide valuable, qualitative guidance and insights for policymakers on the impact of specific policies.
We now introduce a multi-region stochastic SEIR model XBH$^{+}$22 to capture the game features in epidemic control. We give an illustration for two regions in Figure 1c. Each region's population is divided into four compartments: Susceptible, Exposed, Infectious, and Removed. Denote by $S^n_t, E^n_t, I^n_t, R^n_t$ the proportions of the population in the four compartments of region $n$ at time $t$. They satisfy the following stochastic differential equations (SDEs), which include interventions (PIs and NPIs), stochastic factors, and game features:
$$\begin{aligned}
\mathrm{d}S^n_t &= -\sum _{k=1}^N \beta ^{nk} S^n_t I^k_t (1-\theta \ell ^n_t)(1-\theta \ell ^k_t)\,\mathrm{d}t - v(h^n_t) S^n_t \,\mathrm{d}t + \sigma ^{S,n} S^n_t \,\mathrm{d}W^{S,n}_t, && (2)\\
\mathrm{d}E^n_t &= \sum _{k=1}^N \beta ^{nk} S^n_t I^k_t (1-\theta \ell ^n_t)(1-\theta \ell ^k_t)\,\mathrm{d}t - \gamma E^n_t \,\mathrm{d}t + \sigma ^{E,n} E^n_t \,\mathrm{d}W^{E,n}_t, && (3)\\
\mathrm{d}I^n_t &= \gamma E^n_t \,\mathrm{d}t - \lambda (h^n_t) I^n_t \,\mathrm{d}t + \sigma ^{I,n} I^n_t \,\mathrm{d}W^{I,n}_t, && (4)\\
\mathrm{d}R^n_t &= \lambda (h^n_t) I^n_t \,\mathrm{d}t + v(h^n_t) S^n_t \,\mathrm{d}t + \sigma ^{R,n} R^n_t \,\mathrm{d}W^{R,n}_t, && (5)
\end{aligned}$$
where $n \in \mathcal{N} \coloneqq \{1, 2, \ldots , N\}$ indexes the $N$ regions, the $W_t$ with different superscripts are white noises for each compartment in a specific region (with the $\sigma$'s the corresponding volatilities), and $\pmb{\ell }_t \equiv (\ell ^1_t, \ldots , \ell ^N_t)$ and $\mathbf{h}_t \equiv (h^1_t, \ldots , h^N_t)$ are the NPIs and PIs chosen by the region planners at time $t$. The planner of region $n$ minimizes its region's cost over a period $[0,T]$:
$$J^n(\pmb{\ell }, \mathbf{h}) = \mathbb{E}\left [\int _0^T e^{-rt}\Big ( w P^n \theta \ell ^n_t \big (S^n_t + E^n_t + I^n_t\big ) + a \chi P^n \kappa I^n_t + c\, p\, P^n I^n_t + \eta (h^n_t)^2\Big )\,\mathrm{d}t\right ]. \tag{6}$$
In 2, $\beta ^{nk}$ denotes the average number of contacts of infected people in region $k$ with susceptible individuals in region $n$ per unit time. Although some regions may not be geographically connected, transmission between the two is still possible due to air travel, though it is less intense than transmission within a region, i.e., $\beta ^{nk}>0$ and $\beta ^{nn} \gg \beta ^{nk}$ for all $k \neq n$. The decision for NPIs of region $n$'s planner is given by $\ell ^n_t \in [0, 1]$. In particular, it represents the fraction of the population under NPIs (such as social distancing) at time $t$. We assume that those under interventions cannot be infected. However, the policy may be only partially effective, as essential activities (food production and distribution, health, and basic services) have to continue. We use $\theta \in [0,1]$ to measure this effectiveness. The transition rate under policy $\pmb{\ell }$ thus becomes $\beta ^{nk} S^n_t I^k_t (1-\theta \ell ^n_t)(1-\theta \ell ^k_t)$. The case $\theta = 1$ means the policy is fully effective. One can also view $\theta$ as the level of public compliance.
The planner of region $n$ also makes the decision $h^n_t \in [0,1]$, which represents the effort, at time $t$, that the planner puts into PIs. We refer to this term, $h^n_t$, as the health policy. It influences the vaccination availability $v(\cdot )$ and the recovery rate $\lambda (\cdot )$ of this model. Here $v(h^n_t)$ denotes the vaccination availability of region $n$ at time $t$. In this model, we assume that once vaccinated, the susceptible individuals $v(h^n_t)S^n_t$ become immune to the disease and join the removed category $R^n_t$. This assumption is not entirely consistent with COVID-19, but it is reasonable for a short-term decision-making problem. We model $v$ as an increasing function of $h^n_t$; if the vaccine has not yet been developed, we can define $v(x) = 0$ for $x \leq \overline{h}$.
E
In 3, $\gamma$ governs the latent period during which a person is infected but not yet infectious; it is the inverse of the average latent time, and we assume $\gamma$ to be identical across all regions. The transition between $E^n$ and $I^n$ is proportional to the fraction of exposed individuals, i.e., $\gamma E^n_t$.
I and R
In 4 and 5, $\lambda (\cdot )$ represents the recovery rate. Per unit time, a fraction $\lambda (h^n)I^n$ of the infected individuals (counting both deaths and recoveries from the infection) joins the removed category $R^n$. The rate is determined by the average duration of infection $D$. We model both the duration and the recovery rate as depending on the health policy $h^n_t$ decided by the region's planner.
The more effort put into the region (e.g., expanding hospital capacity and creating more drive-thru testing sites), the more clinical resources the region will have and the more resources will be accessible to patients, which can accelerate recovery and reduce deaths. The death rate, denoted by $\kappa (\cdot )$, is crucial for computing the cost of region $n$.
Cost
In 6, each region planner faces four types of cost. The first is the economic activity loss due to the lockdown policy, where $w$ is the productivity rate per individual and $P^n$ is the population of region $n$. The second is due to the deaths of infected individuals. Here, $\kappa$ is the death rate, which we assume for simplicity to be constant, and $\chi$ denotes the economic cost of each death. The hyperparameter $a$ describes how planners weigh deaths and infections relative to other costs. The third is the in-patient cost, where $p$ is the hospitalization rate and $c$ is the cost per in-patient per day. The last term, $\eta (h^n_t)^2$, quantifies the grants for health policies. We choose a quadratic form so that the cost is convex in $h^n_t$; this accounts for the law of diminishing marginal utility: the marginal benefit of each additional unit of investment declines as investment increases. All costs are discounted by an exponential factor $e^{-rt}$, where $r$ is the risk-free interest rate, to take into account the time preference. Note that region $n$'s cost depends on all regions' policies $(\pmb{ \ell }, \mathbf{h})$, as $\{I^k, k \neq n\}$ appears in the dynamics of $S^n$; thus we write it as $J^n(\pmb{\ell }, \mathbf{h})$. The above model 2–5 is no doubt a prototype, and one can generalize it by considering reinfections (adding a transition from $R^n$ to $S^n$), an asymptomatic population (adding an asymptomatic compartment $A^n$), different control policies for $S^n$ and $I^n$ (using $\ell _S$ and $\ell _I$ in 2–3), or different fatality rates for young and elderly populations (introducing $\kappa _Y$ and $\kappa _E$ in 6).
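To see the model in action, here is a minimal Euler–Maruyama simulation of the dynamics 2–5 for two regions under fixed policies. All parameter values, the noise level, and the increasing placeholder functions $v$ and $\lambda$ are assumptions for illustration, not the calibrated choices of XBH$^{+}$22.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters for two regions; beta[n, k] with beta^{nn} >> beta^{nk}.
beta = np.array([[0.5, 0.05], [0.05, 0.5]])
gamma, theta, sigma = 0.2, 0.9, 0.02
ell = np.array([0.4, 0.1])       # lockdown levels ell^n, held constant here
h = np.array([0.5, 0.5])         # health-policy efforts h^n, held constant here

v = lambda x: 0.01 * x           # placeholder increasing vaccination availability
lam = lambda x: 0.1 + 0.05 * x   # placeholder policy-dependent recovery rate

S = np.array([0.99, 0.99]); E = np.zeros(2)
I = np.array([0.01, 0.01]); R = np.zeros(2)
dt, T = 0.1, 100.0

for _ in range(int(T / dt)):
    eff = 1.0 - theta * ell                            # policy-reduced contact factors
    # infections of region-n susceptibles by region-k infectious individuals
    force = (beta * np.outer(S * eff, I * eff)).sum(axis=1)
    dW = rng.normal(0.0, np.sqrt(dt), size=(4, 2))     # one noise per compartment
    dS = (-force - v(h) * S) * dt + sigma * S * dW[0]
    dE = (force - gamma * E) * dt + sigma * E * dW[1]
    dI = (gamma * E - lam(h) * I) * dt + sigma * I * dW[2]
    dR = (lam(h) * I + v(h) * S) * dt + sigma * R * dW[3]
    S += dS; E += dE; I += dI; R += dR

print("final infectious proportions by region:", np.round(I, 4))
```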
Nash equilibria and the HJB system
As explained above, the interaction between region planners can be viewed as a noncooperative game, for which the Nash equilibrium (NE) is the notion of optimality.
Under proper conditions, the NE is obtained by solving $N$-coupled Hamilton–Jacobi–Bellman (HJB) equations via dynamic programming CD18, Section 2.1.4. To simplify the notation, we concatenate the states into a vector form $\mathbf{X}_t \equiv [\mathbf{S}_t, \mathbf{E}_t, \mathbf{I}_t]^{\operatorname {T}}\equiv [S_t^1, \cdots , S_t^N, E_t^1, \cdots , E_t^N, I_t^1, \cdots , I_t^N]^{\operatorname {T}}\in \mathbb{R}^{3N}$, and denote its dynamics by
$$\mathrm{d}\mathbf{X}_t = b(t, \mathbf{X}_t, \pmb{\ell }_t, \mathbf{h}_t)\,\mathrm{d}t + \Sigma (\mathbf{X}_t)\,\mathrm{d}\mathbf{W}_t.$$
For the sake of simplicity, we omit the actual definitions of $b$, $f^n$, and $\Sigma$ and refer to XBH$^{+}$22 for further details. Let $V^n(t, \mathbf{x})$ be the minimized cost defined in 6 if the system starts at $\mathbf{X}_t = \mathbf{x}$. Then $V^n$, $n = 1, \ldots , N$, solves
$$\partial _t V^n + \inf _{(\ell ^n, h^n)} \Big \{ b(t, \mathbf{x}, \pmb{\ell }, \mathbf{h}) \cdot \nabla _{\mathbf{x}} V^n + f^n(t, \mathbf{x}, \pmb{\ell }, \mathbf{h}) \Big \} + \frac{1}{2}\operatorname {Tr}\big (\Sigma (\mathbf{x})^{\operatorname {T}} \operatorname {Hess}_{\mathbf{x}} V^n\, \Sigma (\mathbf{x})\big ) = 0, \qquad V^n(T, \mathbf{x}) = 0. \tag{7}$$
Solving for the NE of the game is equivalent to solving the $N$-coupled HJB equations of dimension $3N+1$ defined in Equation 7. Due to the high dimensionality, this is a formidable numerical challenge. We overcome it through a deep learning methodology we call Enhanced Deep Fictitious Play, broadly motivated by the method of fictitious play introduced by Brown Bro51.
Deep Learning. Deep learning leverages a class of computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction LBH15. Deep neural networks are effective tools for approximating unknown functions in high-dimensional spaces. In recent years, we have witnessed noticeable success in the marriage of deep learning and computational mathematics to solve high-dimensional differential equations. In particular, deep neural networks have shown a strong capability for solving stochastic control problems and games HJE18, HL22. Below, we use a simple example to illustrate how a deep neural network is determined for function approximation.
Suppose we would like to approximate a map $y=f(x)$ by a neural network $\mathcal{NN}(x,\mathbf{w})$, in which one seeks to obtain appropriate parameters of the network, $\mathbf{w}$, through a process called training. This consists of minimizing a loss function that measures the discrepancies between the approximation and the true values over the so-called training set $\{x_i\}_{i=1}^N$. Such a loss function has the general form
$$L(\mathbf{w}) = \frac{1}{N}\sum _{i=1}^{N} L_i\big (f(x_i), \mathcal{NN}(x_i, \mathbf{w})\big ) + \lambda \mathcal{R}(\mathbf{w}),$$
where $\mathcal{R}(\mathbf{w})$ is a regularization term on the parameters. The first term, $L_i(f(x_i),\mathcal{NN}(x_i, \mathbf{w}))$, ensures that the predictions of $\mathcal{NN}(x_i, \mathbf{w})$ approximately match the true values $f(x_i)$ on the training set $\{x_i\}_{i=1}^N$. Here, $L_i$ could be a direct distance like the $L^p$ norm or an error term derived from some complex simulations associated with $f(x_i)$ and $\mathcal{NN}(x_i, \mathbf{w})$. The hyperparameter $\lambda$ characterizes the relative importance of the two terms in $L(\mathbf{w})$. To find an optimal set of parameters $\mathbf{w}^\ast$, one minimizes $L(\mathbf{w})$ by the stochastic gradient descent (SGD) method BCN18. Regarding the architecture of $\mathcal{NN}(x, \mathbf{w})$, there is a wide variety of choices depending on the problem, for example, fully connected neural networks, convolutional neural networks, recurrent neural networks, and transformers. In this work, we chose fully connected neural networks to approximate the solution and constructed the loss function by simulating the backward differential equations corresponding to the HJB equations.
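As a concrete instance of this training procedure, the sketch below fits a small fully connected network to a scalar function with SGD in PyTorch. The target $f$, the architecture, and the hyperparameters are all illustrative; the weight-decay option plays the role of the regularization term $\lambda \mathcal{R}(\mathbf{w})$.

```python
import torch

# Hypothetical target function to approximate; any smooth map works here.
f = lambda x: torch.sin(3.0 * x)

# Training set {x_i}: 256 points sampled uniformly on [-1, 1].
x_train = 2.0 * torch.rand(256, 1) - 1.0
y_train = f(x_train)

# A small fully connected network NN(x, w) with two hidden layers.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

# SGD on L(w); weight_decay adds an L2 regularization term lambda * R(w).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
loss_fn = torch.nn.MSELoss()  # L_i taken as the squared L^2 distance

for step in range(5000):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)  # discrepancy on the training set
    loss.backward()                          # gradient of L(w) w.r.t. w
    optimizer.step()                         # SGD update of the parameters w
```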
Note that the HJB system 7 is difficult to solve due to the high dimensionality of the $N$-coupled equations. What if we could decouple the system into $N$ separate equations, each of which is easier to solve? This is the central idea of fictitious play, where we update our approximations to the optimal policies of each player iteratively, stage by stage. In each stage, instead of updating the approximations of all the players together by solving the giant system, we do it separately and in parallel: each player solves for her own optimal policy assuming that the other players take their approximate optimal strategies from the last stage. Let us denote the optimal policy and corresponding value function of the single player $n$ in stage $m$ by $\alpha ^{n, m}$ and $V^{n, m}$, respectively, and the collections of these two quantities over all players by $\pmb{\alpha }^{m}=(\alpha ^{1,m},\dots ,\alpha ^{N,m})$ and $\mathbf{V}^{m}=(V^{1,m},\dots ,V^{N,m})$. Finally, let us denote the optimal policies and corresponding value functions of all players except player $n$ by $\pmb{\alpha }^{-n,m}=(\alpha ^{1,m},\dots ,\alpha ^{n-1,m},\alpha ^{n+1,m},\dots ,\alpha ^{N,m})$ and $\mathbf{V}^{-n,m}=(V^{1,m},\dots ,V^{n-1,m},V^{n+1,m},\dots ,V^{N,m})$, where $\alpha ^{n,m}$ is a concatenation of lockdown policies and vaccination policies, i.e., $(\ell ^{n,m},h^{n,m})$. At stage $m+1$, we can solve for the optimal policy and value function of player $n$ given that the other players take the known policies $\pmb{\alpha }^{-n,m}$ with the corresponding values $\mathbf{V}^{-n,m}$. The logic of fictitious play is shown in Figure 2 (and sketched in code below), where players iteratively decide their optimal policies in stage $m+1$ based on the other players' optimal policies in stage $m$. This is slightly different from the usual simultaneous fictitious play, where the belief is described by the time average of past play; the distinction is further discussed in HH20.
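The following schematic loop shows the structure of the iteration. Here `solve_single_player` is a hypothetical placeholder for the single-player PDE/BSDE solve of a best response, not the authors' implementation.

```python
# Schematic fictitious play: policies[n] plays the role of alpha^{n,m}.
def fictitious_play(num_players, num_stages, initial_policies, solve_single_player):
    policies = list(initial_policies)          # stage-0 policies alpha^{n,0}
    for m in range(num_stages):
        new_policies = []
        for n in range(num_players):
            # Freeze everyone else at their stage-m policies alpha^{-n,m}.
            others = policies[:n] + policies[n + 1:]
            # Best response of player n; these N solves are independent,
            # so they can run in parallel.
            new_policies.append(solve_single_player(n, others))
        policies = new_policies                # advance to stage m+1
    return policies
```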
The Enhanced Deep Fictitious Play algorithm we have designed, built from the Deep Fictitious Play (DFP) algorithm HH20, reduces the time cost from $\mathcal{O}(M^2)$ to $\mathcal{O}(M)$ and the memory cost from $\mathcal{O}(M)$ to $\mathcal{O}(1)$, where $M$ is the total number of fictitious play iterations.
We illustrate one stage of Enhanced Deep Fictitious Play in Figure 3. At the $(m+1)^{\text{th}}$ stage, given the optimal policies $\pmb{\alpha }^{m}$ from the previous stage, for $n = 1, \ldots , N$, the algorithm solves the following partial differential equations (PDEs):
$$\partial _t V^{n,m+1} + \inf _{\alpha ^n} \Big \{ b\big (t, \mathbf{x}, \alpha ^n, \pmb{\alpha }^{-n,m}\big ) \cdot \nabla _{\mathbf{x}} V^{n,m+1} + f^n\big (t, \mathbf{x}, \alpha ^n, \pmb{\alpha }^{-n,m}\big ) \Big \} + \frac{1}{2}\operatorname {Tr}\big (\Sigma (\mathbf{x})^{\operatorname {T}} \operatorname {Hess}_{\mathbf{x}} V^{n,m+1}\, \Sigma (\mathbf{x})\big ) = 0, \qquad V^{n,m+1}(T, \mathbf{x}) = 0. \tag{8}$$
For simplicity of notation, we omit the stage number $m$ in the superscripts in the following discussion. The solution to Equation 8 is approximated by solving the equivalent backward stochastic differential equations (BSDEs) using neural networks HJE18:
$$\begin{cases}
\mathrm{d}\mathbf{X}_t = b\big (t, \mathbf{X}_t, \alpha (t, \mathbf{X}_t)\big )\,\mathrm{d}t + \Sigma (\mathbf{X}_t)\,\mathrm{d}\mathbf{W}_t,\\
\mathrm{d}Y_t = -F^n(t, \mathbf{X}_t, Y_t, \mathbf{Z}_t)\,\mathrm{d}t + \mathbf{Z}_t^{\operatorname {T}}\,\mathrm{d}\mathbf{W}_t,
\end{cases}$$
where $Y_t = V^n(t, \mathbf{X}_t)$ and $\mathbf{Z}_t = \Sigma (\mathbf{X}_t)^{\operatorname {T}} \nabla _{\mathbf{x}} V^n(t, \mathbf{X}_t)$.
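Schematically, the deep BSDE approach of HJE18 parameterizes the initial value $Y_0$ and the process $\mathbf{Z}_t$ by neural networks, simulates the discretized forward-backward system, and trains by penalizing the mismatch with the terminal condition. The sketch below shows this pattern in PyTorch; the dimension, dynamics $b$ and $\Sigma$, driver $F$, and network sizes are placeholders rather than the paper's actual specification.

```python
import torch

d, n_steps, dt, batch = 6, 50, 0.02, 256      # illustrative sizes

y0 = torch.nn.Parameter(torch.zeros(1))       # approximates V^n(0, x_0)
z_net = torch.nn.Sequential(                  # approximates Z_t = Sigma^T grad V^n
    torch.nn.Linear(d + 1, 64), torch.nn.ReLU(), torch.nn.Linear(64, d))

b = lambda t, x: -0.1 * x                     # placeholder drift
Sigma = lambda x: 0.05 * x                    # placeholder (diagonal) diffusion
F = lambda t, x, y, z: 0.2 * y                # placeholder BSDE driver

opt = torch.optim.Adam([y0] + list(z_net.parameters()), lr=1e-3)
for it in range(2000):
    x = torch.ones(batch, d)                  # initial state x_0
    y = y0.expand(batch, 1)
    for k in range(n_steps):
        t = torch.full((batch, 1), k * dt)
        dw = torch.randn(batch, d) * dt ** 0.5
        z = z_net(torch.cat([t, x], dim=1))
        # discretized BSDE step: dY = -F dt + Z . dW
        y = y - F(t, x, y, z) * dt + (z * dw).sum(dim=1, keepdim=True)
        x = x + b(t, x) * dt + Sigma(x) * dw  # forward SDE step
    loss = (y ** 2).mean()                    # enforce the terminal condition Y_T = 0
    opt.zero_grad(); loss.backward(); opt.step()
```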