
Probabilistic Modelling with Turing.jl

David Widmann (@devmotion)

Uppsala University, Sweden

Julia User Group Munich, 5 July 2021



SIR model

Classical mathematical model of infectious diseases (e.g., influenza or COVID-19):

$$\begin{aligned}
\frac{\mathrm{d}S}{\mathrm{d}t}(t) &= -\beta S(t) I(t) / N \\
\frac{\mathrm{d}I}{\mathrm{d}t}(t) &= \beta S(t) I(t) / N - \gamma I(t) \\
\frac{\mathrm{d}R}{\mathrm{d}t}(t) &= \gamma I(t)
\end{aligned}$$

with infection rate β > 0, recovery/death rate γ > 0, and constant total population N = S(t) + I(t) + R(t).

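A minimal sketch of the sir! definition for use with DifferentialEquations.jl (the state layout u = (S, I, R) and parameter layout p = (β, γ, N) are assumptions):

    # In-place vector field of the SIR model
    function sir!(du, u, p, t)
        S, I, R = u
        β, γ, N = p
        du[1] = -β * S * I / N
        du[2] = β * S * I / N - γ * I
        du[3] = γ * I
        return nothing
    end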
sir! (generic function with 1 method)

Initial values: S(0) = 990, I(0) = 10, R(0) = 0

Parameters: β = 0.5, γ = 0.25

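With these values, the ODE can be set up and solved for 40 days, e.g. (a sketch, assuming OrdinaryDiffEq.jl):

    using OrdinaryDiffEq

    prob = ODEProblem(sir!, [990.0, 10.0, 0.0], (0.0, 40.0), (0.5, 0.25, 1000.0))
    sol = solve(prob, Tsit5())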

Reproduction number: R₀ = β/γ = 2.0


Inference problem

The model can be used to make predictions or to reason about possible interventions

But...

We do not know parameters such as

  • transmission rate β

  • recovery rate γ

  • initial number of infected individuals I(0)

We might just know

  • population size N

  • initial number of removed individuals R(0)=0

  • noisy (i.e., slightly incorrect) number of newly infected individuals for some days


Probabilistic modelling

Idea: model uncertainty of unknown parameters with probability distributions

Bayesian approach

Model uncertainty with conditional probability

$$p_{\Theta \mid Y}(\theta \mid y)$$

of unknown parameters θ given observations y.

E.g.,

  • θ: unknown parameters β, γ, and I(0) that we want to infer

  • y: number of newly infected individuals

Bayes' theorem

The conditional probability $p_{\Theta \mid Y}(\theta \mid y)$ can be calculated as

$$p_{\Theta \mid Y}(\theta \mid y) = \frac{p_{Y \mid \Theta}(y \mid \theta)\, p_{\Theta}(\theta)}{p_Y(y)}$$

Workflow

  • Choose model $p_{Y \mid \Theta}(y \mid \theta)$

    • e.g., describes how daily cases of newly infected individuals depend on parameters

  • Choose prior $p_{\Theta}(\theta)$

    • should incorporate initial beliefs and knowledge about θ

    • e.g., β and γ are positive and I(0) is a natural number

  • Compute

    $$p_{\Theta \mid Y}(\theta \mid y) = \frac{p_{Y \mid \Theta}(y \mid \theta)\, p_{\Theta}(\theta)}{p_Y(y)} = \frac{p_{Y \mid \Theta}(y \mid \theta)\, p_{\Theta}(\theta)}{\int p_{Y \mid \Theta}(y \mid \theta')\, p_{\Theta}(\theta')\, \mathrm{d}\theta'}$$


Example: Coin flip

Coin with unknown probability of heads

  • k times heads in n coin flips (observation)

  • unknown probability p of heads (parameter)


Likelihood

$$p_{Y \mid \Theta}(k \mid p) = \mathrm{Binom}(k; n, p)$$

n = 10, p = 0.5

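For illustration, the likelihood can be evaluated and plotted with Distributions.jl, e.g. (a sketch; the plotting code is an assumption):

    using Distributions, StatsPlots

    n, p = 10, 0.5
    # probability mass function of Binom(k; n, p) for k = 0, …, n
    bar(0:n, pdf.(Binomial(n, p), 0:n); xlabel="k", ylabel="probability", legend=false)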

Prior

We choose a Beta distribution as prior:

$$p_{\Theta}(p) = \mathrm{Beta}(p; \alpha, \beta)$$

Note

Without any further information, α=β=1 is a natural choice: without any observation, every possible value of p is equally likely.

α = 1, β = 1


Posterior

For these choices of likelihood and prior, Bayes' theorem yields

$$p_{\Theta \mid Y}(p \mid k) = \mathrm{Beta}(p; \alpha + k, \beta + n - k)$$

Note

Families of prior distributions for which the posterior is in the same family are called conjugate priors.
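Hence the posterior is available in closed form. E.g., with Distributions.jl and the values used below:

    using Distributions

    α, β, n, k = 1, 1, 50, 25
    posterior = Beta(α + k, β + n - k)  # Beta(26, 26)
    mean(posterior)                     # posterior mean of p: 0.5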


true p = 0.5, n = 50

coinflip_k = 25

α = 1, β = 1


⚠️ Issue

Often it is not possible to compute $p_{\Theta \mid Y}(\theta \mid y)$ exactly


Discrete approximation

Idea: Approximate $p_{\Theta \mid Y}(\theta \mid y)$ with a weighted mixture of point measures

$$p_{\Theta \mid Y}(\cdot \mid y) \approx \sum_i w_i \delta_{\theta_i}(\cdot)$$

where $w_i > 0$ and $\sum_i w_i = 1$.

This implies that

$$\mathbb{E}(\Theta \mid Y = y) \approx \sum_i w_i \theta_i$$

and more generally

$$\mathbb{E}(\phi(\Theta) \mid Y = y) \approx \sum_i w_i \phi(\theta_i)$$


number of samples: 500


The approximation can be constructed with e.g.

  • importance sampling

  • sequential Monte Carlo (SMC)

  • Markov Chain Monte Carlo (MCMC)

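For instance, a self-normalized importance sampling sketch for the coin flip posterior (the proposal choice and the number of samples are assumptions):

    using Distributions

    n, k = 50, 25
    θ = rand(Uniform(0, 1), 500)    # proposals θᵢ; here equal to the prior
    w = pdf.(Binomial.(n, θ), k)    # unnormalized weights (prior and proposal cancel)
    w ./= sum(w)                    # normalize such that ∑ᵢ wᵢ = 1
    sum(w .* θ)                     # ≈ E(Θ | Y = y)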

Standalone samplers

One main design principle of the Turing ecosystem is to have many smaller packages that

  • can be used independently of Turing

  • are supported by Turing

For MCMC, currently we have, e.g.,

  • AdvancedMH.jl

  • AdvancedHMC.jl

  • AdvancedPS.jl

  • EllipticalSliceSampling.jl

Additionally, Turing also supports DynamicHMC.jl.


AdvancedMH.jl

AdvancedMH.jl contains an implementation of the Metropolis-Hastings algorithm.


We have to define the unnormalized log density, i.e., the sum of the log-likelihood and the log prior, as a function of the unknown parameter.

logdensity_coinflip
#1 (generic function with 1 method)

The model is fully specified by the log density function.

model_coinflip_mh

We use a Metropolis-Hastings algorithm with a random walk proposal.

sampler_coinflip_mh
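Together, these pieces might look as follows (a sketch; the proposal scale is an assumption):

    using AdvancedMH, Distributions

    # unnormalized log density: log-likelihood + log prior
    function logdensity_coinflip(p)
        0 <= p <= 1 || return -Inf  # outside the support of the prior
        return logpdf(Binomial(50, p), 25) + logpdf(Beta(1, 1), p)
    end

    # the model is fully specified by the log density function
    model_coinflip_mh = DensityModel(logdensity_coinflip)

    # Metropolis-Hastings with a Gaussian random walk proposal
    sampler_coinflip_mh = RWMH(Normal(0, 0.1))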

AbstractMCMC.jl

AbstractMCMC.jl defines an interface for MCMC algorithms.

The interface is supposed to make it easy to implement MCMC algorithms. The default definitions provide users with

  • progress bars,

  • support for user-provided callbacks,

  • support for thinning and discarding initial samples,

  • support for sampling with a custom stopping criterion,

  • support for sampling multiple chains, serially or in parallel with multiple threads or multiple processes,

  • an iterator and a transducer for sampling Markov chains.


The main user-facing API is sample.

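E.g., drawing 1,000 samples with the model and sampler defined above (a sketch):

    chain = sample(model_coinflip_mh, sampler_coinflip_mh, 1_000)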

Some common keyword arguments:

  • discard_initial (default: 0): number of initial samples that are discarded

  • thinning (default: 1): factor by which samples are thinned

  • chain_type (default: Any): type of the returned chain

  • callback (default: nothing): callback that is called after every sampling step

  • progress (default: AbstractMCMC.PROGRESS[] which is true initially): toggles progress logging

iteration  chain  param_1   lp
        1      1  0.628946  -3.90739
        2      1  0.628946  -3.90739
        3      1  0.468516  -2.28612
        4      1  0.468516  -2.28612
        5      1  0.468516  -2.28612
        6      1  0.468516  -2.28612
        7      1  0.509164  -2.1952
        8      1  0.509164  -2.1952
        9      1  0.509164  -2.1952
       10      1  0.509164  -2.1952

AbstractMCMC.jl allows you to sample multiple chains with the following three algorithms:

  • MCMCSerial(): sample in serial (no parallelization)

  • MCMCThreads(): parallel sampling with multiple threads

  • MCMCDistributed(): parallel sampling with multiple processes

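E.g., three chains with 1,000 samples each, one chain per thread (a sketch):

    coinflip_mh = sample(model_coinflip_mh, sampler_coinflip_mh, MCMCThreads(), 1_000, 3)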
coinflip_mh:

iteration  chain  p         lp
        1      1  0.535545  -2.31347
        2      1  0.535545  -2.31347
        3      1  0.55383   -2.47826
        4      1  0.430189  -2.67898
        5      1  0.47345   -2.25739
        6      1  0.610073  -3.42876
        7      1  0.416558  -2.89294
        8      1  0.406132  -3.08382
        9      1  0.412865  -2.95782
       10      1  0.49555   -2.18878
        ⋮

The iterator interface allows us, e.g., to plot the samples at every step:


MCMCChains.jl

MCMCChains.jl contains tools for analyzing and visualizing MCMC chains.

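E.g., we can compute summary statistics and plot the samples (a sketch; it assumes the samples were collected as an MCMCChains.Chains object):

    using MCMCChains, StatsPlots

    describe(coinflip_mh)  # summary statistics and quantiles per parameter
    plot(coinflip_mh)      # trace plot and density estimate per parameter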
parameters      mean        std    naive_se        mcse      ess     rhat
        :p  0.501525  0.0708991  0.00129444  0.00135053  1473.82  1.00004

Posterior predictive check

It is often useful to generate simulated data using samples from the posterior distribution and compare them to the original data.

(see, e.g., "Statistical Rethinking" (section 3.3.2 - model checking) and the documentation)

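For the coin flip model, such a check might simulate k for every posterior sample of p and compare the simulations with the observation (a sketch; the plotting code is an assumption):

    using Distributions, StatsPlots

    ps = vec(coinflip_mh[:p])      # posterior samples of p
    ks = rand.(Binomial.(50, ps))  # one simulated observation per sample
    histogram(ks; label="simulated k")
    vline!([25]; label="observed k")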
coinflip_check_mh

Probabilistic programming

Source: Lecture slides

Developing probabilistic models and inference algorithms is a time-consuming and error-prone process. Probabilistic programming addresses this with:

  • Probabilistic models written as computer programs

  • Automatic inference (integral part of the programming language)

Advantages:

  • Fast development of models

  • Expressive models

  • Widely applicable inference algorithms


Turing

Turing is a probabilistic programming language (PPL).

Other PPLs

There are many other PPLs such as Stan, Birch, Gen, or Soss.

General design

  • Probabilistic models are implemented as Julia functions

  • One may use any Julia code inside of the model

  • Random variables and observations are declared with the ~ operator:

    @model function mymodel(x, y)
        ...
        # random variable `a` with prior distribution `dist_a`
        a ~ dist_a
    
        ...
    
        # observation `y` with data distribution `dist_y`
        y ~ dist_y
        ...
    end
  • The PPL is implemented in DynamicPPL.jl, including the @model macro

  • Turing.jl integrates and reexports different parts of the ecosystem such as the PPL, inference algorithms, and tools for automatic differentiation


Coinflip model

A possible Turing model:

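Reconstructed from the docstring and the expanded code shown below, the definition is essentially:

    using Turing

    """
        coinflip(n, k; α=1, β=1)

    Create a probabilistic model of `n` coinflips where head shows up `k` times and
    the prior distribution of probability `p` of heads is `Beta(α, β)`.
    """
    @model function coinflip(n, k; α::Real=1, β::Real=1)
        p ~ Beta(α, β)      # random variable: probability of heads
        k ~ Binomial(n, p)  # observation: number of heads
        return (; p, k)
    end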

coinflip is a Julia function: it creates a model of type DynamicPPL.Model that stores

  • the name of the model,

  • the generative function of the model (i.e., the function above that declares how to run the model),

  • and the arguments of the model and their default values.

8.3 μs

We can look up the documentation of coinflip:

coinflip(n, k; α=1, β=1)

Create a probabilistic model of n coinflips where head shows up k times and the prior distribution of probability p of heads is Beta(α, β).


And we can inspect how the @model macro rewrites the function definition:

quote
    $(Expr(:meta, :doc))
    function conflip(n, k; α::Real, β::Real)
        #= /home/david/Documents/Projects/github/Talks/2021/07/Turing/notebook.jl#==#6c9181b4-a157-4afc-862f-6f603a8023e4:1 =#
        var"##evaluator#2137" = ((__model__::DynamicPPL.Model, __varinfo__::DynamicPPL.AbstractVarInfo, __context__::DynamicPPL.AbstractContext, n, k, α::Real, β::Real)->begin
                    begin
                        #= /home/david/Documents/Projects/github/Talks/2021/07/Turing/notebook.jl#==#6c9181b4-a157-4afc-862f-6f603a8023e4:1 =#
                        #= /home/david/Documents/Projects/github/Talks/2021/07/Turing/notebook.jl#==#6c9181b4-a157-4afc-862f-6f603a8023e4:2 =#
                        begin
                            var"##vn#2129" = (AbstractPPL.VarName){:p}()
                            var"##inds#2130" = ()
                            var"##isassumption#2131" = begin
                                    let var"##vn#2132" = (AbstractPPL.VarName){:p}()
                                        if !((DynamicPPL.inargnames)(var"##vn#2132", __model__)) || (DynamicPPL.inmissings)(var"##vn#2132", __model__)
                                            true
                                        else
                                            p === missing
                                        end
                                    end
                                end
                            if var"##isassumption#2131"
                                p = (DynamicPPL.tilde_assume!)(__context__, (DynamicPPL.unwrap_right_vn)((DynamicPPL.check_tilde_rhs)(Beta(α, β)), var"##vn#2129")..., var"##inds#2130", __varinfo__)
                            else
                                (DynamicPPL.tilde_observe!)(__context__, (DynamicPPL.check_tilde_rhs)(Beta(α, β)), p, var"##vn#2129", var"##inds#2130", __varinfo__)
                            end
                        end
                        #= /home/david/Documents/Projects/github/Talks/2021/07/Turing/notebook.jl#==#6c9181b4-a157-4afc-862f-6f603a8023e4:3 =#
                        begin
                            var"##vn#2133" = (AbstractPPL.VarName){:k}()
                            var"##inds#2134" = ()
                            var"##isassumption#2135" = begin
                                    let var"##vn#2136" = (AbstractPPL.VarName){:k}()
                                        if !((DynamicPPL.inargnames)(var"##vn#2136", __model__)) || (DynamicPPL.inmissings)(var"##vn#2136", __model__)
                                            true
                                        else
                                            k === missing
                                        end
                                    end
                                end
                            if var"##isassumption#2135"
                                k = (DynamicPPL.tilde_assume!)(__context__, (DynamicPPL.unwrap_right_vn)((DynamicPPL.check_tilde_rhs)(Binomial(n, p)), var"##vn#2133")..., var"##inds#2134", __varinfo__)
                            else
                                (DynamicPPL.tilde_observe!)(__context__, (DynamicPPL.check_tilde_rhs)(Binomial(n, p)), k, var"##vn#2133", var"##inds#2134", __varinfo__)
                            end
                        end
                        #= /home/david/Documents/Projects/github/Talks/2021/07/Turing/notebook.jl#==#6c9181b4-a157-4afc-862f-6f603a8023e4:4 =#
                        return (; p, k)
                    end
                end)
        return (DynamicPPL.Model)(:conflip, var"##evaluator#2137", (DynamicPPL.namedtuple)(NamedTuple{(:n, :k, :α, :β), Tuple{Core.Typeof(n), Core.Typeof(k), Core.Typeof(α), Core.Typeof(β)}}, (n, k, α, β)), NamedTuple())
    end
end

Since coinflip is a regular Julia function, we can also extend it. E.g., we can allow users to specify the results of the coin flips as a vector of true (heads) and false (tails):

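E.g. (a sketch):

    # coin flips given as a vector of `true` (heads) and `false` (tails)
    function coinflip(flips::AbstractVector{Bool}; kwargs...)
        return coinflip(length(flips), sum(flips); kwargs...)
    end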
coinflip (generic function with 2 methods)

Inference

We choose the same parameters as above: n = 50, k = 25, α = 1, β = 1


If the model is called without arguments, it is executed and all random variables are sampled from the prior.

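E.g. (a sketch):

    coinflip_model = coinflip(50, 25)

    # executes the model: p is drawn from its prior Beta(1, 1)
    coinflip_model()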

Algorithm:

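Judging from the iteration numbers below, the sampling call could be something like the following sketch (the algorithm NUTS and all settings are assumptions):

    coinflip_turing = sample(
        coinflip_model, NUTS(), MCMCThreads(), 1_000, 3;
        discard_initial=100, thinning=10,
    )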
coinflip_turing
iteration  chain  p         lp
      101      1  0.528507  -2.2682
      111      1  0.513181  -2.20418
      121      1  0.431461  -2.66103
      131      1  0.467184  -2.29473
      141      1  0.486704  -2.20449
      151      1  0.398265  -3.24384
      161      1  0.519854  -2.22625
      171      1  0.471834  -2.26626
      181      1  0.41144   -2.98365
      191      1  0.468481  -2.28635
        ⋮

Analysis

parameters      mean        std    naive_se        mcse     ess    rhat  ess_per_sec
        :p  0.498296  0.0679105  0.00123987  0.00106426  2842.3  0.9998      182.902

Posterior predictive check

Similar to above, we can check if the posterior predictions fit the observed data.

coinflip_check_turing

In more complicated models it can be tedious to write out explicitly how the observations are sampled. Fortunately, we can use the following feature of Turing:

If the left hand side of a ~ expression is missing, it is sampled even if it is an argument to the model.

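E.g.:

    # k is an argument but `missing`, so it is sampled as well
    coinflip_predict = coinflip(50, missing)
    coinflip_predict()  # returns (; p, k) with simulated k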

generated_quantities can be used to compute/sample an array of the return values of a model for each set of samples in the chain:

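E.g. (a sketch):

    coinflip_return_turing = generated_quantities(coinflip(50, missing), coinflip_turing)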
coinflip_return_turing
1000×3 Matrix{NamedTuple{(:p, :k), Tuple{Float64, Int64}}}:
 (p = 0.528507, k = 26)  (p = 0.42211, k = 18)   (p = 0.466682, k = 24)
 (p = 0.513181, k = 23)  (p = 0.487452, k = 27)  (p = 0.474066, k = 22)
 (p = 0.431461, k = 25)  (p = 0.576841, k = 34)  (p = 0.411804, k = 18)
 (p = 0.467184, k = 25)  (p = 0.610048, k = 28)  (p = 0.445647, k = 21)
 (p = 0.486704, k = 23)  (p = 0.531082, k = 27)  (p = 0.462252, k = 22)
 (p = 0.398265, k = 17)  (p = 0.548944, k = 28)  (p = 0.462252, k = 19)
 (p = 0.519854, k = 26)  (p = 0.3966, k = 13)    (p = 0.592647, k = 35)
 ⋮                                               
 (p = 0.529214, k = 21)  (p = 0.541318, k = 33)  (p = 0.491337, k = 28)
 (p = 0.513032, k = 20)  (p = 0.462376, k = 24)  (p = 0.376187, k = 27)
 (p = 0.437821, k = 25)  (p = 0.612827, k = 30)  (p = 0.48598, k = 22)
 (p = 0.499246, k = 29)  (p = 0.575813, k = 27)  (p = 0.360154, k = 14)
 (p = 0.470747, k = 17)  (p = 0.408261, k = 18)  (p = 0.384433, k = 21)
 (p = 0.599063, k = 30)  (p = 0.420366, k = 22)  (p = 0.494838, k = 26)
coinflip_check_turing2

Automatic differentiation

Documentation

Algorithms such as Hamiltonian Monte Carlo (HMC) or the no-U-turn sampler (NUTS) require the gradient of the log density function as well.

Turing computes the gradient automatically with automatic differentiation (AD). Different backends and algorithms are supported, e.g., ForwardDiff.jl (forward mode) as well as Tracker.jl, ReverseDiff.jl, and Zygote.jl (reverse mode).


If not set explicitly in the sampler, Turing uses the currently active global AD backend. It can be set with

  • setadbackend(:forwarddiff),

  • setadbackend(:tracker),

  • setadbackend(:reversediff), or

  • setadbackend(:zygote).

Alternatively, the backend can be set explicitly in the sampler:

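E.g., with the parametric sampler types of Turing at the time of this talk (a sketch; the step size and number of leapfrog steps are illustrative):

    # use ForwardDiff with chunk size 1 for this sampler only
    sample(coinflip_model, HMC{Turing.ForwardDiffAD{1}}(0.1, 10), 1_000)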

Rule of thumb

Use forward-mode AD for models with few parameters and reverse-mode AD for models with many parameters or linear algebra operations.

If you use reverse-mode AD, in particular with Tracker or Zygote, you should avoid loops and use vectorized operations.


Bayesian inference of SIR model

Based on Simon Frost's example

Setting

SIR model:

$$\begin{aligned}
\frac{\mathrm{d}S}{\mathrm{d}t}(t) &= -\beta S(t) I(t) / N \\
\frac{\mathrm{d}I}{\mathrm{d}t}(t) &= \beta S(t) I(t) / N - \gamma I(t) \\
\frac{\mathrm{d}R}{\mathrm{d}t}(t) &= \gamma I(t)
\end{aligned}$$

We assume that

  • S(0)=990, I(0)=10, and R(0)=0 at day 0,

  • β=0.5, and

  • γ=0.25.


We observe the number of new cases for days 1 to 40 with some Poisson measurement error:

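A sketch of the data generation (all names are assumptions):

    using Distributions, Random

    Random.seed!(1)
    # daily decrease of susceptibles according to the ODE solution, with Poisson noise
    S = [u[1] for u in sol(0:40).u]
    sir_data = rand.(Poisson.(max.(S[1:(end - 1)] .- S[2:end], 0)))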

Turing model

We define a probabilistic model of the observations, the initial proportion of infected individuals i₀, and the parameters β and γ. We use uniform priors for i₀, β, and γ; all other parameters are fixed.

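A sketch in the spirit of Simon Frost's example (the prior ranges, the construction of the daily cases, and all names are assumptions):

    @model function sir_model(y, ::Type{T}=Float64) where {T}
        # uniform priors for the unknown parameters
        i₀ ~ Uniform(0, 1)
        β ~ Uniform(0, 1)
        γ ~ Uniform(0, 1)

        # initial state; element type T keeps it compatible with AD (see the note below)
        N = 1_000
        u0 = T[N * (1 - i₀), N * i₀, 0]
        prob = ODEProblem(sir!, u0, (0.0, 40.0), (β, γ, N))
        sol = solve(prob, Tsit5(); saveat=1.0)

        # daily new cases ≈ decrease of susceptibles, observed with Poisson noise
        S = [u[1] for u in sol.u]
        cases = max.(S[1:(end - 1)] .- S[2:end], 0)
        for i in eachindex(y)
            y[i] ~ Poisson(cases[i])
        end

        return (; i₀, β, γ, y)
    end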

Info

The initial state of the SIR model has to be compatible with automatic differentiation if a gradient-based sampler such as HMC or NUTS is used. This is ensured with the type parameter T which Turing sets to a suitable type for the sampler automatically.


Inference

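A sketch of the sampling call (algorithm and settings are assumptions):

    sir_chain = sample(sir_model(sir_data), NUTS(), MCMCThreads(), 1_000, 3)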
parameters       mean         std      naive_se         mcse      ess      rhat  ess_per_sec
       :i₀  0.0114602  0.00220059    4.01772e-5   4.09611e-5  2772.04   1.00003      21.0922
        :β  0.478066   0.0343876    0.000627829  0.000631054  2918.91   0.99957      22.2097
        :γ  0.227447   0.029688     0.000542027  0.000558474  2952.89  0.999492      22.4683

Posterior predictive check

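As above, we can simulate observations from the posterior by passing missing data (a sketch; names are assumptions):

    y_missing = Vector{Union{Missing, Float64}}(missing, length(sir_data))
    sir_return = generated_quantities(sir_model(y_missing), sir_chain)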
sir_return
1000×3 Matrix{NamedTuple{(:i₀, :β, :γ, :y), Tuple{Float64, Float64, Float64, Vector{Union{Missing, Float64}}}}}:
 (i₀ = 0.0117984, β = 0.470022, γ = 0.229061, y = [8.0, 9.0, 9.0, 11.0, 18.0, 18.0, 23.0, 21.0, 25.0, 40.0  …  8.0, 9.0, 3.0, 4.0, 6.0, 5.0, 3.0, 2.0, 3.0, 1.0])    …  (i₀ = 0.0154555, β = 0.403928, γ = 0.180931, y = [2.0, 11.0, 5.0, 15.0, 19.0, 16.0, 19.0, 24.0, 33.0, 27.0  …  10.0, 5.0, 1.0, 7.0, 12.0, 7.0, 5.0, 6.0, 8.0, 2.0])
 (i₀ = 0.0111037, β = 0.482378, γ = 0.225529, y = [5.0, 6.0, 15.0, 13.0, 18.0, 23.0, 20.0, 24.0, 36.0, 41.0  …  9.0, 11.0, 5.0, 3.0, 2.0, 0.0, 0.0, 1.0, 2.0, 4.0])     (i₀ = 0.0111962, β = 0.471755, γ = 0.226175, y = [7.0, 5.0, 6.0, 11.0, 5.0, 23.0, 17.0, 24.0, 32.0, 26.0  …  5.0, 6.0, 3.0, 4.0, 4.0, 2.0, 1.0, 2.0, 2.0, 4.0])
 (i₀ = 0.0108824, β = 0.486026, γ = 0.230159, y = [2.0, 7.0, 10.0, 15.0, 16.0, 18.0, 24.0, 30.0, 22.0, 29.0  …  5.0, 9.0, 4.0, 5.0, 3.0, 5.0, 1.0, 0.0, 3.0, 3.0])      (i₀ = 0.0111547, β = 0.486378, γ = 0.234458, y = [11.0, 12.0, 9.0, 11.0, 19.0, 23.0, 23.0, 25.0, 38.0, 31.0  …  10.0, 7.0, 3.0, 4.0, 3.0, 1.0, 0.0, 2.0, 0.0, 1.0])
 (i₀ = 0.01451, β = 0.447743, γ = 0.209684, y = [9.0, 11.0, 17.0, 14.0, 22.0, 18.0, 20.0, 37.0, 35.0, 38.0  …  6.0, 6.0, 9.0, 1.0, 4.0, 2.0, 1.0, 3.0, 0.0, 1.0])       (i₀ = 0.00966814, β = 0.483003, γ = 0.219866, y = [6.0, 8.0, 14.0, 8.0, 15.0, 22.0, 26.0, 37.0, 25.0, 38.0  …  5.0, 3.0, 1.0, 4.0, 5.0, 4.0, 4.0, 1.0, 1.0, 3.0])
 (i₀ = 0.00815737, β = 0.523331, γ = 0.255746, y = [5.0, 13.0, 7.0, 8.0, 9.0, 23.0, 24.0, 18.0, 32.0, 42.0  …  5.0, 3.0, 7.0, 4.0, 2.0, 4.0, 4.0, 2.0, 3.0, 1.0])       (i₀ = 0.0134974, β = 0.453871, γ = 0.217568, y = [5.0, 8.0, 5.0, 7.0, 19.0, 18.0, 25.0, 35.0, 28.0, 44.0  …  13.0, 3.0, 10.0, 6.0, 5.0, 4.0, 1.0, 0.0, 1.0, 3.0])
 (i₀ = 0.0100787, β = 0.491488, γ = 0.234026, y = [3.0, 15.0, 6.0, 10.0, 21.0, 21.0, 26.0, 25.0, 24.0, 39.0  …  10.0, 8.0, 5.0, 4.0, 2.0, 3.0, 4.0, 2.0, 2.0, 2.0])  …  (i₀ = 0.010865, β = 0.493458, γ = 0.252121, y = [12.0, 6.0, 8.0, 13.0, 9.0, 11.0, 29.0, 28.0, 29.0, 38.0  …  3.0, 4.0, 5.0, 6.0, 0.0, 4.0, 2.0, 4.0, 1.0, 1.0])
 (i₀ = 0.0097824, β = 0.507822, γ = 0.253211, y = [6.0, 7.0, 8.0, 14.0, 17.0, 25.0, 24.0, 24.0, 28.0, 21.0  …  11.0, 6.0, 4.0, 4.0, 6.0, 1.0, 2.0, 1.0, 3.0, 1.0])      (i₀ = 0.0108123, β = 0.474993, γ = 0.226182, y = [8.0, 5.0, 9.0, 9.0, 9.0, 11.0, 19.0, 24.0, 24.0, 43.0  …  12.0, 10.0, 1.0, 5.0, 2.0, 3.0, 8.0, 3.0, 3.0, 1.0])
 ⋮                                                                                                                                                                   ⋱  
 (i₀ = 0.0106781, β = 0.477503, γ = 0.218882, y = [3.0, 10.0, 11.0, 12.0, 14.0, 12.0, 24.0, 29.0, 34.0, 31.0  …  4.0, 6.0, 3.0, 2.0, 3.0, 3.0, 3.0, 5.0, 3.0, 1.0])     (i₀ = 0.0102792, β = 0.497104, γ = 0.242215, y = [6.0, 12.0, 16.0, 7.0, 7.0, 18.0, 17.0, 25.0, 29.0, 30.0  …  4.0, 5.0, 6.0, 3.0, 4.0, 3.0, 3.0, 2.0, 3.0, 1.0])
 (i₀ = 0.009452, β = 0.506246, γ = 0.25407, y = [4.0, 4.0, 12.0, 9.0, 13.0, 18.0, 21.0, 18.0, 38.0, 29.0  …  9.0, 9.0, 3.0, 3.0, 3.0, 2.0, 2.0, 2.0, 0.0, 2.0])      …  (i₀ = 0.0111689, β = 0.478732, γ = 0.230093, y = [6.0, 5.0, 9.0, 16.0, 10.0, 18.0, 33.0, 23.0, 29.0, 36.0  …  5.0, 4.0, 5.0, 0.0, 9.0, 2.0, 0.0, 2.0, 2.0, 1.0])
 (i₀ = 0.0123274, β = 0.435584, γ = 0.198006, y = [15.0, 4.0, 5.0, 11.0, 8.0, 16.0, 26.0, 22.0, 28.0, 35.0  …  17.0, 5.0, 3.0, 3.0, 7.0, 3.0, 5.0, 4.0, 2.0, 5.0])      (i₀ = 0.00997878, β = 0.489763, γ = 0.241924, y = [3.0, 7.0, 15.0, 13.0, 13.0, 10.0, 22.0, 25.0, 31.0, 36.0  …  9.0, 3.0, 8.0, 3.0, 3.0, 7.0, 2.0, 1.0, 3.0, 1.0])
 (i₀ = 0.0191635, β = 0.407819, γ = 0.178192, y = [8.0, 11.0, 7.0, 14.0, 19.0, 21.0, 28.0, 29.0, 48.0, 42.0  …  10.0, 8.0, 7.0, 3.0, 6.0, 5.0, 5.0, 1.0, 3.0, 0.0])     (i₀ = 0.0117381, β = 0.447, γ = 0.200144, y = [6.0, 7.0, 6.0, 12.0, 9.0, 15.0, 23.0, 23.0, 24.0, 22.0  …  7.0, 4.0, 6.0, 3.0, 5.0, 5.0, 3.0, 1.0, 1.0, 1.0])
 (i₀ = 0.0126501, β = 0.498931, γ = 0.240358, y = [8.0, 7.0, 9.0, 11.0, 23.0, 19.0, 31.0, 27.0, 39.0, 41.0  …  6.0, 7.0, 3.0, 9.0, 1.0, 2.0, 4.0, 1.0, 2.0, 0.0])       (i₀ = 0.0119046, β = 0.457044, γ = 0.211689, y = [3.0, 7.0, 7.0, 13.0, 15.0, 21.0, 23.0, 25.0, 35.0, 34.0  …  7.0, 8.0, 4.0, 5.0, 3.0, 5.0, 5.0, 4.0, 2.0, 0.0])
 (i₀ = 0.0131974, β = 0.447402, γ = 0.208132, y = [4.0, 5.0, 7.0, 12.0, 19.0, 15.0, 23.0, 25.0, 26.0, 43.0  …  12.0, 7.0, 10.0, 2.0, 1.0, 6.0, 1.0, 3.0, 5.0, 0.0])     (i₀ = 0.012124, β = 0.462968, γ = 0.222125, y = [3.0, 12.0, 11.0, 12.0, 12.0, 25.0, 21.0, 18.0, 38.0, 35.0  …  8.0, 8.0, 9.0, 4.0, 4.0, 4.0, 3.0, 0.0, 2.0, 2.0])
sir_check

COVID-19 replication study

Links: Blog post, Github repo

