MarkovChainMonteCarlo
Top-level class and methods
CalibrateEmulateSample.MarkovChainMonteCarlo.MCMCWrapper — Type
struct MCMCWrapper{AVV<:(AbstractVector), AV<:(AbstractVector)}
Top-level class holding all configuration information needed for MCMC sampling: the prior, emulated likelihood, and sampling algorithm (DensityModel and Sampler, respectively, in AbstractMCMC's terminology).
Fields
- prior::EnsembleKalmanProcesses.ParameterDistributions.ParameterDistribution: ParameterDistribution object describing the prior distribution on parameter values.
- observations::AbstractVector: [output_dim x N_samples] matrix of the given observation data.
- encoded_observations::AbstractVector: Vector of observations describing the data samples actually used during MCMC sampling (transformed into a space consistent with emulator outputs).
- log_posterior_map::AbstractMCMC.AbstractModel: AdvancedMH.DensityModel object used to evaluate the posterior density being sampled from.
- mh_proposal_sampler::AbstractMCMC.AbstractSampler: Object describing an MCMC sampling algorithm and its settings.
- sample_kwargs::NamedTuple: NamedTuple of other arguments to be passed to AbstractMCMC.sample().
CalibrateEmulateSample.MarkovChainMonteCarlo.MCMCWrapper — Method
MCMCWrapper(
mcmc_alg::CalibrateEmulateSample.MarkovChainMonteCarlo.MCMCProtocol,
observation::Union{AbstractMatrix, AbstractVector},
prior::EnsembleKalmanProcesses.ParameterDistributions.ParameterDistribution,
em::CalibrateEmulateSample.Emulators.Emulator;
init_params,
burnin,
kwargs...
)
Constructor for MCMCWrapper which performs the same standardization (SVD decorrelation) that was applied in the Emulator. It creates and wraps an instance of EmulatorPosteriorModel, for sampling from the Emulator, and MetropolisHastingsSampler, for generating the MC proposals.
- mcmc_alg: MCMCProtocol describing the MCMC sampling algorithm to use. Currently implemented algorithms are:
  - RWMHSampling: Metropolis-Hastings sampling from a vanilla random walk with fixed stepsize.
  - pCNMHSampling: Metropolis-Hastings sampling using the preconditioned Crank-Nicolson algorithm, which has a well-behaved small-stepsize limit.
  - BarkerSampling: Metropolis-Hastings sampling using the Barker proposal, which is robust to the choice of step-size parameters.
- obs_sample: Vector (for one sample) or matrix with columns as samples from the observation. Can, e.g., be picked from an Observation struct using get_obs_sample.
- prior: ParameterDistribution object containing the parameters' prior distributions.
- emulator: Emulator to sample from.
- stepsize: MCMC step size, applied as a scaling to the prior covariance.
- init_params: Starting parameter values for MCMC sampling.
- burnin: Initial number of MCMC steps to discard from output (pre-convergence).
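A minimal construction sketch follows, assuming `truth_sample`, `prior`, and `emulator` were produced by earlier calibrate and emulate stages; the parameter values and keyword settings are placeholders, not recommendations.

```julia
using CalibrateEmulateSample.MarkovChainMonteCarlo

# Hypothetical starting point for the chain, e.g. the mean of the final calibration
# ensemble, in the unconstrained (computational) parameter space.
init_params = [1.0, 2.0]

mcmc = MCMCWrapper(
    RWMHSampling(),    # MCMCProtocol: vanilla random-walk Metropolis-Hastings
    truth_sample,      # observation sample (vector, or matrix with samples as columns)
    prior,             # ParameterDistribution for the parameters
    emulator;          # trained Emulator to sample from
    init_params = init_params,
    burnin = 2_000,    # discard the first 2000 steps from the returned output
)
```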
StatsBase.sample — Function
sample([rng,] mcmc::MCMCWrapper, args...; kwargs...)
Extends the sample methods of AbstractMCMC (which extends StatsBase) to sample from the emulated posterior, using the MCMC sampling algorithm and Emulator configured in MCMCWrapper. Returns an MCMCChains.Chains object containing the samples.
Supported methods are:
- sample([rng,] mcmc, N; kwargs...): Return an MCMCChains.Chains object containing N samples from the emulated posterior.
- sample([rng,] mcmc, isdone; kwargs...): Sample from the model with the Markov chain Monte Carlo sampler until the convergence criterion isdone returns true, and return the samples. The function isdone has the signature isdone(rng, model, sampler, samples, state, iteration; kwargs...), where state and iteration are the current state and iteration of the sampler, respectively. It should return true when sampling should end, and false otherwise.
- sample([rng,] mcmc, parallel_type, N, nchains; kwargs...): Sample nchains Markov chains in parallel according to parallel_type, which may be MCMCThreads() or MCMCDistributed() for multithreaded or distributed sampling, respectively.
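A minimal sampling sketch, assuming `mcmc` was built as in the constructor example above: `discard_initial` is a standard AbstractMCMC keyword, while `stepsize` is assumed to be forwarded to the proposal sampler (cf. the constructor arguments above); both values are placeholders.

```julia
using CalibrateEmulateSample.MarkovChainMonteCarlo
using Random

rng = Random.MersenneTwister(41)

# Draw 100_000 samples from the emulated posterior as an MCMCChains.Chains object.
chain = MarkovChainMonteCarlo.sample(rng, mcmc, 100_000; stepsize = 0.1, discard_initial = 2_000)
```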
CalibrateEmulateSample.MarkovChainMonteCarlo.get_posterior — Function
get_posterior(
mcmc::CalibrateEmulateSample.MarkovChainMonteCarlo.MCMCWrapper,
chain::MCMCChains.Chains
) -> EnsembleKalmanProcesses.ParameterDistributions.ParameterDistribution
Returns a ParameterDistribution object corresponding to the empirical distribution of the samples in chain.
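For example, assuming `mcmc` and `chain` come from the sampling step above, the empirical posterior can be recovered and summarized (the `mean` and `cov` calls assume the Statistics methods that EnsembleKalmanProcesses defines for ParameterDistribution):

```julia
using Statistics  # for mean, cov

# Convert the MCMC samples back into a ParameterDistribution.
posterior = get_posterior(mcmc, chain)

# Summary statistics in the unconstrained (computational) parameter space.
post_mean = mean(posterior)
post_cov = cov(posterior)
```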
CalibrateEmulateSample.MarkovChainMonteCarlo.optimize_stepsize — Function
optimize_stepsize(
rng::Random.AbstractRNG,
mcmc::CalibrateEmulateSample.MarkovChainMonteCarlo.MCMCWrapper;
init_stepsize,
N,
max_iter,
target_acc,
sample_kwargs...
) -> Float64
Uses a heuristic to return a stepsize for the mh_proposal_sampler element of MCMCWrapper which yields fast convergence of the Markov chain.
The criterion used is that Metropolis-Hastings proposals should be accepted between 15% and 35% of the time.
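A tuning sketch, assuming `mcmc` exists; the keyword values are placeholders chosen only to illustrate the documented arguments.

```julia
using CalibrateEmulateSample.MarkovChainMonteCarlo
using Random

rng = Random.MersenneTwister(41)

# Search for a stepsize whose Metropolis-Hastings acceptance rate falls in the
# targeted 15-35% range, using short trial chains of length N.
new_step = optimize_stepsize(rng, mcmc; init_stepsize = 0.1, N = 2_000, max_iter = 20)

# Use the tuned stepsize for the production run.
chain = MarkovChainMonteCarlo.sample(rng, mcmc, 100_000; stepsize = new_step)
```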
See the AbstractMCMC sampling API page for background on our use of Turing.jl's AbstractMCMC API for MCMC sampling.
Sampler algorithms
CalibrateEmulateSample.MarkovChainMonteCarlo.MCMCProtocol — Type
abstract type MCMCProtocol
Type used to dispatch different methods of the MetropolisHastingsSampler constructor, corresponding to different sampling algorithms.
CalibrateEmulateSample.MarkovChainMonteCarlo.AutodiffProtocol — Type
abstract type AutodiffProtocol
Type used to dispatch different autodifferentiation methods, since different emulators have differing compatibility with autodiff packages.
CalibrateEmulateSample.MarkovChainMonteCarlo.ForwardDiffProtocol — Type
abstract type ForwardDiffProtocol <: CalibrateEmulateSample.MarkovChainMonteCarlo.AutodiffProtocol
Type to construct samplers for emulators compatible with ForwardDiff.jl autodifferentiation.
CalibrateEmulateSample.MarkovChainMonteCarlo.ReverseDiffProtocol — Type
abstract type ReverseDiffProtocol <: CalibrateEmulateSample.MarkovChainMonteCarlo.AutodiffProtocol
Type to construct samplers for emulators compatible with ReverseDiff.jl autodifferentiation.
CalibrateEmulateSample.MarkovChainMonteCarlo.RWMHSampling — Type
struct RWMHSampling{T<:CalibrateEmulateSample.MarkovChainMonteCarlo.AutodiffProtocol} <: CalibrateEmulateSample.MarkovChainMonteCarlo.MCMCProtocol
MCMCProtocol which uses Metropolis-Hastings sampling that generates proposals for new parameters as a vanilla random walk, based on the covariance of the prior.
CalibrateEmulateSample.MarkovChainMonteCarlo.pCNMHSampling — Type
struct pCNMHSampling{T<:CalibrateEmulateSample.MarkovChainMonteCarlo.AutodiffProtocol} <: CalibrateEmulateSample.MarkovChainMonteCarlo.MCMCProtocol
MCMCProtocol which uses Metropolis-Hastings sampling that generates proposals for new parameters according to the preconditioned Crank-Nicolson (pCN) algorithm, which is usable for MCMC in the stepsize → 0 limit, unlike the vanilla random walk. Steps are based on the covariance of the prior.
CalibrateEmulateSample.MarkovChainMonteCarlo.BarkerSampling — Type
struct BarkerSampling{T<:CalibrateEmulateSample.MarkovChainMonteCarlo.AutodiffProtocol} <: CalibrateEmulateSample.MarkovChainMonteCarlo.MCMCProtocol
MCMCProtocol which uses Metropolis-Hastings sampling that generates proposals for new parameters according to the Barker proposal.
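In practice a protocol instance is simply passed as the first argument to MCMCWrapper. A sketch is below, assuming the zero-argument constructors select a default AutodiffProtocol and that `truth_sample`, `prior`, `emulator`, and `init_params` are defined as in the constructor example above.

```julia
# Swapping the protocol changes only the proposal mechanism; the rest of the
# workflow (stepsize tuning, sampling, get_posterior) is unchanged.
protocol = pCNMHSampling()    # alternatives: RWMHSampling(), BarkerSampling()
mcmc = MCMCWrapper(protocol, truth_sample, prior, emulator; init_params = init_params)
```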
CalibrateEmulateSample.MarkovChainMonteCarlo.MetropolisHastingsSampler — Function
MetropolisHastingsSampler(
_::CalibrateEmulateSample.MarkovChainMonteCarlo.RWMHSampling{T<:CalibrateEmulateSample.MarkovChainMonteCarlo.AutodiffProtocol},
prior::EnsembleKalmanProcesses.ParameterDistributions.ParameterDistribution
) -> CalibrateEmulateSample.MarkovChainMonteCarlo.RWMetropolisHastings{AdvancedMH.RandomWalkProposal{false, _A}} where _A
Constructor for all Sampler objects, with one method for each supported MCMC algorithm.
The currently implemented Samplers use a Gaussian approximation to the prior: specifically, new Metropolis-Hastings proposals are a scaled combination of the old parameter values and a random shift distributed as a Gaussian with the same covariance as the prior.
This suffices for our current use case, in which our priors are Gaussian (possibly in a transformed space) and we assume enough fidelity in the Emulator that inference isn't prior-dominated.
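The sampler is normally constructed inside MCMCWrapper, but following the signature above it can also be built directly from a protocol and a prior; a sketch, assuming `prior` is a ParameterDistribution:

```julia
# Build the AdvancedMH-compatible proposal sampler by hand (MCMCWrapper does this internally).
mh_sampler = MetropolisHastingsSampler(RWMHSampling(), prior)
```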
Emulated posterior (Model)
CalibrateEmulateSample.MarkovChainMonteCarlo.EmulatorPosteriorModel — Function
EmulatorPosteriorModel(
prior::EnsembleKalmanProcesses.ParameterDistributions.ParameterDistribution,
em::CalibrateEmulateSample.Emulators.Emulator{FT<:AbstractFloat},
obs_vec::AbstractVector
) -> AdvancedMH.DensityModel{F} where F<:(CalibrateEmulateSample.MarkovChainMonteCarlo.var"#13#14"{EnsembleKalmanProcesses.ParameterDistributions.ParameterDistribution{PDType, CType, ST}, CalibrateEmulateSample.Emulators.Emulator{FT, VV}, <:AbstractVector{T}} where {PDType<:EnsembleKalmanProcesses.ParameterDistributions.ParameterDistributionType, CType<:EnsembleKalmanProcesses.ParameterDistributions.ConstraintType, ST<:AbstractString, FT<:AbstractFloat, VV<:(AbstractVector), T})
Factory which constructs AdvancedMH.DensityModel objects given a prior on the model parameters (prior) and an Emulator of the log-likelihood of the data given parameters. Together these yield the log posterior density we're attempting to sample from with the MCMC, which is the role of the DensityModel class in the AbstractMCMC interface.
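A sketch of building the density model directly, as MCMCWrapper does internally; `prior`, `emulator`, and `encoded_obs` (an observation sample already transformed into the emulator's output space) are assumed to exist, and the commented logdensity call is shown only as an illustration of how AdvancedMH evaluates such models.

```julia
# Wrap the emulated log-likelihood and the prior into an AdvancedMH.DensityModel.
density_model = EmulatorPosteriorModel(prior, emulator, encoded_obs)

# The model evaluates the emulated log-posterior at a parameter vector θ
# (unconstrained space), e.g. via:
# logp = AdvancedMH.logdensity(density_model, θ)
```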
Internals - MCMC State
CalibrateEmulateSample.MarkovChainMonteCarlo.MCMCState — Type
struct MCMCState{T, L<:Real} <: AdvancedMH.AbstractTransition
Extends the AdvancedMH.Transition (which encodes the current state of the MC during sampling) with a boolean flag to record whether this state is new (arising from accepting a Metropolis-Hastings proposal) or old (from rejecting a proposal).
Fields
- params::Any: Sampled value of the parameters at the current state of the MCMC chain.
- log_density::Real: Log probability of params, as computed by the model using the prior.
- accepted::Bool: Whether this state resulted from accepting a new MH proposal.
CalibrateEmulateSample.MarkovChainMonteCarlo.accept_ratio — Function
accept_ratio(chain::MCMCChains.Chains) -> Any
Fraction of MC proposals in chain which were accepted (according to Metropolis-Hastings).
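For example, assuming `chain` is the output of a sampling run above:

```julia
# Fraction of accepted Metropolis-Hastings proposals; optimize_stepsize targets
# roughly the 0.15-0.35 range.
acc = accept_ratio(chain)
println("MH acceptance rate: ", round(acc, digits = 3))
```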