Distributed Calibration Tutorial Using Julia Workers
This example will teach you how to use ClimaCalibrate to parallelize your calibration with workers. Workers are additional processes spun up to run code in a distributed fashion. In this tutorial, we will run each ensemble member's forward model on a different worker.
The example calibration uses CliMA's atmosphere model, ClimaAtmos.jl, in a column spatial configuration for 30 days to simulate outgoing radiative fluxes. Radiative fluxes are used in the observation map to calibrate the astronomical unit.
First, we load in some necessary packages.
using Distributed
import ClimaCalibrate as CAL
import ClimaAnalysis: SimDir, get, slice, average_xy
using ClimaUtilities.ClimaArtifacts
import EnsembleKalmanProcesses: I, ParameterDistributions.constrained_gaussian
Next, we add workers. These are primarily added with Distributed.addprocs or by starting Julia with multiple processes: julia -p <nprocs>.
addprocs itself initializes the workers and registers them with the main Julia process, but there are multiple ways to call it. The simplest is addprocs(nprocs), which creates new local processes on your machine. The other is to use SlurmManager, which acquires and starts workers on Slurm resources. You can use keyword arguments to specify the Slurm resources:
addprocs(ClimaCalibrate.SlurmManager(nprocs), gpus_per_task = 1, time = "01:00:00")
For this example, we would add one worker, if it were compatible with Documenter.jl:
addprocs(1)
We can see the number of workers and their ID numbers:
nworkers()
1
workers()
1-element Vector{Int64}:
1
We can call functions on a worker using remotecall_fetch. We pass in the function and the worker ID, followed by the function arguments.
remotecall_fetch(*, 1, 4, 4)
16
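remotecall_fetch is shorthand for two steps: remotecall schedules the call and immediately returns a Future, and fetch blocks until the result is ready. A minimal sketch using only the Distributed standard library:

```julia
using Distributed

# remotecall returns a Future right away; the work runs asynchronously.
future = remotecall(*, 1, 4, 4)   # schedule `4 * 4` on process 1
result = fetch(future)            # block until the result (16) is available
```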
ClimaCalibrate uses this functionality to run the forward model on workers.
Since the workers start in their own Julia sessions, we need to import packages and declare variables on each of them. Distributed.@everywhere executes code on all workers, allowing us to load the code that they need.
@everywhere begin
output_dir = joinpath("output", "climaatmos_calibration")
import ClimaCalibrate as CAL
import ClimaAtmos as CA
import ClimaComms
end
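To see that code wrapped in @everywhere really is defined on every process, here is a small self-contained check (the function name squared is made up for illustration):

```julia
using Distributed
addprocs(2)                        # two local worker processes

@everywhere squared(x) = x^2       # defined on the main process and both workers

# Each worker evaluates the function in its own session:
results = [remotecall_fetch(squared, w, 3) for w in workers()]

rmprocs(workers())                 # clean up the workers
```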
output_dir = joinpath("output", "climaatmos_calibration")
mkpath(output_dir)
"output/climaatmos_calibration"
First, we need to set up the forward model, which takes in the sampled parameters, runs the model, and saves diagnostic output that can be processed and compared to observations. The forward model must override ClimaCalibrate.forward_model(iteration, member), since the workers will run this function in parallel.
Since forward_model(iteration, member) only takes in the iteration and member numbers, we need to use these as hooks to set the model parameters and output directory. Two useful functions:
- path_to_ensemble_member: returns the ensemble member's output directory
- parameter_path: returns the ensemble member's parameter file as specified by EKP.TOMLInterface
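Based on the simulation output folder printed later in this tutorial, the member directories follow a zero-padded layout; illustratively (inferred from that output, not an authoritative API statement):

```julia
# CAL.path_to_ensemble_member(output_dir, 0, 0) returns something like
#   "output/climaatmos_calibration/iteration_000/member_000"
# CAL.parameter_path(output_dir, 0, 0) returns the path of the member's
# parameter TOML file inside that directory.
```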
The forward model below runs ClimaAtmos.jl in a minimal column spatial configuration.
@everywhere function CAL.forward_model(iteration, member)
config_dict = Dict(
"dt" => "2000secs",
"t_end" => "30days",
"config" => "column",
"h_elem" => 1,
"insolation" => "timevarying",
"output_dir" => output_dir,
"output_default_diagnostics" => false,
"dt_rad" => "6hours",
"rad" => "clearsky",
"co2_model" => "fixed",
"log_progress" => false,
"diagnostics" => [
Dict(
"reduction_time" => "average",
"short_name" => "rsut",
"period" => "30days",
"writer" => "nc",
),
],
)
# Set the output path for the current member
member_path = CAL.path_to_ensemble_member(output_dir, iteration, member)
config_dict["output_dir"] = member_path
# Set the parameters for the current member
parameter_path = CAL.parameter_path(output_dir, iteration, member)
if haskey(config_dict, "toml")
push!(config_dict["toml"], parameter_path)
else
config_dict["toml"] = [parameter_path]
end
# Turn off default diagnostics
config_dict["output_default_diagnostics"] = false
comms_ctx = ClimaComms.SingletonCommsContext()
atmos_config = CA.AtmosConfig(config_dict; comms_ctx)
simulation = CA.get_simulation(atmos_config)
CA.solve_atmos!(simulation)
return simulation
end
Next, the observation map is required to process a full ensemble of model output for the ensemble update step. The observation map takes in the iteration number and always outputs an array. For observation map output G_ensemble, G_ensemble[:, m] must be the output of ensemble member m. This is required for compatibility with EnsembleKalmanProcesses.jl.
const days = 86_400
function CAL.observation_map(iteration)
single_member_dims = (1,)
G_ensemble = Array{Float64}(undef, single_member_dims..., ensemble_size)
for m in 1:ensemble_size
member_path = CAL.path_to_ensemble_member(output_dir, iteration, m)
simdir_path = joinpath(member_path, "output_active")
if isdir(simdir_path)
simdir = SimDir(simdir_path)
G_ensemble[:, m] .= process_member_data(simdir)
else
G_ensemble[:, m] .= NaN
end
end
return G_ensemble
end
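The shape contract here is worth spelling out: EnsembleKalmanProcesses expects G_ensemble to be output_dim × ensemble_size, with one column per member. A minimal sketch with hypothetical dimensions:

```julia
output_dim, n_members = 1, 30       # hypothetical dimensions
G = Array{Float64}(undef, output_dim, n_members)
for m in 1:n_members
    G[:, m] .= Float64(m)           # stand-in for process_member_data(simdir)
end
size(G)                             # (1, 30): one column per ensemble member
```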
Separating out the individual ensemble member output processing often results in more readable code.
function process_member_data(simdir::SimDir)
isempty(simdir.vars) && return NaN
rsut =
get(simdir; short_name = "rsut", reduction = "average", period = "30d")
return slice(average_xy(rsut); time = 30days).data
end
process_member_data (generic function with 1 method)
Now, we can set up the remaining experiment details:
- ensemble size, number of iterations
- the prior distribution
- the observational data
ensemble_size = 30
n_iterations = 7
noise = 0.1 * I
prior = constrained_gaussian("astronomical_unit", 6e10, 1e11, 2e5, Inf)
ParameterDistribution with 1 entries:
'astronomical_unit' with EnsembleKalmanProcesses.ParameterDistributions.Constraint{EnsembleKalmanProcesses.ParameterDistributions.BoundedBelow}[Bounds: (200000.0, ∞)] over distribution EnsembleKalmanProcesses.ParameterDistributions.Parameterized(Distributions.Normal{Float64}(μ=24.153036641203013, σ=1.1528837102037748))
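The printed distribution lives in unconstrained space: constrained_gaussian solved for a normal N(μ, σ) whose bounded-below transform reproduces the requested physical mean of 6e10 and standard deviation of 1e11. We can sanity-check the mean with the lognormal formula, assuming the bounded-below transform is θ ↦ lower + exp(θ) (an assumption inferred from the constraint type, not stated in this tutorial):

```julia
# Unconstrained parameters printed above (assumed transform: θ ↦ lower + exp(θ)):
μ, σ = 24.153036641203013, 1.1528837102037748
lower = 2e5

# Mean of lower + exp(θ) with θ ~ Normal(μ, σ) is lower + exp(μ + σ²/2):
constrained_mean = lower + exp(μ + σ^2 / 2)
# ≈ 6.0e10, the mean requested in constrained_gaussian
```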
For a perfect model, we generate observations from the forward model itself. This is most easily done by creating an empty parameter file and running the 0th ensemble member:
@info "Generating observations"
parameter_file = CAL.parameter_path(output_dir, 0, 0)
mkpath(dirname(parameter_file))
touch(parameter_file)
simulation = CAL.forward_model(0, 0)
Simulation
├── Running on: CPUSingleThreaded
├── Output folder: output/climaatmos_calibration/iteration_000/member_000/output_0000
├── Start date: 2010-01-01T00:00:00
├── Current time: 2.592e6 seconds
└── Stop time: 2.592e6 seconds
Lastly, we use the observation map itself to generate the observations.
observations = Vector{Float64}(undef, 1)
observations .= process_member_data(SimDir(simulation.output_dir))
1-element Vector{Float64}:
126.61408233642578
Now we are ready to run our calibration, putting it all together using the calibrate function. The WorkerBackend will automatically use all workers available to the main Julia process. Other backends are available for forward models that can't use workers or need to be parallelized internally. The simplest backend is the JuliaBackend, which runs all ensemble members sequentially and does not require Distributed.jl. For more information, see the Backends page.
eki = CAL.calibrate(
CAL.WorkerBackend,
ensemble_size,
n_iterations,
observations,
noise,
prior,
output_dir,
)
EnsembleKalmanProcesses.EnsembleKalmanProcess{Float64, Int64, EnsembleKalmanProcesses.Inversion{Float64, Nothing, Nothing}, EnsembleKalmanProcesses.DataMisfitController{Float64, String}, EnsembleKalmanProcesses.NesterovAccelerator{Float64}, Vector{EnsembleKalmanProcesses.UpdateGroup}, Nothing}(EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}[EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.152987101423427 23.113430885124696 … 23.79546679184483 23.458185685100936]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.31427682048037 23.347563068750432 … 24.025946249964544 23.691072651283612]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.80636478408606 24.210431924006564 … 24.867456776185406 24.546624822370553]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.70717111890924 25.151639897476205 … 25.689312481402546 25.44579039228758]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.74904971998464 25.924328057681365 … 25.920322664390145 25.995009304649273]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.734658187099388 25.849007407032428 … 25.732352814072584 25.725174921666554]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.730953398181047 25.772176863449186 … 25.730545721829017 25.740533130267014]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.73035920781557 25.737569248320327 … 25.730542968395028 25.738219171728105])], EnsembleKalmanProcesses.ObservationSeries{Vector{EnsembleKalmanProcesses.Observation{Vector{Vector{Float64}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{String}, Vector{UnitRange{Int64}}}}, EnsembleKalmanProcesses.FixedMinibatcher{Vector{Vector{Int64}}, String, Random.TaskLocalRNG}, Vector{String}, Vector{Vector{Vector{Int64}}}, Nothing}(EnsembleKalmanProcesses.Observation{Vector{Vector{Float64}}, 
Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{String}, Vector{UnitRange{Int64}}}[EnsembleKalmanProcesses.Observation{Vector{Vector{Float64}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{String}, Vector{UnitRange{Int64}}}([[126.61408233642578]], LinearAlgebra.Diagonal{Float64, Vector{Float64}}[[0.1;;]], LinearAlgebra.Diagonal{Float64, Vector{Float64}}[[10.0;;]], ["observation"], UnitRange{Int64}[1:1])], EnsembleKalmanProcesses.FixedMinibatcher{Vector{Vector{Int64}}, String, Random.TaskLocalRNG}([[1]], "order", Random.TaskLocalRNG()), ["series_1"], Dict("minibatch" => 1, "epoch" => 8), [[[1]], [[1]], [[1]], [[1]], [[1]], [[1]], [[1]], [[1]]], nothing), 30, EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}[EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([39.85659408569336 0.674746036529541 … 2.6395413875579834 1.3445465564727783]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([55.02159118652344 1.0777240991592407 … 4.185087203979492 2.1422078609466553]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([147.12838745117188 6.052661895751953 … 22.520132064819336 11.85617733001709]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([120.67001342773438 39.749656677246094 … 116.44548034667969 71.56832122802734]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([131.20884704589844 186.22830200195312 … 184.7421112060547 214.46414184570312]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([127.48789978027344 160.21336364746094 … 126.8976058959961 125.09440612792969]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([126.54288482666016 137.41050720214844 … 126.44678497314453 128.99282836914062])], Dict("unweighted_loss" => [4670.485933358917, 10300.133069805008, 4942.669812914293, 316.712368418586, 
209.54459157352468, 76.44473499127552, 3.083391347481049], "crps" => [51.1045324295476, 79.00540629997174, 44.94304483600181, 15.33005184257227, 12.822163257615856, 9.287203469559508, 3.218105330115158], "bayes_loss" => [46704.86963447758, 103001.76464618555, 49427.93469411587, 3168.8169747419633, 2097.181026516917, 766.1899028148296, 32.601015934072095], "unweighted_avg_rmse" => [140.6218388739818, 104.56975719045538, 76.70429818506042, 44.98374866992235, 35.46633857885997, 17.4441699663798, 5.932239023844401], "avg_rmse" => [3178.892440361788, 1885.9035122499151, 1516.5695308616353, 1002.6979935702684, 844.821281939361, 633.9333701012528, 229.8341166903074], "loss" => [46704.859333589164, 103001.33069805008, 49426.69812914293, 3167.12368418586, 2095.445915735247, 764.4473499127553, 30.83391347481049]), EnsembleKalmanProcesses.DataMisfitController{Float64, String}([7], 1.0, "stop"), EnsembleKalmanProcesses.NesterovAccelerator{Float64}([25.73113071825774 25.74528796113327 … 25.730962382206403 25.734608773778312], 0.20434762801820305), [2.9687223877832336e-6, 2.8039377252368694e-5, 2.3469034648092797e-5, 3.165369391036908e-5, 4.2033113780150634e-5, 7.465072461370624e-5, 0.0005679266685767938], EnsembleKalmanProcesses.UpdateGroup[EnsembleKalmanProcesses.UpdateGroup([1], [1], Dict("[1,...,1]" => "[1,...,1]"))], EnsembleKalmanProcesses.Inversion{Float64, Nothing, Nothing}(nothing, nothing, false, 0.0), Random.MersenneTwister(1234, (0, 1002, 0, 245)), EnsembleKalmanProcesses.FailureHandler{EnsembleKalmanProcesses.Inversion, EnsembleKalmanProcesses.SampleSuccGauss}(EnsembleKalmanProcesses.var"#failsafe_update#166"()), EnsembleKalmanProcesses.Localizers.Localizer{EnsembleKalmanProcesses.Localizers.SECNice, Float64}(EnsembleKalmanProcesses.Localizers.var"#13#14"{EnsembleKalmanProcesses.Localizers.SECNice{Float64}}(EnsembleKalmanProcesses.Localizers.SECNice{Float64}(1000, 1.0, 1.0))), 0.1, nothing, false)
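For comparison, the same calibration could be run without any workers by swapping the backend (a sketch assuming the positional signature used above; it runs the ensemble members sequentially):

```julia
# Sequential fallback: no Distributed.jl setup required.
eki = CAL.calibrate(
    CAL.JuliaBackend,
    ensemble_size,
    n_iterations,
    observations,
    noise,
    prior,
    output_dir,
)
```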
This page was generated using Literate.jl.