Distributed Calibration Tutorial Using Julia Workers
This example will teach you how to use ClimaCalibrate to parallelize your calibration with workers. Workers are additional processes spun up to run code in a distributed fashion. In this tutorial, we will run ensemble members' forward models on different workers.
The example calibration uses CliMA's atmosphere model, ClimaAtmos.jl, in a column spatial configuration for 30 days to simulate outgoing radiative fluxes. Radiative fluxes are used in the observation map to calibrate the astronomical unit.
First, we load in some necessary packages.
using Distributed
import ClimaCalibrate as CAL
import ClimaAnalysis: SimDir, get, slice, average_xy
using ClimaUtilities.ClimaArtifacts
import EnsembleKalmanProcesses: I, ParameterDistributions.constrained_gaussian
Next, we add workers. These are primarily added by Distributed.addprocs or by starting Julia with multiple processes: julia -p <nprocs>.
addprocs itself initializes the workers and registers them with the main Julia process, but there are multiple ways to call it. The simplest is just addprocs(nprocs), which will create new local processes on your machine. The other is to use SlurmManager, which will acquire and start workers on Slurm resources. You can use keyword arguments to specify the Slurm resources:
addprocs(ClimaCalibrate.SlurmManager(nprocs), gpus_per_task = 1, time = "01:00:00")
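The keyword argument names here presumably map onto the corresponding Slurm flags, e.g. gpus_per_task = 1 and time = "01:00:00" correspond to --gpus-per-task and --time.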
For this example, we would add one worker if it were compatible with Documenter.jl:
addprocs(1)
We can see the number of workers and their ID numbers:
nworkers()
1
workers()
1-element Vector{Int64}:
1
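(Since addprocs was not actually run here, only the main process is present; the main process always has ID 1.)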
We can call functions on a worker using remotecall_fetch: we pass in the function and the worker ID, followed by the function arguments.
remotecall_fetch(*, 1, 4, 4)
16
ClimaCalibrate uses this functionality to run the forward model on workers.
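Conceptually, the scheduling can be pictured as in the sketch below. This is illustrative only, not ClimaCalibrate's actual implementation, and it assumes the CAL.forward_model and ensemble_size defined later in this tutorial:

# Illustrative sketch, not ClimaCalibrate's actual scheduler:
# round-robin the ensemble members over the available workers,
# then wait for every forward model to finish.
iteration = 0
futures = map(1:ensemble_size) do m
    pid = workers()[mod1(m, nworkers())]  # pick a worker for member m
    remotecall(CAL.forward_model, pid, iteration, m)
end
foreach(wait, futures)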
Since the workers start in their own Julia sessions, we need to import packages and declare variables. Distributed.@everywhere executes code on all workers, allowing us to load the code that they need.
@everywhere begin
    output_dir = joinpath("output", "climaatmos_calibration")
    import ClimaCalibrate as CAL
    import ClimaAtmos as CA
    import ClimaComms
end
output_dir = joinpath("output", "climaatmos_calibration")
mkpath(output_dir)
"output/climaatmos_calibration"
First, we need to set up the forward model, which takes in the sampled parameters, runs the model, and saves diagnostic output that can be processed and compared to observations. The forward model must override ClimaCalibrate.forward_model(iteration, member), since the workers will run this function in parallel.
Since forward_model(iteration, member) only takes in the iteration and member numbers, we need to use these as hooks to set the model parameters and output directory. Two useful functions (see the example below):
- path_to_ensemble_member: returns the ensemble member's output directory
- parameter_path: returns the ensemble member's parameter file, as specified by EKP.TOMLInterface
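For example, for iteration 1 and member 3, these return paths of the following form (the member directory matches the output layout shown later in this tutorial; the parameter file name is illustrative):

CAL.path_to_ensemble_member(output_dir, 1, 3)
# "output/climaatmos_calibration/iteration_001/member_003"
CAL.parameter_path(output_dir, 1, 3)
# e.g. "output/climaatmos_calibration/iteration_001/member_003/parameters.toml"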
The forward model below runs ClimaAtmos.jl in a minimal column spatial configuration.
@everywhere function CAL.forward_model(iteration, member)
    config_dict = Dict(
        "dt" => "2000secs",
        "t_end" => "30days",
        "config" => "column",
        "h_elem" => 1,
        "insolation" => "timevarying",
        "output_dir" => output_dir,
        "output_default_diagnostics" => false,
        "dt_rad" => "6hours",
        "rad" => "clearsky",
        "co2_model" => "fixed",
        "log_progress" => false,
        "diagnostics" => [
            Dict(
                "reduction_time" => "average",
                "short_name" => "rsut",
                "period" => "30days",
                "writer" => "nc",
            ),
        ],
    )
    # Set the output path for the current member
    member_path = CAL.path_to_ensemble_member(output_dir, iteration, member)
    config_dict["output_dir"] = member_path
    # Set the parameters for the current member
    parameter_path = CAL.parameter_path(output_dir, iteration, member)
    if haskey(config_dict, "toml")
        push!(config_dict["toml"], parameter_path)
    else
        config_dict["toml"] = [parameter_path]
    end
    # Turn off default diagnostics
    config_dict["output_default_diagnostics"] = false
    comms_ctx = ClimaComms.SingletonCommsContext()
    atmos_config = CA.AtmosConfig(config_dict; comms_ctx)
    simulation = CA.get_simulation(atmos_config)
    CA.solve_atmos!(simulation)
    return simulation
end
Next, the observation map is required to process a full ensemble of model output for the ensemble update step. The observation map just takes in the iteration number and always outputs an array. For observation map output G_ensemble, G_ensemble[:, m] must be the output of ensemble member m. This is required for compatibility with EnsembleKalmanProcesses.jl.
const days = 86_400
function CAL.observation_map(iteration)
    single_member_dims = (1,)
    G_ensemble = Array{Float64}(undef, single_member_dims..., ensemble_size)
    for m in 1:ensemble_size
        member_path = CAL.path_to_ensemble_member(output_dir, iteration, m)
        simdir_path = joinpath(member_path, "output_active")
        if isdir(simdir_path)
            simdir = SimDir(simdir_path)
            G_ensemble[:, m] .= process_member_data(simdir)
        else
            G_ensemble[:, m] .= NaN
        end
    end
    return G_ensemble
end
Separating out the individual ensemble member output processing often results in more readable code.
function process_member_data(simdir::SimDir)
    isempty(simdir.vars) && return NaN
    rsut =
        get(simdir; short_name = "rsut", reduction = "average", period = "30d")
    return slice(average_xy(rsut); time = 30days).data
end
process_member_data (generic function with 1 method)
Now, we can set up the remaining experiment details:
- ensemble size, number of iterations
- the prior distribution
- the observational data
ensemble_size = 30
n_iterations = 7
noise = 0.1 * I
prior = constrained_gaussian("astronomical_unit", 6e10, 1e11, 2e5, Inf)
ParameterDistribution with 1 entries:
'astronomical_unit' with EnsembleKalmanProcesses.ParameterDistributions.Constraint{EnsembleKalmanProcesses.ParameterDistributions.BoundedBelow}[Bounds: (200000.0, ∞)] over distribution EnsembleKalmanProcesses.ParameterDistributions.Parameterized(Distributions.Normal{Float64}(μ=24.153036641203013, σ=1.1528837102037748))
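Note that constrained_gaussian takes the desired mean (6e10) and standard deviation (1e11) in constrained (physical) space, along with lower and upper bounds; the printed μ and σ describe the underlying unconstrained Gaussian, which is why they look so different from the requested values.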
For a perfect model, we generate observations from the forward model itself. This is most easily done by creating an empty parameter file and running the 0th ensemble member:
@info "Generating observations"
parameter_file = CAL.parameter_path(output_dir, 0, 0)
mkpath(dirname(parameter_file))
touch(parameter_file)
simulation = CAL.forward_model(0, 0)
Simulation
├── Running on: CPUSingleThreaded
├── Output folder: output/climaatmos_calibration/iteration_000/member_000/output_0000
├── Start date: 2010-01-01T00:00:00
├── Current time: 2.592e6 seconds
└── Stop time: 2.592e6 seconds
Lastly, we use the observation map's processing function to generate the observations.
observations = Vector{Float64}(undef, 1)
observations .= process_member_data(SimDir(simulation.output_dir))
1-element Vector{Float64}:
126.61408233642578
Now we are ready to run our calibration, putting it all together using the calibrate function. The WorkerBackend will automatically use all workers available to the main Julia process. Other backends are available for forward models that can't use workers or need to be parallelized internally. The simplest backend is the JuliaBackend, which runs all ensemble members sequentially and does not require Distributed.jl. For more information, see the Backends page.
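For instance, the same calibration could be run sequentially, without any workers, by swapping in the JuliaBackend (a sketch mirroring the call below):

eki = CAL.calibrate(
    CAL.JuliaBackend,  # sequential backend; no Distributed.jl required
    ensemble_size,
    n_iterations,
    observations,
    noise,
    prior,
    output_dir,
)

Here we use the WorkerBackend: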
eki = CAL.calibrate(
    CAL.WorkerBackend,
    ensemble_size,
    n_iterations,
    observations,
    noise,
    prior,
    output_dir,
)
EnsembleKalmanProcesses.EnsembleKalmanProcess{Float64, Int64, EnsembleKalmanProcesses.Inversion{Float64, Nothing, Nothing}, EnsembleKalmanProcesses.DataMisfitController{Float64, String}, EnsembleKalmanProcesses.NesterovAccelerator{Float64}, Vector{EnsembleKalmanProcesses.UpdateGroup}, Nothing}(EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}[EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.152987101423427 23.113430885124696 … 23.79546679184483 23.458185685100936]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.314276809384843 23.3475630527547 … 24.02594623189308 23.69107263526273]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.806364765743115 24.210431895302587 … 24.867456745720258 24.54662479374919]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.707170632992735 25.151640212276256 … 25.689313291867546 25.44579050108312]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.749050075690878 25.924328176281474 … 25.92032321690414 25.99500927605184]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.73465849158283 25.849008722929923 … 25.732349332115845 25.72517396211104]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.730953693061558 25.772176612469863 … 25.730546596554536 25.740532121260692]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([25.730357562433504 25.737568836818504 … 25.730544386340878 25.73821872497533])], EnsembleKalmanProcesses.ObservationSeries{Vector{EnsembleKalmanProcesses.Observation{Vector{Vector{Float64}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{String}, Vector{UnitRange{Int64}}, Nothing}}, EnsembleKalmanProcesses.FixedMinibatcher{Vector{Vector{Int64}}, String, Random.TaskLocalRNG}, Vector{String}, Vector{Vector{Vector{Int64}}}, Nothing}(EnsembleKalmanProcesses.Observation{Vector{Vector{Float64}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{String}, Vector{UnitRange{Int64}}, Nothing}[EnsembleKalmanProcesses.Observation{Vector{Vector{Float64}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{LinearAlgebra.Diagonal{Float64, Vector{Float64}}}, Vector{String}, Vector{UnitRange{Int64}}, Nothing}([[126.61408233642578]], LinearAlgebra.Diagonal{Float64, Vector{Float64}}[[0.1;;]], LinearAlgebra.Diagonal{Float64, Vector{Float64}}[[10.0;;]], ["observation"], UnitRange{Int64}[1:1], nothing)], EnsembleKalmanProcesses.FixedMinibatcher{Vector{Vector{Int64}}, String, Random.TaskLocalRNG}([[1]], "order", Random.TaskLocalRNG()), ["series_1"], Dict("minibatch" => 1, "epoch" => 8), [[[1]], [[1]], [[1]], [[1]], [[1]], [[1]], [[1]], [[1]]], nothing), 30, EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}[EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([39.85659408569336 0.6747459769248962 … 2.639542579650879 1.3445465564727783]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([55.02159118652344 1.0777240991592407 … 4.185087203979492 2.1422078609466553]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([147.12844848632812 6.052639961242676 … 22.520036697387695 11.856183052062988]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([120.66987609863281 39.749656677246094 … 116.4455337524414 71.56832122802734]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([131.20887756347656 
186.22796630859375 … 184.7427978515625 214.46417236328125]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([127.4879150390625 160.21388244628906 … 126.89643096923828 125.0943603515625]), EnsembleKalmanProcesses.DataContainers.DataContainer{Float64}([126.54335021972656 137.41050720214844 … 126.44683837890625 128.99267578125])], Dict("unweighted_loss" => [4670.486791699228, 10300.134904744209, 4942.659411099652, 316.71318547868094, 209.53741357874006, 76.44949397120729, 3.083145742491936], "crps" => [51.104534617805335, 79.00542237028108, 44.9429642101481, 15.330046754365624, 12.822096230015402, 9.287274855179087, 3.2181165212479224], "bayes_loss" => [46704.87821788108, 103001.7829956311, 49427.830675322795, 3168.825144667669, 2097.109248274669, 766.2374925626625, 32.59856026526784], "unweighted_avg_rmse" => [140.6218451538201, 104.56976419598796, 76.70428422565261, 44.98372178028027, 35.46630524396896, 17.444328117370606, 5.9321131388346355], "avg_rmse" => [3178.8925046485892, 1885.903575444548, 1516.569391960629, 1002.6972149500854, 844.8193885631255, 633.9360165501255, 229.83533254067797], "loss" => [46704.86791699228, 103001.34904744208, 49426.59411099653, 3167.1318547868095, 2095.3741357874005, 764.4949397120729, 30.831457424919357]), EnsembleKalmanProcesses.DataMisfitController{Float64, String}([7], 1.0, "stop"), EnsembleKalmanProcesses.NesterovAccelerator{Float64}([25.731129853494767 25.74528779632506 … 25.730963122588015 25.734608163782493], 0.20434762801820305), [2.968722267710181e-6, 2.803938470483871e-5, 2.3469021226932025e-5, 3.1653700719539884e-5, 4.203330218630939e-5, 7.465010133656632e-5, 0.0005679206598224707], EnsembleKalmanProcesses.UpdateGroup[EnsembleKalmanProcesses.UpdateGroup([1], [1], Dict("[1,...,1]" => "[1,...,1]"))], EnsembleKalmanProcesses.Inversion{Float64, Nothing, Nothing}(nothing, nothing, false, 0.0), Random.MersenneTwister(1234, (0, 1002, 0, 245)), EnsembleKalmanProcesses.FailureHandler{EnsembleKalmanProcesses.Inversion, EnsembleKalmanProcesses.SampleSuccGauss}(EnsembleKalmanProcesses.var"#failsafe_update#174"()), EnsembleKalmanProcesses.Localizers.Localizer{EnsembleKalmanProcesses.Localizers.SECNice, Float64}(EnsembleKalmanProcesses.Localizers.var"#13#14"{EnsembleKalmanProcesses.Localizers.SECNice{Float64}}(EnsembleKalmanProcesses.Localizers.SECNice{Float64}(1000, 1.0, 1.0))), 0.1, nothing, false)
This page was generated using Literate.jl.