API

Initial conditions

General

ClimaAtmos.InitialConditions.InitialCondition (Type)
InitialCondition

A mechanism for specifying the LocalState of an AtmosModel at every point in the domain. Given some initial_condition, calling initial_condition(params) returns a function of the form local_state(local_geometry)::LocalState.

source
ClimaAtmos.InitialConditions.hydrostatic_pressure_profile (Function)
hydrostatic_pressure_profile(; thermo_params, p_0, [T, θ, q_tot, z_max])

Solves the initial value problem p'(z) = -g * ρ(z) for all z ∈ [0, z_max], given p(0) = p_0 and either T(z) or θ(z), and optionally also q_tot(z). If q_tot(z) is not given, it is assumed to be 0. If z_max is not given, it is assumed to be 30 km. Note that z_max should be the maximum elevation up to which the specified profiles T(z), θ(z), and/or q_tot(z) are valid.
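This kind of initial value problem can be integrated numerically. The sketch below uses a simple RK4 loop for a dry ideal-gas atmosphere with a prescribed T(z); the constants, step size, and integrator are illustrative, not the ClimaAtmos implementation (which also handles θ(z) and q_tot(z)):

```julia
# Minimal sketch: integrate p'(z) = -g * ρ(z) with ρ = p / (R_d * T(z))
# for a dry ideal gas. Illustrative constants, not ClimaAtmos parameters.
const g = 9.81     # gravitational acceleration [m/s²]
const R_d = 287.0  # dry-air gas constant [J/(kg K)]

function hydrostatic_pressure(T, p_0; z_max = 30e3, dz = 100.0)
    zs = 0.0:dz:z_max
    ps = similar(collect(zs))
    ps[1] = p_0
    dpdz(p, z) = -g * p / (R_d * T(z))  # hydrostatic balance for a dry ideal gas
    for (i, z) in enumerate(zs[1:end-1])
        # One classical RK4 step from z to z + dz:
        k1 = dpdz(ps[i], z)
        k2 = dpdz(ps[i] + dz / 2 * k1, z + dz / 2)
        k3 = dpdz(ps[i] + dz / 2 * k2, z + dz / 2)
        k4 = dpdz(ps[i] + dz * k3, z + dz)
        ps[i + 1] = ps[i] + dz / 6 * (k1 + 2k2 + 2k3 + k4)
    end
    return collect(zs), ps
end

# Isothermal atmosphere: the analytic solution is p(z) = p_0 * exp(-g z / (R_d T)).
zs, ps = hydrostatic_pressure(z -> 250.0, 1e5)
```

For an isothermal profile the numerical result can be checked against the analytic exponential solution.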

source

Plane / Box

ClimaAtmos.InitialConditions.ConstantBuoyancyFrequencyProfile (Type)
ConstantBuoyancyFrequencyProfile()

An InitialCondition with a constant Brunt-Väisälä frequency and constant wind velocity, where the pressure profile is hydrostatically balanced. This is currently the only InitialCondition that supports the approximation of a steady-state solution.
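For background, a constant Brunt-Väisälä frequency N relates to the dry potential temperature through N² = (g/θ) dθ/dz, which integrates to an exponential profile. A minimal sketch with illustrative values (not the ClimaAtmos parameter set):

```julia
# Constant buoyancy frequency implies θ(z) = θ₀ * exp(N² z / g),
# since N² = (g / θ) dθ/dz. Illustrative values only.
const g = 9.81   # gravitational acceleration [m/s²]
N = 0.01         # Brunt-Väisälä frequency [1/s]
θ₀ = 300.0       # surface potential temperature [K]
θ(z) = θ₀ * exp(N^2 * z / g)
```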

source

Sphere

Cases from literature

Helper

ClimaAtmos.InitialConditions.ColumnInterpolatableField (Type)
ColumnInterpolatableField(::Fields.ColumnField)

A column field object that can be interpolated in the z-coordinate. For example:

cif = ColumnInterpolatableField(column_field)
z = 1.0
column_field_at_z = cif(z)
Warn

This function allocates and is not GPU-compatible, so avoid calling it inside step!; use it only for initialization.

source

Jacobian

ClimaAtmos.JacobianAlgorithm (Type)
JacobianAlgorithm

A description of how to compute the matrix $∂R/∂Y$, where $R(Y)$ denotes the residual of an implicit step with the state $Y$. Concrete implementations of this abstract type should define 3 methods:

  • jacobian_cache(alg::JacobianAlgorithm, Y, atmos)
  • update_jacobian!(alg::JacobianAlgorithm, cache, Y, p, dtγ, t)
  • invert_jacobian!(alg::JacobianAlgorithm, cache, ΔY, R)

See Implicit Solver for additional background information.
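A stand-alone analogue of this three-method interface can be sketched as follows; the types, the toy tendency, and the sign convention below are illustrative placeholders, not ClimaAtmos definitions:

```julia
# Stand-alone analogue of the three-method interface. The real methods
# dispatch on ClimaAtmos.JacobianAlgorithm; everything here is a toy.
abstract type ToyJacobianAlgorithm end

struct DiagonalJacobian <: ToyJacobianAlgorithm end

# jacobian_cache: allocate storage for ∂R/∂Y; here, just its diagonal.
jacobian_cache(::DiagonalJacobian, Y) = (; diag = similar(Y))

# update_jacobian!: fill the cache. For the toy tendency T(Y) = -Y and the
# (assumed) convention ∂R/∂Y = dtγ * ∂T/∂Y - I, the diagonal is -dtγ - 1.
function update_jacobian!(::DiagonalJacobian, cache, Y, dtγ)
    cache.diag .= -dtγ - 1
    return nothing
end

# invert_jacobian!: overwrite ΔY with (∂R/∂Y)⁻¹ * R.
function invert_jacobian!(::DiagonalJacobian, cache, ΔY, R)
    ΔY .= R ./ cache.diag
    return nothing
end

alg = DiagonalJacobian()
Y = ones(4)
cache = jacobian_cache(alg, Y)
update_jacobian!(alg, cache, Y, 0.5)  # diagonal becomes -1.5
ΔY = similar(Y)
R = fill(3.0, 4)
invert_jacobian!(alg, cache, ΔY, R)   # ΔY == fill(-2.0, 4)
```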

source
ClimaAtmos.ManualSparseJacobian (Type)
ManualSparseJacobian(
    topography_flag,
    diffusion_flag,
    sgs_advection_flag,
    sgs_entr_detr_flag,
    sgs_mass_flux_flag,
    sgs_nh_pressure_flag,
    approximate_solve_iters,
)

A JacobianAlgorithm that approximates the Jacobian using analytically derived tendency derivatives and inverts it using a specialized nested linear solver. Certain groups of derivatives can be toggled on or off by setting their DerivativeFlags to either UseDerivative or IgnoreDerivative.

Arguments

  • topography_flag::DerivativeFlag: whether the derivative of vertical contravariant velocity with respect to horizontal covariant velocity should be computed
  • diffusion_flag::DerivativeFlag: whether the derivatives of the grid-scale diffusion tendency should be computed
  • sgs_advection_flag::DerivativeFlag: whether the derivatives of the subgrid-scale advection tendency should be computed
  • sgs_entr_detr_flag::DerivativeFlag: whether the derivatives of the subgrid-scale entrainment and detrainment tendencies should be computed
  • sgs_mass_flux_flag::DerivativeFlag: whether the derivatives of the subgrid-scale mass flux tendency should be computed
  • sgs_nh_pressure_flag::DerivativeFlag: whether the derivatives of the subgrid-scale non-hydrostatic pressure drag tendency should be computed
  • approximate_solve_iters::Int: number of iterations to take for the approximate linear solve required when the diffusion_flag is UseDerivative

source
ClimaAtmos.AutoDenseJacobian (Type)
AutoDenseJacobian([max_simultaneous_derivatives])

A JacobianAlgorithm that computes the Jacobian using forward-mode automatic differentiation, without making any assumptions about sparsity structure. After the dense matrix for each spatial column is updated, parallel_lu_factorize! computes its LU factorization in parallel across all columns. The linear solver is also run in parallel with parallel_lu_solve!.

To automatically compute the derivative of implicit_tendency! with respect to Y, we first create copies of Y, p.precomputed, and p.scratch in which every floating-point number is replaced by a dual number from ForwardDiff.jl. A dual number can be expressed as $Xᴰ = X + ε₁x₁ + ε₂x₂ + ... + εₙxₙ$, where $X$ and $xᵢ$ are floating-point numbers, and where $εᵢ$ is a hyperreal number that satisfies $εᵢεⱼ = 0$.

If the $i$-th value in dual column state $Yᴰ$ is set to $Yᴰᵢ = Yᵢ + 1εᵢ$, where $Yᵢ$ is the $i$-th value in the column state $Y$, then evaluating the implicit tendency of the dual column state generates a dense representation of the Jacobian matrix $∂T/∂Y$. Specifically, the $i$-th value in the dual column tendency $Tᴰ = T(Yᴰ)$ is $Tᴰᵢ = Tᵢ + (∂Tᵢ/∂Y₁)ε₁ + ... + (∂Tᵢ/∂Yₙ)εₙ$, where $Tᵢ$ is the $i$-th value in the column tendency $T(Y)$, and where $n$ is the number of values in $Y$. In other words, the entry in the $i$-th row and $j$-th column of the matrix $∂T/∂Y$ is the coefficient of $εⱼ$ in $Tᴰᵢ$.

The size of the dense matrix scales as $O(n^2)$, leading to very large memory requirements at higher vertical resolutions.
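The seeding described above can be sketched directly with ForwardDiff.jl, using a toy tendency in place of implicit_tendency!:

```julia
using ForwardDiff

# Toy "tendency" standing in for implicit_tendency!:
T(Y) = [Y[1]^2 + Y[2], 3 * Y[2]]

Y = [2.0, 5.0]

# Yᴰᵢ = Yᵢ + 1εᵢ: each entry carries two dual components, tracking
# derivatives with respect to Y₁ and Y₂.
Yᴰ = [ForwardDiff.Dual(Y[1], 1.0, 0.0), ForwardDiff.Dual(Y[2], 0.0, 1.0)]
Tᴰ = T(Yᴰ)

# The coefficient of εⱼ in Tᴰᵢ is ∂Tᵢ/∂Yⱼ, i.e. entry (i, j) of ∂T/∂Y:
J = [ForwardDiff.partials(Tᴰ[i], j) for i in 1:2, j in 1:2]
# J == [4.0 1.0; 0.0 3.0]
```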

When the number of values in each column is very large, computing the entire dense matrix in a single evaluation of implicit_tendency! can be too expensive to compile and run. So, the dual number components are split into batches with a maximum size of max_simultaneous_derivatives, and we call implicit_tendency! once for each batch. That is, if the batch size is $s$, then the first batch evaluates the coefficients of $ε₁$ through $εₛ$, the second evaluates the coefficients of $εₛ₊₁$ through $ε₂ₛ$, and so on until $εₙ$. The default batch size is 32.
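ForwardDiff.jl exposes the same batching idea through its chunk size, which plays the role of max_simultaneous_derivatives; a small sketch with a toy function and illustrative sizes:

```julia
using ForwardDiff

# With n inputs and chunk size s, the function is evaluated ceil(n / s)
# times, each pass carrying s dual components.
f(x) = x .^ 2
x = collect(1.0:8.0)                                            # n = 8
cfg = ForwardDiff.JacobianConfig(f, x, ForwardDiff.Chunk{4}())  # s = 4
J = ForwardDiff.jacobian(f, x, cfg)  # two passes of 4 derivatives each
```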

source

Internals

ClimaAtmos.parallel_lu_factorize! (Function)
parallel_lu_factorize!(device, matrices, ::Val{N})

Runs a parallel LU factorization algorithm on the specified device. If each slice matrices[1:N, 1:N, i] represents a matrix $Mᵢ$, this function overwrites it with the lower triangular matrix $Lᵢ$ and the upper triangular matrix $Uᵢ$, where $Mᵢ = Lᵢ * Uᵢ$. The value of N must be wrapped in a Val to ensure that it is statically inferrable, which allows the LU factorization to avoid dynamic local memory allocations.

The runtime of this algorithm scales as $O(N^3)$.
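A serial sketch of the same factorization for a single matrix (Doolittle LU without pivoting; not the ClimaAtmos kernel, which applies this to every column's slice in parallel):

```julia
# In-place LU factorization without pivoting: overwrites M with the strict
# lower triangle of L (its diagonal of 1s is implicit) and with U.
function lu_factorize!(M::AbstractMatrix, N::Int)
    for k in 1:N-1
        for i in k+1:N
            M[i, k] /= M[k, k]                # multiplier L[i, k]
            for j in k+1:N
                M[i, j] -= M[i, k] * M[k, j]  # update trailing submatrix
            end
        end
    end
    return M
end

M = [4.0 3.0; 6.0 3.0]
lu_factorize!(M, 2)  # M == [4.0 3.0; 1.5 -1.5]
# i.e. L = [1 0; 1.5 1] and U = [4 3; 0 -1.5], with L * U == [4 3; 6 3]
```

The three nested loops over `1:N` make the $O(N^3)$ scaling explicit.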

source
ClimaAtmos.parallel_lu_solve! (Function)
parallel_lu_solve!(device, vectors, matrices, ::Val{N})

Runs a parallel LU solver algorithm on the specified device. If each slice vectors[1:N, i] represents a vector $vᵢ$, and if each slice matrices[1:N, 1:N, i] represents a matrix $Lᵢ * Uᵢ$ that was factorized by parallel_lu_factorize!, this function overwrites the slice vectors[1:N, i] with $(Lᵢ * Uᵢ)⁻¹ * vᵢ$. The value of N must be wrapped in a Val to ensure that it is statically inferrable, which allows the LU solver to avoid dynamic local memory allocations.

The runtime of this algorithm scales as $O(N^2)$.
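A serial sketch of the corresponding solve for a single vector, assuming M already holds the combined LU factors (unit-diagonal L strictly below the diagonal, U on and above it; not the ClimaAtmos kernel):

```julia
# Solve (L * U) * x = v in place, where M stores both factors.
function lu_solve!(v::AbstractVector, M::AbstractMatrix, N::Int)
    # Forward substitution with the unit lower triangle L (implicit 1s).
    for i in 2:N
        for j in 1:i-1
            v[i] -= M[i, j] * v[j]
        end
    end
    # Back substitution with the upper triangle U.
    for i in N:-1:1
        for j in i+1:N
            v[i] -= M[i, j] * v[j]
        end
        v[i] /= M[i, i]
    end
    return v
end

M = [4.0 3.0; 1.5 -1.5]  # combined LU factors of [4 3; 6 3]
v = [10.0, 12.0]
lu_solve!(v, M, 2)       # solves [4 3; 6 3] * x = [10, 12], so v == [1.0, 2.0]
```

The two substitution sweeps each touch $O(N^2)$ entries, matching the stated scaling.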

source