Arrays
MPI State Arrays
Storage for the state of a discretization.
ClimateMachine.MPIStateArrays.MPIStateArray — Type

    MPIStateArray{FT, DATN<:AbstractArray{FT,3}, DAI1, DAV, DAT2<:AbstractArray{FT,2}} <: AbstractArray{FT, 3}
ClimateMachine.MPIStateArrays.begin_ghost_exchange! — Function

    begin_ghost_exchange!(Q::MPIStateArray; dependencies = nothing)

Begin the MPI halo exchange of the data stored in Q. A KernelAbstractions Event is returned that can be used as a dependency to end the exchange.
ClimateMachine.MPIStateArrays.end_ghost_exchange! — Function

    end_ghost_exchange!(Q::MPIStateArray; dependencies = nothing)

This function blocks on the host until the ghost halo is received from MPI. A KernelAbstractions Event is returned that can be waited on to indicate when the data is ready on the device.
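The begin/end pair lets interior computation overlap with communication. A minimal sketch (assuming Q is an existing MPIStateArray and device is the matching KernelAbstractions device, both created by a discretization set up elsewhere):

```julia
# Sketch only: `Q::MPIStateArray` and `device` are assumed to exist,
# e.g. from a ClimateMachine grid/discretization created beforehand.
using KernelAbstractions

# Start the halo exchange; the returned Event tracks the exchange.
exchange = begin_ghost_exchange!(Q)

# ... launch kernels here that touch only interior (non-ghost) data ...

# Finish the exchange: blocks on the host until the halo has arrived over
# MPI, then returns an Event signalling the data is ready on the device.
ready = end_ghost_exchange!(Q; dependencies = exchange)
wait(device, ready)
```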
ClimateMachine.MPIStateArrays.weightedsum — Function

    weightedsum(A[, states])

Compute the weighted sum of the MPIStateArray A. If states is specified, only the listed states are summed; otherwise all the states in A are used.

A typical use case for this is when the weights have been initialized with quadrature weights from a grid, in which case the weighted sum becomes an integral approximation.
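For instance, integrating over the domain might look like the following sketch (assumes Q is an MPIStateArray whose weights were filled with grid quadrature weights; passing states as a list of state indices is an assumption, not confirmed by the docstring above):

```julia
# Sketch only: `Q` is assumed to carry quadrature weights from the grid.
total_first = weightedsum(Q, [1])  # approximate integral of state 1 only
total_all   = weightedsum(Q)       # weighted sum over all states in Q
```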
Buffers
ClimateMachine.MPIStateArrays.CMBuffers.CMBuffer — Type

    CMBuffer{T}(::Type{Arr}, kind, dims...; pinned = true)

CUDA/MPI buffer that abstracts storage for MPI communication. The buffer is used for staging data and for MPI transfers. When running on:

- CPU: a single buffer is used for staging, and MPI transfers can be initiated directly to/from it.
- CUDA, either:
  - MPI is CUDA-aware: a single buffer on the device is used for staging, and MPI transfers can be initiated directly to/from it, or
  - MPI is not CUDA-aware: a double-buffering scheme is used, with the staging buffer on the device and a transfer buffer on the host.

Arguments

- T: element type
- Arr::Type: what kind of array to allocate for the staging buffer
- kind::CMBufferKind: either SingleCMBuffer or DoubleCMBuffer
- dims...: dimensions of the array
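As an illustration, allocating a staging buffer with the constructor above might look like this (a sketch; whether SingleCMBuffer is the right kind depends on the MPI configuration as described above):

```julia
# Sketch only: a single-buffer CMBuffer of Float64 backed by a plain Array,
# suitable for CPU runs where MPI can transfer directly from the stage.
using ClimateMachine.MPIStateArrays.CMBuffers

buf = CMBuffer{Float64}(Array, SingleCMBuffer, 16, 4)
```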
Helpers
ClimateMachine.MPIStateArrays.show_not_finite_fields — Function

    show_not_finite_fields(Q::MPIStateArray)

Prints a warning listing which fields are not finite. This is an expensive method, as it calls mapreduce on Q.
ClimateMachine.MPIStateArrays.checked_wait — Function

    checked_wait(device, event, progress = nothing, check = false)

If check is false, simply perform wait(device, event, progress); otherwise, check for exceptions and synchronize with all other ranks, so that all ranks throw an exception.
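A hedged usage sketch (assumes device and event come from an existing KernelAbstractions workflow, as with the ghost-exchange functions above):

```julia
# Sketch only: with check = true, every MPI rank synchronizes and throws
# if any rank saw an exception; with check = false this reduces to a
# plain wait(device, event, progress).
checked_wait(device, event, nothing, true)
```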