# Arrays

## MPI State Arrays

Storage for the state of a discretization.

### `ClimateMachine.MPIStateArrays.end_ghost_exchange!` (Function)

```julia
end_ghost_exchange!(Q::MPIStateArray; dependencies = nothing)
```

This function blocks on the host until the ghost halo has been received via MPI. It returns a KernelAbstractions `Event` that can be waited on to determine when the data is ready on the device.

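A typical pattern is to overlap interior computation with communication. The following is a hedged sketch, not the library's prescribed usage: it assumes a matching `begin_ghost_exchange!` starts the exchange, that `device` is the compute device backing `Q`, and `do_interior_work!` is a hypothetical kernel.

```julia
using ClimateMachine.MPIStateArrays: begin_ghost_exchange!, end_ghost_exchange!
using KernelAbstractions: wait

# Start the halo exchange, then work on interior data that does not
# depend on the ghost layer while messages are in flight.
exchange_event = begin_ghost_exchange!(Q)
do_interior_work!(Q)   # hypothetical kernel touching only interior data

# Block on the host until the halo has arrived, then wait on the
# returned event before launching kernels that read the ghost layer.
ready = end_ghost_exchange!(Q; dependencies = exchange_event)
wait(device, ready)
```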
### `ClimateMachine.MPIStateArrays.weightedsum` (Function)

```julia
weightedsum(A[, states])
```

Compute the weighted sum of the MPIStateArray `A`. If `states` is specified, only the listed states are summed; otherwise all the states in `A` are used.

A typical use case is when the weights have been initialized with quadrature weights from a grid; the weighted sum then becomes an integral approximation.

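As a hedged sketch of that use case: if `Q`'s weights were filled with the grid's quadrature weights, summing a density-like state approximates its integral over the domain (the state index below is purely illustrative).

```julia
using ClimateMachine.MPIStateArrays: weightedsum

total = weightedsum(Q)        # weighted sum over all states
mass  = weightedsum(Q, 1:1)   # only state 1, e.g. a density field
```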

## Buffers

### `ClimateMachine.MPIStateArrays.CMBuffers.CMBuffer` (Type)

```julia
CMBuffer{T}(::Type{Arr}, kind, dims...; pinned = true)
```

CUDA/MPI buffer that abstracts storage for MPI communication. The buffer is used for staging data and for MPI transfers. When running on:

- CPU: a single buffer is used for staging, and MPI transfers can be initiated directly to/from it.
- CUDA, either:
  - MPI is CUDA-aware: a single buffer on the device is used for staging, and MPI transfers can be initiated directly to/from it, or
  - MPI is not CUDA-aware: a double-buffering scheme is used, with the staging buffer on the device and a transfer buffer on the host.

**Arguments**

- `T`: element type
- `Arr::Type`: what kind of array to allocate for the stage
- `kind::CMBufferKind`: either `SingleCMBuffer` or `DoubleCMBuffer`
- `dims...`: dimensions of the array
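As a hedged sketch, constructing the two buffer kinds might look like this; the element type, array types, and dimensions are illustrative, not prescribed.

```julia
using ClimateMachine.MPIStateArrays.CMBuffers
using CUDA

# CPU, or CUDA with a CUDA-aware MPI: one buffer serves both the
# staging and the transfer role.
single = CMBuffer{Float64}(Array, SingleCMBuffer, 128, 5)

# CUDA without a CUDA-aware MPI: a device staging buffer plus a
# pinned host transfer buffer.
double = CMBuffer{Float64}(CuArray, DoubleCMBuffer, 128, 5; pinned = true)
```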

## Helpers

### `ClimateMachine.MPIStateArrays.checked_wait` (Function)

```julia
checked_wait(device, event, progress = nothing, check = false)
```

If `check` is false, simply perform a `wait(device, event, progress)`; otherwise, check for exceptions and synchronize with all other ranks so that every rank throws an exception.

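A hedged sketch of the two modes, assuming `device` and `event` come from a prior kernel launch:

```julia
using ClimateMachine.MPIStateArrays: checked_wait

# Fast path: behaves like wait(device, event, progress).
checked_wait(device, event)

# Checked path: if any rank's event carries an exception, synchronize
# so that all ranks throw, rather than some ranks deadlocking in a
# later collective call.
checked_wait(device, event, nothing, true)
```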