APIs

ClimaComms.ClimaComms (Module)
ClimaComms

Abstracts the communications interface for the various CliMA components in order to:

  • support different computational backends (CPUs, GPUs)
  • enable the use of different backends as transports (MPI, SharedArrays, etc.), and
  • transparently support single or double buffering for GPUs, depending on whether the transport has the ability to access GPU memory.

Loading

ClimaComms.@import_required_backends (Macro)
ClimaComms.@import_required_backends

If the desired context is MPI (as determined by ClimaComms.context()), try loading MPI.jl. If the desired device is CUDA (as determined by ClimaComms.device()), try loading CUDA.jl.
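
A typical driver script loads the backends before constructing the context (a sketch):

using ClimaComms
ClimaComms.@import_required_backends
ctx = ClimaComms.context() # any required backend package is now loaded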


Devices

ClimaComms.device (Function)
ClimaComms.device()

Determine the device to use depending on the CLIMACOMMS_DEVICE environment variable.

Allowed values:

  • CPU, single-threaded or multi-threaded depending on the number of threads;
  • CPUSingleThreaded,
  • CPUMultiThreaded,
  • CUDA.

The default is CPU.
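
For example (a sketch; device() reads the environment variable at call time):

ENV["CLIMACOMMS_DEVICE"] = "CPUSingleThreaded"
device = ClimaComms.device() # returns ClimaComms.CPUSingleThreaded()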

ClimaComms.array_type (Function)
ClimaComms.array_type(::AbstractDevice)

The base array type used by the specified device (currently Array or CuArray).
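
This makes it possible to allocate storage generically across devices, for example (a sketch):

device = ClimaComms.device()
ArrayType = ClimaComms.array_type(device) # Array on CPU devices, CuArray on CUDA
x = ArrayType(zeros(Float64, 100))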

ClimaComms.allowscalar (Function)
allowscalar(f, ::AbstractDevice, args...; kwargs...)

Device-flexible version of CUDA.@allowscalar.

Lowers to

f(args...)

for CPU devices and

CUDA.@allowscalar f(args...)

for CUDA devices.

This is conveniently written using do-block syntax:

allowscalar(device) do
    f()
end
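
For example, scalar indexing is normally an error on GPU arrays; wrapping it in allowscalar lets the same code run on both device types (a sketch):

device = ClimaComms.device()
x = ClimaComms.array_type(device)(ones(5))
first_element = ClimaComms.allowscalar(device) do
    x[1]
end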
ClimaComms.@threaded (Macro)
@threaded device for ... end

A threading macro that uses Julia's native threading if the device is a CPUMultiThreaded type; otherwise it returns the original expression without Threads.@threads. This is done to avoid the overhead of Threads.@threads, and the device is used (instead of checking Threads.nthreads() == 1) so that the choice can be statically inferred.
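
For example, to fill an array in parallel when multi-threading is available (a sketch):

device = ClimaComms.device()
x = zeros(1000)
ClimaComms.@threaded device for i in eachindex(x)
    x[i] = sin(i)
end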

References

  • https://discourse.julialang.org/t/threads-threads-with-one-thread-how-to-remove-the-overhead/58435
  • https://discourse.julialang.org/t/overhead-of-threads-threads/53964
ClimaComms.@time (Macro)
@time device expr

Device-flexible @time.

Lowers to

@time expr

for CPU devices and

CUDA.@time expr

for CUDA devices.
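
For example (a sketch; on a CUDADevice this reports GPU time and allocations via CUDA.@time):

device = ClimaComms.device()
ClimaComms.@time device sum(abs2, rand(10^6))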

ClimaComms.@elapsed (Macro)
@elapsed device expr

Device-flexible @elapsed.

Lowers to

@elapsed expr

for CPU devices and

CUDA.@elapsed expr

for CUDA devices.

ClimaComms.@assert (Macro)
@assert device cond [text]

Device-flexible @assert.

Lowers to

@assert cond [text]

for CPU devices and

CUDA.@cuassert cond [text]

for CUDA devices.

ClimaComms.@sync (Macro)
@sync device expr

Device-flexible @sync.

Lowers to

@sync expr

for CPU devices and

CUDA.@sync expr

for CUDA devices.

An example use-case of this might be:

BenchmarkTools.@benchmark begin
    if ClimaComms.device() isa ClimaComms.CUDADevice
        CUDA.@sync begin
            launch_cuda_kernels_or_spawn_tasks!(...)
        end
    elseif ClimaComms.device() isa ClimaComms.CPUMultiThreaded
        Base.@sync begin
            launch_cuda_kernels_or_spawn_tasks!(...)
        end
    end
end

If the CPU version of the above example does not leverage spawned tasks (which require Base.@sync or Threads.wait to synchronize), then you may want to use @cuda_sync instead.

ClimaComms.@cuda_sync (Macro)
@cuda_sync device expr

Device-flexible CUDA.@sync.

Lowers to

expr

for CPU devices and

CUDA.@sync expr

for CUDA devices.

Adapt.adapt_structure (Method)
Adapt.adapt_structure(::Type{<:AbstractArray}, device::AbstractDevice)

Adapt a given device to a device associated with the given array type.

Example

Adapt.adapt_structure(Array, ClimaComms.CUDADevice()) -> ClimaComms.CPUSingleThreaded()
Note

By default, adapting to Array creates a CPUSingleThreaded device, and there is currently no way to convert to a CPUMultiThreaded device.


Contexts

ClimaComms.context (Function)
ClimaComms.context(device=device())

Construct a default communication context.

By default, it will try to determine if it is running inside an MPI environment (by checking for common MPI launcher environment variables); if so, it will return an MPICommsContext; otherwise it will return a SingletonCommsContext.

Behavior can be overridden by setting the CLIMACOMMS_CONTEXT environment variable to either MPI or SINGLETON.
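
For example, to force a single-process context regardless of the environment (a sketch):

ENV["CLIMACOMMS_CONTEXT"] = "SINGLETON"
ctx = ClimaComms.context() # returns a SingletonCommsContext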

ClimaComms.graph_context (Function)
graph_context(context::AbstractCommsContext,
              sendarray, sendlengths, sendpids,
              recvarray, recvlengths, recvpids)

Construct a communication context for exchanging neighbor data via a graph.

Arguments:

  • context: the communication context on which to construct the graph context.
  • sendarray: array containing data to send
  • sendlengths: list of lengths of data to send to each process ID
  • sendpids: list of processor IDs to send to
  • recvarray: array to receive data into
  • recvlengths: list of lengths of data to receive from each process ID
  • recvpids: list of processor IDs to receive from

This should return an AbstractGraphContext object.

Adapt.adapt_structure (Method)
Adapt.adapt_structure(::Type{<:AbstractArray}, context::AbstractCommsContext)

Adapt a given context to a context with a device associated with the given array type.

Example

Adapt.adapt_structure(Array, ClimaComms.context(ClimaComms.CUDADevice())) -> ClimaComms.SingletonCommsContext(ClimaComms.CPUSingleThreaded())
Note

By default, adapting to Array creates a CPUSingleThreaded device, and there is currently no way to convert to a CPUMultiThreaded device.


Logging

ClimaComms.OnlyRootLogger (Function)
OnlyRootLogger()
OnlyRootLogger(ctx::AbstractCommsContext)

Return a logger that silences non-root processes.

If no context is passed, the default context is obtained via ClimaComms.context().
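
For example, to silence log messages from all non-root ranks (a sketch):

using Logging
ctx = ClimaComms.context()
Logging.global_logger(ClimaComms.OnlyRootLogger(ctx))
@info "This message is printed only on the root process"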

ClimaComms.MPILogger (Function)
MPILogger(context::AbstractCommsContext)
MPILogger(iostream, context)

Add a rank prefix before log messages.

Outputs to stdout if no IOStream is given.

ClimaComms.FileLogger (Function)
FileLogger(context, log_dir; log_stdout = true, min_level = Logging.Info)

Log each MPI rank to a separate file within log_dir.

The minimum logging level is set using min_level. If log_stdout = true, root process logs will be sent to stdout as well.


Context operations

ClimaComms.init (Function)
(pid, nprocs) = init(ctx::AbstractCommsContext)

Perform any necessary initialization for the specified backend. Return a tuple of the processor ID and the number of participating processors.
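
For example (a sketch):

ctx = ClimaComms.context()
pid, nprocs = ClimaComms.init(ctx)
if ClimaComms.iamroot(ctx)
    @info "Running with $nprocs processes"
end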

ClimaComms.iamroot (Function)
iamroot(ctx::AbstractCommsContext)

Return true if the calling processor is the root processor.

ClimaComms.nprocs (Function)
nprocs(ctx::AbstractCommsContext)

Return the number of participating processors.

ClimaComms.abort (Function)
abort(ctx::CC, status::Int) where {CC <: AbstractCommsContext}

Terminate the caller and all participating processors with the specified status.


Collective operations

ClimaComms.barrier (Function)
barrier(ctx::CC) where {CC <: AbstractCommsContext}

Perform a global synchronization across all participating processors.

ClimaComms.reduce (Function)
reduce(ctx::CC, val, op) where {CC <: AbstractCommsContext}

Perform a reduction across all participating processors, using op as the reduction operator and val as this rank's reduction value. Return the result to the first processor only.
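
For example, summing a per-rank value (a sketch; the result is only valid on the root process):

ctx = ClimaComms.context()
pid, nprocs = ClimaComms.init(ctx)
total = ClimaComms.reduce(ctx, Float64(pid), +) # sum of all pids, on the root process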

ClimaComms.reduce! (Function)
reduce!(ctx::CC, sendbuf, recvbuf, op)
reduce!(ctx::CC, sendrecvbuf, op)

Performs an elementwise reduction using the operator op on the buffer sendbuf, storing the result in recvbuf on the root process. If only one buffer sendrecvbuf is provided, the operation is performed in-place.

ClimaComms.allreduce (Function)
allreduce(ctx::CC, sendbuf, op)

Performs an elementwise reduction using the operator op on the buffer sendbuf, allocating a new array for the result. sendbuf can also be a scalar, in which case the result will be a scalar of the same type.
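
For example, computing a global maximum that every rank receives (a sketch; local_array stands for a hypothetical per-rank array):

local_max = maximum(local_array)
global_max = ClimaComms.allreduce(ctx, local_max, max)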

ClimaComms.allreduce! (Function)
allreduce!(ctx::CC, sendbuf, recvbuf, op)
allreduce!(ctx::CC, sendrecvbuf, op)

Performs elementwise reduction using the operator op on the buffer sendbuf, storing the result in the recvbuf of all processes in the group. Allreduce! is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance. If only one sendrecvbuf buffer is provided, then the operation is performed in-place.

ClimaComms.bcast (Function)
bcast(ctx::AbstractCommsContext, object)

Broadcast object from the root process to all other processes. The value of object on non-root processes is ignored.
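
For example, reading a configuration file on the root process only (a sketch; read_config is a hypothetical helper):

config = ClimaComms.iamroot(ctx) ? read_config("config.toml") : nothing
config = ClimaComms.bcast(ctx, config)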


Graph exchange

ClimaComms.progress (Function)
progress(ctx::AbstractGraphContext)

Drive communication. Call after start to ensure that communication proceeds asynchronously.

ClimaComms.finish (Function)
finish(ctx::AbstractGraphContext)

Complete the communications step begun by start(). After this returns, data received from all neighbors will be available in the stage areas of each neighbor's receive buffer.
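
A sketch of a full exchange, assuming the buffers and neighbor lists have been set up as described under graph_context:

gc = ClimaComms.graph_context(ctx, sendarray, sendlengths, sendpids,
                              recvarray, recvlengths, recvpids)
ClimaComms.start(gc)    # begin the exchange
ClimaComms.progress(gc) # drive communication
ClimaComms.finish(gc)   # block until all neighbor data has been received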
