Getting Started

RootSolvers.jl is a Julia package for finding roots of nonlinear equations using robust, efficient, and GPU-capable numerical methods. It provides a simple, unified interface for a variety of classic root-finding algorithms, with flexible convergence criteria and solution reporting.


Installation

The package is registered in the Julia General registry.

Stable Release:

using Pkg
Pkg.add("RootSolvers")

Quick Start Example

using RootSolvers
# Find the root of x^2 - 100^2 using the secant method
sol = find_zero(x -> x^2 - 100^2, SecantMethod(0.0, 1000.0))
sol
CompactSolutionResults{Float64}:
├── Status: converged
└── Root: 99.99999999994358

The numerical value of the root is contained in sol.root:

sol.root
99.99999999994358
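For intuition, the secant method repeatedly replaces f by the line through the last two iterates and takes its root as the next guess. A minimal pure-Julia sketch of that update (an illustration only, not the package's implementation, which adds tolerance and solution-type handling):

```julia
# Secant update: x₊ = x₁ - f(x₁) * (x₁ - x₀) / (f(x₁) - f(x₀))
function secant_sketch(f, x0, x1; tol = 1e-10, maxiter = 100)
    for _ in 1:maxiter
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # root of the secant line
        abs(x2 - x1) < tol && return x2       # iterates have stabilized
        x0, x1 = x1, x2                       # shift the window forward
    end
    return x1
end

secant_sketch(x -> x^2 - 100^2, 0.0, 1000.0)  # ≈ 100.0
```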

How-to Guide

This guide shows the basic steps for solving a root-finding problem.

General Workflow

1. Define Your Function

Write your function as a Julia callable.

f(x) = x^3 - 2x - 5

2. Choose a Root-Finding Method

Pick a method and provide initial guesses. The type parameter (e.g., Float64) is often inferred automatically.

# For SecantMethod, provide two initial guesses
method = SecantMethod(1.0, 3.0)

3. (Optional) Set Tolerance and Solution Type

Customize the convergence criteria and the level of detail in the output.

# Stop when iterates are closer than 1e-6
tol = SolutionTolerance(1e-6)

# Request detailed output for debugging
soltype = VerboseSolution()

4. Call find_zero

All arguments after method are optional.

sol = find_zero(f, method, soltype, tol)
VerboseSolutionResults{Float64}:
├── Status: converged
├── Root: 2.094551481542313
├── Error: -1.4832579608992091e-13
├── Iterations: 8
└── History:
    ├── iter  1: x =        1, err = -6
    ├── iter  2: x =   1.5455, err = -4.4
    ├── iter  3: x =   1.8592, err = -2.292
    ├── iter  4: x =   2.2004, err = 1.252
    ├── iter  5: x =   2.0798, err = -0.1633
    ├── iter  6: x =   2.0937, err = -0.009452
    ├── iter  7: x =   2.0946, err = 7.904e-05
    ├── iter  8: x =   2.0946, err = -3.771e-08
    └── iter  9: x =   2.0946, err = -1.483e-13

5. Interpret Results

  • sol.converged: true if a root was found.
  • sol.root: The root value.
  • sol.err, sol.iter_performed, sol.root_history (available with VerboseSolution).
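A minimal pattern for consuming these fields, checking convergence before trusting the root. So the snippet stands alone, the solution object is mocked here as a NamedTuple with the field names listed above; with RootSolvers loaded, `find_zero` returns the real result object:

```julia
# Stand-in for a VerboseSolution result (values from the run above).
sol = (converged = true, root = 2.094551481542313,
       err = -1.4832579608992091e-13, iter_performed = 8)

if sol.converged
    println("Root found: ", sol.root, " in ", sol.iter_performed, " iterations")
else
    println("No convergence; last error was ", sol.err)
end
```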

Specific Example: Newton's Method with a Provided Derivative

When using NewtonsMethod, you must provide a function that returns both the value f(x) and its derivative f'(x) as a tuple. This avoids the overhead of automatic differentiation and is highly efficient if you can provide an analytical derivative.

1. Define Function and Derivative

# This function finds the root of f(x) = x^2 - 4.
# It returns the tuple (f(x), f'(x)).
f_with_deriv(x) = (x^2 - 4, 2x)

2. Choose the Method and Call find_zero

# Provide a single initial guess for Newton's method
method = NewtonsMethod(1.0)

# The function f_with_deriv is passed to find_zero
sol = find_zero(f_with_deriv, method)
CompactSolutionResults{Float64}:
├── Status: converged
└── Root: 2.000000000000002
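For reference, Newton's method applies the update x₊ = x − f(x)/f'(x), consuming exactly the (value, derivative) tuple described above. A pure-Julia sketch of that iteration (illustrative only; the package's NewtonsMethod additionally applies robust step control):

```julia
f_with_deriv(x) = (x^2 - 4, 2x)

# One Newton step per loop: x ← x - f(x) / f'(x)
function newton_sketch(f, x; tol = 1e-12, maxiter = 50)
    for _ in 1:maxiter
        fx, dfx = f(x)            # unpack the (value, derivative) tuple
        abs(fx) < tol && return x # residual is effectively zero
        x -= fx / dfx
    end
    return x
end

newton_sketch(f_with_deriv, 1.0)  # ≈ 2.0
```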

Specific Example: Brent's Method for Robust Root Finding

Brent's method combines the bisection method, secant method, and inverse quadratic interpolation. It provides superlinear convergence while maintaining the robustness of bracketing methods.

1. Define Your Function

# This function finds the root of f(x) = x^3 - 2.
f(x) = x^3 - 2

2. Choose the Method and Call find_zero

# Provide a bracketing interval where f(x0) and f(x1) have opposite signs
method = BrentsMethod(-1.0, 2.0)  # f(-1) = -3, f(2) = 6

# Solve the root-finding problem
sol = find_zero(f, method)
CompactSolutionResults{Float64}:
├── Status: converged
└── Root: 1.2599210498950593
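The robustness of bracketing methods comes from always keeping the root enclosed between two points where f has opposite signs, as pure bisection does. A minimal bisection sketch to make that guarantee concrete (illustrative only; Brent's method interleaves secant and inverse quadratic steps for its superlinear speed):

```julia
function bisect_sketch(f, a, b; tol = 1e-12)
    fa = f(a)
    @assert sign(fa) != sign(f(b)) "need a sign change on [a, b]"
    while b - a > tol
        m = (a + b) / 2
        # Keep whichever half-interval still brackets the root
        if sign(f(m)) == sign(fa)
            a, fa = m, f(m)
        else
            b = m
        end
    end
    return (a + b) / 2
end

bisect_sketch(x -> x^3 - 2, -1.0, 2.0)  # ≈ cbrt(2) ≈ 1.2599
```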

Automatic Differentiation and Dual Number Support 🔄

RootSolvers.jl is fully compatible with automatic differentiation frameworks, making it suitable for integration into differentiable models and optimization problems. The package supports dual numbers (from ForwardDiff.jl and other AD packages) as input arguments, allowing gradients to flow through root-finding computations. Dual number support works on GPU arrays when using compatible AD frameworks.

Using RootSolvers in Differentiable Models

When your function f(x) accepts dual numbers, RootSolvers.jl can be used within larger differentiable computations:

using RootSolvers, ForwardDiff

# Create a function that uses root finding
function solve_and_evaluate(θ)
    # θ is a parameter that affects the root-finding problem
    f(x) = x^3 - θ * x - 5
    sol = find_zero(f, SecantMethod(1.0, 3.0))
    return sol.root
end

# Compute the derivative with respect to θ
θ = 2.0
deriv = ForwardDiff.derivative(solve_and_evaluate, θ)
println("Derivative: ", deriv)
Derivative: 0.187659649918941

This enables integration with, for example, derivative-based optimization algorithms whose objective functions include a root-finding problem.

High-Performance and GPU Computing 🚀

RootSolvers.jl is designed for high-performance computing, supporting broadcasting over custom data structures and GPU acceleration. This makes it ideal for solving many problems in parallel.

Broadcasting with Abstract Types

The package works seamlessly with any abstract type that supports broadcasting, making it well-suited for scientific domains like climate modeling.

Example: Solving over a custom field type

using RootSolvers

# Example using regular arrays to represent a field grid
x0 = rand(10, 10)  # A 10x10 field of initial guesses
x1 = x0 .+ 1       # A second field of initial guesses

# Define a function that operates element-wise on the field
f(x) = x^2 - 2

# Solve the root-finding problem across the entire field
method = SecantMethod(x0, x1)
sol = find_zero.(f, method, CompactSolution()) # sol is an Array of structs
10×10 Matrix{RootSolvers.CompactSolutionResults{Float64}}:
 1.41421  1.41421  1.41421  1.41421  โ€ฆ  1.41421  1.41421  1.41421  1.41421
 1.41421  1.41421  1.41421  1.41421     1.41421  1.41421  1.41421  1.41421
 1.41421  1.41421  1.4142   1.41421     1.41421  1.41421  1.41421  1.41422
 1.41421  1.41421  1.41421  1.41421     1.41421  1.41421  1.41421  1.41421
 1.41422  1.41421  1.4142   1.41421     1.41421  1.41421  1.41421  1.41421
 1.41421  1.41421  1.41421  1.41422  โ€ฆ  1.41421  1.41421  1.41421  1.41421
 1.41421  1.41421  1.4142   1.41421     1.41421  1.41421  1.41421  1.41421
 1.41421  1.41421  1.41421  1.41421     1.41421  1.41421  1.41421  1.41421
 1.41421  1.41421  1.41421  1.41421     1.41422  1.41421  1.41421  1.41421
 1.41421  1.41421  1.41422  1.41422     1.41421  1.41421  1.41421  1.41421

Use getproperty.() to extract the fields from each struct in the array:

converged_field = getproperty.(sol, :converged)
root_field = getproperty.(sol, :root)

println("All converged: ", all(converged_field))
println("Root field shape: ", size(root_field))
All converged: true
Root field shape: (10, 10)

GPU Acceleration for Batch Processing

You can achieve significant speedups by running large batches of problems on a GPU.

GPU Backends

The following examples use CUDA.jl, but similar results can be achieved with other GPU backends via KernelAbstractions.jl.

GPU Usage Tips:

  • Use CompactSolution: Only CompactSolution is GPU-friendly; VerboseSolution is for CPU debugging only.
  • GPU-Compatible Function: Ensure your function f(x) uses only GPU-supported operations.
  • Minimize Data Transfer: Keep initial guesses and results on the GPU.

Broadcasting Example: 1 Million problems on the GPU

using RootSolvers, CUDA

# Create GPU arrays for batch processing
x0 = CUDA.fill(1.0f0, 1000, 1000)  # 1M initial guesses on GPU
x1 = CUDA.fill(2.0f0, 1000, 1000)  # Second initial guesses

# Define GPU-compatible function
f(x) = x^3 - x - 2

# Solve all problems in parallel using broadcasting
method = SecantMethod(x0, x1) # method = SecantMethod.(x0, x1) is also supported
sol = find_zero.(f, method, CompactSolution()) # broadcast launches kernel

# Results are on the GPU as an array of CompactSolutions
converged_field = map(sol_i -> sol_i.converged, sol)
root_field = map(sol_i -> sol_i.root, sol)

println("All converged: ", all(converged_field)) # Output: "All converged: true"
println("Root field shape: ", size(root_field)) # Output: "Root field shape: (1000, 1000)"

Map Example: 1 Million problems on the GPU

using RootSolvers, CUDA

# Create GPU arrays for batch processing
x0 = CUDA.fill(1.0f0, 1000, 1000)  # 1M initial guesses on GPU
x1 = CUDA.fill(2.0f0, 1000, 1000)  # Second initial guesses

# Define GPU-compatible function
f(x) = x^3 - x - 2

# Solve all problems in parallel using map
const METHOD = SecantMethod
sol = map(x0, x1) do x0, x1 # map launches kernel
    find_zero(f, METHOD(x0, x1), CompactSolution())
end

# Results are on the GPU as an array of CompactSolutions
converged_field = map(sol_i -> sol_i.converged, sol)
root_field = map(sol_i -> sol_i.root, sol)

println("All converged: ", all(converged_field)) # Output: "All converged: true"
println("Root field shape: ", size(root_field)) # Output: "Root field shape: (1000, 1000)"

Reference Tables

Available Root-Finding Methods

Method            | Requirements                       | Best For
SecantMethod      | 2 initial guesses                  | No derivatives, fast convergence
RegulaFalsiMethod | Bracketing interval                | Guaranteed convergence
BisectionMethod   | Bracketing interval                | Guaranteed convergence, simple
BrentsMethod      | Bracketing interval                | Superlinear convergence, robust
NewtonsMethodAD   | 1 initial guess, differentiable f  | Fastest, uses autodiff, robust step control
NewtonsMethod     | 1 initial guess, f and f' provided | Analytical derivatives, robust step control

Available Tolerance Types

Tolerance Type            | Criterion            | Best For
SolutionTolerance         | abs(x₂ - x₁)         | When you want iterates to stabilize
ResidualTolerance         | abs(f(x))            | When you want the function value near zero
RelativeSolutionTolerance | abs((x₂ - x₁)/x₁)    | When root magnitude varies widely
RelativeOrAbsolute...     | Relative or absolute | Robust for both small and large roots
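The criteria in the table correspond to simple predicates on successive iterates x₁, x₂ and on the residual f(x). A sketch of each check (illustrative only, not the package's internals):

```julia
# Convergence predicates matching the tolerance types above.
solution_check(x1, x2, tol)     = abs(x2 - x1) < tol          # SolutionTolerance
residual_check(fx, tol)         = abs(fx) < tol               # ResidualTolerance
rel_solution_check(x1, x2, tol) = abs((x2 - x1) / x1) < tol   # RelativeSolutionTolerance

solution_check(2.0000001, 2.0000002, 1e-6)  # true: iterates have stabilized
```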

Available Solution Types

Solution Type   | Features                            | Best For
CompactSolution | Minimal output, GPU-friendly        | High-performance, GPU, memory efficiency
VerboseSolution | Full diagnostics, iteration history | Debugging, analysis, CPU

Advanced Features

Feature             | Description                               | Use Cases
Dual Number Support | Compatible with automatic differentiation | Differentiable models, optimization, gradient-based learning
GPU Acceleration    | Full CUDA.jl support with broadcasting    | Large-scale parallel processing, batch computations
Custom Field Types  | Works with any broadcastable type         | Scientific computing, climate modeling, custom data structures

Troubleshooting

  • If not converging, try different initial guesses or a bracketing method such as BrentsMethod.
  • Use VerboseSolution() to inspect the iteration history and diagnose issues.
  • Adjust the tolerance for stricter or looser convergence criteria.

Extending RootSolvers.jl

If you want to add custom root-finding methods, tolerance types, or solution formats, see the Developer Documentation for detailed guidance on extending the package.