FluxDiffUtils Documentation

This package provides utilities for flux differencing and for computing flux differencing Jacobians in terms of derivatives of flux functions. The code is based in part on the preprint "Efficient computation of Jacobian matrices for entropy stable summation-by-parts schemes".

The routines are meant to be fairly general, but specialize depending on whether the operators are general arrays or SparseMatrixCSC (to capitalize on sparsity).

Example

using LinearAlgebra
using StaticArrays # needed for SVector in the flux below
using FluxDiffUtils
using Test

# make 3-field solution
u = collect(LinRange(-1,1,4))
U = (u,u,u)

# define a flux fS
avg(x,y) = @. .5*(x+y)
function fS(UL,UR)
    uL,vL,wL = UL
    uR,vR,wR = UR
    Fx = avg(uL,uR),avg(vL,vR),avg(wL,wR)
    Fy = avg(uL,uR),avg(vL,vR),avg(wL,wR)
    return SVector{3}(Fx),SVector{3}(Fy)
end

# Jacobian of fS with respect to the second argument UR = (uR,vR,wR)
dfS(UL,UR) = ([.5 0 0; 0 .5 0; 0 0 .5], [.5 0 0; 0 .5 0; 0 0 .5])
A_list = (A->A+A').(ntuple(x->randn(4,4),2)) # make symmetric to check formula

rhs = hadamard_sum(A_list,fS,U)

jac = hadamard_jacobian(A_list, dfS, U)
# jac = hadamard_jacobian(A_list, :sym, dfS, U) # optimized version when A .* F is symmetric

jac11_exact = sum((A->.5*(A + diagm(vec(sum(A,dims=1))))).(A_list))
@test norm(jac11_exact-jac[1,1]) < 1e-12

# converts tuple-block storage of jac to a global matrix
jac_global = blockcat(size(jac,2),jac)
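
As a sanity check, the result of hadamard_sum can be compared against the definition ∑_i sum(Ai .* Fi, dims=2) by rebuilding the flux matrices for the first solution field by hand. This sketch assumes hadamard_sum returns one vector per field (so rhs[1] is the first field's vector):

Fx1(j,k) = fS(getindex.(U,j), getindex.(U,k))[1][1]  # first field of the first (x) flux
Fy1(j,k) = fS(getindex.(U,j), getindex.(U,k))[2][1]  # first field of the second (y) flux
rhs1_manual = vec(sum(A_list[1] .* Fx1.(1:4, (1:4)') .+ A_list[2] .* Fy1.(1:4, (1:4)'), dims=2))
@test rhs1_manual ≈ rhs[1]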

Conventions

  • We assume grouped arguments for both fluxes and derivatives (e.g., FluxDiffUtils.jl expects fluxes of the form f(U,V) instead of f(u1,u2,v1,v2) for U=(u1,u2), V=(v1,v2)).
  • We assume the number of outputs from the flux matches the number of operators passed in. In other words, if f(UL,UR) has 2 outputs g,h, you should provide two matrices (A1, A2). hadamard_sum will compute sum(A1.*g + A2.*h, dims = 2) (hadamard_jacobian behaves similarly).
  • For Jacobian matrices, we assume derivatives of flux functions f(uL,uR) are taken with respect to the second argument uR.
  • Jacobians are returned as a StaticArray of arrays and can be concatenated into a global matrix using blockcat(size(jac,2),jac). This assumes an ordering of degrees of freedom by fields first (e.g., [u1, u2, ..., uN, v1, v2, ..., vN, w1, w2, ..., wN] rather than [u1,v1,w1, u2,v2,w2, ...]); see the sketch after this list.
  • Jacobian computations can be made more efficient by specifying whether the Hadamard product A.*F (where A is a discretization matrix and F is a flux matrix) is symmetric or skew-symmetric, by setting hadamard_product_type to :sym or :skew. Otherwise, FluxDiffUtils.jl splits A into symmetric and skew-symmetric parts and computes Jacobian contributions for each.
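
The following sketch illustrates the last two points, reusing the names from the Example above (the :sym choice applies there because the A_list entries are symmetric and fS is a symmetric central flux, so A .* F is symmetric):

N = length(first(U))                      # number of nodes per field (4 in the Example)
u_global   = vcat(U...)                   # fields-first ordering: [u1,...,uN, v1,...,vN, w1,...,wN]
jac_global = blockcat(size(jac, 2), jac)  # block (i,j) occupies rows (i-1)*N+1:i*N, cols (j-1)*N+1:j*N
jac_sym    = hadamard_jacobian(A_list, :sym, dfS, U)  # skips the symmetric/skew splitting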

Functions

FluxDiffUtils.banded_function_evals (Method)
banded_function_evals(mat_fun::Fxn, U, Fargs...)

Computes a block-banded matrix whose bands are entries of matrix-valued function evaluations (e.g., a Jacobian). Returns an SMatrix whose blocks correspond to function components evaluated at values of U.

Example:

julia> mat_fun(U) = [U[1] U[2]; U[2] U[1]]
julia> U = (randn(10),randn(10))
julia> banded_function_evals(mat_fun,U)
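
Conceptually (this is a reading of the docstring, not the package's internal implementation), block (i,j) of the result collects the (i,j) entry of mat_fun evaluated at each solution value:

julia> band_ij(mat_fun, U, i, j) = [mat_fun(getindex.(U, k))[i, j] for k in eachindex(first(U))]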
FluxDiffUtils.blockcat (Method)
blockcat(n, A)
blockcat(n, A::AbstractMatrix{SparseMatrixCSC{Ti,Tv}}) where {Ti,Tv}

Concatenates the n-by-n matrix of matrices A into a global matrix. If eltype(A) = SparseMatrixCSC, blockcat returns a global sparse matrix. Otherwise, blockcat returns a global dense array of type Array{eltype(first(A))}.
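
A minimal usage sketch, assuming A may be any AbstractMatrix of equally sized blocks:

julia> blocks = [randn(3, 3) for i in 1:2, j in 1:2]  # 2-by-2 grid of 3-by-3 blocks
julia> B = blockcat(2, blocks)                        # global 6-by-6 dense matrix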

FluxDiffUtils.hadamard_jacobian! (Method)
hadamard_jacobian!(A::SMatrix{N,N},A_list,hadamard_product_type::Symbol,
                   dF::Fxn,U,Fargs...; skip_index=(i,j)->false) where {N,Nd,Fxn}

Mutating version of hadamard_jacobian. A = matrix for storing Jacobian output, with each entry storing a block of the Jacobian.
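
A hedged usage sketch (the preallocation pattern is an assumption; the blocks could equally be sparse matrices), reusing A_list, dfS, and U from the Example:

julia> using StaticArrays
julia> Nfields, Np = 3, 4
julia> A = SMatrix{Nfields, Nfields}([zeros(Np, Np) for i in 1:Nfields, j in 1:Nfields])
julia> hadamard_jacobian!(A, A_list, :sym, dfS, U)  # accumulates Jacobian blocks into A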

FluxDiffUtils.hadamard_jacobian (Method)
hadamard_jacobian(A_list, dF::Fxn, U, ::Val{Nfields}, Fargs...;
                  skip_index=(i,j)->false) where {N,Nfields, Fxn}
hadamard_jacobian(A_list, hadamard_product_type, dF::Fxn, U, ::Val{Nfields},
                  Fargs...; skip_index=(i,j)->false) where {N,Fxn}

Version with static information Nfields as a value-type argument.

Example

julia> hadamard_jacobian(A_list, dF, U, Val(4)) # Nfields = 4
FluxDiffUtils.hadamard_jacobian (Method)
hadamard_jacobian(A_list, dF::Fxn, U, Fargs...;
                  skip_index=(i,j)->false) where {N,Fxn}
hadamard_jacobian(A_list, hadamard_product_type, dF::Fxn, U,
                  Fargs...; skip_index=(i,j)->false) where {N,Fxn}

Computes the Jacobian of the flux differencing term ∑_i sum(Ai .* Fi). Outputs an array whose entries are blocks of the Jacobian matrix corresponding to components of the flux vectors.

Inputs:

  • A_list = tuple of operators
  • hadamard_product_type = :skew or :sym. Specifies whether Ai .* Fi is skew-symmetric or symmetric.
  • dF = Jacobian of the flux function F(uL,uR) with respect to uR
  • U = solution at which to evaluate the Jacobian
  • Fargs = extra args for dF(uL,uR)
  • (optional) skip_index(i,j) = optional function to skip computation of (i,j)th entry
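
For example, if certain node pairs are known not to contribute, their flux evaluations can be skipped (the banded predicate below is purely illustrative; names reuse the Example above):

julia> jac = hadamard_jacobian(A_list, :sym, dfS, U; skip_index=(i,j) -> abs(i - j) > 2)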
FluxDiffUtils.hadamard_sum (Method)
hadamard_sum(A_list, F::Fxn, u, Fargs...;
             skip_index=(i,j)->false) where {N,T,Fxn}

Computes ∑_i sum(Ai .* Fi, dims=2), where (F_i)_jk = F(u_j, u_k)[i].

Inputs

  • A_list: tuple (or similar container) of operators (A1,...,Ad)
  • F: flux function which outputs a length d container of flux vectors
  • u: collection of solution values (or arrays) at which to evaluate F
  • Fargs: extra arguments to F(ui,uj,getindex.(Fargs,i)...,getindex.(Fargs,j)...)
  • (optional) skip_index(i,j)==true skips computing fluxes for index (i,j)

Since this sums over rows of matrices, this function may be slow for column-major and sparse CSC matrices. If you are using column major/CSC storage, it will be faster to precompute transposes of A_list and pass them to hadamard_sum_ATr!.
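
A sketch of that pattern, assuming rhs is a tuple of per-field vectors as in the in-place methods below (names reuse the Example above):

julia> ATr_list = map(A -> Matrix(transpose(A)), A_list)  # precompute transposes once
julia> rhs = ntuple(_ -> zeros(length(first(U))), 3)      # one output vector per field
julia> hadamard_sum_ATr!(rhs, ATr_list, fS, U)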

FluxDiffUtils.hadamard_sum_ATr! (Method)
hadamard_sum_ATr!(rhs,ATr_list,F,u,Fargs...; skip_index=(i,j)->false)
hadamard_sum_ATr!(rhs,ATr_list::NTuple{N,SparseMatrixCSC},F,u,Fargs...) where {N}
hadamard_sum_ATr!(rhs,ATr::SparseMatrixCSC,F,u,Fargs...)

Same as hadamard_sum except that hadamard_sum_ATr!

  • assumes ATr_list contains transposed matrices.
  • accumulates the result into rhs

Specializes based on whether ATr_list contains SparseMatrixCSC or general arrays. The SparseMatrixCSC version works best if the matrices in ATr_list have distinct sparsity patterns.
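
A hedged sketch of the sparse specialization (in practice A_list would hold genuinely sparse operators; sparsifying the dense Example matrices is only for illustration):

julia> using SparseArrays
julia> ATr_sparse = map(A -> sparse(transpose(A)), A_list)  # NTuple of SparseMatrixCSC
julia> hadamard_sum_ATr!(rhs, ATr_sparse, fS, U)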

FluxDiffUtils.hadamard_sum_ATr! (Method)
hadamard_sum_ATr!(rhs::TupleOrSVector{N},ATr_list::NTuple{D,Array},F::Fxn,u,Fargs...;
                  skip_index=(i,j)->false) where {N,D,Fxn}
hadamard_sum_ATr!(rhs::NTupleOrSVector{N},ATr::SparseMatrixCSC,F::Fxn,u,Fargs...) where {N,Fxn}

Zero-allocation dense/sparse versions if rhs has a statically inferrable length (e.g., is an NTuple or SVector)
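
A sketch of the statically sized case (assuming, as above, one vector per field):

julia> using StaticArrays
julia> rhs_static = SVector{3}(zeros(4), zeros(4), zeros(4))  # length inferrable at compile time
julia> hadamard_sum_ATr!(rhs_static, ATr_list, fS, U)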
