Loss of accuracy in weights of large Gauss-Jacobi rules #130

Closed · pbeckman opened this issue Nov 11, 2023 · 9 comments

@pbeckman (Contributor)

While using some large Gauss-Jacobi rules, I found that for highly singular integrands ($\alpha < -0.5$ or so), I'm losing digits in the resulting integrals. As a minimal example, consider
$$\int_{-1}^1 (1-x)^\alpha dx = \frac{2^{\alpha+1}}{\alpha+1}.$$
For a range of $n$'s, we approximate the above integral by summing the weights of an $n$-point Gauss-Jacobi rule. Since such a rule integrates $f \equiv 1$ exactly against its weight function, the sum of the weights should reproduce this integral to machine precision, so any error reflects inaccuracy in the computed weights.

```julia
using FastGaussQuadrature

a  = -0.9
ns = 2 .^ (1:16)
I_true = 2^(a + 1) / (a + 1)    # exact integral; ≈ 10.7177 for a = -0.9

# Summing the weights integrates f ≡ 1 against the weight (1 - x)^a.
Is_quad  = [sum(gaussjacobi(n, a, 0)[2]) for n in ns]
rel_errs = (I_true .- Is_quad) ./ I_true
```

which yields the following errors:

```
n       relative error
----------------------
2       -1.657e-16
4       0.000e+00
8       0.000e+00
16      0.000e+00
32      0.000e+00
64      1.657e-16
128     -5.310e-13
256     7.170e-12
512     5.277e-12
1024    1.242e-11
2048    -1.904e-10
4096    1.459e-09
8192    2.030e-09
16384   -1.294e-08
32768   -4.663e-08
65536   6.886e-08
```

Is this expected behavior? Or perhaps related to this issue posted by @JamesCBremerJr?

Figure 4.5 in Hale and Townsend 2012 (on which I believe this routine is based?) shows 13 to 14 digits of accuracy in the weights for $\alpha = 2, \beta = -0.75$ up to $10^4$ nodes. But if we rerun the above code with $\alpha = -0.75$, we get only 10 digits in the integral for $n = 2^{16}$.

If we consider larger rules with $\alpha = -0.75$, we get only 8 digits for $n=2^{20}$. And if we take a harsher singularity, e.g. $\alpha = -0.99$, we get only 6 digits for $n=2^{16}$. So it looks like this loss of accuracy eventually (but long before any nodes coincide to machine precision) appears for any $\alpha < -0.5$, and becomes more severe as $\alpha \to -1$. Are such cases known limitations?

@pbeckman (Contributor, Author) commented Nov 14, 2023

If we run the following MATLAB code, an equivalent test using the Chebfun implementation of the Hale and Townsend 2012 algorithm, we see much better results, even with many nodes and a strong singularity $\alpha = -0.99$:

```matlab
a  = -0.99;
ns = 2.^(1:16);
I_true = 2^(a+1)/(a+1);   % exact value of the integral

Is_quad = zeros(length(ns), 1);
for i = 1:length(ns)
    [~, w] = jacpts(ns(i), a, 0);   % Chebfun's Gauss-Jacobi rule
    Is_quad(i) = sum(w);
end

rel_errs = (I_true - Is_quad) / I_true;
```
which yields

```
n       relative error
----------------------
2       -4.516e-15
4       1.552e-15
8       4.145e-13
16      7.042e-13
32      1.437e-12
64      7.099e-12
128     -3.387e-15
256     3.810e-15
512     0.000e+00
1024    -3.528e-15
2048    -1.411e-15
4096    -2.258e-15
8192    4.234e-16
16384   -7.056e-16
32768   4.234e-16
65536   1.694e-15
```

This seems to suggest an issue in the Julia implementation.

@pbeckman (Contributor, Author) commented Nov 15, 2023

If we take the Chebfun weights as ground truth (given that they integrate $f \equiv 1$ to 15 digits) and plot the relative error between the Chebfun and FastGaussQuadrature.jl weights, we see that the error is highly concentrated near the endpoints. For example, with $\alpha = -0.99$ and $n = 2^{16}$, we have:
[Figure jacobi_error_a99: relative error in each weight plotted over the nodes for $\alpha = -0.99$, $n = 2^{16}$; the error is concentrated near the endpoints.]
Markers are sized and colored by error size for emphasis. This is a more exaggerated case of exactly what @JamesCBremerJr noted in the aforementioned issue.

@pbeckman (Contributor, Author) commented Nov 28, 2023

The issue is that in the Newton iterations at gaussjacobi.jl:382, the recursive formula innerjacobi_rec is used instead of the boundary asymptotics, which are implemented as feval_asy2 in Chebfun. An equivalent feval_asy2 routine is not implemented in this package, so I now have a fork here with the low-order terms of these asymptotics implemented. This recovers 6 digits or so; see below.
[Figure jacobi_error_a99_new: weight errors with the low-order boundary asymptotics in place; roughly 6 digits are recovered near the endpoints.]
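
For reference, the Newton structure in question looks roughly like the minimal sketch below. This is not the package's actual code; `eval_pnpd` is a hypothetical stand-in for whichever backend evaluates the Jacobi polynomial and its derivative, i.e. the innerjacobi_rec recurrence in the interior or a feval_asy2-style boundary asymptotic near $x = \pm 1$.

```julia
# Hypothetical sketch, not FastGaussQuadrature.jl's implementation.
# eval_pnpd(x) is assumed to return P_n^{(α,β)}(x) and its derivative.
function newton_refine(x, eval_pnpd; maxiter=10, tol=eps())
    for _ in 1:maxiter
        p, dp = eval_pnpd(x)
        dx = p / dp            # Newton step toward the root of P_n
        x -= dx
        abs(dx) < tol && break
    end
    return x   # the quadrature weight is then formed from the derivative at the root
end
```

The accuracy of the converged root and weight is limited by the accuracy of `eval_pnpd`, which is why recurrence-only evaluation loses digits near the endpoints.
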
However, to get 16 digit accuracy, we need the higher order terms. The higher order coefficients $A_m$ and $B_m$ in equation (3.24) of Hale and Townsend 2012 are computed numerically in Chebfun from recursive integro-differential formulae using Chebyshev interpolation, integration, and differentiation. But these routines are not readily available in FastGaussQuadrature.jl.
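
For a sense of scale, the pieces involved are small, dense Chebyshev operations. Here is a sketch of an $(N+1)$-point Chebyshev differentiation matrix following Trefethen's "Spectral Methods in MATLAB"; it is illustrative only, not Chebfun's or the fork's actual code:

```julia
using LinearAlgebra

# Differentiation matrix D on the N+1 second-kind Chebyshev points x,
# so that D * f ≈ f' for a smooth f sampled at x (Trefethen's cheb.m).
function chebdiff(N)
    N == 0 && return zeros(1, 1), [1.0]
    x  = cos.(π * (0:N) / N)                    # Chebyshev points on [-1, 1]
    c  = [2; ones(N - 1); 2] .* (-1) .^ (0:N)   # endpoint scaling times signs
    dX = x .- x'
    D  = (c * (1 ./ c)') ./ (dX + I)            # off-diagonal entries
    D -= Diagonal(vec(sum(D, dims=2)))          # diagonal from the row-sum identity
    return D, x
end
```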

Do these Chebyshev routines exist elsewhere in JuliaApproximation? Or (at the risk of bloating this package) should we implement them here?

@hyrodium (Collaborator)

Thank you for the detailed comments!
PRs are always welcome.

> Do these Chebyshev routines exist elsewhere in JuliaApproximation?

I'm not very familiar with Chebyshev polynomials, but maybe FastChebInterp.jl helps here?

@dlfivefifty (Member)

It should all be doable with ClassicalOrthogonalPolynomials.jl, but that depends on this package.

@pbeckman (Contributor, Author) commented Nov 30, 2023

Thanks for your responses!

Unless I'm mistaken, Chebyshev integration does not currently appear to be implemented in ClassicalOrthogonalPolynomials.jl. As an alternative, ApproxFunOrthogonalPolynomials.jl has both Chebyshev integration and differentiation implemented. However, it would similarly be a circular dependency with this package...

In my view, part of the value of FastGaussQuadrature.jl is that it is extremely lightweight, with relatively few dependencies. I can use ApproxFunOrthogonalPolynomials.jl to solve this issue, but it's a heavy (not to mention circular) dependency, so from a software engineering perspective it doesn't seem like a great path forward. It's a hefty price to pay for some basic Chebyshev routines on $N = 10$ points.

The alternative is to use some lookups and reimplement very barebones routines for the necessary computations. This is the path I've taken in my fork. I've saved the 10-point Chebyshev integration and differentiation matrices from Chebfun as constants in gaussjacobi.jl, and reimplemented only the very short barycentric interpolation routine. This is pretty painless and keeps the repository lightweight.
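
For concreteness, a barycentric evaluation routine of roughly that size might look like the following sketch (in the spirit of Berrut and Trefethen's 2004 formula; `baryeval` and `chebweights` are names invented here, not necessarily the fork's):

```julia
# Second-form barycentric interpolation: evaluate the interpolant of the
# values fs at nodes xs, with barycentric weights ws, at the point x.
function baryeval(x, xs, fs, ws)
    num = zero(float(x)); den = zero(float(x))
    for (xj, fj, wj) in zip(xs, fs, ws)
        x == xj && return fj          # x coincides with a node
        t = wj / (x - xj)
        num += t * fj
        den += t
    end
    return num / den
end

# Barycentric weights for the n+1 second-kind Chebyshev points cos.(π*(0:n)/n):
# (-1)^j, halved at the two endpoints.
chebweights(n) = [(j == 0 || j == n ? 0.5 : 1.0) * (-1.0)^j for j in 0:n]
```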

@dlfivefifty, as a contributor to all the involved libraries, let me know if you think this is a reasonable solution. If so, I'll try to iron out the remaining numerical issues (all the existing package tests pass, but I still get only ~11 digits on worst-case tests related to this issue, not 16) and submit a pull request.

@dlfivefifty (Member)

It certainly is implemented:

```julia
julia> using ClassicalOrthogonalPolynomials

julia> T = ChebyshevT()
ChebyshevT()

julia> cumsum(T * [1; zeros(∞)])
ChebyshevT() * [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  ]
```

It's done by applying an ∞-almost banded matrix:

https://github.com/JuliaApproximation/ClassicalOrthogonalPolynomials.jl/blob/bd5d352eabf7898879f0b3247d86dd8d8c80ccd0/src/classical/chebyshev.jl#L301

But I think this is overkill for what we need.

Here's the equivalent routine in ApproxFunOrthogonalPolynomials.jl:

https://github.com/JuliaApproximation/ApproxFunOrthogonalPolynomials.jl/blob/427faaf2296c0fe48c84741046e4aaf0dbe9da66/src/ultraspherical.jl#L27

This is more usable. I think the best option is to make a new package ChebyshevTransforms.jl that contains:

  1. The Chebyshev transforms in https://github.com/JuliaApproximation/FastTransforms.jl/blob/master/src/chebyshevtransform.jl
  2. chebyshevintegration! and chebyshevdifferentiation! (a sketch of what the former might do in coefficient space follows this list)
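
As a sketch of the semantics such a routine could have (ChebyshevTransforms.jl does not exist yet, so the name chebint below is hypothetical): integrating a Chebyshev series $\sum_k a_k T_k$ uses $\int T_0 = T_1$, $\int T_1 = T_2/4$, and $\int T_k = \tfrac{1}{2}\left(\tfrac{T_{k+1}}{k+1} - \tfrac{T_{k-1}}{k-1}\right)$ for $k \ge 2$, which in coefficient space reads:

```julia
# Hypothetical sketch of coefficient-space Chebyshev integration.
# a[k+1] is the coefficient of T_k in f; returns coefficients b of an
# antiderivative of f, with the constant term b[1] left at zero.
function chebint(a::AbstractVector{T}) where T
    n = length(a)                                # degrees 0 .. n-1
    b = zeros(T, n + 1)
    n ≥ 1 && (b[2] = a[1])                       # contribution of ∫T_0
    n ≥ 3 && (b[2] -= a[3] / 2)                  # contribution of ∫T_2
    for k in 2:n                                 # coefficient of T_k
        b[k+1] = (a[k] - (k + 2 ≤ n ? a[k+2] : zero(T))) / (2k)
    end
    return b
end
```

An in-place chebyshevintegration! variant would presumably apply the same recurrence to a preallocated buffer.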

FastChebInterp.jl could depend on these packages if its authors want. But I would avoid depending on that package directly, as it introduces a notion of a Chebyshev expansion that is an alternative to both ApproxFun.jl and ClassicalOrthogonalPolynomials.jl. So having a more "core" package like ChebyshevTransforms.jl that doesn't actually implement convenience structs makes sense here.

@pbeckman (Contributor, Author) commented Dec 4, 2023

Ah, I see; I was looking for an integrate-like function rather than cumsum.

Your ChebyshevTransforms.jl solution makes good sense to me. Given my lack of familiarity with the relevant interfaces in JuliaApproximation, I've submitted a pull request in the meantime that resolves this issue and #58 in a way that's completely internal to this package. It uses saved integration and differentiation matrices from Chebfun, as I mentioned above. It should be easy to substitute more elegant Chebyshev routines once the restructuring of the JuliaApproximation package ecosystem you suggest takes place. But I imagine you may want to engineer that yourself as the maintainer of those packages.

@hyrodium (Collaborator) commented Dec 9, 2023

Fixed by #131.

hyrodium closed this as completed on Dec 9, 2023.