Loss of accuracy in weights of large Gauss-Jacobi rules #130
If we run the following MATLAB code, an equivalent test using the implementation of the Hale and Townsend 2012 algorithm in Chebfun, we see much better results, even using many nodes with a strong singularity:

```matlab
a = -0.99;                          % Jacobi exponent alpha (beta = 0)
ns = 2.^(1:16);
I_true = 2^(a+1)/(a+1);             % exact value of int_{-1}^1 (1-x)^a dx
Is_quad = zeros(length(ns), 1);
for i = 1:length(ns)
    [~, w] = jacpts(ns(i), a, 0);   % Chebfun's Gauss-Jacobi nodes and weights
    Is_quad(i) = sum(w);
end
rel_errs = (I_true - Is_quad) / I_true;
```

which gives:
```
    n    relative error
-----------------------
    2       -4.516e-15
    4        1.552e-15
    8        4.145e-13
   16        7.042e-13
   32        1.437e-12
   64        7.099e-12
  128       -3.387e-15
  256        3.810e-15
  512        0.000e+00
 1024       -3.528e-15
 2048       -1.411e-15
 4096       -2.258e-15
 8192        4.234e-16
16384       -7.056e-16
32768        4.234e-16
65536        1.694e-15
```

This seems to suggest an issue in the Julia implementation.
If we take the Chebfun weights as ground truth (given that they integrate the test function to near machine precision), the comparison points to a loss of accuracy in the Julia weights themselves.
The issue lies in the Newton iterations, which rely on some Chebyshev routines. Do these Chebyshev routines exist elsewhere in JuliaApproximation? Or (at the risk of bloating this package) should we implement them here?
Thank you for the detailed comments!

I'm not very familiar with Chebyshev polynomials, but maybe FastChebInterp.jl helps here?
It should all be doable with ClassicalOrthogonalPolynomials.jl, but that depends on this package.
Thanks for your responses! Unless I'm mistaken, it doesn't appear that Chebyshev integration is currently implemented in ClassicalOrthogonalPolynomials.jl. As an alternative, ApproxFunOrthogonalPolynomials.jl has both Chebyshev integration and differentiation implemented. However, it is similarly a circular dependency with this package...

In my view, part of the value of FastGaussQuadrature.jl is that it is extremely lightweight, with relatively few dependencies. I can use ApproxFunOrthogonalPolynomials.jl to solve this issue, but it's a heavy (not to mention circular) dependency, so from a software engineering perspective it doesn't seem like a great path forward. It's a hefty price to pay for some basic Chebyshev routines.

The alternative is to use some lookups and reimplement very barebones routines for the necessary computations. This is the path I've taken in my fork: I've saved the 10-point Chebyshev integration and differentiation matrices from Chebfun as constants.

@dlfivefifty, as a contributor to all the involved libraries, let me know if you think this is a reasonable solution. If so, I'll try to iron out the remaining numerical issues (all the existing package tests pass, but I still get ~11 digits on worst-case tests related to this issue, not 16) and submit a pull request.
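For illustration only (this is not the code from the fork), a barebones Chebyshev differentiation matrix of the kind described above takes only a few lines, following the classical construction in Trefethen's Spectral Methods in MATLAB; the function name `chebdiffmat` is hypothetical:

```julia
using LinearAlgebra

# Differentiation matrix on the N+1 second-kind Chebyshev points
# x_j = cos(j*pi/N), following Trefethen's `cheb` routine.
function chebdiffmat(N::Integer)
    N == 0 && return zeros(1, 1), [1.0]
    x = [cos(pi * j / N) for j in 0:N]
    c = [(j == 0 || j == N) ? 2.0 : 1.0 for j in 0:N] .* (-1.0) .^ (0:N)
    dX = x .- x'                           # x_i - x_j
    D = (c * (1 ./ c)') ./ (dX + I)        # off-diagonal entries
    D -= Diagonal(vec(sum(D, dims=2)))     # "negative sum trick" for the diagonal
    return D, x
end
```

Differentiating sampled values is then just `D * f.(x)`.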
It certainly is implemented:

```julia
julia> using ClassicalOrthogonalPolynomials

julia> T = ChebyshevT()
ChebyshevT()

julia> cumsum(T*[1; zeros(∞)])
ChebyshevT() * [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  ]
```

(That is, the coefficients of $1 + x = T_0(x) + T_1(x)$, the antiderivative of $1$ that vanishes at $x = -1$.) It's done by applying an ∞-almost banded matrix, but I think this is overkill for what we need. The equivalent routine in ApproxFunOrthogonalPolynomials.jl is more usable.

I think the best option is to make a new package, ChebyshevTransforms.jl, containing the core Chebyshev transform routines.
FastChebInterp.jl could depend on these packages if they want. But I would avoid depending on that package directly, as it introduces an alternative notion of a Chebyshev expansion to either ApproxFun.jl or ClassicalOrthogonalPolynomials.jl. So having a more "core" package like ChebyshevTransforms.jl that doesn't actually implement convenience structs makes sense here.
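As a sketch of the kind of "core" routine such a package might hold: the indefinite integral of a Chebyshev-T series follows a two-term coefficient recurrence, $b_k = (a_{k-1} - a_{k+1})/(2k)$ for $k \ge 2$. The helper `chebint` below is hypothetical, not an API of any of the packages mentioned:

```julia
# Indefinite integral of f = Σ_{k≥0} a[k+1] T_k as a Chebyshev-T series,
# using ∫T_0 = T_1 and ∫T_k = (T_{k+1}/(k+1) - T_{k-1}/(k-1))/2 for k ≥ 2.
function chebint(a::AbstractVector)
    n = length(a)
    coef(k) = 0 <= k < n ? a[k + 1] : zero(eltype(a))
    b = zeros(eltype(a), n + 1)
    b[2] = coef(0) - coef(2) / 2               # coefficient of T_1
    for k in 2:n
        b[k + 1] = (coef(k - 1) - coef(k + 1)) / (2k)
    end
    return b   # b[1], the T_0 coefficient, is a free integration constant
end
```

Choosing `b[1]` so the result vanishes at $x = -1$ reproduces the `[1.0, 1.0, 0.0, …]` coefficients in the REPL session above.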
Ah, I see - I was looking for an explicit integration routine and missed that. Your suggestion sounds like a reasonable way forward.
Fixed by #131. |
While using some large Gauss-Jacobi rules, I found that for highly singular integrands ($\alpha < -0.5$ or so), I'm losing digits in the resulting integrals. As a minimal example, consider
$$\int_{-1}^1 (1-x)^\alpha \, dx = \frac{2^{\alpha+1}}{\alpha+1},$$

which follows from substituting $u = 1-x$ (and requires $\alpha > -1$). For a range of $n$'s, we approximate the above integral by summing the weights of an $n$-point Gauss-Jacobi rule.
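A minimal sketch of this experiment in Julia, using FastGaussQuadrature's `gaussjacobi(n, a, b)` (which returns the nodes and weights of the rule):

```julia
using FastGaussQuadrature, Printf

a = -0.75                         # Jacobi exponent α (β = 0)
I_true = 2^(a + 1) / (a + 1)      # exact value of ∫₋₁¹ (1-x)^a dx

for n in 2 .^ (1:16)
    _, w = gaussjacobi(n, a, 0.0)              # n-point Gauss-Jacobi rule
    @printf("%6d  %10.3e\n", n, (I_true - sum(w)) / I_true)
end
```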
Running this yields errors several orders of magnitude above machine precision for strongly singular weights.
Is this expected behavior? Or perhaps related to this issue posted by @JamesCBremerJr?
Figure 4.5 in Hale and Townsend 2012 (on which I believe this routine is based?) shows 13 to 14 digits of accuracy in the weights for $\alpha = 2, \beta = -0.75$ up to $10^4$ nodes. But if we rerun the above code with $\alpha = -0.75$, we get only 10 digits in the integral for $n = 2^{16}$.

If we consider larger rules with $\alpha = -0.75$, we get only 8 digits for $n = 2^{20}$. And if we take a harsher singularity, e.g. $\alpha = -0.99$, we get only 6 digits for $n = 2^{16}$. So it looks like this loss of accuracy eventually appears for any $\alpha < -0.5$ (long before any nodes coincide to machine precision), and becomes more severe as $\alpha \to -1$. Are such cases known limitations?