This is related to #98: `Float64` precision seems to be baked into the package, whereas it would be more flexible (and more Julian) to use the precision of the arguments. For example, using `BigFloat` in the following example only gives 16 digits of accuracy:
```julia
julia> using FiniteDifferences

julia> extrapolate_fdm(central_fdm(2, 1), sin, big"1.0")[1] - cos(big"1.0")
-6.71174699531887290713204926350530055924229547805016487900793424335727248883486e-17
```
In contrast, "manually" calling Richardson extrapolation on a second-order finite-difference approximation gives 70 digits of accuracy (in 10 iterations):
```julia
julia> using Richardson

julia> dsin, _ = extrapolate(big"0.1", rtol=0, power=2) do h
           @show Float64(h)
           (sin(1+h) - sin(1-h)) / 2h
       end
Float64(h) = 0.1
Float64(h) = 0.0125
Float64(h) = 0.0015625
Float64(h) = 0.0001953125
Float64(h) = 2.44140625e-5
Float64(h) = 3.0517578125e-6
Float64(h) = 3.814697265625e-7
Float64(h) = 4.76837158203125e-8
Float64(h) = 5.960464477539063e-9
Float64(h) = 7.450580596923829e-10
(0.5403023058681397174009366074429766037323104206179222276700972553811007395485809, 3.557003874037244232691736616015521320518299017661431723628543167235436096570656e-70)

julia> dsin - cos(big"1.0")
3.44774105579695699101361233110829910923644073773066771188994182638266121185316e-70
```
So, somewhere FiniteDifferences is either hard-coding a tolerance (rather than using `eps(float(x))`) or "contaminating" the calculation with an inexact `Float64` literal.
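
For reference, here is a minimal sketch of the kind of type-generic default I have in mind. `adapted_step` is a hypothetical helper, not part of the FiniteDifferences API, and the square-root scaling is just illustrative:

```julia
# Sketch only: `adapted_step` is a hypothetical name, not a FiniteDifferences function.
# The point is that default step sizes / tolerances should be computed from the
# precision of the evaluation point x, not from hard-coded Float64 quantities.
adapted_step(x) = sqrt(eps(float(real(x))))

adapted_step(1.0)       # ≈ 1.49e-8, scales with eps(Float64) ≈ 2.2e-16
adapted_step(big"1.0")  # ≈ 4.2e-39, scales with eps(BigFloat) ≈ 1.7e-77 at the default 256-bit precision
```

With something along those lines, passing a `BigFloat` argument would automatically tighten both the step size and the convergence tolerance to match the argument's precision.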
cc @hammy4815