Efficient numerical differentiation by Richardson extrapolation applied to forward difference formulas

Toshio Fukushima

National Astronomical Observatory of Japan / SOKENDAI, 2-21-1, Ohsawa, Mitaka, Tokyo 181-8588, Japan. E-mail: [email protected]

Abstract We developed a numerical method to estimate the first and second order derivatives of a given function by applying the Richardson extrapolation to the first order and general second order forward difference formulas. By selecting the sequence of test argument displacements as integer powers of a constant factor, setting the factor as small as 1/8, and adopting a combination of several termination conditions, we raised the cost performance of the method significantly above that of Ridders' method, which uses the central difference formulas with the factor 1/2.

Keywords: adaptive method; forward difference formula; numerical differentiation; Richardson extrapolation

1. Introduction

The numerical differentiation based on finite difference formulas is notorious for its unstable nature against the information loss in the argument evaluation and the round-off error in the subtraction of computed function values [17, p. 229]. Thus, its alternatives have been intensively investigated: symbolic differentiation by means of formula processors like Mathematica [23], automatic differentiation [1, 11, 16], complex argument differences [22, 15], and differentiation by integration [14, 18].

However, there still remains a necessity for numerical differentiation. Before going further, we present below an example where the above alternative schemes are not effective. Let us consider a function defined as a weakly singular integral,

f(x) = ∫_{−1}^{1} g(y) log|x − y| dy,  (|x| < 1),     (1)

where g(y) is a sufficiently smooth function defined in the domain |y| < 1. This kind of integral frequently appears in computing the gravitational or electrostatic potential of an infinitely fine line segment [13]. In any case, the integral itself can be accurately evaluated by the split quadrature method using the double exponential rule [3, 4, 6, 8, 9, 10]. Nevertheless, the same is not true of its derivative.

In fact, the symbolic differentiation, the automatic differentiation, and the complex argument difference all finally reduce to the numerical quadrature of the partially differentiated integrand written as

f'(x) = ∫_{−1}^{1} g(y)/(x − y) dy,  (|x| < 1).     (2)

The resulting integral is algebraically singular, and therefore not absolutely integrable. This situation is caused by simply exchanging the order of the integration with respect to y and the differentiation with respect to x, which is permissible only if the integrand is uniformly convergent. Of course, one may obtain a meaningful result by interpreting the integral as Cauchy's principal value. However, we must prepare a specially designed procedure to ensure the correctness of the principal value, which is not an easy task.

Let us return to the numerical differentiation. Ridders [21] presented an adaptive method to accurately approximate the first and second order derivatives by applying the Richardson extrapolation to the central difference formulas. Ridders' method is so flexible and easily coded that it is regarded as the standard method [17, Sect. 5.7]. Nonetheless, it is also true that the method is significantly time-consuming, as seen in Figs 1 and 2.

Recently, we established a series of numerical procedures to obtain the gravitational or electromagnetic field of an arbitrary mass density, electric charge, or electric current distribution [4, 5, 6, 7, 8, 9, 10]. During the course of the study of finite distributions [9, 10], we faced the necessity to establish an efficient procedure to obtain the gradient vector and the Hessian matrix of a scalar potential function obtained by a certain numerical quadrature such as that described above.

In general, the potential function is continuous but not always differentiable, especially at the boundaries of finitely bounded distributions. In such cases, Ridders' method fails to provide correct answers, as illustrated in Fig. 3. Also, the potential functions are not always computed precisely. This is because the numerical quadrature usually consists of millions of operations, and therefore not only the truncation but also the round-off errors are prominent.

Figure 1: Cost performance of the numerical differentiation: the first order derivative. Plotted are δf' ≡ (f'_numerical − f'_analytical)/f'_analytical, the relative errors of the numerical differentiation, as a function of Neval, the number of function evaluations required, in a log-log manner. Compared are the derivatives of the four test functions, f1(x) = e^x, f2(x) = 1/x, f3(x) = log x, and f4(x) = 1/√(1 + x²), at x = 1, evaluated by Ridders' method [21] and the new method in the quadruple precision environment.

Figure 2: Cost performance of the numerical differentiation: the second order derivative. Same as Fig. 1 but for the second order derivative.

Figure 3: Robustness against the derivative discontinuity. Plotted is the base-10 logarithm of |δf0'(x)| as a function of x when the test function has a kink at x = 1, namely f0(x) = x² when x < 1 and f0(x) = 1/x otherwise. Compared are the results obtained by (i) dfridr, a double precision implementation of Ridders' method [17, Sect. 5.7], with the initial test displacement set as 0.1 as recommended, and (ii) the new method in the double precision environment. Notice that both methods almost reproduce the exact solution, f0'(x) = 2x, when x < 1. This is because the test function is a quadratic polynomial there, and therefore the Richardson extrapolation converges within a few iterations.

In addition, practical needs frequently demand not the ultimate accuracy but a high cost performance. There are few procedures that respond to such requirements. Indeed, dfridr, a double precision implementation of Ridders' method [17], provides a high precision first order derivative but at the cost of a significantly large amount of computational labor [6].

In order to resolve these issues, we developed another method of numerical differentiation based on the Richardson extrapolation but employing the forward difference formulas. The resulting method is not only robust against the derivative discontinuity, as observed in Fig. 3, but also of a higher cost performance than Ridders' method, as already shown in Figs 1 and 2. This is a significant advance in the methods of numerical differentiation.

Below, we explain the new method in Sect. 2 and provide the results of some numerical experiments in Sect. 3. Also, Appendix A lists sample Fortran codes as implementations of the new method.

2. Method

2.1. Forward difference formulas

Consider evaluating the first and second order derivatives of a given function, f(x). For this purpose, we combine the Richardson extrapolation [19, 20] and the forward difference formulas [12, Sect. 2]. More specifically, in computing the first order derivative, f'(x), we adopt, as a test function in the Richardson extrapolation, the first order difference formula expressed as

F_1(x, h) ≡ [f(x + h) − f(x)]/h,     (3)

where h is the test displacement of the argument. Notice that h can be either positive or negative [12, Sect. 3]. Similarly, in computing the second order derivative, f''(x), we introduce a general three-step difference formula as

F_2(x, h, β) ≡ {[f(x + h) − f(x)] − β[f(x + h/β) − f(x)]} / [(β − 1)h²/(2β)],     (4)

where β is a parameter satisfying the condition β > 1. As for the sequence of the test argument displacements, we adopt a series of integer powers of a constant factor as

h_j ≡ h_0/β^j,  (j = 1, 2, ...),     (5)

where h_0 is the initial step size, which is another parameter. In general, β in Eq. (4) and β in Eq. (5) can be different. Nevertheless, we deliberately equate them so as to reduce the total computational cost roughly by a factor of 2. This is a key point of the present scheme.
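
A quick Taylor-expansion check (this expansion is not spelled out in the text) confirms that both forward difference formulas are first order accurate in the displacement h:

F_1(x, h) = f'(x) + (h/2) f''(x) + O(h²),

F_2(x, h, β) = f''(x) + [(β + 1)/(3β)] h f'''(x) + O(h²).

Because the error series proceeds in integer powers of h, each extrapolation stage described next removes one further term, which is the basis of the effective order count in Sect. 2.5.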

2.2. Richardson extrapolation

Let us apply the Richardson extrapolation to the forward difference formulas provided in the previous subsection. As a first step, we set the initial column vector of the extrapolation table as

T_{j0} = F_1(x, h_j)  or  F_2(x, h_j, β),  (j = 0, 1, ...),     (6)

depending on whether the first or the second order derivative is sought. Next, for each stage of the sequence, we construct the corresponding row of the table by Neville's algorithm for the polynomial interpolation [2, Eq. (2.1.2.7b)] as

T_{jk} = T_{j,k−1} + (T_{j,k−1} − T_{j−1,k−1})/(β^k − 1),  (k = 1, 2, ..., j),     (7)

where we replaced h_{j−k}/h_j, the original expression of the factor in the denominator, with β^k after substituting the expression of h_j in terms of β as h_j = H β^{−j}.
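
To make Eqs. (6) and (7) concrete, the following is a minimal double precision sketch (ours, not the author's routine) of the table construction for the first order derivative; the full implementation, including the termination logic of Sect. 2.3, is the function qdfsid1 of Appendix A.

! A minimal sketch: fill the Richardson/Neville table of Eqs. (6)-(7)
! for the first order derivative, Eq. (3), to a fixed depth JMAX,
! without the termination tests of Sect. 2.3.
function fd1_sketch(func, x, fx, h0) result(d1)
  implicit none
  interface
    function func(t) result(v)
      real(8), intent(in) :: t
      real(8) :: v
    end function func
  end interface
  real(8), intent(in) :: x, fx, h0      ! fx = f(x), supplied by the caller
  real(8) :: d1
  integer, parameter :: JMAX = 5        ! fixed table depth for this sketch
  real(8), parameter :: BETA = 8.0d0    ! step reduction factor (Sect. 2.4)
  real(8) :: T(JMAX, JMAX), hj, factor
  integer :: j, k
  hj = h0
  T(1,1) = (func(x + hj) - fx) / hj     ! Eq. (3) with h = h_0
  do j = 2, JMAX
    hj = hj / BETA                      ! Eq. (5): next test displacement
    T(1,j) = (func(x + hj) - fx) / hj
    factor = BETA
    do k = 2, j
      ! Neville recurrence of Eq. (7); the array index k is offset by one
      T(k,j) = T(k-1,j) + (T(k-1,j) - T(k-1,j-1)) / (factor - 1.0d0)
      factor = BETA * factor
    end do
  end do
  d1 = T(JMAX, JMAX)                    ! most extrapolated entry
end function fd1_sketch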

2.3. Termination of the extrapolation

In order to make any iterative scheme efficient, it is important to adopt a suitable condition to terminate its iteration. If the termination criterion is too loose, the achieved accuracy is poor. Meanwhile, if the criterion is too strict, the computational cost increases unnecessarily. Also, the termination condition of an adaptive method must be flexible with respect to the specified error tolerance. Furthermore, it is desirable that the method returns an error estimate of the computed derivatives.

Therefore, we adopt the following rather complicated procedure. First of all, after the computation of the initial component of the table, T_{00}, we calculate the model estimate of the target error as

ε_M = C δ^p |T_{00}|,     (8)

where δ is the input relative error tolerance of the derivative computation, and C and p are additional parameters to be tuned. Next, at the j-th stage of the extrapolation, for j = 1, 2, ..., we obtain a local error estimate defined as the minimum increase of the table components along the same row sequence and along the same diagonal sequence as

ε_L = min_{1≤k≤j} ( |T_{jk} − T_{j,k−1}| , |T_{jk} − T_{j−1,k−1}| ),     (9)

and set the locally extrapolated value as T_L = T_{jk}, where k is the index attaining the local error estimate. Thirdly, we update the global error estimate, defined as the minimum of the local estimates, as

ε_G = min(ε_P, ε_L),     (10)

where ε_P is the previous value of ε_G. Also, we update the globally extrapolated value as T_G = T_L if ε_G ≠ ε_P. Fourthly, assuming a locally geometric convergence of the global error estimates, we compute an expected value of the error at the next stage as

ε_X = ε_G²/ε_P.     (11)

Then, we terminate the further construction of the extrapolation table and return the best available values of the extrapolated value and the error estimate as follows: (i) if the expected global error estimate is less than the model estimate, ε_X < ε_M, then we return T_G as the best estimate of the derivative and ε_X as its error estimate; (ii) else if the current global error estimate is less than the model estimate, ε_G < ε_M, then we return T_G as the best estimate of the derivative and ε_X as its error estimate; (iii) else if there is no improvement in the global error estimate, namely ε_G = ε_P, then we return T_G as the best estimate of the derivative and ε_G as its error estimate; (iv) else if the number of stages has reached the maximum allowable number, j = J, where J is another tunable parameter, then we return T_G as the best estimate of the derivative and ε_X as its error estimate.
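
For reference, the cascade (i)-(iv) fits in a few lines of code. The following standalone function is a schematic sketch of ours (the names errM, errG, errP, and errX stand for ε_M, ε_G, ε_P, and ε_X; the conditions follow the text above literally rather than the exact branch layout of the Appendix A routines):

! Schematic termination test applied at the end of stage j.
! Returns .true. when the extrapolation should stop; deriv and err then
! hold the best derivative estimate and its error estimate.
logical function stop_stage(j, Jmax, TG, errM, errG, errP, errX, deriv, err)
  implicit none
  integer, intent(in)  :: j, Jmax
  real(8), intent(in)  :: TG, errM, errG, errP
  real(8), intent(out) :: errX, deriv, err
  errX = errG*errG/errP            ! Eq. (11): expected error of the next stage
  deriv = TG
  stop_stage = .true.
  if (errX < errM) then            ! case (i): expected error below the target
    err = errX
  else if (errG < errM) then       ! case (ii): current error below the target
    err = errX
  else if (errG == errP) then      ! case (iii): no improvement in this stage
    err = errG
  else if (j == Jmax) then         ! case (iv): maximum number of stages reached
    err = errX
  else
    stop_stage = .false.           ! otherwise proceed to stage j+1
    err = errG
  end if
end function stop_stage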

2.4. Parameter tuning

The procedure described in the previous subsections contains some tunable parameters: β, h_0, C, p, and J. First, we adopted the value of β as large as β = 8 after consulting the results of the numerical experiments shown later. Next, we enforced h_0 to be an integer power of 2 as

h_0 = sign(H) 2^{⌊log_2|H|⌋},     (12)

where H is the input value of h_0. This enforcement is in order to reduce the bit loss in the evaluation of the test arguments as much as possible while respecting the user-specified value of H. Thirdly, we determined C and p so as to realize a good fidelity of the achieved errors with the input error tolerance, as will be seen later. As a result, their values are determined as C = 1/2 and p = 1 for the first order derivative and C = 5 and p = 1.00725 for the second order derivative, respectively. Finally, we determined the optimal value of J from the cost performance diagrams, Figs 1 and 2: (i) J = 4 for the first order derivative computation in the double precision environment, (ii) J = 3 for the second order derivative computation in the double precision environment, (iii) J = 7 for the first order derivative computation in the quadruple precision environment, and (iv) J = 6 for the second order derivative computation in the quadruple precision environment. Notice that the maximum number of function evaluations, Neval, is related to J as Neval = J + 2 in the case of the first order derivative computation and Neval = J + 3 in the case of the second order derivative computation.
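
For illustration, the snapping of Eq. (12) is a one-liner; the helper below (a hypothetical name of ours, mirroring the first executable lines of the Appendix A routines) returns the snapped initial displacement:

! Snap the user-supplied initial displacement H (assumed nonzero) to a
! signed integer power of 2, Eq. (12), so that the test arguments x + h
! can be formed with as little bit loss as possible.
function h0snap(H) result(h0)
  implicit none
  real(8), intent(in) :: H
  real(8) :: h0
  h0 = 2.0d0**floor(log(abs(H))/log(2.0d0))   ! 2^[log2|H|]_floor
  if (H < 0.0d0) h0 = -h0                     ! keep the sign of H
end function h0snap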

2.5. Comparison with Ridders' method

The adopted forward difference formulas can share the base function value, f(x). This results in a further saving of function evaluations if the base function value is repeatedly used, especially in the partial differentiation of a multivariate function. Therefore, it is wise to treat the base function value as an input variable in designing the actual computer codes of the numerical differentiation.

Also, the functional form of the second order difference formula enables us to re-use one of the previous function values, f(x + h/β), in the next stage. As a result, each stage of the new method requires only one function evaluation, whether the derivative to be computed is of the first or the second order. Since each stage of the extrapolation eliminates one term of the Maclaurin series expansion [2], the effective order of the new method is 1/1 = 1, namely one term removed per one function evaluation.

On the other hand, Ridders' method [21] uses the second order central difference formulas,

C_1(x, h) ≡ [f(x + h) − f(x − h)]/(2h),     (13)

C_2(x, h) ≡ [f(x + h) − 2 f(x) + f(x − h)]/h²,     (14)

and adopts the Romberg sequence, h_n = H/2^n, for the sequence of the test arguments. These formulas are even with respect to h, so that they are expanded in terms of even powers of h only. Meanwhile, they require two new function evaluations at each stage of the extrapolation. Therefore, the effective order of Ridders' method becomes 2/2 = 1. This is the same as that of the new method.

3. Numerical experiments

3.1. Test functions

Let us examine the computational cost and performance of the new method. For this purpose, we select the following four functions as the test functions:

f_1(x) ≡ e^x,  f_2(x) ≡ 1/x,  f_3(x) ≡ log x,  f_4(x) ≡ 1/√(1 + x²).     (15)

They represent various cases of functional singularities: (i) an entire function, f_1(x), (ii) a function with an algebraic singularity on the real line, f_2(x), (iii) a function with a logarithmic singularity and a cut along the real line, f_3(x), and (iv) a function with a pair of algebraic singularities in the complex plane, f_4(x).
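
For reproducing the experiments, the analytic first derivatives used as reference values follow from elementary calculus; the module below is our own convenience listing (the derivatives are not written out in the paper):

! Test functions of Eq. (15) and their analytic first derivatives,
! used as reference values when forming the relative errors of Fig. 1.
module test_funcs
  implicit none
contains
  real(8) function f1(x)
    real(8), intent(in) :: x
    f1 = exp(x)
  end function f1
  real(8) function f2(x)
    real(8), intent(in) :: x
    f2 = 1.0d0/x
  end function f2
  real(8) function f3(x)
    real(8), intent(in) :: x
    f3 = log(x)
  end function f3
  real(8) function f4(x)
    real(8), intent(in) :: x
    f4 = 1.0d0/sqrt(1.0d0 + x*x)
  end function f4
  real(8) function d1f1(x)              ! f1'(x) = exp(x)
    real(8), intent(in) :: x
    d1f1 = exp(x)
  end function d1f1
  real(8) function d1f2(x)              ! f2'(x) = -1/x^2
    real(8), intent(in) :: x
    d1f2 = -1.0d0/(x*x)
  end function d1f2
  real(8) function d1f3(x)              ! f3'(x) = 1/x
    real(8), intent(in) :: x
    d1f3 = 1.0d0/x
  end function d1f3
  real(8) function d1f4(x)              ! f4'(x) = -x/(1+x^2)^(3/2)
    real(8), intent(in) :: x
    d1f4 = -x/(1.0d0 + x*x)**1.5d0
  end function d1f4
end module test_funcs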

3.2. Computational cost performance

We begin with the computational precision and labor. Figs 1 and 2 have already compared the computational cost performance of the new and Ridders' methods for these functions evaluated at a reference point, x = 1. Obviously, the new method is of higher efficiency. The observed large difference is owing to the difference in the reduction multiplier, namely whether β = 2 or β = 8. Indeed, Figs 4 and 5, displaying the β-dependence of the cost performance of the new method, do support this explanation.

One may think that the higher β is, the better the cost performance becomes. However, this optimistic expectation is betrayed by the existence of the round-off errors. Refer to Fig. 5, which clearly illustrates that the achievable precision decreases when β increases, say, beyond 16. This is the reason why we set β = 8, a value large enough to give a good cost performance and yet keep the computational precision high.

3.3. Robustness against the errors of the function evaluation

Another important aspect of a practical algorithm is the robustness against the contamination of the function values due to the truncation and/or round-off errors. In order to confirm the toughness of the new method, we deliberately degraded the function values by multiplying them by a random noise such that

f*(x) = f(x)(1 + εR),     (16)

Figure 4: Reduction factor dependence of the new method: the first order derivative. Same as Fig. 1 but for the case f2(x) = 1/x and various values of β, the reduction factor, namely β = 1.1, 1.2, 1.4, 2, 4, 8, 16, 32, and 64.

Figure 5: Reduction factor dependence of the new method: the second order derivative. Same as Fig. 4 but for the second order derivative.

Figure 6: Function error dependence of the new method: the first order derivative. Same as Fig. 1 but for the case f2(x) = 1/x and β = 8 for various values of ε, the magnitude of the random error multiplied to the function value, namely ε = 10^{−6}, 10^{−9}, ..., 10^{−36}.

Figure 7: Function error dependence of the new method: the second order derivative. Same as Fig. 6 but for the second order derivative.

where ε is the magnitude of the noise and R is a uniform random variable taking an arbitrary value in the interval −1 ≤ R ≤ 1. Figs 6 and 7 illustrate that the cost performance of the new method is still good even for such degraded function values. This is mainly due to the complicated termination conditions described in the previous section.
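
For completeness, the degradation of Eq. (16) amounts to a one-line wrapper around the function under test; the sketch below (names are ours) uses the Fortran intrinsic random_number, which yields a deviate uniform in [0, 1) and is therefore rescaled to [−1, 1]:

! Return a function value contaminated with multiplicative noise, Eq. (16):
! f*(x) = f(x) * (1 + eps*R), with R uniform in [-1, 1].
function f_noisy(f, x, eps) result(fstar)
  implicit none
  interface
    function f(t) result(v)
      real(8), intent(in) :: t
      real(8) :: v
    end function f
  end interface
  real(8), intent(in) :: x, eps
  real(8) :: fstar, r
  call random_number(r)            ! r uniform in [0, 1)
  r = 2.0d0*r - 1.0d0              ! rescale to [-1, 1)
  fstar = f(x) * (1.0d0 + eps*r)
end function f_noisy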

3.4. Fidelity of the derivative accuracy

The practical usefulness of any adaptive method lies in the assurance of the quality of the returned results when its specification is externally given. In the present case, this means that the relative errors of the computed derivatives must not exceed the input relative error tolerance, δ. We learn that this requirement is fulfilled by appropriately adjusting the two tunable parameters, C and p, described in the previous section. In fact, their determined values given in Sect. 2.4 guarantee the fidelity conditions, as illustrated in Figs 8 and 9.

3.5. Reliability of error estimate

Finally, we examine the reliability of the error estimate of the calculated derivatives. Figs 10 and 11 compare the estimated errors with the actual errors computed from the analytical solutions for a variety of different conditions: the function type, the function argument, the input relative error tolerance, and the function evaluation error. The graphs indicate that, on average, the returned error estimates are at the same level as the actual errors, although there remains an ambiguity amounting to ±2 digits or so.

Figure 8: Fidelity of the new method: the first order derivative. Same as Fig. 1 but plotted as a function of δ, the input relative error tolerance of the derivative computation, in a log-log manner. The straight line indicates the 100 % fidelity, namely when |δf_n'(x)| = δ.

Figure 9: Fidelity of the new method: the second order derivative. Same as Fig. 8 but for the second order derivative.

Figure 10: Reliability of the error estimation of the new method: the first order derivative. Same as Fig. 8 but plotted are the estimated errors of the calculated first order derivatives, ε1, as a function of the actual errors, |δf_n'(x)|, in a log-log manner. Overlapped are the results for a number of randomly chosen combinations of (i) the function type, whether f1(x) = e^x, f2(x) = 1/x, f3(x) = log x, or f4(x) = 1/√(1 + x²), (ii) the argument x in the interval 1/2 ≤ x ≤ 3/2, (iii) the base-10 logarithm of the input relative error tolerance in the interval −35 ≤ log10 δ ≤ −5, and (iv) the base-10 logarithm of the relative magnitude of the random error in the function evaluation in the interval −35 ≤ log10 ε ≤ −5. The thick straight line indicates the 100 % reliability, namely when |δf_n'(x)| = ε1, and the two thin lines show the cases ε1 = 100|δf_n'(x)| and ε1 = |δf_n'(x)|/100.

Figure 11: Reliability of the error estimation of the new method: the second order derivative. Same as Fig. 10 but for the second order derivative.

4. Conclusion

By replacing the central difference formulas in Ridders' method [21] with the forward difference formulas, we invented a new type of adaptive method to estimate the numerical derivatives of a given function. The combination of a tactful policy to terminate the extrapolation scheme and a relatively large step reduction factor has raised the cost performance of the new method significantly above that of Ridders' method. Also, the new method is robust against the truncation and round-off errors in the function evaluation and is capable of providing correct answers even for piecewise continuous functions.

Appendix A. Sample Fortran programs

Here we present two Fortran functions, qdfsid1 and qdfsid2, as implementations of the new method in the quadruple precision environment (Tables A.1 and A.2). They can be transformed into double precision versions by translating the quadruple precision constants into the corresponding double precision ones and changing the parameters JMAX1 = 8 and JMAX2 = 7 to JMAX1 = 5 and JMAX2 = 4, respectively.
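
As a usage note (this driver is not part of the original appendix and its numerical settings are merely illustrative), qdfsid1 expects the caller to supply the base function value f(x), an initial displacement, and the relative tolerance δ, for instance as follows, assuming a compiler that supports the real*16 extension used in Tables A.1 and A.2:

c     Illustrative driver: differentiate f(x) = 1/x at x = 1 with the
c     tolerance 1e-20 and an initial displacement of about 0.1.
      program demo
      implicit none
      real*16 qdfsid1,myfunc,x,fx,d1,err
      external qdfsid1,myfunc
      x=1.q0
      fx=myfunc(x)
      d1=qdfsid1(myfunc,x,fx,0.1q0,1.q-20,err)
      write(*,*) 'derivative =',d1,'  estimated error =',err
      end

      real*16 function myfunc(x)
      real*16 x
      myfunc=1.q0/x
      return
      end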

References

[1] M. Bartholomew-Biggs, S. Brown, B. Christianson, L. Dixon, Automatic differentiation of algorithms, J. Comp. Appl. Math. 124 (2000) 171–190.

[2] R. Bulirsch, J. Stoer, Introduction to Numerical Analysis, Springer, New York, 1991.

Table A.1: Quadruple precision Fortran function to estimate the first order derivative of a given function numerically. The program assumes that the base function value, fx = f(x), and the relative error tolerance, delta = δ, are externally provided. It returns the estimate as its value, qdfsid1 = f'(x), as well as its error estimate, err.

real*16 function qdfsid1(func,x,fx,h0,delta,err)
integer JMAX1; real*16 func,x,fx,h0,delta,err,BETA,ERRMIN,LOG2
parameter (JMAX1=8,BETA=8.q0,ERRMIN=1.q-35)
parameter (LOG2=0.69314718055994530941723212145818q0)
integer j,kmin,k; real*16 T(JMAX1,JMAX1),hj,errM,errP,factor,errT,errX
hj=2.q0**(floor(log(abs(h0))/LOG2))
if(h0.lt.0.q0) then
hj=-hj
endif
T(1,1)=(func(x+hj)-fx)/hj; qdfsid1=T(1,1)
errM=0.5q0*delta*abs(qdfsid1); errP=1.q-66; err=1.q99
do j=2,JMAX1
hj=hj/BETA; factor=BETA; kmin=1; T(1,j)=(func(x+hj)-fx)/hj
do k=2,j
T(k,j)=T(k-1,j)+(T(k-1,j)-T(k-1,j-1))/(factor-1.q0)
factor=BETA*factor
errT=max(abs(T(k,j)-T(k-1,j)),abs(T(k,j)-T(k-1,j-1)))
if(errT.le.err) then
kmin=k; err=errT; qdfsid1=T(k,j)
endif
enddo
errX=err*err/errP
if(errX.le.errM) then
err=max(ERRMIN,errX); return
endif
if(err.le.errM) return
if(kmin.eq.1) then
err=errX; return
endif
errP=err
enddo
err=max(ERRMIN,errX)
return; end

Table A.2: Quadruple precision Fortran function to estimate the second order derivative of a given function numerically.

real*16 function qdfsid2(func,x,fx,h0,delta,err)
integer JMAX2,j,k,kmin; real*16 func,x,fx,h0,delta,err
real*16 BETA,ERRMIN,LOG2,POW,BETAM1,COEFF
parameter (JMAX2=7,BETA=8.q0,ERRMIN=1.q-35,POW=1.0725q0)
parameter (BETAM1=BETA-1.q0,COEFF=BETAM1*BETA*0.5q0)
parameter (LOG2=0.69314718055994530941723212145818q0)
real*16 T(JMAX2,JMAX2),hj,fx2,fx1,errM,errP,factor,errT,errX
hj=2.q0**(floor(log(abs(h0))/LOG2))
if(h0.lt.0.q0) then
hj=-hj
endif
fx2=func(x+hj); hj=hj/BETA; fx1=func(x+hj)
T(1,1)=((fx2-fx)-BETA*(fx1-fx))/(COEFF*hj*hj); qdfsid2=T(1,1)
errM=5.q0*(delta**POW)*abs(T(1,1)); errP=1.q-66; err=1.q99
do j=2,JMAX2
hj=hj/BETA; fx2=fx1; fx1=func(x+hj); factor=BETA; kmin=1
T(1,j)=((fx2-fx)-BETA*(fx1-fx))/(COEFF*hj*hj)
do k=2,j
T(k,j)=T(k-1,j)+(T(k-1,j)-T(k-1,j-1))/(factor-1.q0)
factor=BETA*factor
errT=max(abs(T(k,j)-T(k-1,j)),abs(T(k,j)-T(k-1,j-1)))
if(errT.le.err) then
kmin=k; err=errT; qdfsid2=T(k,j)
endif
enddo
errX=err*err/errP
if(errX.le.errM) then
err=max(ERRMIN,errX); return
endif
if(err.le.errM) return
if(kmin.eq.1) then
err=errX; return
endif
errP=err
enddo
err=max(ERRMIN,errX)
return; end

[3] T. Fukushima, Computation of a general integral of Fermi-Dirac distribution by McDougall-Stoner method, Appl. Math. Comp. 238 (2014) 485–510.

[4] T. Fukushima, Numerical computation of gravitational field of infinitely thin axisymmetric disc with arbitrary surface mass density profile and its application to preliminary study of rotation curve of M33, Mon. Not. Royal Astron. Soc. 456 (2016) 3702–3714.

[5] T. Fukushima, Mosaic tile model to compute gravitational field for infinitely thin non axisymmetric objects and its application to preliminary analysis of gravitational field of M74, Mon. Not. Royal Astron. Soc. 459 (2016) 3825–3860.

[6] T. Fukushima, Numerical computation of gravitational field for general axisymmetric objects, Mon. Not. Royal Astron. Soc. 462 (2016) 2138–2176.

[7] T. Fukushima, Numerical integration of gravitational field for general three-dimensional objects and its application to gravitational study of grand design spiral arm structure, Mon. Not. Royal Astron. Soc. 463 (2016) 1500–1517.

[8] T. Fukushima, Numerical computation of electromagnetic field for general static and axisymmetric current distribution, Comp. Phys. Comm. (2017) http://dx.doi.org/10.1016/j.cpc.2017.08.007.

[9] T. Fukushima, Precise and fast computation of gravitational field of general finite body and its application to gravitational study of asteroid Eros, Astron. J. 154 (2017) 145.

[10] T. Fukushima, Accurate computation of gravitational field of a tesseroid, J. Geodesy (2017) submitted.

[11] A. Griewank, A. Walther, Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation (2nd ed), SIAM, Philadelphia, 2008.

[12] C. Jordan, Calculus of Finite Differences (2nd ed), Chelsea Publ., New York.

[13] O.D. Kellogg, Foundations of Potential Theory, Springer, Berlin, 1929.

[14] C. Lanczos, Applied Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1956.

[15] J.R. Martins, P. Sturdza, J.J. Alonso, The complex-step derivative approximation, ACM Trans. Math. Softw. 29 (2003) 245–262.

[16] R.D. Neidinger, Introduction to automatic differentiation and MATLAB object-oriented programming, SIAM Rev. 52 (2010) 545–563.

[17] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes: the Art of Scientific Computing (3rd ed), Cambridge Univ. Press, Cambridge, 2007.

[18] S.K. Rangarajan, S.P. Purushothaman, Lanczos' generalized derivative for higher orders, J. Comput. Appl. Math. 177 (2005) 461–465.

[19] L.F. Richardson, The approximate arithmetical solution by finite differences of physical problems including differential equations with an application to the stresses in a masonry dam, Phil. Trans. Royal Soc. A 210 (1911) 459–470.

[20] L.F. Richardson, J.A. Gaunt, The deferred approach to the limit, Phil. Trans. Royal Soc. A 226 (1927) 299–361.

[21] C.J.F. Ridders, Accurate computation of F'(x) and F''(x), Adv. Eng. Softw. 4 (1982) 75–76.

[22] W. Squire, G. Trapp, Using complex variables to estimate derivatives of real functions, SIAM Rev. 40 (1998) 110–112.

[23] S. Wolfram, The Mathematica Book (5th ed), Wolfram Research Inc./Cambridge Univ. Press, Cambridge, 2003.