Eur. Phys. J. Plus (2017) 132: 51 DOI 10.1140/epjp/i2017-11344-9

THE EUROPEAN PHYSICAL JOURNAL PLUS

Regular Article

A new efficient method for solving delay differential equations and a comparison with other methods

Necdet Bildik^a and Sinan Deniz^b

Department of Mathematics, Faculty of Arts and Sciences, Manisa Celal Bayar University, 45040 Manisa, Turkey

Received: 3 November 2016 / Revised: 12 December 2016
Published online: 27 January 2017 – © Società Italiana di Fisica / Springer-Verlag 2017

Abstract. In this paper, a new analytical technique, namely the optimal perturbation iteration method, is presented and applied to delay differential equations to find an efficient algorithm for their approximate solutions. The effectiveness of this method is tested on various examples of linear and nonlinear delay differential equations. The obtained results reveal that the optimal perturbation iteration algorithm is very effective, easy to use and simple to perform.

1 Introduction

A delay differential equation (DDE) is a differential equation in which the derivative of the unknown function at a certain time is given in terms of the values of the function at earlier times. Such systems are widely used to describe many scientific phenomena such as population dynamics, communication networks, economic systems, propagation and transport [1–4]. The general form of a second-order DDE can be given in closed form as

F(y′′, y′, y, yα, x) = 0,   (1)

where α is a given constant, yα = y(αx) and y = y(x). There have been many attempts by many authors to obtain approximate or exact solutions of DDEs with various methods [5–10]. Classical perturbation methods [11–13] are widely used to solve science and engineering problems, and most of them rely on perturbation quantities, i.e. small parameters appearing in the conditions or governing equations. Owing to the existence of these small parameters, perturbation approximations usually have physical meaning. However, many nonlinear problems do not possess such a physical perturbation parameter. Additionally, the sub-problems produced from the original equation might be too complicated to solve, or might even have no solution. Therefore, one cannot always obtain efficient perturbation approximations for an arbitrary nonlinear problem. In the past few decades, much attention has been devoted to overcoming these restrictions of perturbation methods. Many scientists have developed different non-perturbative methods such as the homotopy decomposition method [14,15], the auxiliary equation method [16,17], the homotopy perturbation method [18,19], the Taylor collocation method [20–22], the Sumudu transform method [23] and the Adomian decomposition method [24–27]. Unfortunately, these methods also have difficulties in dealing with nonlinear equations: the convergence regions of all these techniques are generally small relative to the desired solution. The purpose of this work is to introduce the new optimal perturbation iteration method (OPIM) and to apply it to different kinds of DDEs to obtain more accurate approximate solutions. One advantage of OPIM over other perturbation methods is that it does not need a small perturbation parameter. The illustrations also show that the proposed method is more accurate and effective than many other methods in the literature.

a e-mail: [email protected] (corresponding author)
b e-mail: [email protected]
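Before describing the method, it may help to see what a pantograph-type delay term y(αx) means computationally. The sketch below is our own illustration, not part of the paper's method: it integrates such a DDE with forward Euler, interpolating the already-computed history for the delayed argument, and uses example 2 of the paper (eq. (44)), whose exact solution is sin x, as a test problem.

```python
import numpy as np

def euler_pantograph(f, y0, alpha, x_end, n):
    """Forward Euler for the pantograph-type DDE y'(x) = f(x, y(x), y(alpha*x)).

    Because 0 < alpha < 1, the delayed argument alpha*x always falls inside
    the part of the grid that has already been computed, so linear
    interpolation of the stored history is enough.
    """
    xs = np.linspace(0.0, x_end, n + 1)
    h = xs[1] - xs[0]
    ys = np.empty(n + 1)
    ys[0] = y0
    for i in range(n):
        y_delay = np.interp(alpha * xs[i], xs[:i + 1], ys[:i + 1])
        ys[i + 1] = ys[i] + h * f(xs[i], ys[i], y_delay)
    return xs, ys

# Example 2 of the paper: y'(x) = 1 - 2*y(x/2)**2, y(0) = 0, exact solution sin(x).
xs, ys = euler_pantograph(lambda x, y, yd: 1.0 - 2.0 * yd**2, 0.0, 0.5, 3.0, 20000)
print(np.max(np.abs(ys - np.sin(xs))))   # first-order Euler error, small for this step size
```

The first-order error of Euler's method is visible here; the point is only that the delayed value is always available from the stored history, which is what makes pantograph-type DDEs tractable.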


2 Description of the optimal perturbation iteration method

The perturbation iteration method (PIM) was recently constructed by Pakdemirli et al. [28]. It has been used efficiently to solve some strongly nonlinear systems and yields very accurate results [29–31]. In this section, we describe the basic concept of OPIM, which builds on the perturbation iteration technique, by the following formulation:

a) To minimize the amount of computational work, rewrite eq. (1) as

F(y′′, y′, y, yα, ε, x) = A + g(x) = 0,   x ∈ D,   (2)

where g(x) is a known function, ε is the auxiliary perturbation parameter, D is the domain, yα = y(αx) and 0 < α < 1. Moreover, (2) can be divided as A = L + N, where L is the simpler linear part, which can be easily examined, and N is the remaining part, which is critical for OPIM. Bear in mind that we have great freedom in choosing the linear part L.

b) Take one correction term from the perturbation expansion as

yn+1 = yn + ε(yc)n.   (3)

Substituting (3) into (2) and expanding the remaining part N in a Taylor series up to first derivatives generates the following optimal perturbation iteration algorithm (OPIA):

N + Ny (yc)n ε + Nyα ((yα)c)n + Ny′ ((yc)′)n ε + Ny′′ ((yc)′′)n ε + Nε ε = −L − g(x).   (4)

Note here that all derivatives and functions are computed at ε = 0. To construct the iterative scheme, the first correction term (yc)0 is obtained from (4) by using an initial guess y0 and the prescribed condition(s). Equation (4) may seem sophisticated at first, but it should be remembered that we use the general form of a second-order differential equation to express the presented technique. In fact, most well-known differential equations involve only some of the terms y, y′, y′′, so (4) reduces to a simple expression in many problems.

c) Use the equation

yn+1 = yn + Sn(yc)n   (5)

to increase the accuracy of the results and the effectiveness of the method. Here S0, S1, S2, . . . are convergence-control parameters which allow us to adjust the convergence. Proceeding with n = 0, 1, . . ., the approximate solutions are found as

y1 = y1(x; S0) = y0 + S0(yc)0,
y2 = y2(x; S0, S1) = y1 + S1(yc)1,
⋮
ym = ym(x; S0, . . . , Sm−1) = ym−1 + Sm−1(yc)m−1.   (6)

d) Substitute the approximate solution ym into (2); the general problem then yields the residual

R(x, S0, . . . , Sm−1) = A(ym(x, S0, . . . , Sm−1)) + g(x).   (7)

Clearly, when R(x, S0, . . . , Sm−1) = 0, the approximation ym(x, S0, . . . , Sm−1) is the exact solution. Generally this does not happen for nonlinear equations, but the functional

J(S0, . . . , Sm−1) = ∫_a^b R²(x, S0, . . . , Sm−1) dx   (8)

can be minimized,

where a and b are selected from the domain of the problem. The optimum values of S0, S1, . . . are determined from the conditions

dJ/dS0 = dJ/dS1 = . . . = dJ/dSm−1 = 0.   (9)

The constants S0, S1, . . . can also be obtained from the collocation conditions

R(x0, Si) = R(x1, Si) = . . . = R(xm−1, Si) = 0,   i = 0, 1, . . . , m − 1,   (10)

where xi ∈ (a, b). Putting these constants into the last of eqs. (6), the m-th–order approximate solution is determined. For much more information about finding these constants, see [32].
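Step (d) can be sketched numerically: build an approximation family from fixed correction terms, form the residual (7), and minimize the functional (8) over the convergence-control parameters. The toy problem, the two correction terms and all function names below are illustrative assumptions of ours, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

# Hypothetical toy problem: y' + y = 0, y(0) = 1 on [0, 1], with two
# hand-picked correction terms -x and x**2/2, so that
# y2(x; S0, S1) = 1 - S0*x + S1*x**2/2 plays the role of eq. (6).
def y2(x, S):
    return 1.0 - S[0] * x + S[1] * x**2 / 2.0

def residual(x, S):
    dy = -S[0] + S[1] * x          # derivative of y2
    return dy + y2(x, S)           # R = y2' + y2, as in eq. (7)

def J(S):
    # eq. (8): J(S0, S1) = integral of R^2 over [0, 1]
    return quad(lambda x: residual(x, S) ** 2, 0.0, 1.0)[0]

S_opt = minimize(J, x0=[1.0, 1.0]).x   # conditions (9), solved numerically
xs = np.linspace(0.0, 1.0, 201)
err = np.max(np.abs(y2(xs, S_opt) - np.exp(-xs)))
print(S_opt, err)
```

The optimized parameters pull the quadratic approximation close to the exact solution e^(−x) over the whole interval, which is exactly the effect the convergence-control parameters Sn are meant to produce.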


3 Convergence of OPIM

In order to examine the convergence of the presented method more easily, we now write the obtained solution in a different way. Let us set

y0 = H0,   Sn(yc)n = Hn+1;   (11)

then we have

y0 = H0,
y1 = y1(x; S0) = y0 + S0(yc)0 = H0 + H1,
y2 = y2(x; S0, S1) = y1 + S1(yc)1 = H0 + H1 + H2,
⋮
yn = yn(x; S0, . . . , Sn−1) = H0 + H1 + . . . + Hn.   (12)

Therefore, the n-th–order approximate solution can also be represented in the form

yn(x; S0, . . . , Sn−1) = H0(x) + Σ_{j=1}^{n} Hj(x; S0, . . . , Sj−1).   (13)

Theorem 1. Let B be a Banach space with a suitable norm ‖·‖ over which the series (13) is defined, and assume that the initial function y0 = H0 remains inside the ball of the solution y(x). The series (13) converges if there exists β, 0 < β < 1, such that

‖Hn+1‖ ≤ β ‖Hn‖.   (14)

Proof. We first define the sequence of partial sums

A0 = H0,
A1 = H0 + H1,
A2 = H0 + H1 + H2,
⋮
An = H0 + H1 + H2 + . . . + Hn.   (15)

We have to show that {An}∞n=0 is a Cauchy sequence in B. Observe that

‖An+1 − An‖ = ‖Hn+1‖ ≤ β‖Hn‖ ≤ β²‖Hn−1‖ ≤ . . . ≤ β^{n+1}‖H0‖.   (16)

For every n, k ∈ ℕ with n ≥ k, we have

‖An − Ak‖ = ‖(An − An−1) + (An−1 − An−2) + . . . + (Ak+1 − Ak)‖
  ≤ ‖An − An−1‖ + ‖An−1 − An−2‖ + . . . + ‖Ak+1 − Ak‖
  ≤ β^n ‖H0‖ + β^{n−1} ‖H0‖ + . . . + β^{k+1} ‖H0‖
  = ((1 − β^{n−k}) / (1 − β)) β^{k+1} ‖H0‖.   (17)

Since 0 < β < 1, we obtain from (17)

lim_{n,k→∞} ‖An − Ak‖ = 0.   (18)

Thus {An}∞n=0 is a Cauchy sequence in B, which implies that the OPIM solution (13) is convergent.

Theorem 2. If the initial function y0 = H0 remains inside the ball of the solution y(x), then An = Σ_{i=0}^{n} Hi also remains inside the ball of the solution.


Proof. Suppose that

H0 ∈ Br(y),   (19)

where

Br(y) = {H ∈ B : ‖y − H‖ < r}   (20)

is the ball of the solution y(x). By hypothesis, y = lim_{n→∞} An = Σ_{i=0}^{∞} Hi, and from theorem 1 we have

‖y − An‖ ≤ β^{n+1} ‖H0‖ < ‖H0‖ < r,   (21)

where β ∈ (0, 1) and n ∈ ℕ. This completes the proof.

It should also be noted that the choice of the initial guess function directly affects the convergence of the obtained solution. However, there is no general theorem for the choice of the initial function.

Theorem 3. Assume that the obtained series Σ_{i=0}^{∞} Hi converges to the solution y(x). If the truncated series Σ_{i=0}^{k} Hi is used as an approximation to the solution y(x) of problem (2), then the maximum error is bounded by

Ek(x) ≤ (β^{k+1} / (1 − β)) ‖H0‖.   (22)

Proof. From (17) we have

‖An − Ak‖ ≤ ((1 − β^{n−k}) / (1 − β)) β^{k+1} ‖H0‖   (23)

for n ≥ k. Knowing that

y(x) = lim_{n→∞} An(x) = Σ_{i=0}^{∞} Hi,   (24)

we can write

‖y(x) − Σ_{i=0}^{k} Hi‖ ≤ ((1 − β^{n−k}) / (1 − β)) β^{k+1} ‖H0‖,   (25)

and, since 1 − β^{n−k} < 1, it follows that

Ek(x) = ‖y(x) − Σ_{i=0}^{k} Hi‖ ≤ (β^{k+1} / (1 − β)) ‖H0‖.   (26)

Here β is selected as β = max{βi, i = 0, 1, . . . , n}, where

βi = ‖Hi+1‖ / ‖Hi‖.   (27)
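The ratio β of eqs. (14) and (27) and the a priori bound (22) can be checked on a concrete series. The choice Hi(x) = x^i/i!, whose partial sums converge to e^x on [0, 1/2], is a hypothetical illustration of ours, not from the paper.

```python
import numpy as np
from math import factorial

# Illustrative series: H_i(x) = x**i / i!, so the partial sums A_k
# converge to y(x) = exp(x) on [0, 1/2].
xs = np.linspace(0.0, 0.5, 501)
H = [xs**i / factorial(i) for i in range(12)]
norms = [np.max(np.abs(h)) for h in H]             # sup norm on [0, 1/2]
betas = [norms[i + 1] / norms[i] for i in range(len(H) - 1)]
beta = max(betas)                                   # eq. (27): beta = max beta_i

k = 5
A_k = sum(H[: k + 1])
actual = np.max(np.abs(np.exp(xs) - A_k))
bound = beta ** (k + 1) / (1.0 - beta) * norms[0]   # eq. (22)
print(beta, actual, bound)   # the bound dominates the actual truncation error
```

Here β = 1/2, so the bound 2^(−k) decays geometrically while the actual error decays even faster, consistent with the bound being an overestimate.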

4 Examples

In this section, we solve some important DDEs by using OPIM. The obtained results show that OPIM gives better solutions than many other methods in the literature.

Example 1. Consider the following delay differential equation [33]:

y′′(x) + y(x/2) + y²(x) = sin⁴(x) + sin²(x/2) + 8;   y(0) = 2, y′(0) = 0, x ≥ 0.   (28)

The exact solution of the problem is

y(x) = (5 − cos 2x) / 2.   (29)
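The claimed exact solution (29) can be verified symbolically against eq. (28); this is a check of ours, not in the paper.

```python
import sympy as sp

# Verify that y(x) = (5 - cos 2x)/2 satisfies eq. (28) and its conditions.
x = sp.symbols('x')
y = (5 - sp.cos(2 * x)) / 2

lhs = sp.diff(y, x, 2) + y.subs(x, x / 2) + y**2
rhs = sp.sin(x)**4 + sp.sin(x / 2)**2 + 8
diff = sp.expand((lhs - rhs).rewrite(sp.exp))   # exact cancellation in exponential form
print(diff, y.subs(x, 0), sp.diff(y, x).subs(x, 0))   # 0 2 0
```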


Equation (28) can be rewritten as

y′′(x) + ε[y(x/2) + y²(x)] = sin⁴(x) + sin²(x/2) + 8.   (30)

To get rid of unnecessary computational work, (30) can be decomposed as

L + N + g(x) = 0,   (31)

where

L = y′′(x),   N = ε[y(x/2) + y²(x)],   g(x) = −(sin⁴(x) + sin²(x/2) + 8).   (32)

For this problem the terms y′, y′′ do not appear in the remaining part N. Thus, (4) simplifies to

N + Ny (yc)n ε + Nyα ((yα)c)n + Nε ε = −L − g(x).   (33)

Proceeding as described in sect. 2, with the help of eqs. (3), (32), (33) and setting ε = 1, we obtain the algorithm

(yc)′′n(x) = −(yn)′′(x) − yn(x/2) − yn²(x) + sin⁴(x) + sin²(x/2) + 8.   (34)

y0 = 2 may be selected as an initial function for the iterations. Substituting y0 into eq. (34) yields the first-order problem

(yc)′′0 = sin⁴(x) + sin²(x/2) + 2,   yc(0) = yc′(0) = 0,   (35)

from which the first correction term is obtained. Therefore, the first approximate solution becomes

y1 = 2 + (S0/128)[184x² + 64 cos(x) + 16 cos(2x) − cos(4x) − 79].   (36)
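The passage from (35) to (36) is a double integration with homogeneous initial conditions, which can be reproduced symbolically (a check of ours):

```python
import sympy as sp

# Integrate the right-hand side of eq. (35) twice with
# (y_c)(0) = (y_c)'(0) = 0 and compare with the bracket of eq. (36).
x = sp.symbols('x')
rhs = sp.sin(x)**4 + sp.sin(x / 2)**2 + 2

yc_prime = sp.integrate(rhs, (x, 0, x))   # enforces (y_c)'(0) = 0
yc = sp.integrate(yc_prime, (x, 0, x))    # enforces (y_c)(0) = 0

closed_form = (184 * x**2 + 64 * sp.cos(x) + 16 * sp.cos(2 * x)
               - sp.cos(4 * x) - 79) / 128
diff = sp.expand((yc - closed_form).rewrite(sp.exp))
print(diff)   # 0
```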

Note here that y1 does not represent the first correction term; rather, it is the approximate solution after the first iteration. Performing the required calculations of sect. 2, the second-order approximate solution reads

y2 = 2 + (S0/128)[184x² + 64 cos(x) + 16 cos(2x) − cos(4x) − 79]
  + S1[−291225600 + 678297600x² + 235929600 cos(x) + 58982400 cos(2x) − 3686400 cos(4x)
  + 471859200 + 766771200S0 cos(x) + 49766400S0x² − 240230400S0x⁴ + 943718400S0 cos(x/2)
  − 921600S0 cos(2x) + 2764800S0 cos(4x) + 4394932703S0² − 121212000S0²x² + 69772800S0²x⁴
  − 32501760S0²x⁶ − 4331520000S0² cos(x) + 678297600S0²x² cos(x) − 67161600S0² cos(2x)
  + 42393600S0²x² cos(2x) + 3072000S0² cos(3x) + 763200S0² cos(4x) − 1712332800S0
  + 662400S0²x² cos(4x) − 2713190400S0²x sin(x) + 662400S0²x sin(4x) − 73728S0² cos(5x)
  − 12800S0² cos(6x) + 225S0² cos(8x) − 169574400S0²x sin(x) cos(x)].   (37)

Using the same procedure, higher iterations can be computed. Evidently, as the number of iterations increases, the approximate solutions become more and more cumbersome, which requires symbolic computer programs; Mathematica 9.0 is used to handle the lengthy calculations in this study. For the second-order approximation the residual is

R(x, S0, S1) = A(y2(x, S0, S1)) + g(x)
  = y2′′(x; Si) + y2(x/2; Si) + y2²(x; Si) − sin⁴(x) − sin²(x/2) − 8.   (38)

For a = 0 and b = 3, the functional (8) becomes

J(S0, S1) = ∫_0^3 R²(x, S0, S1) dx.   (39)


Fig. 1. Comparison between the third-order approximate solutions obtained by OPIM (•), VIM (), PIM () and the exact solution (–) for example 1.

The unknown parameters S0, S1 follow from the conditions

dJ/dS0 = dJ/dS1 = 0   (40)

as

S0 = −85.364921,   S1 = 0.983549.   (41)

By putting these values of S0, S1 into eq. (37), the second-order approximate solution becomes

y2(x) = −493.684x⁶ + 1102.56x⁴ − 1971.3x² + 643.936x² cos(2x) − 167.921 cos(x/2)
  − 1030.53 cos(2x) + 46.662 cos(3x) + 11.7599 cos(4x) + 67115.4 + 10.0615x² cos(4x)
  + 10303x² − 2575.74x sin(x) − 65972.1 cos(x) − 41211.9x sin(x) + 10.0615x sin(4x)
  − 1.11989 cos(5x) − 0.194425 cos(6x) + 0.00341763 cos(8x).   (42)

In order to get better results, one needs to keep iterating. Repeating the same steps, the next approximate solution is obtained as

y3(x) = 84753 − 1863.85x² + 1009.92x⁴ − 324.451x⁶ + 142.476 cos(x/2) + 963.959 cos(2x)
  + 596.948x² cos(2x) + 38.9874 cos(3x) + 15.94 cos(4x) − 18.0595x² cos(4x) − 2.73838 cos(5x)
  + 4.39203 cos(6x) − 5.46736 × 10⁶ x⁸ cos(10x) − 1.82906 × 10⁶ x⁶ cos(12x) − 1.6751 × 10⁶ x² cos(10x)
  − 50009.4x sin(x) + 10106x² − 2865.95x sin(x) − 58984.9 cos(x) + 1.50166 × 10⁷ x⁶ sin(6x)
  − 4.05453 × 10⁶ x² sin(7x) − 864109x² sin(10x) + 9.90938x sin(4x) + 2.6728 × 10⁶ x sin(5x)
  − 4.67333 × 10⁶ x sin(6x) − 619643x cos(9x) + 0.00848737 cos(8x) + 2.64757 × 10⁶ x sin(8x).   (43)

Example 1 has also been solved by the perturbation iteration technique [5] and the variational iteration method [33]. Unfortunately, eqs. (34)–(37) of [5] contain some typographical errors: the term sin(x) should read sin(x/2). Figure 1 shows some important comparisons between the OPIM solutions and those of other methods. Apparently, the approximate solutions obtained by OPIM are more accurate than those in [5,33]. It is also observed from fig. 1 that the new OPIM solutions are valid in a larger region for this problem. Additionally, fig. 2 displays the absolute errors of the OPIM solutions. It is clear from fig. 2 that as the order of the approximation increases, the accuracy increases and the solution converges.

Example 2. Consider the following nonlinear DDE [10,34]:

y′(x) + 2y²(x/2) − 1 = 0,   y(0) = 0, x ≥ 0,   (44)

which has the exact solution y(x) = sin x.


Fig. 2. Absolute errors obtained from |yexact − yOPIM| for example 1. (a) Errors for the third-order approximate solution. (b) Errors for the fourth-order approximate solution.

Equation (44) can be reconsidered as

L + N + g(x) = y′(x) + εy²(x/2) − 1 = 0,   (45)

where

L = y′(x),   N = εy²(x/2),   g(x) = −1.   (46)

The terms y, y′, y′′ do not appear in the crucial part N. Thereby, (4) takes the very simple form

N + Nyα ((yα)c)n + Nε ε = −L − g(x).   (47)

Using eqs. (3), (46), (47) and setting ε = 2, the OPIA is established as

(yc)′n(x) = −(yn)′(x) − 2yn²(x/2) + 1.   (48)

y0 = x can be selected as a starting function. Putting y0 into eq. (48) reveals the first-order problem

(yc)′0 = −x²/2,   yc(0) = 0,   (49)

from which the first correction term is obtained, giving

y1 = x − S0 x³/6.   (50)

Making the relevant calculations, one can reach the following successive iterations:

y2 = x − S0x³/6 − (S1x³/40320)(5S0²x⁴ − 336S0x² − 6720S0 + 6720),   (51)

y3 = x − S0x³/6 − (S1x³/40320)(5S0²x⁴ − 336S0x² − 6720S0 + 6720)
  + (S2/5713316492083200)[−952219415347200x³ + 952219415347200S0x³ + 47610970767360S0x⁵
  − 708496588800S0²x⁷ + 952219415347200S1x³ + 2125489766400S0²S1x⁷ − 1842091130880S0S1x⁷
  + 47610970767360S1x⁵ − 952219415347200S0S1x³ + 15006351360S0²S1x⁹ − 13776322560S0S1²x⁹
  − 708496588800S0²S1²x⁷ + 1416993177600S0S1²x⁷ + 708496588800S1²x⁷ − 715S0⁴S1²x¹⁵
  + 443520S0³S1²x¹³ + 41932800S0³S1²x¹¹ − 112379904S0²S1²x¹¹ − 13776322560S0²S1²x⁹
  − 95221941534720S0S1x⁵ − 41932800S0³S1x¹¹].   (52)
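Equation (51) can be reproduced by applying the algorithm (48) once more, starting from y1 of eq. (50), and integrating with a vanishing correction term at x = 0; a sympy sketch of ours:

```python
import sympy as sp

# Apply the OPIA of eq. (48) to y1 = x - S0*x**3/6 and rebuild eq. (51).
x, S0, S1 = sp.symbols('x S0 S1')
y1 = x - S0 * x**3 / 6

rhs = -sp.diff(y1, x) - 2 * y1.subs(x, x / 2)**2 + 1
yc1 = sp.integrate(rhs, (x, 0, x))        # correction term with y_c(0) = 0
y2 = y1 + S1 * yc1                        # eq. (5) with n = 1

paper_y2 = x - S0 * x**3 / 6 - S1 * x**3 / 40320 * (
    5 * S0**2 * x**4 - 336 * S0 * x**2 - 6720 * S0 + 6720)
diff = sp.expand(y2 - paper_y2)
print(diff)   # 0
```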

Substituting the approximate solution y3 into (7) gives the residual

R(x, S0, S1, S2) = L(y3(x, S0, S1, S2)) + N(y3(x, S0, S1, S2)) + g(x).   (53)


Fig. 3. Comparison between the third-order approximate solutions obtained by different methods and the exact solution (–) for example 2.

Fig. 4. Absolute errors of OPIM solutions for example 2. (a) Third-order approximate solution. (b) Fourth-order approximate solution.

Correspondingly, the functional over the domain between a = 0 and b = 5 is

J(S0, S1, S2) = ∫_0^5 R²(x, S0, S1, S2) dx.   (54)

Next, following the procedure presented in sect. 2 to obtain the unknown constants, we get

S0 = 24.0203622551,   S1 = −18.5207713652,   S2 = 6.8054104457.   (55)

Therefore, the third-order approximate solution takes the form

y3(x) = 6.665938894983069 × 10⁻¹³ x¹⁵ + 1.5912260484733409 × 10⁻¹⁰ x¹³ − 2.5040357365602213 × 10⁻⁸ x¹¹
  + 2.7556782518571276 × 10⁻⁶ x⁹ − 0.000198413x⁷ + 0.00833333x⁵ − 0.166667x³ + x.   (56)
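The coefficients in (56) can be compared term by term with the Maclaurin series of sin x, which is the exact solution of (44); a small check of ours:

```python
import numpy as np
from math import factorial

# Coefficients of y3 in eq. (56), ordered by powers x, x^3, ..., x^15,
# next to the Maclaurin coefficients of sin x.
opim = [1.0, -0.166667, 0.00833333, -0.000198413, 2.7556782518571276e-6,
        -2.5040357365602213e-8, 1.5912260484733409e-10, 6.665938894983069e-13]
taylor = [(-1.0)**k / factorial(2 * k + 1) for k in range(8)]
rel_err = [abs(a - b) / abs(b) for a, b in zip(opim, taylor)]
print(max(rel_err[:5]))   # the low-order coefficients reproduce sin x closely
```

The agreement degrades only in the highest powers, which carry negligible weight on the optimization interval.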

This problem has been examined by many authors via the Adomian decomposition method (ADM), the homotopy analysis method (HAM), the optimal homotopy asymptotic method (OHAM) and the perturbation iteration method (PIM) [5,6,10,34]. Figure 3 shows a comparison between the third-order approximate solutions of these methods and the presented technique. It is clear from fig. 3 that the results obtained by OPIM agree very well with the exact solution of the problem and that the new solutions are valid in a larger region. Figure 4 is sketched to show the absolute errors of the OPIM approximate solutions. One can easily deduce that the OPIM solutions are more accurate and effective than other methods in the literature [5,6,10,34]. Finally, we can say that this method may be preferred over the others for solving these types of equations.

Example 3. Consider the first-order linear DDE

y′(x) + y(x) − (1/10) y(x/5) = −e^(−x/5)/10,   y(0) = 1, x ≥ 0,   (57)

which has the exact solution y(x) = e^(−x) [35].
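A quick symbolic check of ours that e^(−x) satisfies eq. (57):

```python
import sympy as sp

# Check that y(x) = exp(-x) satisfies the DDE of eq. (57).
x = sp.symbols('x')
y = sp.exp(-x)
lhs = sp.diff(y, x) + y - y.subs(x, x / 5) / 10
diff = sp.simplify(lhs + sp.exp(-x / 5) / 10)
print(diff, y.subs(x, 0))   # 0 1
```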


Fig. 5. Comparison between the fourth-order approximate solutions obtained by OPIM (•), VIM-TCM (), PIM () and the exact solution (–) for example 3.

Equation (57) can be rewritten as

L + N + g(x) = y′(x) + y(x) − εy(x/5) + e^(−x/5)/10 = 0,   (58)

where L = y′(x) + y(x), N = −εy(x/5), g(x) = e^(−x/5)/10, and ε = 0.1 is taken as the perturbation parameter. Following the procedure of the previous examples, the OPIA is obtained in the form

(yc)′n(x) + (yc)n(x) = −(yn)′(x) − yn(x) + 0.1yn(x/5) − e^(−x/5)/10.   (59)

By choosing the starting guess y0 = 1 and proceeding in a similar way, we get the following approximate results:

y1(x) = 1 − 0.9S0 e^(−x)(0.138889e^(0.8x) + e^x − 1.13889),   (60)

y2(x) = 1 − 0.9S0 e^(−x)(0.138889e^(0.8x) + e^x − 1.13889)
  + 0.81S1 e^(−x)(0.3125S0 e^(0.8x) − 0.0160751S0 e^(0.96x) + S0 e^x − 1.29642S0
  − 0.154321e^(0.8x) − 1.11111e^x + 1.26543),   (61)

y3(x) = 1 − 0.9S0 e^(−x)(0.138889e^(0.8x) + e^x − 1.13889)
  + 0.81S1 e^(−x)(0.3125S0 e^(0.8x) − 0.0160751S0 e^(0.96x) + S0 e^x − 1.29642S0
  − 0.154321e^(0.8x) − 1.11111e^x + 1.26543)
  − 0.729S2 e^(−x)(−1.40604 + 0.171468e^(0.8x) + 1.23457e^x + 1.44047S0 − 0.347222S0 e^(0.8x)
  + 0.0178612S0 e^(0.96x) − 1.11111S0 e^x − 0.347222S1 e^(0.8x) + 0.0178612S1 e^(0.96x)
  − 1.11111S1 e^x + 0.527281S0S1 e^(0.8x) − 0.0540302S0S1 e^(0.96x) + 0.00180053S0S1 e^(0.992x)
  + S0S1 e^x − 1.47505S0S1 + 1.44047S1).   (62)

Putting the approximate solution y3 into (7) yields the residual and the functional

R(x, S0, S1, S2) = A(y3(x, S0, S1, S2)) + g(x),   J(S0, S1, S2) = ∫_0^3 R²(x, S0, S1, S2) dx,   (63)

for a = 0 and b = 3 in eq. (8). Using Mathematica, the following optimal values of the Si are obtained:

S0 = 1.053362,   S1 = −0.983011,   S2 = 0.201563.   (64)

Substituting (64) into (62), we have

y3(x) = e^(−x) − 9.137243808510038 × 10⁻¹⁴ e^(−0.2x) + 1.68432830796175 × 10⁻¹² e^(−0.04x)
  − 6.395485291678894 × 10⁻¹² e^(−0.008x) + 4.800126768309014 × 10⁻¹².   (65)
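The optimized approximation (65) can be evaluated directly against the exact solution e^(−x) on the optimization interval; a numeric check of ours:

```python
import numpy as np

# Evaluate eq. (65) against the exact solution exp(-x) on [0, 3].
def y3(x):
    return (np.exp(-x)
            - 9.137243808510038e-14 * np.exp(-0.2 * x)
            + 1.68432830796175e-12 * np.exp(-0.04 * x)
            - 6.395485291678894e-12 * np.exp(-0.008 * x)
            + 4.800126768309014e-12)

xs = np.linspace(0.0, 3.0, 301)
err = np.max(np.abs(y3(xs) - np.exp(-xs)))
print(err)   # maximum absolute error on [0, 3]
```

The tiny residual terms nearly cancel everywhere on [0, 3], which is consistent with the error plots of fig. 6.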

Higher iterations can be reached by using a computer program. This problem has also been investigated via the Taylor collocation method (TCM) [36], the variational iteration method (VIM) [37] and the perturbation iteration method (PIM) [5]. Figure 5 displays a comparison between the fourth-order approximate solutions of these methods and OPIM. One can easily deduce from fig. 5 that the OPIM solutions agree very well with the exact solution and are also valid in a larger region. Absolute errors are shown in fig. 6 for different orders of approximation of the presented method. Noticeably, the absolute errors of OPIM are smaller than those of the TCM [36], VIM [37] and PIM [5] even for the second-order approximation.


Fig. 6. Absolute errors obtained from |yexact − yOPIM| for example 3. (a) Errors for the second-order approximate solution. (b) Errors for the fourth-order approximate solution.

5 Conclusion

The optimal perturbation iteration method has been introduced and applied for the first time to obtain new approximate solutions of delay differential equations. The technique is very simple, as it does not require any discretization, unlike many numerical methods. OPIM has been tested on various linear and nonlinear initial value problems for delay differential equations and is seen to yield satisfactory results. The figures show that the method converges quickly even at low orders of approximation. The presented method also allows us to control the convergence by optimally determining the auxiliary parameters Sn. Consequently, this paper confirms that the proposed method is a powerful and effective tool for finding approximate solutions of differential equations, which are of great importance in many fields of science and engineering.

References

1. Yang Kuang, Delay Differential Equations: With Applications in Population Dynamics, Mathematics in Science and Engineering, Vol. 191 (Academic Press, 1993).
2. M.Z. Liu, Dongsong Li, Appl. Math. Comput. 155, 853 (2004).
3. D.S. Li, M.Z. Liu, J. Harbin Inst. Technol. 32, 1 (2000).
4. Silviu-Iulian Niculescu, Delay Effects on Stability: A Robust Control Approach, Lecture Notes in Control and Information Sciences, Vol. 269 (Springer Science & Business Media, 2001).
5. M. Mustafa Bahşi, Mehmet Çevik, J. Appl. Math. 2015, 139821 (2015).
6. N. Ratib Anakira, A.K. Alomari, Ishak Hashim, Math. Probl. Eng. 2013, 498902 (2013).
7. Fatemeh Shakeri, Mehdi Dehghan, Math. Comput. Modell. 48, 486 (2008).
8. F. Karakoç, H. Bereketoğlu, Int. J. Comput. Math. 86, 914 (2009).
9. S. Sedaghat, Y. Ordokhani, Mehdi Dehghan, Commun. Nonlinear Sci. Numer. Simul. 17, 4815 (2012).
10. David J. Evans, K.R. Raslan, Int. J. Comput. Math. 82, 49 (2005).
11. Ali H. Nayfeh, Perturbation Methods (John Wiley & Sons, 2008).
12. E. John Hinch, Perturbation Methods (Cambridge University Press, 1991).
13. Jirair Kevorkian, Julian D. Cole, Perturbation Methods in Applied Mathematics, Applied Mathematical Sciences, Vol. 34 (Springer Science & Business Media, 2013).
14. Abdon Atangana, Adem Kılıçman, Analytical solutions of boundary value problems of 2D and 3D Poisson and biharmonic equations by homotopy decomposition method, Abstract and Applied Analysis, Vol. 2013 (Hindawi Publishing Corporation, 2013).
15. Abdon Atangana, Samir Brahim Belhaouari, Math. Probl. Eng. 2013, 318590 (2013).
16. Zehra Pınar, Turgut Öziş, Nonlinear Anal. Real World Appl. 23, 9 (2015).
17. Zehra Pınar, Abhishek Dutta, Guido Bény, Turgut Öziş, Appl. Math. Inf. Sci. 9, 2467 (2015).
18. Mehmet Giyas Sakar, Fatih Uludag, Fevzi Erdogan, Appl. Math. Model. 40, 6639 (2016).
19. Asma Ali Elbeleze, Adem Kılıçman, Bachok M. Taib, Math. Probl. Eng. 2013, 524852 (2013).
20. N. Bildik, S. Deniz, Sci. Iran. Trans. D: Comput. Sci. Eng. Electr. Eng. 22, 1052 (2015).
21. Sinan Deniz, Necdet Bildik, Int. J. Model. Optim. 4, 292 (2014).
22. Mehmet Sezer, Int. J. Math. Educ. Sci. Technol. 25, 625 (1994).
23. Hasan Bulut, Haci Mehmet Baskonus, Fethi Bin Muhammad Belgacem, The analytical solution of some fractional ordinary differential equations by the Sumudu transform method, Abstract and Applied Analysis, Vol. 2013 (Hindawi Publishing Corporation, 2013).
24. Hasan Bulut, Mahmut Ergüt, Vedat Asil, Roza Horvath Bokor, Appl. Math. Comput. 153, 733 (2004).
25. Saeid Abbasbandy, Appl. Math. Comput. 145, 887 (2003).
26. Necdet Bildik, Ali Konuralp, Int. J. Nonlinear Sci. Numer. Simul. 7, 65 (2006).
27. Necdet Bildik, Ali Konuralp, Funda Orakçı Bek, Semih Küçükarslan, Appl. Math. Comput. 172, 551 (2006).
28. Yiğit Aksoy, Mehmet Pakdemirli, Comput. Math. Appl. 59, 2802 (2010).
29. Mehmet Şenol, İhsan Timuçin Dolapçı, Yiğit Aksoy, Mehmet Pakdemirli, Perturbation-iteration method for first-order differential equations and systems, Abstract and Applied Analysis, Vol. 2013 (Hindawi Publishing Corporation, 2013).
30. M. Khalid, Mariam Sultana, Faheem Zaidi, Fareeha Sami Khan, Int. J. Comput. Appl. 114, 1 (2015).
31. İhsan Timuçin Dolapçı, Mehmet Şenol, Mehmet Pakdemirli, J. Appl. Math. 2013, 682537 (2013).
32. N. Herişanu, Vasile Marinca, Comput. Math. Appl. 60, 1607 (2010).
33. Hongliang Liu, Aiguo Xiao, Lihong Su, J. Appl. Math. 2013, 634670 (2013).
34. A.K. Alomari, Mohd Salmi Md Noorani, R. Nazar, Acta Appl. Math. 108, 395 (2009).
35. E.H. Doha, A.H. Bhrawy, D. Baleanu, R.M. Hafez, Appl. Numer. Math. 77, 43 (2014).
36. Mehmet Sezer, Salih Yalçınbaş, Mustafa Gülsu, Int. J. Comput. Math. 85, 1055 (2008).
37. Abbas Saadatmandi, Mehdi Dehghan, Comput. Math. Appl. 58, 2190 (2009).