# Novel approach for solving higher-order differential equations with applications to the Van der Pol and Van der Pol–Duffing equations

## Abstract

### Background

Numerical methods are used to solve differential equations, but few are effective for nonlinear ordinary differential equations (ODEs) of order higher than one. This paper proposes a new method for such ODEs, based on Taylor series expansion. The new method is a second-order method for second-order ODEs and is equivalent to the well-known central difference method. It is also simple to implement for higher-order differential equations. The proposed technique was applied to solve the Van der Pol and Van der Pol–Duffing equations. It is stable over a wide range of nonlinearity and produces accurate and reliable results. For the self-excited Van der Pol equation, the proposed technique was applied with different values of nonlinear damping.

### Results

The results were compared with those obtained using the ODE15s solver in MATLAB. The two sets of results showed excellent agreement. For the forced Van der Pol–Duffing equation, the proposed technique was applied with different values of exciting force amplitude and frequency. It was found that for certain conditions, the solution obtained using the proposed technique differed from that obtained using ODE15s.

### Conclusions

The solution obtained using the proposed technique showed good agreement with the solutions obtained using ODE45 and fourth-order Runge–Kutta. The results show that the proposed approach is very simple to apply and produces acceptable error. It is a powerful and versatile tool for solving high-order nonlinear differential equations accurately.

## 1 Background

Ordinary differential equations (ODEs) are used to model a broad range of problems, such as the motion of objects, the spread of diseases, and the behavior of financial markets. Many ODEs are nonlinear. Nonlinear ODEs can be difficult to solve analytically, so numerical methods are often used to approximate their solutions. However, there are few effective numerical methods for solving nonlinear ODEs, especially those of higher order. The most popular methods are the Runge–Kutta methods [1, 2] and the predictor–corrector methods based on the Adams–Bashforth and Adams–Moulton methods [3, 4], or their variations [5,6,7,8].

Runge–Kutta methods are a family of numerical methods that are commonly utilized to solve ODEs. They are known for their accuracy and efficiency, and they can be used to solve both linear and nonlinear ODEs.

Predictor–corrector methods are another family of numerical methods that can be used to solve ODEs. They are based on using a predictor method to estimate the solution at the next time step, and then using a corrector method to improve the estimate. Predictor–corrector methods are often more accurate than Runge–Kutta methods, but they can also be more computationally expensive.

Adams–Bashforth and Adams–Moulton methods are two specific types of predictor–corrector methods. They are often used together, with Adams–Bashforth methods used for prediction and Adams–Moulton methods used for correction.

Adams–Bashforth and Adams–Moulton methods can be used to solve both linear and nonlinear ODEs. However, they are particularly well suited for solving nonlinear ODEs of order higher than one.

This paper proposes a new approach to solve higher-order differential equations arising from initial value problems (IVPs). The new method is efficient and accurate and can be used to solve any nth-order nonlinear ordinary differential equation arising from an IVP. This is a significant advance in the study of numerical methods and, owing to its ease of application, could be used in a variety of fields, such as physics, engineering, and finance.

The new approach has been developed to solve the differential equations of self-excited and nonlinear oscillators, two types of nonlinear differential equations that arise in a variety of physical and engineering applications. These equations are known to be difficult to solve, especially for higher orders. The new approach solves them in a simpler manner than previous methods while retaining accuracy and efficiency.

The free response of the self-excited equation is obtained for different damping effects and different initial conditions, as is the forced response for different values of exciting force amplitude and frequency.

Results for the self-excited Van der Pol oscillator are compared with those obtained by the ODE15s algorithm, and they are in excellent agreement; the absolute error is of order $$10^{-3}$$ or less. Results for the forced nonlinear oscillator are compared with ODE15s and ODE45 in MATLAB and with fourth-order Runge–Kutta (RK4). The comparison shows that the present technique and RK4 produce very similar results, with minimum relative error compared to the other techniques.

## 2 Methodology of the proposed technique

The initial value problem can be written in a general form as:

$$f\left( {y^{\left( n \right)} \left( t \right), y^{{\left( {n - 1} \right)}} \left( t \right), \ldots ,\dot{y}\left( t \right), y\left( t \right), t} \right) = 0$$
(1)

Subject to the initial conditions

$$y\left( {t_{0} } \right) = a_{0} , y^{\prime}\left( {t_{0} } \right) = a_{1} , \ldots , y^{{\left( {n - 1} \right)}} \left( {t_{0} } \right) = a_{n - 1}$$
(2)

In Eq. (1), derivatives of the unknown function $$y$$ may be denoted by either superscripts or dots. The function $$f$$ depends on the time variable $$t$$, the unknown function $$y$$, and its derivatives up to order $$n$$. The values of $$y$$ and its first $$n - 1$$ derivatives at the initial value $$t_{0}$$ (which is assumed to be 0 unless stated otherwise) are given by the constants $$a_{i}$$ (where $$i$$ = 0, 1, …, $$n - 1$$).

The new method for solving ODEs requires that the ODE must be rewritten so that the highest-order derivative term is alone on one side of the equation, and the lower-order derivative terms and all other terms are on the other side of the equation.

$$y^{\left( n \right)} \left( t \right) = g\left( {y^{{\left( {n - 1} \right)}} \left( t \right), y^{{\left( {n - 2} \right)}} \left( t \right), \ldots ,\dot{y}\left( t \right), y\left( t \right), t} \right)$$
(3)

Since the nth derivative at the initial point, $$y^{\left( n \right)} \left( t_{0} \right)$$, is unknown, it is found by substituting the initial conditions (2) into Eq. (3). We now describe a solution procedure for equations of the form (1), subject to initial conditions of the form (2).

The new approach solves ordinary differential equations (ODEs) using Taylor series expansion. The method begins by discretizing the domain of the variable $$t$$, i.e., dividing it into a number of subintervals of length $$\Delta t = \left( {T - t_{0} } \right)/N$$. Then, at each grid point $$t_{j}$$, the function $$y$$ and its derivatives up to order $$n - 1$$ are approximated using Taylor series expansion. Finally, the nth derivative is obtained using Eq. (3).

This process is repeated for $$j = 1$$ to $$N$$, to obtain the approximate values of $$y$$ and its derivatives at all grid points. The procedure of the numerical solution of ODEs using Taylor series expansion is as follows:

1. Find the nth derivative at the initial conditions using Eq. (3).

2. Discretize the domain of the independent variable $$t$$.

3. At each grid point $$t_{j}$$, approximate $$y$$ and its derivatives up to order $$n - 1$$ using the Taylor series expansion in Eq. (4).

4. Obtain the nth derivative at $$t_{j}$$ using Eq. (3).

5. Repeat steps 3 and 4 for $$j = 1$$ to $$N$$.

This method is a powerful tool for solving ODEs numerically, but it is essential to remember that the accuracy of the solution is based on the number of subintervals N and the order of the Taylor series expansion n.

$$\begin{aligned} y\left( {t_{j} } \right) & = y\left( {t_{j - 1} } \right) + \dot{y}\left( {t_{j - 1} } \right) \Delta t + \ddot{y}\left( {t_{j - 1} } \right)\frac{{\Delta t^{2} }}{2!} + \cdots + y^{{\left( {n - 1} \right)}} \left( {t_{j - 1} } \right)\frac{{\Delta t^{n - 1} }}{{\left( {n - 1} \right)!}} + y^{\left( n \right)} \left( {t_{j - 1} } \right)\frac{{\Delta t^{n} }}{n!} \\ \dot{y}\left( {t_{j} } \right) & = \dot{y}\left( {t_{j - 1} } \right) + \ddot{y}\left( {t_{j - 1} } \right) \Delta t + \dddot y\left( {t_{j - 1} } \right)\frac{{\Delta t^{2} }}{2!} + \cdots + y^{{\left( {n - 1} \right)}} \left( {t_{j - 1} } \right)\frac{{\Delta t^{n - 2} }}{{\left( {n - 2} \right)!}} + y^{\left( n \right)} \left( {t_{j - 1} } \right)\frac{{\Delta t^{n - 1} }}{{\left( {n - 1} \right)!}} \\ & \quad \vdots \\ y^{{\left( {n - 2} \right)}} \left( {t_{j} } \right) & = y^{{\left( {n - 2} \right)}} \left( {t_{j - 1} } \right) + y^{{\left( {n - 1} \right)}} \left( {t_{j - 1} } \right)\Delta t + y^{\left( n \right)} \left( {t_{j - 1} } \right)\frac{{\Delta t^{2} }}{2!} \\ y^{{\left( {n - 1} \right)}} \left( {t_{j} } \right) & = y^{{\left( {n - 1} \right)}} \left( {t_{j - 1} } \right) + y^{\left( n \right)} \left( {t_{j - 1} } \right)\Delta t \end{aligned}$$
(4)
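As an illustration (our own sketch, not code from the paper), the five-step procedure above, together with the truncated expansions in Eq. (4), can be implemented for an arbitrary nth-order IVP in a few lines of Python; the function name `taylor_solve` and its interface are our own:

```python
import math

def taylor_solve(g, y0, t0, T, N):
    """Solve y^(n) = g([y, y', ..., y^(n-1)], t), i.e. Eq. (3),
    by marching the truncated Taylor expansions of Eq. (4)."""
    dt = (T - t0) / N                              # step 2: discretize the domain
    ys, t = list(y0), t0                           # [y, y', ..., y^(n-1)] at t0
    n = len(ys)
    for _ in range(N):
        full = ys + [g(ys, t)]                     # steps 1 and 4: y^(n) from Eq. (3)
        # step 3: update each derivative with its truncated Taylor series, Eq. (4)
        ys = [sum(full[k + m] * dt**m / math.factorial(m)
                  for m in range(n - k + 1)) for k in range(n)]
        t += dt
    return ys

# example: y'' = -y with y(0) = 1, y'(0) = 0, whose exact solution is cos(t)
y, yd = taylor_solve(lambda ys, t: -ys[0], [1.0, 0.0], 0.0, 1.0, 1000)
print(abs(y - math.cos(1.0)))  # small discretization error
```

For a second-order equation (n = 2), each step reduces to exactly the displacement and velocity updates derived later in Eqs. (13) and (14).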

The new method expresses the derivatives of a function in a simplified manner using the Taylor expansion directly: instead of assembling finite-difference formulas, the derivatives are updated in a few simple steps.

For example, when solving a second-order differential equation, the first derivative is often approximated using the central difference method, which is based on the Taylor expansion of the function at two points, one slightly ahead of the point of interest and one slightly behind.

In the new method, the Taylor expansion for the first derivative is applied directly in simplified form. This means that the derivatives can be calculated more quickly and easily, and with less chance of error.

The Taylor expansion of the function y at time t + ∆t is:

$$y\left( {t + \Delta t} \right) = y\left( t \right) + \dot{y}\left( t \right)\Delta t + \frac{1}{2}\ddot{y}\left( t \right)\Delta t^{2}$$
(5)

The first derivative of y at time t + ∆t using the central difference method is:

$$\dot{y}\left( {t + \Delta t} \right) = \frac{{y\left( {t + 2\Delta t} \right) - y\left( t \right)}}{2\Delta t}$$
(6)

Substituting the Taylor expansion of y(t + 2∆t) into the central difference equation, we get:

$$\dot{y}\left( {t + \Delta t} \right) = \frac{{ y\left( t \right) + 2\dot{y}\left( t \right)\Delta t + 2\ddot{y}\left( t \right)\Delta t^{2} - y\left( t \right)}}{2\Delta t}$$

Simplifying, we get:

$$\dot{y}\left( {t + \Delta t} \right) = \dot{y}\left( t \right) + \ddot{y}\left( t \right)\Delta t$$
(7)

This is the expression for the first derivative of y at time t + ∆t using the new method. The two forms are equivalent, but the second form is simpler and easier to understand. It is also easier to implement in computer code. The simplified form of the central difference method is particularly useful for students and early-career professionals who are new to numerical methods.
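A quick numerical check (our own illustration, not from the paper) makes the equivalence concrete: for a quadratic function the second-order Taylor expansion is exact, so the central difference form of Eq. (6) and the simplified form of Eq. (7) give identical values:

```python
# quadratic test function: its second-order Taylor expansion is exact
a, b, c = 1.0, -2.0, 0.5
y   = lambda t: a + b*t + c*t*t
yd  = lambda t: b + 2.0*c*t          # exact first derivative
ydd = lambda t: 2.0*c                # exact second derivative

t0, dt = 0.3, 0.1
central = (y(t0 + 2*dt) - y(t0)) / (2*dt)   # Eq. (6): central difference at t0 + dt
taylor  = yd(t0) + ydd(t0)*dt               # Eq. (7): simplified Taylor form
assert abs(central - taylor) < 1e-12        # both equal yd(t0 + dt) = -1.6
```

For general (non-quadratic) functions the two forms agree up to the truncation error of the second-order Taylor expansion.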

## 3 The problem of Van der Pol oscillator

Van der Pol was a Dutch electrical engineer who studied oscillatory systems. In 1927, he proposed a new equation to describe how electrical circuits oscillate. This equation is now used to study many different kinds of systems, including human hearts and the Earth’s climate. The Van der Pol oscillator is a self-excited system: it oscillates without any external input. Van der Pol oscillators are found in many natural systems and are used in many technologies. The governing equation is nonlinear, so there is no easy closed-form solution, but it can be solved numerically. The self-excited oscillator equation is a versatile model that can represent a broad range of oscillatory phenomena in many different fields [9,10,11,12].

The self-excited oscillator equation generally takes the form

$$\ddot{y} - \mu \left( {1 - y^{2} } \right)\dot{y} + y = f\left( t \right)$$
(8)

Equation (8) is a nonlinear, inhomogeneous, second-order ordinary differential equation (ODE) that describes the dynamics of the self-excited oscillator; it requires two initial conditions to solve. The parameter $$\mu$$ represents the strength of the oscillator damping. The right-hand side of Eq. (8) represents the external forcing and is given as f(t) = Fcos(ωt), where F is the amplitude of the exciting force and ω is its frequency. If f(t) = 0, Eq. (8) reduces to the nonlinear, homogeneous, autonomous second-order ODE given in Eq. (9).

$$\left\{ {\begin{array}{*{20}l} {\ddot{y} - \mu \left( {1 - y^{2} } \right)\dot{y} + y = 0} \hfill \\ {{\text{Subjected to}} \;y\left( {t_{0} } \right) = y_{0} \;{\text{and}}\;\dot{y}\left( {t_{0} } \right) = \dot{y}_{0} } \hfill \\ \end{array} } \right.$$
(9)

The unforced self-excited oscillator equation is a nonlinear differential equation that is given by Eq. (9). Truncated expansions can be utilized to find approximate solutions to this equation, although there are no known exact solutions [13].

Balthazar Van der Pol studied the case where a force is applied to his Eq. (8) with μ ≥ 1. He called this phenomenon “relaxation oscillations” [10, 14, 15].

Since then, many researchers have tried to solve Eq. (9) using both analytical and numerical methods to determine whether it has a limit cycle [16]. Equation (9) has a well-developed theory of existence, uniqueness, and stability, as extensively discussed in [17]. Solutions of Eq. (9) have been obtained in [18] using the collocation method, in [19] using the MATLAB ODE15s and ODE45 built-in functions, in [20] using the damped Fourier series method, in [11] using the predictor–corrector Adams–Bashforth–Moulton method, in [12] using the restarted Adomian decomposition method, and in [21] using the segmenting recursion method.

The literature explores the Van der Pol oscillator through diverse methods, ranging from classical techniques like harmonic balance and Krylov–Bogoliubov–Mitropolsky to modern approaches like neural networks. Several studies tackle specific aspects: references [22] and [23] focus on different stiffness conditions, [24] introduces a new variant method, [25] utilizes a two-point block method, [26] leverages the KBMM method, [27] employs hybrid functions, and [28, 29] apply a meshfree neural network algorithm to a more complex variant of the oscillator. This showcases the breadth and ongoing development of techniques used to analyze and understand this important nonlinear system.

Advances in computational mathematics have made it possible to test the accuracy and performance of new or modified numerical methods for solving nonlinear differential equations by applying them to well-developed physical models, such as the unforced self-excited equation. This model is a classic test problem that is utilized to assess the efficiency and reliability of new methods [11, 20].

## 4 The problem of Van der Pol–Duffing oscillator

Van der Pol–Duffing forced oscillator equation is as follows:

$$\left\{ {\begin{array}{*{20}l} {\ddot{y} + \left( {\varepsilon + \mu y^{2} } \right)\dot{y} + \alpha y + \beta y^{3} = F \cos \left( {\omega t} \right)} \hfill \\ {{\text{Subjected to}}\;y\left( {t_{0} } \right) = y_{0} \;{\text{and}}\;\dot{y}\left( {t_{0} } \right) = \dot{y}_{0} } \hfill \\ \end{array} } \right.$$
(10)

The system is characterized by its linear and nonlinear damping coefficients (ε and μ), its linear and nonlinear stiffness coefficients (α and β), and the excitation amplitude (F) and frequency (ω). The initial displacement and velocity of the system are $$y_{0}$$ and $$\dot{y}_{0}$$, respectively.

When F = 0, Eq. (10) reduces to the self-excited oscillator equation; when β = 0 and $$\varepsilon = - \mu$$, Eq. (10) reduces to the Van der Pol oscillator equation; and when μ = 0, Eq. (10) reduces to the Duffing oscillator equation.

## 5 Application of the proposed technique

In this technique, the differential Eq. (10) is rearranged as follows:

$$\ddot{y}\left( t \right) = - \left( {\varepsilon + \mu y^{2} } \right)\dot{y} - \alpha y - \beta y^{3} + F\cos \left( {\omega t} \right)$$
(11)

By direct substitution of the initial conditions given in (10), the acceleration at starting point can be written as:

$$\ddot{y}\left( 0 \right) = - \left( {\varepsilon + \mu y^{2} \left( 0 \right)} \right)\dot{y}\left( 0 \right) - \alpha y\left( 0 \right) - \beta y^{3} \left( 0 \right) + F\cos \left( {\omega \times \left( 0 \right)} \right)$$
(12)

The approximate displacement function at a time $$t + \Delta t$$ is obtained using the first of (4):

$$y\left( {t + \Delta t} \right) = y\left( t \right) + \dot{y}\left( t \right)\Delta t + \frac{1}{2}\ddot{y}\left( t \right)\Delta t^{2}$$
(13)

The approximate velocity function at a time $$t + \Delta t$$ is obtained using (4):

$$\dot{y}\left( {t + \Delta t} \right) = \dot{y}\left( t \right) + \ddot{y}\left( t \right)\Delta t$$
(14)

The approximate acceleration function at a time $$t + \Delta t$$ is obtained using Eq. (11) as:

$$\ddot{y}\left( {t + \Delta t} \right) = - \left( {\varepsilon + \mu y^{2} \left( {t + \Delta t} \right)} \right)\dot{y}\left( {t + \Delta t} \right) - \alpha y\left( {t + \Delta t} \right) - \beta y^{3} \left( {t + \Delta t} \right) + F\cos \left( {\omega \times \left( {t + \Delta t} \right)} \right)$$
(15)

So, the first iteration is obtained from Eqs. (13) through (15) as:

$$y\left( {\Delta t} \right) = y\left( 0 \right) + \dot{y}\left( 0 \right)\Delta t + \frac{1}{2}\ddot{y}\left( 0 \right)\Delta t^{2}$$
(16)
$$\dot{y}\left( {\Delta t} \right) = \dot{y}\left( 0 \right) + \ddot{y}\left( 0 \right)\Delta t$$
(17)
$$\ddot{y}\left( {\Delta t} \right) = - \left( {\varepsilon + \mu y^{2} \left( {\Delta t} \right)} \right)\dot{y}\left( {\Delta t} \right) - \alpha y\left( {\Delta t} \right) - \beta y^{3} \left( {\Delta t} \right) + F\cos \left( {\omega \Delta t} \right)$$
(18)

The recurrence formula can be written as:

$$y_{n} = y_{n - 1} + \dot{y}_{n - 1} \Delta t + \frac{1}{2}\ddot{y}_{n - 1} \Delta t^{2}$$
(19)
$$\dot{y}_{n} = \dot{y}_{n - 1} + \ddot{y}_{n - 1} \Delta t$$
(20)
$$\ddot{y}_{n} = - \left( {\varepsilon + \mu y_{n}^{2} } \right)\dot{y}_{n} - \alpha y_{n} - \beta y_{n}^{3} + F\cos \left( {\omega n \Delta t} \right)$$
(21)

The recurrence formula is simple, direct, and straightforward.
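As a sketch (our own code; the function name `vdp_duffing` and its interface are hypothetical), the recurrence can be implemented directly; the driver below uses the data of Example 1 in Sect. 7.2:

```python
import math

def vdp_duffing(eps, mu, alpha, beta, F, w, y0, v0, dt, N):
    """March Eqs. (19)-(21) for the forced Van der Pol-Duffing oscillator (10)."""
    acc = lambda y, v, t: -(eps + mu*y*y)*v - alpha*y - beta*y**3 + F*math.cos(w*t)
    y, v, t = y0, v0, 0.0
    a = acc(y, v, t)                       # Eq. (12): acceleration at the start
    out = [(t, y)]
    for _ in range(N):
        y = y + v*dt + 0.5*a*dt*dt         # Eq. (19): displacement update
        v = v + a*dt                       # Eq. (20): velocity update
        t += dt
        a = acc(y, v, t)                   # Eq. (21): acceleration at the new time
        out.append((t, y))
    return out

# Example 1 of Sect. 7.2: eps = 2, mu = -2, alpha = -1, beta = 1, F = 1, w = 0.7,
# y(0) = 0.1, y'(0) = -0.2; step size dt = 0.001 is our own choice
resp = vdp_duffing(2.0, -2.0, -1.0, 1.0, 1.0, 0.7, 0.1, -0.2, 0.001, 5000)
```

Each iteration costs one evaluation of the right-hand side, which is what makes the recurrence cheap compared with multistage schemes such as RK4.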

## 6 Results

The results of this research are presented in the figures and tables discussed in the following section.

## 7 Discussion

### 7.1 Free vibration

The recursive relations in Sect. 5 are developed to solve Van der Pol’s oscillator Eq. (9) with the non-exciting force for different values of the nonlinearity parameter $$\mu$$. The initial conditions are $$y\left( 0 \right)$$ = 2 and $$\dot{y}\left( 0 \right)$$ = 0.

The solution obtained using the present technique is compared with the solutions obtained using Euler’s method and the ODE15s solver in MATLAB. Figure 1 shows the comparison for $$\mu$$ = 20.

Figures 2, 3, 4, 5, 6 and 7 show the solutions for $$\mu$$ = 0.1, 1, 10, 20, 50, and 100, respectively. The present solution is compared with the solution obtained using the multistep backward-differentiation algorithm ODE15s in MATLAB.

Tables 1, 2, 3, 4, 5 and 6 show the absolute error of the present solution relative to the ODE15s solution at different times up to t = 50 s for $$\mu$$ = 0.1, 1, 10, 20, 50, and 100, respectively. The error is of order $$10^{-3}$$ or less.

This shows that the present technique is a powerful and easy way to solve the Van der Pol oscillator problem.
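To give a flavour of such a comparison in code (our own sketch; a hand-written RK4 stands in for the MATLAB solvers, and all names are ours), the unforced Eq. (9) can be integrated with both the proposed recurrence and classical fourth-order Runge–Kutta:

```python
import math

def vdp_taylor(mu, y0, v0, dt, N):
    """Proposed technique, Eqs. (19)-(21), specialized to the unforced Eq. (9)."""
    acc = lambda y, v: mu*(1.0 - y*y)*v - y
    y, v = y0, v0
    a = acc(y, v)
    for _ in range(N):
        y, v = y + v*dt + 0.5*a*dt*dt, v + a*dt
        a = acc(y, v)
    return y

def vdp_rk4(mu, y0, v0, dt, N):
    """Classical fourth-order Runge-Kutta on the equivalent first-order system."""
    f = lambda y, v: (v, mu*(1.0 - y*y)*v - y)
    y, v = y0, v0
    for _ in range(N):
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + 0.5*dt*k1y, v + 0.5*dt*k1v)
        k3y, k3v = f(y + 0.5*dt*k2y, v + 0.5*dt*k2v)
        k4y, k4v = f(y + dt*k3y, v + dt*k3v)
        y += dt*(k1y + 2*k2y + 2*k3y + k4y)/6
        v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6
    return y

# mu = 0.1 with y(0) = 2, y'(0) = 0, integrated to t = 10 s (step size is our choice)
dt, N = 0.001, 10000
err = abs(vdp_taylor(0.1, 2.0, 0.0, dt, N) - vdp_rk4(0.1, 2.0, 0.0, dt, N))
print(err)  # small discrepancy between the two schemes
```

For a production comparison one would sweep $$\mu$$ and tabulate the error over time, as the paper does against ODE15s.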

### 7.2 Forced vibration

The responses of the forced Van der Pol–Duffing oscillators are obtained using the proposed technique. Two examples of Van der Pol–Duffing forced oscillator Eq. (10) are presented.

Example 1 uses the following data: $$\varepsilon = - \mu = 2, \alpha = - 1, \beta = 1, F = 1$$ and $$\omega = 0.7$$. The initial conditions are $$y\left( {t_{0} } \right) = 0.1$$ and $$\dot{y}\left( {t_{0} } \right) = - 0.2$$.

The results are compared with those obtained using ODE15s and presented in Fig. 8; the two sets of results are in excellent agreement.

Example 2 uses the following data: $$\varepsilon = - \mu = 0.2, F = 0.53$$ and $$\omega = 1$$, with the same initial conditions as Example 1. The results are presented in Fig. 9a. They differ significantly from those obtained using ODE15s in the period from t = 30 to t = 35. This prompted us to compare the present solution with those of other solvers: ODE15s, RK4, and ODE45. All techniques are identical in the period from t = 0 to t = 8, as shown in Fig. 9b. The results in the periods t = 10–25, t = 25–35, and t = 35–50 are shown in Fig. 9c–e, respectively. The largest difference occurs in the period t = 25–35. Table 7 presents a comparison of the Van der Pol–Duffing oscillator response obtained by the different techniques and their relative errors with respect to RK4. The results of the present technique are in good agreement with RK4 over the whole range.

## 8 Conclusions

This paper presents a new technique to solve ordinary differential equations (ODEs) based on Taylor expansion. The technique is developed to solve the nonlinear Van der Pol and Van der Pol–Duffing oscillators with damping effects under different initial conditions. The results are compared with three other well-known ODE solvers: ODE15s, ODE45, and fourth-order Runge–Kutta (RK4). The comparison shows that the new technique is as accurate as the other three solvers but simpler to understand and implement in computer code. The new technique is shown to be equivalent to the central difference method, and the authors argue that their simplified form of the central difference method is easier to use, especially for students and early-career professionals who are new to numerical methods. Finally, the authors conclude that their new technique is an accurate and efficient tool for solving nonlinear differential equations. They also note that their technique requires neither transforming the higher-order differential equation to state space nor predicting the future value of y at t + 2Δt to calculate y′ at t + Δt.

## Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

## References

1. Friedman M (1994) Fundamentals of computer numerical analysis. CRC Press, Boca Raton

2. Nakamura S (1993) Applied numerical methods in C. Prentice-Hall Inc., Hoboken

3. Atkinson K, Han W, Stewart DE (2011) Numerical solution of ordinary differential equations. Wiley, Hoboken

4. Morton KW, Mayers DF (2005) Numerical solution of partial differential equations: an introduction. Cambridge University Press, Cambridge

5. Horn MK (1983) Fourth- and fifth-order, scaled Runge–Kutta algorithms for treating dense output. SIAM J Numer Anal 20(3):558–568

6. Nørsett SP, Wanner G (1981) Perturbed collocation and Runge–Kutta methods. Numer Math 38:193–208

7. Owren B, Zennaro M (1989) Continuous explicit Runge–Kutta methods. Computational ordinary differential equations. In: Institute of mathematics and its applications conference series, vol 39. London, pp 97–105‏

8. Burden RL, Faires JD, Burden AM (2015) Numerical analysis. Cengage Learning, Boston

9. Glass L, Mackey MC (1988) From clocks to chaos: the rhythms of life. Princeton University Press, Princeton

10. Tsatsos M (2006) Theoretical and numerical study of the Van der Pol equation. Dissertation, Thessaloniki

11. Chen J-H, Chen W-C (2008) Chaotic dynamics of the fractionally damped Van der Pol equation. Chaos Solitons Fractals 35:188–198

12. Vahidi AR, Azimzadeh Z, Mohammadifar S (2012) Restarted adomian decomposition method for solving Duffing–van der Pol equation. Appl Math Sci 6(11):499–507

13. Kudryashov NA (2018) Exact solutions and integrability of the Duffing–Van der Pol equation. Regul Chaotic Dyn 23(4):471–479

14. van der Pol B (1920) A theory of the amplitude of free and forced triode vibrations. Radio Review 1:701–710

15. van der Pol B (1926) On relaxation oscillations I. Philos Mag 2:978–992

16. Kubat C, Taþkin H (2013) Nonlinear dynamical behaviors of the physical processes: a comparison between crisp and fuzzy models. Math Theory Model 3:66–77

17. Howard P (2009) Analysis of ADE models

18. Mallon NJ (2002) Collocation: a method for computing periodic solutions of ordinary differential equations. DCT report 35

19. Bindel (2011) Lecture notes: introduction to scientific computing (CS 3220), Fall

20. Nicholson AF (1965) Periodic solutions of Van der Pol and Duffing equations. IEEE Trans Circuit Theory 12:595–597

21. Bendtsen C, Thomsen PG (1999) Numerical solution of differential algebraic equations. Technical report: IMM-REP-1999–8

22. Zhang C, Zeng Y (2014) A simple numerical method for Van der Pol–Duffing oscillator equation. In: 2014 International conference on mechatronics, control and electronic engineering (MCE-14). Atlantis Press

23. Mondal MAK, Molla MHU, Alam MS (2019) A new analytical approach for solving Van der Pol oscillator. Sci J Appl Math Stat 7(4):51–55

24. Altamirano GC, Núñez RAS (2020) Numerical solution of nonlinear third order Van der Pol oscillator. Int J Biol Phys Math 5:24–28

25. Mungkasi S, Widjaja D (2021) A numerical-analytical iterative method for solving an electrical oscillator equation. Telkomnika (Telecommun Comput Electron Control) 19(4):1218–1225

26. Rasedee AFN et al (2021) Numerical solution for Duffing–van Der Pol oscillator via block method. Adv Math Sci J

27. Salas AH et al (2022) Some novel approaches for analyzing the unforced and forced Duffing–Van der Pol oscillators. J Math. https://doi.org/10.1155/2022/2174192

28. Mohammadi M et al (2023) Numerical solutions of Duffing van der Pol equations on the basis of hybrid functions. Adv Math Phys. https://doi.org/10.1155/2023/4144552

29. Sahoo AK, Chakraverty S (2023) A neural network approach for the solution of Van der Pol–Mathieu–Duffing oscillator model. Evol Intell. https://doi.org/10.1007/s12065-023-00835-1

## Acknowledgements

The authors appreciate other colleagues and anonymous reviewers whose constructive contributions have made this paper worthwhile.

## Funding

The authors received no funding for any part of this research.

## Author information


### Contributions

Abdelrady Okasha Elnady prepared formulation of model and solution, and Ahmed Newir and Mohamed A. Ibrahim contributed to supervision and proof-reading.

## Ethics declarations

### Ethics approval and consent to participate

Not applicable.

### Consent for publication

Not applicable.

### Competing interests

The authors declare that they have no competing interests.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions


Elnady, A.O., Newir, A. & Ibrahim, M.A. Novel approach for solving higher-order differential equations with applications to the Van der Pol and Van der Pol–Duffing equations. Beni-Suef Univ J Basic Appl Sci 13, 29 (2024). https://doi.org/10.1186/s43088-024-00484-y