Exponential integrators are a class of numerical methods for the solution of ordinary differential equations, specifically initial value problems. This large class of methods from numerical analysis is based on the exact integration of the linear part of the initial value problem. Because the linear part is integrated exactly, this can help to mitigate the stiffness of a differential equation. Exponential integrators can be constructed to be explicit or implicit for numerical ordinary differential equations, or can serve as the time integrator for numerical partial differential equations.

## Background

These methods date back to at least the 1960s, when they were recognized by Certaine[1] and Pope.[2] Exponential integrators have since become an active area of research; see Hochbruck and Ostermann (2010).[3] Originally developed for solving stiff differential equations, the methods have been used to solve partial differential equations, including hyperbolic as well as parabolic problems[4] such as the heat equation.

## Introduction

We consider initial value problems of the form

[math]\displaystyle{ u'(t) = L u(t) + N(u(t)), \qquad u(t_0) = u_0, \qquad \qquad (1) }[/math]

where [math]\displaystyle{ L }[/math] collects the linear terms and [math]\displaystyle{ N }[/math] the nonlinear terms. Such problems can arise from a more general initial value problem

[math]\displaystyle{ u'(t) = f(u(t)), \qquad u(t_0) = u_0, }[/math]

after linearizing locally about a fixed or local state [math]\displaystyle{ u^* }[/math]:

[math]\displaystyle{ L = \frac{\partial f}{\partial u}(u^*); \qquad N = f(u) - L u. }[/math]

Here, [math]\displaystyle{ \frac{\partial f}{\partial u} }[/math] denotes the partial derivative of [math]\displaystyle{ f }[/math] with respect to [math]\displaystyle{ u }[/math] (the Jacobian of [math]\displaystyle{ f }[/math]).

Exact integration of this problem from time 0 to a later time [math]\displaystyle{ t }[/math] can be performed using matrix exponentials, which yields an integral equation for the exact solution:[3]

[math]\displaystyle{ u(t) = e^{L t} u_0 + \int_{0}^{t} e^{L(t-\tau)} N\left(u\left(\tau\right)\right)\, d\tau. \qquad (2) }[/math]

This is similar to the exact integral used in the Picard–Lindelöf theorem. In the case of [math]\displaystyle{ N \equiv 0 }[/math], this formulation gives the exact solution of the linear differential equation.

Numerical methods require a discretization of equation (2). They can be based on Runge-Kutta discretizations,[5][6][7] linear multistep methods, or a variety of other options.

## Exponential Rosenbrock methods

Exponential Rosenbrock methods have been shown to be very efficient for solving large systems of stiff ordinary differential equations, usually resulting from the spatial discretization of time-dependent (parabolic) PDEs. These integrators are constructed based on a continuous linearization of (1) along the numerical solution [math]\displaystyle{ u_n }[/math]:

[math]\displaystyle{ u'(t) = L_{n} u(t) + N_n(u(t)), \qquad u(t_0) = u_0, \qquad (3) }[/math]

where [math]\displaystyle{ L_{n} = \frac{\partial f}{\partial u}(u_n), \quad N_n(u) = f(u) - L_{n} u. }[/math] This procedure has the advantage that, in each step,

[math]\displaystyle{ \frac{\partial N_n}{\partial u}(u_n) = 0. }[/math]

This considerably simplifies the derivation of the order conditions and improves the stability when integrating the nonlinearity [math]\displaystyle{ N(u(t)) }[/math].
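For a concrete sense of this splitting, the local linearization can be sketched numerically. The following is a minimal illustration, assuming NumPy and SciPy are available; the test problem, the function `f`, and the finite-difference `jacobian` helper are illustrative choices, not taken from the references:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative semilinear problem u' = f(u): a stiff linear part
# plus a mild cubic nonlinearity (values are made up for the sketch).
A = np.array([[-50.0, 10.0],
              [  0.0, -0.1]])

def f(u):
    return A @ u + 0.1 * u**3

def jacobian(g, u, eps=1e-7):
    """Forward-difference approximation of the Jacobian of g at u."""
    g0 = g(u)
    J = np.zeros((u.size, u.size))
    for j in range(u.size):
        du = np.zeros(u.size)
        du[j] = eps
        J[:, j] = (g(u + du) - g0) / eps
    return J

u0 = np.array([1.0, 1.0])

# Split f into its local linearization L and nonlinear remainder N:
L = jacobian(f, u0)            # L_n = df/du(u_n)
N = lambda u: f(u) - L @ u     # N_n(u) = f(u) - L_n u

# By construction, the Jacobian of N vanishes at the linearization point:
print(np.allclose(jacobian(N, u0), 0.0, atol=1e-4))  # True

# With N identically zero, u' = L u is solved exactly by the matrix
# exponential, as in equation (2): u(t) = e^{tL} u0.
t = 0.1
u_exact_linear = expm(t * L) @ u0
```

The vanishing Jacobian of the remainder at the linearization point is exactly the property exploited by the exponential Rosenbrock framework described next.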
Again, applying the variation-of-constants formula (2) gives the exact solution at time [math]\displaystyle{ t_{n+1} }[/math] as

[math]\displaystyle{ u(t_{n+1}) = e^{h_n L_n} u(t_n) + \int_{0}^{h_n} e^{(h_n - \tau) L_n} N_n(u(t_n + \tau))\, d\tau. \qquad (4) }[/math]

The idea now is to approximate the integral in (4) by some quadrature rule with nodes [math]\displaystyle{ c_i }[/math] and weights [math]\displaystyle{ b_i(h_n L_n) }[/math] ([math]\displaystyle{ 1 \leq i \leq s }[/math]). This yields the following class of [math]\displaystyle{ s }[/math]-stage explicit exponential Rosenbrock methods; see Hochbruck and Ostermann (2006), Hochbruck, Ostermann and Schweitzer (2009):

[math]\displaystyle{ U_{ni} = e^{c_i h_n L_n} u_n + h_n \sum_{j=1}^{i-1} a_{ij}(h_n L_n) N_n(U_{nj}), }[/math]

[math]\displaystyle{ u_{n+1} = e^{h_n L_n} u_n + h_n \sum_{i=1}^{s} b_{i}(h_n L_n) N_n(U_{ni}), }[/math]

with [math]\displaystyle{ u_n \approx u(t_n) }[/math], [math]\displaystyle{ U_{ni} \approx u(t_n + c_i h_n) }[/math], and [math]\displaystyle{ h_n = t_{n+1} - t_n }[/math]. The coefficients [math]\displaystyle{ a_{ij}(z), b_i(z) }[/math] are usually chosen as linear combinations of the entire functions [math]\displaystyle{ \varphi_{k}(c_i z), \varphi_{k}(z) }[/math], respectively, where

[math]\displaystyle{ \varphi_0(z) = e^z, \quad \varphi_{k}(z) = \int_{0}^{1} e^{(1-\theta)z} \frac{\theta^{k-1}}{(k-1)!}\, d\theta, \quad k \geq 1. }[/math]

These functions satisfy the recursion relation

[math]\displaystyle{ \varphi_{k+1}(z) = \frac{\varphi_{k}(z) - \varphi_k(0)}{z}, \quad k \geq 0. }[/math]

By introducing the difference [math]\displaystyle{ D_{ni} = N_n(U_{ni}) - N_n(u_n) }[/math], they can be reformulated in a more efficient way for implementation (see also [3]) as

[math]\displaystyle{ U_{ni} = u_n + c_i h_n \varphi_{1}(c_i h_n L_n) f(u_n) + h_n \sum_{j=2}^{i-1} a_{ij}(h_n L_n) D_{nj}, }[/math]

[math]\displaystyle{ u_{n+1} = u_n + h_n \varphi_{1}(h_n L_n) f(u_n) + h_n \sum_{i=2}^{s} b_{i}(h_n L_n) D_{ni}. }[/math]

In order to implement this scheme with adaptive step size, one can consider, for the purpose of local error estimation, the following embedded methods

[math]\displaystyle{ \bar{u}_{n+1} = u_n + h_n \varphi_{1}(h_n L_n) f(u_n) + h_n \sum_{i=2}^{s} \bar{b}_{i}(h_n L_n) D_{ni}, }[/math]

which use the same stages [math]\displaystyle{ U_{ni} }[/math] but with different weights [math]\displaystyle{ \bar{b}_{i} }[/math].

For convenience, the coefficients of the explicit exponential Rosenbrock methods together with their embedded methods can be represented by the so-called reduced Butcher tableau:

[math]\displaystyle{ \begin{array}{c|ccccc} c_2 & & & & & \\ c_3 & a_{32} & & & & \\ \vdots & \vdots & \ddots & & & \\ c_s & a_{s2} & a_{s3} & \cdots & a_{s,s-1} & \\ \hline & b_2 & b_3 & \cdots & b_{s-1} & b_s \\ \hline & \bar{b}_2 & \bar{b}_3 & \cdots & \bar{b}_{s-1} & \bar{b}_s \end{array} }[/math]

### Stiff order conditions

It is shown in Luan and Ostermann (2014a)[8] that the reformulation approach offers a new and simple way to analyze the local errors and thus to derive the stiff order conditions for exponential Rosenbrock methods up to order 5.
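The [math]\displaystyle{ \varphi_k }[/math] functions appearing in the tableau coefficients can, for small dense matrices, be evaluated directly from the recursion relation given above. The following is a minimal sketch, assuming NumPy and SciPy; the `phi` helper is illustrative, it requires a nonsingular, well-conditioned argument, and production codes use numerically stable [math]\displaystyle{ \varphi }[/math]-function algorithms instead:

```python
import math
import numpy as np
from scipy.linalg import expm

def phi(k, Z):
    """Evaluate phi_k(Z) via the recursion
    phi_{j+1}(Z) = Z^{-1} (phi_j(Z) - phi_j(0) I), with phi_j(0) = 1/j!.
    Illustrative only: assumes Z is square, nonsingular, and not too
    close to singular."""
    n = Z.shape[0]
    P = expm(Z)  # phi_0(Z) = e^Z
    for j in range(k):
        P = np.linalg.solve(Z, P - np.eye(n) / math.factorial(j))
    return P

# Scalar sanity checks: phi_1(1) = e - 1 and phi_2(1) = e - 2.
Z = np.array([[1.0]])
print(phi(1, Z)[0, 0], phi(2, Z)[0, 0])
```

For small arguments this recursion suffers from cancellation, which is why contour-integral or Padé-based evaluations are preferred in practice (see the discussion of ETDRK4 below in this article's sources).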
With the help of this new technique together with an extension of the B-series concept, a theory for deriving the stiff order conditions for exponential Rosenbrock integrators of arbitrary order was finally given in Luan and Ostermann (2013).[9] As an example, the stiff order conditions for exponential Rosenbrock methods up to order 6 derived in that work are stated in the following table:

[math]\displaystyle{ \begin{array}{|c|c|c|} \hline \text{No.} & \text{Stiff order condition} & \text{Order} \\ \hline 1 & \sum_{i=2}^{s} b_i(Z) c^2_i = 2\varphi_3(Z) & 3 \\ \hline 2 & \sum_{i=2}^{s} b_i(Z) c^3_i = 6\varphi_4(Z) & 4 \\ \hline 3 & \sum_{i=2}^{s} b_i(Z) c^4_i = 24\varphi_5(Z) & 5 \\ 4 & \sum_{i=2}^{s} b_i(Z) c_i K \psi_{3,i}(Z) = 0 & 5 \\ \hline 5 & \sum_{i=2}^{s} b_{i}(Z) c_i^5 = 120\varphi_6(Z) & 6 \\ 6 & \sum_{i=2}^{s} b_{i}(Z) c_i^2 M \psi_{3,i}(Z) = 0 & 6 \\ 7 & \sum_{i=2}^{s} b_{i}(Z) c_i K \psi_{4,i}(Z) = 0 & 6 \\ \hline \end{array} }[/math]

Here [math]\displaystyle{ Z, K, M }[/math] denote arbitrary square matrices.

### Convergence analysis

The stability and convergence results for exponential Rosenbrock methods are proved in the framework of strongly continuous semigroups in some Banach space.

### Examples

All the schemes presented below fulfill the stiff order conditions and thus are also suitable for solving stiff problems.

#### Second-order method

The simplest exponential Rosenbrock method is the exponential Rosenbrock–Euler scheme, which has order 2; see, for example, Hochbruck et al. (2009):

[math]\displaystyle{ u_{n+1} = u_n + h_n\, \varphi_1(h_n L_n) f(u_n). }[/math]

#### Third-order methods

A class of third-order exponential Rosenbrock methods was derived in Hochbruck et al. (2009). One of these methods, named exprb32, is given by the reduced Butcher tableau

[math]\displaystyle{ \begin{array}{c|c} 1 & \\ \hline & 2\varphi_3 \\ \hline & 0 \end{array} }[/math]

which reads as

[math]\displaystyle{ U_{n2} = u_n + h_n\, \varphi_1(h_n L_n) f(u_n), }[/math]

[math]\displaystyle{ u_{n+1} = u_n + h_n\, \varphi_1(h_n L_n) f(u_n) + h_n\, 2\varphi_3(h_n L_n) D_{n2}, }[/math]

where [math]\displaystyle{ D_{n2} = N_n(U_{n2}) - N_n(u_n) }[/math].

For a variable step size implementation of this scheme, one can embed it with the exponential Rosenbrock–Euler method:

[math]\displaystyle{ \hat{u}_{n+1} = u_n + h_n\, \varphi_1(h_n L_n) f(u_n). }[/math]

## Fourth-order ETDRK4 method of Cox and Matthews

Cox and Matthews[5] describe a fourth-order exponential time differencing (ETD) method that they derived using Maple. We use their notation, and assume that the unknown function is [math]\displaystyle{ u }[/math] and that we have a known solution [math]\displaystyle{ u_n }[/math] at time [math]\displaystyle{ t_n }[/math]. Furthermore, we make explicit use of a possibly time-dependent right-hand side: [math]\displaystyle{ \mathcal{N} = \mathcal{N}(u, t) }[/math].

Three stage values are first constructed:

[math]\displaystyle{ a_n = e^{L h / 2} u_n + L^{-1} \left( e^{L h / 2} - I \right) \mathcal{N}(u_n, t_n), }[/math]

[math]\displaystyle{ b_n = e^{L h / 2} u_n + L^{-1} \left( e^{L h / 2} - I \right) \mathcal{N}(a_n, t_n + h/2), }[/math]

[math]\displaystyle{ c_n = e^{L h / 2} a_n + L^{-1} \left( e^{L h / 2} - I \right) \left( 2 \mathcal{N}(b_n, t_n + h/2) - \mathcal{N}(u_n, t_n) \right). }[/math]

The final update is given by

[math]\displaystyle{ u_{n+1} = e^{L h} u_n + h^{-2} L^{-3} \left\{ \left[ -4 - Lh + e^{Lh} \left( 4 - 3 L h + (L h)^2 \right) \right] \mathcal{N}(u_n, t_n) + 2 \left[ 2 + L h + e^{Lh} \left( -2 + L h \right) \right] \left( \mathcal{N}(a_n, t_n + h/2) + \mathcal{N}(b_n, t_n + h/2) \right) + \left[ -4 - 3 L h - (Lh)^2 + e^{Lh} \left( 4 - L h \right) \right] \mathcal{N}(c_n, t_n + h) \right\}.
}[/math]

If implemented naively, the above algorithm suffers from numerical instabilities due to floating point round-off errors.[10] To see why, consider the first function,

[math]\displaystyle{ \varphi_1(z) = \frac{e^z - 1}{z}, }[/math]

which is present in the first-order exponential Euler method as well as in all three stages of ETDRK4. For small values of [math]\displaystyle{ z }[/math], this function suffers from numerical cancellation errors. However, these numerical issues can be avoided by evaluating the [math]\displaystyle{ \varphi_1 }[/math] function via a contour integral approach[10] or by a Padé approximant.[11]

## Applications

Exponential integrators are used for the simulation of stiff scenarios in scientific and visual computing, for example in molecular dynamics,[12] for VLSI circuit simulation,[13][14] and in computer graphics.[15] They are also applied in the context of hybrid Monte Carlo methods.[16] In these applications, exponential integrators show the advantage of large time-stepping capability and high accuracy. To accelerate the evaluation of matrix functions in such complex scenarios, exponential integrators are often combined with Krylov subspace projection methods.

## See also

* General linear methods

## Notes

1. (Certaine 1960)
2. (Pope 1963)
3. (Hochbruck Ostermann)
4. (Hochbruck Ostermann)
5. (Cox Matthews)
6. (Tokman 2006)
7. (Tokman 2011)
8. (Luan Ostermann)
9. (Luan Ostermann)
10. (Kassam Trefethen)
11. (Berland 2007)
12. (Michels Desbrun)
13. (Zhuang 2014)
14. (Weng 2012)
15. (Michels 2014)
16. (Chao 2015)

## References

* Berland, Havard; Owren, Brynjulf; Skaflestad, Bard (2005). "B-series and Order Conditions for Exponential Integrators". SIAM Journal on Numerical Analysis 43 (4): 1715–1727. doi:10.1137/040612683.
* Berland, Havard; Skaflestad, Bard; Wright, Will M. (2007). "EXPINT-A MATLAB Package for Exponential Integrators".
ACM Transactions on Mathematical Software 33 (1): 4–es. doi:10.1145/1206040.1206044. http://cds.cern.ch/record/848126.
* Chao, Wei-Lun; Solomon, Justin; Michels, Dominik L.; Sha, Fei (2015). "Exponential Integration for Hamiltonian Monte Carlo". Proceedings of the 32nd International Conference on Machine Learning (ICML-15): 1142–1151.
* Certaine, John (1960). "The solution of ordinary differential equations with large time constants". Mathematical methods for digital computers. Wiley. pp. 128–132.
* Cox, S. M.; Matthews, P. C. (March 2002). "Exponential time differencing for stiff systems". Journal of Computational Physics 176 (2): 430–455. doi:10.1006/jcph.2002.6995. Bibcode: 2002JCoPh.176..430C.
* Hochbruck, Marlis; Ostermann, Alexander (May 2010). "Exponential integrators". Acta Numerica 19: 209–286. doi:10.1017/S0962492910000048. Bibcode: 2010AcNum..19..209H.
* Hochbruck, Marlis; Ostermann, Alexander (2005). "Explicit exponential Runge-Kutta methods for semilinear parabolic problems". SIAM Journal on Numerical Analysis 43 (3): 1069–1090. doi:10.1137/040611434. https://publikationen.bibliothek.kit.edu/1000042061/3153602.
* Hochbruck, Marlis; Ostermann, Alexander (May 2005). "Exponential Runge–Kutta methods for parabolic problems". Applied Numerical Mathematics 53 (2–4): 323–339. doi:10.1016/j.apnum.2004.08.005. https://publikationen.bibliothek.kit.edu/1000042292/3168261.
* Luan, Vu Thai; Ostermann, Alexander (2014a). "Exponential Rosenbrock methods of order five-construction, analysis and numerical comparisons". Journal of Computational and Applied Mathematics 255: 417–431. doi:10.1016/j.cam.2013.04.041.
* Luan, Vu Thai; Ostermann, Alexander (2014c). "Explicit exponential Runge-Kutta methods of high order for parabolic problems". Journal of Computational and Applied Mathematics 256: 168–179. doi:10.1016/j.cam.2013.07.027.
* Luan, Vu Thai; Ostermann, Alexander (2013). "Exponential B-series: The stiff case". SIAM Journal on Numerical Analysis 51 (6): 3431–3445. doi:10.1137/130920204.
* Luan, Vu Thai; Ostermann, Alexander (2014b). "Stiff order conditions for exponential Runge-Kutta methods of order five". 133–143. doi:10.1007/978-3-319-09063-4_11. ISBN 978-3-319-09062-7.
* Luan, Vu Thai; Ostermann, Alexander (2016). "Parallel exponential Rosenbrock methods". Computers and Mathematics with Applications 71 (5): 1137–1150. doi:10.1016/j.camwa.2016.01.020.
* Michels, Dominik L.; Desbrun, Mathieu (2015). "A Semi-analytical Approach to Molecular Dynamics". Journal of Computational Physics 303: 336–354. doi:10.1016/j.jcp.2015.10.009. Bibcode: 2015JCoPh.303..336M.
* Michels, Dominik L.; Sobottka, Gerrit A.; Weber, Andreas G. (2014). "Exponential Integrators for Stiff Elastodynamic Problems". ACM Transactions on Graphics 33: 7:1–7:20. doi:10.1145/2508462.
* Pope, David A. (1963). "An exponential method of numerical integration of ordinary differential equations". Communications of the ACM 6 (8): 491–493. doi:10.1145/366707.367592.
* Tokman, Mayya (October 2011). "A new class of exponential propagation iterative methods of Runge–Kutta type (EPIRK)". Journal of Computational Physics 230 (24): 8762–8778. doi:10.1016/j.jcp.2011.08.023. Bibcode: 2011JCoPh.230.8762T.
* Tokman, Mayya (April 2006). "Efficient integration of large stiff systems of ODEs with exponential propagation iterative (EPI) methods". Journal of Computational Physics 213 (2): 748–776. doi:10.1016/j.jcp.2005.08.032. Bibcode: 2006JCoPh.213..748T.
* Trefethen, Lloyd N.; Kassam, Aly-Khan (2005). "Fourth-Order Time-Stepping for Stiff PDEs". SIAM Journal on Scientific Computing 26 (4): 1214–1233. doi:10.1137/S1064827502410633.
* Zhuang, Hao; Weng, Shih-Hung; Lin, Jeng-Hau; Cheng, Chung-Kuan (2014). "MATEX". Proceedings of the 51st Annual Design Automation Conference - DAC '14. pp. 1–6. doi:10.1145/2593069.2593160. ISBN 9781450327305. http://cseweb.ucsd.edu/~hazhuang/papers/dac14_matex.pdf.
* Weng, Shih-Hung; Chen, Quan; Cheng, Chung-Kuan (2012).
"Time-Domain Analysis of Large-Scale Circuits by Matrix Exponential Method With Adaptive Control". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 32 (8): 1180–1193. doi:10.1109/TCAD.2012.2189396. https://www.semanticscholar.org/paper/Time-Domain-Analysis-of-Large-Scale-Circuits-by-Weng-Chen/5bc2862638c718a405fb232ed8c280ad8b14660c. ## External links * integrators on GPGPUs * code for a meshfree exponential integrator * v * t * e Numerical methods for integration First-order methods| * Euler method * Backward Euler * Semi-implicit Euler * Exponential Euler Second-order methods| * Verlet integration * Velocity Verlet * Trapezoidal rule * Beeman's algorithm * Midpoint method * Heun's method * Newmark-beta method * Leapfrog integration Higher-order methods| * Exponential integrator * Runge–Kutta methods * List of Runge–Kutta methods * Linear multistep method * General linear methods * Backward differentiation formula * Yoshida Theory| * Symplectic integrator 0.00 (0 votes) Original source: https://en.wikipedia.org/wiki/Exponential integrator. Read more | Retrieved from "https://handwiki.org/wiki/index.php?title=Exponential_integrator&oldid=2439346" *[v]: View this template *[t]: Discuss this template *[e]: Edit this template