Assignment 2 - Explicit Numerical Solvers#

In the last exercise, we looked at the forward/explicit Euler method, which is a first order method. It is perhaps the simplest numerical integrator or solver one can come up with, and it is quite effective in its own right. However, for complicated systems one may need to break out heavier machinery to perform calculations accurately enough. The forward Euler method is first order in time, meaning that the local error it commits in each step is quadratic in the timestep, so that the accumulated (global) error over a fixed time interval is linear in the timestep.

When numerically solving systems in Matlab, one often uses ode45, which is an adaptive method combining a fourth and a fifth order method. For a fourth order method, halving the timestep typically decreases the error by a factor of \(2^4 = 16\). With the right mathematical framework, one can develop methods of any convergence order one could wish for, but the trade-off is that more accurate methods typically require more computation per timestep. In this exercise, we will work with second order methods and use tools that can, at least in principle, be extended to even higher order methods.
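As a concrete illustration of convergence order, a few lines of Python (the function name below is our own) suffice to observe the first order behavior of forward Euler on the test problem \(\dot{y} = -y\):

```python
import math

def forward_euler(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n fixed steps of forward Euler."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

# Test problem y' = -y, y(0) = 1, with exact solution y(1) = e^{-1}.
f = lambda t, y: -y
exact = math.exp(-1.0)

errors = [abs(forward_euler(f, 1.0, 0.0, 1.0, n) - exact) for n in (100, 200, 400)]
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print(ratios)  # each ratio is close to 2: halving h halves the error
```

Doubling the number of steps halves the timestep, and the error ratios come out close to 2, consistent with first order convergence.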

Problem 1 - Derivation of the consistency and order conditions#

In this problem, we will derive the conditions that the coefficients of a two-stage Runge-Kutta method must satisfy for the method to actually be second order. This is essentially done through a Taylor expansion of the dynamical system.

Consider the system

(98)#\[\dot{y} = f(t, y)\]

A general two-stage explicit Runge-Kutta method can be compactly described with the Butcher table

(99)#\[\begin{split}\begin{array}{c|cc} 0 & &\\ c_2 & a_{21} & \\ \hline & b_1 & b_2 \end{array}\end{split}\]

This can also be written, for a step from \(y_n \approx y(t_n)\) to \(y_{n+1} \approx y(t_n + h)\), in equation form as

(100)#\[ \begin{align}\begin{aligned}k_1 &= f(t_n, y_n)\\k_2 &= f(t_n + c_2 h, y_n + h a_{21} k_1)\\y_{n+1} &= y_n + h \left( b_1 k_1 + b_2 k_2 \right)\end{aligned}\end{align} \]
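As a sketch (the function name is our own), the update above translates almost directly into code:

```python
import numpy as np

def rk2_step(f, t, y, h, a21, b1, b2, c2):
    """One step of the general two-stage explicit RK method defined above."""
    k1 = f(t, y)
    k2 = f(t + c2 * h, y + h * a21 * k1)
    return y + h * (b1 * k1 + b2 * k2)

# Example: Heun's method (a21 = 1, b1 = b2 = 1/2, c2 = 1) on y' = -y.
f = lambda t, y: -y
y1 = rk2_step(f, 0.0, np.array([1.0]), 0.1, 1.0, 0.5, 0.5, 1.0)
```

Any two-stage explicit method is obtained by plugging the corresponding Butcher table entries into `rk2_step`.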

Tasks

  1. Find the Taylor expansion of \(y(t_n + h)\) to the second order in \(h\).

  2. Find the Taylor expansion of the 2-stage RK expression for \(y_{n+1}\).

  3. Find what the coefficients \(a_{21}, b_1, b_2\) and \(c_2\) must be for the Taylor expansions to agree to the second order.

  4. Verify that Heun’s method (also called modified/improved Euler) and the Midpoint method satisfy the conditions in task 3.

Note

There is a lot of algebra involved with developing these expressions, so we will not try to do this by hand for higher order methods. However, if we wanted to, we could compute higher order terms in the Taylor expansions and get more accurate methods. Typically, we would need to calculate more stages, since we need additional terms to match the coefficients in the Taylor series of \(y(t_n + h)\).
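Once you have derived the conditions in task 3, a quick exact-arithmetic check of task 4 might look like the following sketch. We hard-code the standard textbook conditions \(b_1 + b_2 = 1\), \(b_2 c_2 = \tfrac{1}{2}\) together with the row-sum convention \(c_2 = a_{21}\); verify that these match what you derived:

```python
from fractions import Fraction as F

def order2_conditions(a21, b1, b2, c2):
    """Check the standard second order conditions for a two-stage explicit
    RK method: consistency (b1 + b2 = 1), the order-2 condition
    (b2 * c2 = 1/2), and the row-sum convention c2 = a21."""
    return b1 + b2 == 1 and b2 * c2 == F(1, 2) and c2 == a21

# Heun's method: a21 = 1, b1 = b2 = 1/2, c2 = 1
print(order2_conditions(F(1), F(1, 2), F(1, 2), F(1)))   # True
# Midpoint method: a21 = 1/2, b1 = 0, b2 = 1, c2 = 1/2
print(order2_conditions(F(1, 2), F(0), F(1), F(1, 2)))   # True
```

Using `fractions.Fraction` keeps the check exact, so there is no risk of floating-point round-off masquerading as a violated condition.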

Problem 2 - Stability of explicit methods#

An important concept in numerical simulations is the stability of the methods. A shorter timestep \(h\) yields more accurate simulations, but are there any limits to how large we can take the timestep? Phrased differently: when does our numerical method break down? To analyze this, one studies the absolute stability of the method (the related notion of A-stability describes methods that are absolutely stable in the entire left half-plane). As a test problem, we use the prototypical linear ordinary differential equation

(103)#\[\dot{y} = \lambda y, \quad \lambda \in \mathbb{C}\]

and see what conditions we must put on our timestep for any given \(\lambda\).

When applied to this test equation, an explicit RK method always defines an update from one timestep to the next of the form

(104)#\[y_{n+1} = R(z) y_n, \quad z = h\lambda\]

where \(R(z)\) is called the stability function of the method. We say that the method is stable for \(z\) if \(|R(z)| \leq 1\).
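As a small sketch (names are our own), the stability condition is straightforward to test numerically. For explicit Euler, \(y_{n+1} = y_n + h\lambda y_n = (1 + z) y_n\), so \(R(z) = 1 + z\):

```python
def stable(R, z):
    """Check the absolute stability condition |R(z)| <= 1."""
    return abs(R(z)) <= 1

# Explicit Euler: y_{n+1} = (1 + z) y_n, so R(z) = 1 + z.
R_euler = lambda z: 1 + z

# On the negative real axis, explicit Euler is stable exactly for -2 <= z <= 0:
print(stable(R_euler, -2.0), stable(R_euler, -2.1))  # True False
```

Since `abs` also works on complex numbers, the same helper can be used to probe the stability region anywhere in the complex plane once you have derived \(R(z)\) for the other methods.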

Tasks

  1. Find the stability function \(R(z)\) for explicit Euler, Heun’s method and the Midpoint method.

  2. Restrict \(\lambda\) to being real, that is \(\lambda = a + 0i\). For which values of \(a\) is \(z = ha\) in the stability region for each method? Express the resulting restriction on the timestep \(h\) as a function of \(a\).

  3. Plot the stability region of each method in the complex plane. Also plot the stability region of the fourth order classical Runge-Kutta method given in the note below. The plots should distinguish between inputs where \(|R(z)| \leq 1\) and \(|R(z)| > 1\).

Problem 3 - Simulating the damped nonlinear oscillator#

In this problem, we will simulate the damped nonlinear oscillator (a damped pendulum), governed by

(107)#\[\ddot{\theta} + 2 \xi \omega_0 \dot{\theta} + \omega_0^2 \sin(\theta) = 0\]

We will use the parameters \(\omega_0 = 10\) and \(\xi = 0.02\), as well as the initial conditions \(\theta(0) = 1\) and \(\dot{\theta}(0) = 0\).

Also, note that we can transform a second order ODE to a system of first order ODEs. By setting \(y_1 = \theta\) and \(y_2 = \dot{\theta}\), we get the transformed system as

(108)#\[ \begin{align}\begin{aligned}\dot{y_1} &= y_2\\\dot{y_2} &= -2\xi \omega_0 y_2 - \omega_0^2 \sin(y_1)\end{aligned}\end{align} \]

with the initial conditions \(y_1(0) = \theta(0), y_2(0) = \dot{\theta}(0)\).
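A sketch of the right-hand side in Python (names are our own; the signature follows the common `f(t, y)` convention, derived directly from equation (107)):

```python
import numpy as np

OMEGA0, XI = 10.0, 0.02  # parameters from the problem statement

def pendulum_rhs(t, y):
    """Right-hand side of the first order system for the damped oscillator.

    y[0] = theta, y[1] = theta'. From (107): theta'' = -2*xi*w0*theta' - w0^2*sin(theta).
    """
    theta, theta_dot = y
    return np.array([theta_dot,
                     -2 * XI * OMEGA0 * theta_dot - OMEGA0**2 * np.sin(theta)])

y0 = np.array([1.0, 0.0])  # theta(0) = 1, theta'(0) = 0
```

This function can be passed unchanged to the one-step methods from the earlier problems.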

Tasks

  1. Implement Heun’s method and the Midpoint method.

  2. Obtain a reference solution (for instance, by using a very small stepsize or a higher order solver).

  3. Verify the convergence orders of the implemented methods. For computing errors, use the 2-norm.

    Note

    We can do this by computing the numerical solution for different stepsizes, for instance \(h \in \{0.5, 0.25, 0.125, 0.0625\}\) (which is really \(h = 2^{-i}\) for \(i \in \{1, 2, 3, 4\}\)), and compare with our reference solution/analytical solution.

    These errors can be plotted in a log-log plot, and we may then read off the order of convergence as the slope of the “error vs. stepsize” curve.
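As a self-contained sketch of this procedure (run here on the test problem \(\dot{y} = -y\), whose exact solution is known; for the oscillator you would compare against your reference solution instead), with an illustrative implementation of Heun’s method:

```python
import math

def heun(f, y0, t0, t1, h):
    """Integrate y' = f(t, y) with Heun's method using a fixed step h."""
    n = round((t1 - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y = y + h * (k1 + k2) / 2
        t += h
    return y

# Convergence check on y' = -y with exact solution y(1) = e^{-1}:
f = lambda t, y: -y
hs = [0.5, 0.25, 0.125, 0.0625]
errs = [abs(heun(f, 1.0, 0.0, 1.0, h) - math.exp(-1.0)) for h in hs]

# Observed order between consecutive stepsizes = slope in the log-log plot.
slopes = [math.log(errs[i] / errs[i + 1]) / math.log(2) for i in range(3)]
print(slopes)  # each slope is close to 2, confirming second order
```

The slopes approach 2 as \(h\) shrinks, which is exactly the behavior task 3 asks you to verify.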

Problem 4 (Optional) - Adaptive timestep control#

All the methods we have seen so far have been so-called “fixed step methods”. That is, the methods use a single timestep length over the entire simulation. However, with both a low-order and a high-order method (first and second order in our case), we can set up a simple predictor-corrector scheme that adaptively resizes the timestep during the simulation.

Let \(y_{n+1}^{(1)}\) be the first order numerical approximation and \(y_{n+1}^{(2)}\) the second order approximation. These can be computed, for instance, with explicit Euler and Heun’s method, but you are free to choose the second order method. The key observation is that if the two approximations are close, we gain little from using the second order method over the first order one; in that case we can increase the timestep so that the second order method actually does some useful work for us. For this we need the relative error between the methods, given by

(110)#\[e = \frac{\|y_{n+1}^{(2)} - y_{n+1}^{(1)}\|}{\|y_{n+1}^{(2)}\|}\]

After this, we need a control law for adjusting the timestep. You may come up with your own, or you may use this one:

(111)#\[h_{\text{new}} = \alpha \cdot h \cdot \sqrt{\frac{\text{tol}}{e}}\]

Here, \(\alpha\) is a safety factor ensuring that the resizing is not too aggressive, and \(\text{tol}\) is the tolerance level for accepting a timestep. So if \(e \leq \text{tol}\), we accept the step, adjust the timestep for the next iteration with the formula above and move on. If not, we reduce the stepsize using the same formula and retry the current step. You may also want to limit how much the timestep can grow or shrink in a single adjustment, and perhaps also impose minimum and maximum allowed timesteps.
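A minimal sketch of the control law (111), with hypothetical clamping parameters `fac_min` and `fac_max` for limiting the growth and shrinkage mentioned above:

```python
import math

def adapt_step(e, h, tol, alpha=0.9, fac_min=0.2, fac_max=5.0):
    """Propose a new stepsize from the relative error estimate e via (111).

    The growth/shrinkage factor is clamped to [fac_min, fac_max]; the small
    floor on e guards against division by zero when the estimates coincide.
    """
    factor = alpha * math.sqrt(tol / max(e, 1e-15))
    factor = min(fac_max, max(fac_min, factor))
    return h * factor

# An error far below tolerance grows the step (capped by fac_max):
print(adapt_step(1e-8, 0.01, 1e-4))  # step grows, capped at 5x
# An error above tolerance shrinks the step for the retry:
print(adapt_step(4e-4, 0.01, 1e-4))  # step shrinks
```

In the full scheme, the same function is used both to enlarge the step after an accepted iteration and to reduce it before retrying a rejected one.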

Tasks

  1. Implement the predictor-corrector scheme and simulate the oscillator from problem 3.

  2. Make a plot showing the stepsize over time.

  3. Compare the total number of function evaluations between the adaptive scheme and the fixed step method you choose. Make sure that the comparison is fair, meaning that both methods should have a comparable final error.