Chapter 2: Second-order differential equations

Second order linear differential equations with constant coefficients are the bread and butter of any first course on differential equations. These types of equations are important for three reasons: (i) the same ideas that are used to solve this limited class of second-order ODEs can be extended to higher-order ODEs; (ii) such ODEs are encountered in all sorts of applications; and (iii) the theory for anything else gets a whole lot harder.

Recall that a linear second-order differential equation has the form \begin{equation} \frac{d^2 y}{dx^2} + Q(x) \frac{dy}{dx} + R(x) y = f(x), \end{equation}

and if $f(x) \equiv 0$, then the equation is homogeneous.

Our introduction to the theory of these types of equations is split into two parts. In this lecture, we will cover the mechanical process of solving such equations. We will take some things for granted (such as how we know these techniques yield the entire set of solutions); these questions will be dealt with in the second lecture, which will focus more on a unified theory of such equations.

Before we begin, let us make a general statement about all linear homogeneous equations. We define \begin{equation} \mathscr{L}[y] = a_n(x) y^{(n)} + a_{n-1}(x) y^{(n-1)} + \ldots + a_1(x) y' + a_0(x) y = 0. \end{equation}

Notice that, due to the linearity of differentiation, if $y_1$ and $y_2$ are both solutions of $\mathscr{L}[y] = 0$, then \[ \mathscr{L}[\alpha y_1 + \beta y_2] = \alpha \mathscr{L}[y_1] + \beta \mathscr{L}[y_2] = 0, \]

and thus linear combinations of $y_1$ and $y_2$ are also solutions.

Linear equations with constant coefficients

Consider now the case when the coefficients are all constant and the equation is homogeneous: \begin{equation} \label{eq:sec_ode} \frac{d^2 y}{dx^2} + Q \frac{dy}{dx} + R y = 0. \end{equation}

If you believe that $y$ is composed of the standard functions (like constants, polynomials, exponentials, sinusoids), then it makes sense that a possible guess (an ansatz) for the solution is something of the form \begin{equation} y = C e^{rx}, \end{equation}

where $C$ and $r$ are constant. Substituting this ansatz into the ODE yields \begin{equation} r^2 \left[ Ce^{rx}\right] + Qr \left[C e^{rx}\right] + R \left[C e^{rx}\right] = 0, \end{equation}

or, upon dividing through by $Ce^{rx}$ (which is never zero), \begin{equation} r^2 + Qr + R = 0. \end{equation}

This is known as the characteristic equation of the ODE. The quadratic formula then gives the two roots, $r = r_1$ and $r = r_2$: \[ r_{1,2} = \frac{-Q \pm \sqrt{Q^2 - 4R}}{2}. \]

Recall from the quadratic formula that there are three possible cases: (i) two distinct real roots, (ii) two complex conjugate roots, or (iii) one repeated real root.

Two real roots

The simplest case is when there are two real roots. We know from the above that $c_1 y_1 = c_1 e^{r_1 x}$ and $c_2 y_2 = c_2 e^{r_2 x}$ are both solutions. It is also easily seen (from the linearity of taking derivatives) that \begin{equation} \label{eq:sec_gen} y = c_1 e^{r_1 x} + c_2 e^{r_2 x}, \end{equation}

must also be a solution, and once two conditions on $y$ are specified (for instance, initial values for $y$ and $y'$), we can then solve for the coefficients, $c_1$ and $c_2$. In fact, it turns out that every solution of the ODE (\ref{eq:sec_ode}) must be of the form (\ref{eq:sec_gen}). The form (\ref{eq:sec_gen}) is furthermore called the general solution of the ODE, and the two functions, $y_1$ and $y_2$, are called the fundamental solutions of the ODE. We will be more specific about what constitutes a fundamental solution in the next lecture, but for the moment, it's enough to understand, intuitively, that $\{ y_1, y_2 \}$ form a basis for the vector space of solutions.

Example: Solve the differential equation \[ y'' + y' - 2y = 0, \]

subject to the two initial values \[ y(0) = 1 \quad \text{and} \quad y'(0) = 5. \]

Using the ansatz $y = e^{rx}$, we get the characteristic equation \[ r^2 + r - 2 = (r + 2)(r - 1) = 0 \Rightarrow r = -2, 1, \]

so the general solution is given by \[ y(x) = c_1 e^{-2 x} + c_2 e^{x}. \]

Applying the initial values yields the system of equations \[ \begin{matrix} c_1 & + & c_2 &= &1\\ -2 c_1 & + & c_2 &= &5 \end{matrix} \]

Solving gives $c_1 = -4/3$ and $c_2 = 7/3$, so the final solution is \[ y(x) = -\frac{4}{3}e^{-2x} + \frac{7}{3}e^{x}. \]
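These hand computations are easy to sanity-check with a computer algebra system. The following sketch (using the sympy library; purely illustrative, not part of the solution method) reproduces the example above:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' + y' - 2y = 0 with y(0) = 1 and y'(0) = 5
ode = sp.Eq(y(x).diff(x, 2) + y(x).diff(x) - 2*y(x), 0)
sol = sp.dsolve(ode, y(x), ics={y(0): 1, y(x).diff(x).subs(x, 0): 5})

# compare against the hand-derived solution
expected = -sp.Rational(4, 3)*sp.exp(-2*x) + sp.Rational(7, 3)*sp.exp(x)
print(sp.simplify(sol.rhs - expected))  # 0
```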

Two complex roots

Consider the case when the two roots of the characteristic equation are both complex (that is, they have a non-zero imaginary component). By the quadratic equation, we can write these roots as \begin{equation} r_{1,2} = -\frac{Q}{2} \pm i \frac{\sqrt{4R-Q^2}}{2} = r_a \pm i r_b. \end{equation}

Thus, the general form of the solution will be given by \begin{equation} \label{eq:sec_twocomp} y = c_1 e^{(r_a + ir_b) x} + c_2 e^{(r_a - ir_b) x} = c_1 e^{r_a x} e^{ir_b x} + c_2 e^{r_a x} e^{-ir_b x}. \end{equation}

This form of the solution can be somewhat misleading, because we suddenly have the appearance of imaginary numbers, even though thus far, you have likely assumed that our differential equations and their solutions, $y(x)$, are always real. In fact, if you suppose that $y(x)$ describes some physical quantity (like distance), then the form (\ref{eq:sec_twocomp}) can still output a real number (for real $x$), so long as $c_1$ and $c_2$ are allowed to be complex numbers. But we can perhaps clear the air by using Euler's identity: \begin{equation} e^{i\theta} = \cos \theta + i \sin\theta. \end{equation}

Now, (\ref{eq:sec_twocomp}) gives \begin{equation} y = e^{r_a x} \biggl[ c_1 \left\{\cos(r_b x) + i \sin(r_b x) \right\} + c_2 \left\{\cos(r_b x) - i \sin(r_b x) \right\} \biggr], \end{equation}

where we have used the property that cosine/sine are even/odd functions. If we write $d_1 = c_1 + c_2$ and $d_2 = i(c_1 - c_2)$, then we have \begin{equation} y = d_1 e^{r_a x} \cos (r_b x) + d_2 e^{r_a x} \sin (r_b x). \end{equation}

If $x$ is real, and $y(x)$ is real, then $d_1$ and $d_2$ must also be real. So we see that in the case when the characteristic equation gives two complex roots, then the general solution is simply a linear combination of a cosine and a sine. In practice, if you encounter a differential equation where all the quantities are real (including the boundary conditions), then you can skip right to the representation using cosines and sines, rather than going through (\ref{eq:sec_twocomp}).

Example: Solve the differential equation \[ y'' + 4y' + 5y = 0, \]

subject to the two initial values \[ y(0) = 1 \quad \text{and} \quad y'(0) = 5. \]

This time, the characteristic equation is \[ r^2 + 4r + 5 = 0 \Rightarrow r = -2 \pm i, \]

so that the general solution is given by \[ y = d_1 e^{-2x} e^{ix} + d_2 e^{-2x} e^{-ix}, \]

where $d_1, d_2 \in \mathbb{C}$. In terms of cosines and sines, \[ y = c_1 e^{-2x} \cos x + c_2 e^{-2x} \sin x, \]

now for $c_1, c_2 \in \mathbb{R}$. Applying the initial value $y(0) = 1$ gives $c_1 = 1$. With this, the condition $y'(0) = 5$ gives $c_2 = 7$, thus we conclude that \[ y(x) = e^{-2x} \cos x + 7 e^{-2x} \sin x. \]
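As before, the result can be verified symbolically (a sympy sketch, purely illustrative):

```python
import sympy as sp

x = sp.symbols('x')

# hand-derived solution of y'' + 4y' + 5y = 0 with y(0) = 1, y'(0) = 5
y = sp.exp(-2*x)*(sp.cos(x) + 7*sp.sin(x))

# the residual should vanish, and the initial values should match
print(sp.simplify(y.diff(x, 2) + 4*y.diff(x) + 5*y))  # 0
print(y.subs(x, 0), y.diff(x).subs(x, 0))             # 1 5
```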

One repeated root

When the characteristic equation has one repeated root, $r = -Q/2$, then we clearly have the one solution, $c_1 y_1 = c_1 e^{rx}$, but it's not clear how the second solution is found. Here, we need some subtle guesswork: if $c_1 y_1$ is a solution for any constant, $c_1$, then it might stand to reason that a more general solution can be written as $u(x) y_1(x)$ for some function $u(x)$ to be determined. Since \begin{align*} y' &= u' y_1 + y_1' u \\ y'' &= u'' y_1 + 2u' y_1' + y_1'' u, \end{align*}

then the ODE becomes \begin{equation} u \biggl[ y_1'' + Q y_1' + Ry_1 \biggr] + 2 u' y_1' + y_1 u'' + Q u' y_1 = 0. \end{equation}

Notice that the first bracketed terms form the original ODE, so this must be zero since $y_1$ is a solution. We are left with an equation for $u''$ and $u'$. If we let $v = u'$, then this becomes \begin{equation} y_1 v' + v (Q y_1 + 2 y_1' ) = 0. \end{equation}

Notice how the order of the problem has been reduced by one (hence the name of the method, reduction of order). Moreover, since $y_1 = e^{-Q x/2}$, we have $Q y_1 + 2 y_1' = 0$, and the equation collapses to \begin{equation} v' = 0 \Rightarrow u = Cx + D. \end{equation} The constant $D$ merely reproduces the known solution $y_1$, so the genuinely new solution is $x y_1$.

Thus, the final solution is \begin{equation} y = c_1 e^{rx} + c_2 x e^{rx}. \end{equation}

In practice, when you solve second-order constant-coefficient equations, you generally ``remember'' that the second solution is given by $x$ times the first solution. However, the method we used to derive this result is important because it gives a standard way of finding additional solutions once one is known.
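The claim that $x y_1$ is a second solution can itself be checked symbolically while keeping $Q$ general (a sympy sketch; the repeated-root condition $Q^2 - 4R = 0$ fixes $R = Q^2/4$):

```python
import sympy as sp

x, Q = sp.symbols('x Q')

# repeated root: r = -Q/2 and R = Q^2/4
y1 = sp.exp(-Q*x/2)
y2 = x*y1

# residual of y'' + Q y' + (Q^2/4) y for the candidate second solution
residual = y2.diff(x, 2) + Q*y2.diff(x) + (Q**2/4)*y2
print(sp.simplify(residual))  # 0
```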

Example: Solve the initial value problem \begin{gather} 4y'' + 12y' + 9y = 0 \\ y(0) = 1 \quad \text{and} \quad y'(0) = -4. \end{gather}

The characteristic equation is given by \[ 4r^2 + 12r + 9 = (2r + 3)(2r + 3) = 0, \]

which gives the repeated root, $r = -3/2$. Thus we conclude that the solution is given by \[ y = c_1 e^{-3x/2} + c_2 x e^{-3x/2}. \]

Applying the initial values then gives $c_1 = 1$ and $c_2 = -5/2$, and so the final solution is \[ y = e^{-3x/2} - \frac{5}{2} x e^{-3x/2}. \]
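Once more, a quick symbolic check (sympy, illustrative only):

```python
import sympy as sp

x = sp.symbols('x')

# hand-derived solution of 4y'' + 12y' + 9y = 0 with y(0) = 1, y'(0) = -4
y = (1 - sp.Rational(5, 2)*x)*sp.exp(-3*x/2)

print(sp.simplify(4*y.diff(x, 2) + 12*y.diff(x) + 9*y))  # 0
print(y.subs(x, 0), y.diff(x).subs(x, 0))                # 1 -4
```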

Summary

We have thus derived the following theorem:

Theorem: Consider the ODE \[ y'' + Qy' + Ry = 0, \]

where $Q, R \in \mathbb{R}$. The characteristic equation is \[ r^2 + Qr + R = 0. \]

The general solution of the ODE follows three cases (where $c_1, c_2 \in \mathbb{R}$):

  • When the characteristic equation has two real and distinct roots,

\[ y = c_1 e^{r_1 x} + c_2 e^{r_2 x}. \]

  • When the characteristic equation has two complex conjugate roots given by $r = r_a \pm i r_b$,

\[ y = c_1 e^{r_a x} \cos (r_b x) + c_2 e^{r_a x} \sin (r_b x). \]

  • When the characteristic equation has two identical roots,

\[ y = c_1 e^{r x} + c_2 x e^{r x}. \]
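The three cases in the theorem can be bundled into a short routine. The sketch below (sympy; the helper name `general_solution` is our own invention, not a library function) takes real constants $Q$ and $R$ and returns the corresponding general solution:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

def general_solution(Q, R):
    # classify by the discriminant of r^2 + Q r + R = 0
    Q, R = sp.S(Q), sp.S(R)
    disc = Q**2 - 4*R
    if disc > 0:                               # two distinct real roots
        r1, r2 = (-Q + sp.sqrt(disc))/2, (-Q - sp.sqrt(disc))/2
        return c1*sp.exp(r1*x) + c2*sp.exp(r2*x)
    if disc == 0:                              # one repeated root r = -Q/2
        return (c1 + c2*x)*sp.exp(-Q*x/2)
    ra, rb = -Q/2, sp.sqrt(4*R - Q**2)/2       # complex conjugate pair
    return sp.exp(ra*x)*(c1*sp.cos(rb*x) + c2*sp.sin(rb*x))

# the three worked examples from this lecture (the last one normalized
# from 4y'' + 12y' + 9y = 0 to y'' + 3y' + (9/4)y = 0)
print(general_solution(1, -2))                 # two real roots
print(general_solution(4, 5))                  # complex roots
print(general_solution(3, sp.Rational(9, 4)))  # repeated root
```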

Initial value and boundary value problems

Thus far, we have been somewhat vague about what conditions are necessary in order to solve for the two unknown constants, $c_1$ and $c_2$, which arise in the general solution of a second order ODE. These conditions generally fall into two classes: (i) initial value, and (ii) boundary value.

In the case of an initial value problem (IVP), we specify two conditions for the function, $y(x)$, at the initial point, $x = x_0$. Why are two conditions necessary (and/or sufficient)? Consider a Taylor series expansion of the solution about $x = x_0$ \[ y(x) = y(x_0) + y'(x_0)(x-x_0) + \frac{y''(x_0)}{2!}(x-x_0)^2 + \frac{y'''(x_0)}{3!}(x-x_0)^3 + \ldots \]

Notice that we have an expression for $y''(x_0)$ in terms of $y(x_0)$ and $y'(x_0)$ from the differential equation. Similarly, we have expressions for $y'''(x_0)$ and all the higher derivatives (because we can always differentiate the ODE). Thus, so long as we specify the two unknowns $y(x_0)$ and $y'(x_0)$, we can (informally) solve for all the terms of the Taylor series. This is a somewhat informal way of showing how two initial values on the ODE are enough to solve the problem.
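This bootstrapping can be carried out explicitly for the earlier example $y'' + y' - 2y = 0$ with $y(0) = 1$ and $y'(0) = 5$: rearranging gives $y'' = -y' + 2y$, and differentiating repeatedly gives $y^{(n)} = -y^{(n-1)} + 2y^{(n-2)}$ (a sympy sketch, illustrative only):

```python
import sympy as sp

x = sp.symbols('x')

# seed with the two initial values, then use y^(n) = -y^(n-1) + 2 y^(n-2)
derivs = [sp.Integer(1), sp.Integer(5)]        # y(0) and y'(0)
for n in range(2, 6):
    derivs.append(-derivs[-1] + 2*derivs[-2])
print(derivs)  # [1, 5, -3, 13, -19, 45]

# the same values fall out of the exact solution found earlier
exact = -sp.Rational(4, 3)*sp.exp(-2*x) + sp.Rational(7, 3)*sp.exp(x)
print([exact.diff(x, n).subs(x, 0) for n in range(6)] == derivs)  # True
```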

One important theorem we shall prove in later weeks is the following:

Consider the initial value problem \begin{gather} y'' + P(x) y' + Q(x) y = R(x), \quad \text{for $x \geq x_0$}, \\ y(x_0) = y_0 \quad \text{and} \quad y'(x_0) = y_0', \end{gather}

where $P$, $Q$, and $R$ are continuous on an open interval, $I$, containing $x_0$. Then there is exactly one solution to the IVP, and moreover, this solution exists throughout the interval $I$.

Note that this theorem contains three important points: (i) a solution exists, (ii) it is unique, and (iii) this existence and uniqueness can be guaranteed within some interval, $I$. Why mention the interval, $I$? Here is an example:

Example: Consider the initial value problem \begin{gather} (x-1)^2 y'' - 2y = 0, \\ y(0) = 1 \quad \text{and} \quad y'(0) = 1/2. \end{gather}

By substitution, we can verify that the solution of the IVP is given by \begin{equation} y = \frac{(x-1)^2}{6} - \frac{5}{6(x-1)}. \end{equation}

However, this solution fails to exist at $x = 1$. This is what we mean when we say that the guarantee of existence/uniqueness of the initial value problem is a local statement.
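This example can also be reconstructed symbolically. Since $(x-1)^2 y'' - 2y = 0$ is an Euler-type equation in $(x-1)$, the ansatz $y = (x-1)^m$ gives $m(m-1) - 2 = 0$, i.e. $m = 2$ or $m = -1$ (a sympy sketch, illustrative only):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

# general solution built from the Euler-type ansatz (x-1)^m, m = 2, -1
y = a*(x - 1)**2 + b/(x - 1)

# fit the initial data y(0) = 1 and y'(0) = 1/2
consts = sp.solve([sp.Eq(y.subs(x, 0), 1),
                   sp.Eq(y.diff(x).subs(x, 0), sp.Rational(1, 2))], [a, b])
y_sol = y.subs(consts)

print(sp.simplify((x - 1)**2*y_sol.diff(x, 2) - 2*y_sol))  # 0
print(sp.limit(y_sol, x, 1, '+'))  # the solution blows up at x = 1
```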

In a boundary value problem (BVP), we would specify the value of $y$ (or its derivatives, or a combination thereof) at the endpoints of the interval, $x \in [a, b]$, and solve for $y$ within the interval. Boundary value problems are more difficult to work with, because they involve a global aspect of the solution. Here is an example.

Example: Consider the boundary value problem: \begin{equation} y'' + y = 0, \end{equation}

subject to one of the two possible boundary conditions: \begin{align} (i)& \qquad y(0) = 0 \text{\quad and \quad} y(\pi/2) = 1 \\ (ii)& \qquad y(0) = 0 \text{\quad and \quad} y(\pi) = 0. \end{align}

First, we solve for the general solution of the ODE. Using the ansatz $y = e^{rx}$ gives the characteristic equation \begin{equation} r^2 + 1 = 0 \Rightarrow r = \pm i. \end{equation}

The two fundamental solutions are given by $y_1 = e^{i x}$ and $y_2 = e^{-ix}$, so writing in terms of sinusoidals gives \begin{equation} y = c_1 \cos x + c_2 \sin x. \end{equation}

In both cases, we have the boundary condition $y(0) = 0$, and this gives $c_1 = 0$. In the first case, applying the condition that $y(\pi/2) = 1$ gives $c_2 = 1$, so there is exactly one solution, $y = \sin x$. In the second case, if we apply $y(\pi) = 0$, then this is satisfied for all values of $c_2$ and we have infinitely many solutions.
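The dichotomy between the two cases is easy to see symbolically (sympy sketch): after $y(0) = 0$ forces $c_1 = 0$, the second condition either determines $c_2$ or places no constraint on it at all:

```python
import sympy as sp

x, c2 = sp.symbols('x c2')

# after y(0) = 0, the general solution reduces to y = c2*sin(x)
y = c2*sp.sin(x)

# case (i): y(pi/2) = 1 determines c2 uniquely
print(sp.solve(sp.Eq(y.subs(x, sp.pi/2), 1), c2))  # [1]

# case (ii): y(pi) = 0 is satisfied identically, for every c2
print(y.subs(x, sp.pi))  # 0
```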