The following questions came up today in review:

- What are the boundary conditions on the adjoint problem when the original problem has inhomogeneous boundary conditions like $y(0) + y'(0) = A$?
- Why (or why not) zero out the boundary conditions for the adjoint problem?
- Is there only one way of defining the adjoint boundary conditions?

We're going to clearly lay out the adjoint problem for the case of second-order BVPs. Let $D$ denote the differentiation operator $d/dx$, and define \[ L \equiv a_2(x) D^2 + a_1(x) D + a_0(x), \]

so that our general differential equation is $Ly = f(x)$. Our complete second order BVP is \begin{gather*} Lu = f(x), \qquad a < x < b \\ B_1 u \equiv a_{11} u(a) + a_{12} u'(a) + b_{11} u(b) + b_{12} u'(b) = \gamma_1 \\ B_2 u \equiv a_{21} u(a) + a_{22} u'(a) + b_{21} u(b) + b_{22} u'(b) = \gamma_2. \end{gather*}

Note that this doesn't cover all possible boundary conditions, but it is sufficiently general for the purposes of the course. We define the inner product as usual, setting \[ \langle u, v \rangle = \int_a^b u(x) v(x) \, \de{x}. \]

We start from $\langle w, Lu\rangle$, \[ \langle w, Lu\rangle = \int_a^b w(a_2 u'' + a_1 u' + a_0 u) \, \de{x}, \]

and integrate by parts until all the derivatives are transferred to $w$. This gives \[ \langle w, Lu\rangle - \langle L^* w, u\rangle = J(u,w) \Bigr\rvert_a^b, \]

where we have defined **the formal adjoint**
\[
L^* = a_2 D^2 + (2a_2' - a_1)D + (a_2'' - a_1' + a_0),
\]

and the boundary terms \[ J(u, w) = a_2(wu' - uw') + (a_1 - a_2')uw. \]

When $L^* = L$, the operator is **formally self-adjoint**.
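The formulas above for $L^*$ and $J(u,w)$ can be checked symbolically. The following sketch (not part of the original notes) uses sympy to verify the pointwise Lagrange identity underlying the integration by parts, $w\,Lu - u\,L^*w = \frac{d}{dx} J(u,w)$, for arbitrary coefficient functions $a_0, a_1, a_2$:

```python
import sympy as sp

x = sp.symbols('x')
u, w, a0, a1, a2 = (sp.Function(s)(x) for s in ('u', 'w', 'a0', 'a1', 'a2'))

# L u = a2 u'' + a1 u' + a0 u
Lu = a2*u.diff(x, 2) + a1*u.diff(x) + a0*u

# Formal adjoint: L* w = a2 w'' + (2 a2' - a1) w' + (a2'' - a1' + a0) w
Lstar_w = (a2*w.diff(x, 2) + (2*a2.diff(x) - a1)*w.diff(x)
           + (a2.diff(x, 2) - a1.diff(x) + a0)*w)

# Boundary term: J(u, w) = a2 (w u' - u w') + (a1 - a2') u w
J = a2*(w*u.diff(x) - u*w.diff(x)) + (a1 - a2.diff(x))*u*w

# Lagrange identity: w L u - u L* w = d/dx J(u, w)
assert sp.simplify(w*Lu - u*Lstar_w - J.diff(x)) == 0
print("Lagrange identity verified")
```

Integrating this identity from $a$ to $b$ recovers $\langle w, Lu\rangle - \langle L^* w, u\rangle = J(u,w)\rvert_a^b$.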

We haven't said anything about the boundary conditions for the adjoint problem. Our general boundary conditions are \begin{align*} B_1 u &= \gamma_1 \\ B_2 u &= \gamma_2. \end{align*}

The **adjoint boundary conditions** are defined in the following way: we set the boundary conditions of the original problem to be homogeneous:
\begin{align*}
B_1 u &= 0 \\
B_2 u &= 0,
\end{align*}

and using this, we select homogeneous boundary conditions on $w$ so that \[ J(u,w) \Bigr\rvert_a^b = 0. \]

Let us be more precise. Let $M$ denote the set of functions $u(x)$ that satisfy $B_1 u = 0$ and $B_2 u = 0$. Let $M^*$ denote the set of functions $w(x)$ that force $J(u,w) \rvert_a^b = 0$ once these boundary conditions are used. Then, to characterize $M^*$, we can pick two homogeneous boundary conditions $B_1^* w = 0$ and $B_2^* w = 0$.

Here is an example that may answer two common questions:

**Example**:

- Is the choice of adjoint boundary conditions unique? (No)
- For a formally self-adjoint problem, are the adjoint BCs the same as the original BCs? (No)

Take \begin{gather*} Lu = u'' \\ B_1 u = u'(0) - u(1) \\ B_2 u = u'(1). \end{gather*}

Integrating by parts gives \[ \int_0^1 wu'' \, \de{x} = \left[ wu' - w'u\right]_0^1 + \int_0^1 u w'' \, \de{x}. \]

Remember, to determine adjoint BCs, we set $B_1 u = 0$ and $B_2 u = 0$. The set $M^*$ consists of those functions $w$ which satisfy \[ -u(1)[w(0) + w'(1)] + u(0) w'(0) = 0. \]

This must hold for general values of $u(1)$ and $u(0)$. One choice of adjoint boundary conditions that characterizes $M^*$ is thus \begin{gather*} B_1^* w = w(0) + w'(1) = 0 \\ B_2^* w = w'(0) = 0. \end{gather*}

However, we could equally have chosen any pair of linearly independent combinations of these two conditions. For example, we can choose \begin{gather*} B_3^* w = B_1^* w + 2 B_2^* w = w(0) + w'(1) + 2w'(0) = 0 \\ B_4^* w = B_1^* w + B_2^* w = w(0) + w'(1) + w'(0) = 0, \end{gather*}

which still have the effect of zeroing the boundary terms. This shows that the adjoint boundary conditions are not unique, and need not coincide with the original conditions, even if the operator is formally self-adjoint.
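As a quick sanity check (the specific polynomials below are hypothetical choices, not from the notes), we can pick a $u$ satisfying $B_1 u = B_2 u = 0$ and a $w$ satisfying the adjoint conditions, and confirm that the boundary term $[wu' - w'u]_0^1$ vanishes:

```python
# Hypothetical polynomial choices satisfying the homogeneous BCs:
#   u(x) = x^2 - 2x - 1:  u'(0) - u(1) = -2 - (-2) = 0  and  u'(1) = 0
#   w(x) = x^2 - 2:       w'(0) = 0  and  w(0) + w'(1) = -2 + 2 = 0
u  = lambda x: x**2 - 2*x - 1
up = lambda x: 2*x - 2            # u'(x)
w  = lambda x: x**2 - 2
wp = lambda x: 2*x                # w'(x)

# Boundary term [w u' - w' u] evaluated from 0 to 1
J = (w(1)*up(1) - wp(1)*u(1)) - (w(0)*up(0) - wp(0)*u(0))
assert J == 0
print("boundary term:", J)   # prints: boundary term: 0
```

Any other $w$ in $M^*$ (equivalently, any $w$ annihilated by a linearly independent pair of the conditions above) would work equally well.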

We say that the BVP $(L, B_1, B_2)$ is **self-adjoint** if $L^* = L$ and $M^* = M$. If this is the case, then we are able to select $B_1^* = B_1$ and $B_2^* = B_2$ so that the boundary conditions of both problems are identical. However, note that even for a self-adjoint problem, we could still choose adjoint boundary conditions that are not written identically to the originals!

Here is the strategy for choosing the adjoint boundary conditions for $n$th-order BVPs. Suppose that you have the $n$ inhomogeneous boundary conditions: \begin{align*} B_1 u &= \gamma_1 \\ B_2 u &= \gamma_2 \\ \vdots &= \vdots \\ B_n u &= \gamma_n. \end{align*}

You will obtain the boundary terms $J(u, w)\rvert_a^b$ in the usual way by integration by parts. This will be some expression involving the $2n$ boundary values $u(a), u'(a), \ldots, u^{(n-1)}(a)$ and $u(b), u'(b), \ldots, u^{(n-1)}(b)$ – so basically, $2n$ degrees of freedom. Rewrite the boundary terms in terms of the $n$ quantities $B_1 u, B_2 u, \ldots, B_n u$, together with $n$ complementary boundary quantities $B_{n+1} u, \ldots, B_{2n} u$ (this accounts for the remaining $n$ degrees of freedom). The resultant expression will be of the form \begin{align} \label{Jn} J(u, w)\rvert_a^b &= (B_1 u) (B_{2n}^* w) + \ldots + (B_n u) (B_{n+1}^* w) \\ & \qquad + (B_{n+1} u) (B_{n}^* w) + \ldots + (B_{2n} u) (B_{1}^* w). \nonumber \end{align}

Now, you choose the adjoint conditions from the second line so that $B_1^* w = 0$, $B_2^* w = 0$, and so on, up to $B_n^* w = 0$.

We have shown how the adjoint problem (the operator and its boundary conditions) is defined. But at this point, you are probably wondering what effect the inhomogeneous terms $\gamma_1$ and $\gamma_2$ have on the various problems. In setting $J(u, w)\rvert_a^b = 0$, we assumed that $B_1 u = 0$ and $B_2 u = 0$, so what effect does defining the adjoint problem in this way have on whatever applications we have in mind?

As a particular example, suppose that we wish to study the existence or uniqueness of solutions to the second-order BVP: \begin{gather} Lu = f \label{forced} \\ B_1 u = \gamma_1 \qquad B_2 u = \gamma_2, \end{gather}

on $a < x < b$. Along with this, we have the completely homogeneous problem: \begin{gather*} Lu = 0 \\ B_1 u = 0 \qquad B_2 u = 0. \end{gather*}

If this problem has only the trivial solution, then the forced problem \eqref{forced} has a unique solution. If it has nontrivial solutions, then the forced problem has either no solutions or infinitely many. To explore which of these two possibilities holds, we define the homogeneous adjoint problem: \begin{gather*} L^* w = 0 \\ B_1^* w = 0 \qquad B_2^* w = 0, \end{gather*}

where, remember, the adjoint boundary conditions are obtained by using $B_1 u = 0$ and $B_2 u = 0$. Assuming that the homogeneous problem has nontrivial solutions, we take $w$ to be one of them.

The usual manipulation gives \[ \langle w, Lu\rangle - \langle u, L^* w\rangle = J(u,w) \Bigr\rvert_a^b, \]

except that now the big difference is that $J(u,w)\rvert_a^b$ is no longer zero, because $B_1 u = \gamma_1$ and $B_2 u = \gamma_2$. From \eqref{Jn}, it would instead be something like \[ J(u,w) \Bigr\rvert_a^b = \gamma_1 (B_{4}^* w) + \gamma_2 (B_{3}^* w). \]
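For the earlier example $Lu = u''$ with $B_1 u = u'(0) - u(1) = \gamma_1$ and $B_2 u = u'(1) = \gamma_2$, substituting the boundary conditions into $[wu' - w'u]_0^1$ and applying the adjoint conditions on $w$ reduces the boundary term to $J\rvert_0^1 = \gamma_2\, w(1) - \gamma_1\, w(0)$, an instance of the form above. A numerical spot-check, with hypothetical polynomial choices for $u$ and $w$:

```python
# Hypothetical illustration with g1 = 3, g2 = 5:
g1, g2 = 3, 5

u  = lambda x: x**2 + 3*x - 4   # u'(1) = 5 = g2,  u'(0) - u(1) = 3 - 0 = g1
up = lambda x: 2*x + 3          # u'(x)
w  = lambda x: x**2 - 2         # satisfies w'(0) = 0 and w(0) + w'(1) = 0
wp = lambda x: 2*x              # w'(x)

J = (w(1)*up(1) - wp(1)*u(1)) - (w(0)*up(0) - wp(0)*u(0))
assert J == g2*w(1) - g1*w(0)   # both sides equal 1
print("J =", J)
```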

Excellent. Now we wish to study the forced problem \eqref{forced}. The usual manipulations: \begin{align*} Lu &= f \\ \langle w, Lu \rangle &= \langle w, f \rangle \\ J(u,w)\Bigr\rvert_a^b + \langle L^* w, u \rangle &= \langle w, f \rangle \\ J(u,w)\Bigr\rvert_a^b + 0 &= \langle w, f \rangle, \end{align*}

and in order for there to be infinitely many solutions, the above condition must hold. Note that in those cases where $J(u,w)\rvert_a^b = 0$ (in particular, for homogeneous boundary conditions), the condition reduces to the orthogonality condition $\langle w, f \rangle = 0$.

Therefore, the solvability condition is as follows: there are infinitely many solutions to the forced problem \eqref{forced} if and only if \[ \langle w, f \rangle = J(u,w)\Bigr\rvert_a^b, \]

for every nontrivial solution $w$ of the homogeneous adjoint problem (it suffices to check a set of linearly independent solutions).
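The solvability condition can be made concrete with a hypothetical example (not from the notes): the Neumann problem is formally self-adjoint, and its adjoint null space is spanned by $w = 1$, so the condition above becomes a simple integral constraint on $f$, $\gamma_1$, and $\gamma_2$. A minimal sketch:

```python
import math

# Hypothetical example: the Neumann problem
#   u'' = f,  u'(0) = g1,  u'(1) = g2,  on 0 < x < 1.
# The homogeneous adjoint problem w'' = 0, w'(0) = w'(1) = 0 has the
# nontrivial solution w = 1, and the boundary term is
#   J|_0^1 = [w u' - w' u]_0^1 = g2 - g1,
# so the solvability condition <w, f> = J reads: int_0^1 f dx = g2 - g1.
# With f(x) = cos(pi x), whose integral over [0, 1] is zero, we need
# g1 = g2, and then u(x) = -cos(pi x)/pi^2 + g1 x + C solves the BVP
# for every constant C (infinitely many solutions).
f  = lambda x: math.cos(math.pi*x)
g1 = g2 = 2.0

u   = lambda x, C=0.0: -math.cos(math.pi*x)/math.pi**2 + g1*x + C
up  = lambda x: math.sin(math.pi*x)/math.pi + g1      # u'(x)
upp = lambda x: math.cos(math.pi*x)                   # u''(x)

# Check u'' = f at sample points, and that both BCs hold:
assert all(abs(upp(t) - f(t)) < 1e-12 for t in (0.1, 0.5, 0.9))
assert abs(up(0.0) - g1) < 1e-12 and abs(up(1.0) - g2) < 1e-12
print("one-parameter family of solutions exists")
```

If instead $\gamma_2 - \gamma_1 \ne \int_0^1 f \, \de{x}$, the same calculation shows the problem has no solution at all.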