Treating differentials as fractions

Earlier in our tutorials, we discussed the treatment of differentials like $dx$ and $dy$, and whether you could simply manipulate the differentials as you would ordinary numbers.

In a rigorous sense, separating differentials should not be done until you have developed a theory that justifies it. Pursuing this leads to a whole subject of its own, called Non-standard analysis.

But let's not do that. Let's focus instead on where things go wrong in a practical sense. If you are a physicist, you may not care that the manipulation isn't rigorous mathematics, as long as it gives the right answer.

So let's see where it gives the wrong answer.

Definitions of limits

In a rigorous sense, you obviously can't treat them as fractions! The definition of the derivative is \[ y'(x) = \lim_{\delta x\to 0} \frac{y(x + \delta x) - y(x)}{\delta x} \equiv \frac{dy}{dx}. \]

However, because both numerator and denominator tend to zero, you would have to identify $dx = 0$ and $dy = 0$. So a statement like \begin{equation} \label{dy} dy = y'(x) dx \end{equation}

is vacuous because it is expressing $0 = 0$. The nature of $y'(x)$ is that it is a very finely tuned quantity which corresponds to both numerator and denominator tending to zero in a very particular way. So $dx$ and $dy$ are intimately linked: one cannot exist without the other.
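What *is* meaningful is the finite version of \eqref{dy}: for a small but nonzero increment $\Delta x$, we have $\Delta y \approx y'(x)\,\Delta x$, with an error that shrinks faster than $\Delta x$ itself. A minimal numerical sketch (the choice $y(x) = \sin x$ is an arbitrary example, not from the text):

```python
import math

# The finite statement "Delta y ~ y'(x) * Delta x" is meaningful,
# whereas "dy = y'(x) dx" literally reads 0 = 0.
# Example function: y(x) = sin(x), so y'(x) = cos(x).
def y(x):
    return math.sin(x)

x0 = 1.0
for dx in (0.1, 0.01, 0.001):
    delta_y = y(x0 + dx) - y(x0)   # actual change in y
    linear = math.cos(x0) * dx     # the linear approximation y'(x0) * dx
    print(dx, abs(delta_y - linear))
```

The printed error shrinks roughly like $(\Delta x)^2$, which is exactly the sense in which \eqref{dy} is "true to leading order."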

But it works!

But then again, “it works!” many students will claim. For example, if $y = y(x)$ is our original curve, and we now parameterise $x = x(t)$, then by the chain rule, \[ \frac{dy}{dt} = \frac{dx}{dt}\frac{dy}{dx} \]

and we point to the above formula and say “see? it works!”.
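And indeed the numbers back the students up in one dimension. A quick finite-difference check of the chain rule (the functions $y(x) = x^2$ and $x(t) = \sin t$ are example choices made here, not from the text):

```python
import math

# Numerical sanity check of the chain rule dy/dt = (dx/dt)(dy/dx).
def x_of_t(t):
    return math.sin(t)

def y_of_x(x):
    return x ** 2

def deriv(f, a, h=1e-6):
    # central finite difference approximation of f'(a)
    return (f(a + h) - f(a - h)) / (2 * h)

t0 = 0.7
x0 = x_of_t(t0)
dy_dt_direct = deriv(lambda t: y_of_x(x_of_t(t)), t0)   # differentiate y(x(t)) directly
dy_dt_chain = deriv(x_of_t, t0) * deriv(y_of_x, x0)     # "multiply the fractions"
print(abs(dy_dt_direct - dy_dt_chain))                  # agree to finite-difference accuracy
```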

It works here, too!

Another example is the rule for differentiating inverse functions. If $y = f(x)$ and we invert this using $x = f^{-1}(y)$, then the derivative of the inverse function is given by \[ \frac{df^{-1}(y)}{dy} = \frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}. \]

See? They're basically fractions!
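And again the arithmetic cooperates. A sketch using $f(x) = e^x$, whose inverse is $f^{-1}(y) = \log y$ (an example pair chosen here, not from the text):

```python
import math

# Checking the inverse-function rule df^{-1}/dy = 1 / (dy/dx)
# with f(x) = exp(x) and f^{-1}(y) = log(y).
def deriv(f, a, h=1e-6):
    # central finite difference approximation of f'(a)
    return (f(a + h) - f(a - h)) / (2 * h)

x0 = 0.5
y0 = math.exp(x0)                # the point y0 = f(x0)
lhs = deriv(math.log, y0)        # d f^{-1}(y)/dy evaluated at y0
rhs = 1.0 / deriv(math.exp, x0)  # 1 / (dy/dx) evaluated at x0
print(abs(lhs - rhs))            # the two sides agree
```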

It works here, too (again)

Another example is separation of variables when solving differential equations. If we have \[ \frac{dy}{dx} = \frac{1}{y}, \]

then we simply manipulate the differentials \[ y \ dy = dx, \]

integrate both sides \[ \int y \, dy = \int dx \]

and voila: \[ \tfrac{1}{2} y^2 = x + C. \]

It works, see?
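You can even verify the result of the separation numerically. Taking $C = 0$, the solution $\tfrac{1}{2} y^2 = x + C$ gives the explicit branch $y = \sqrt{2x}$ (the choice of constant and branch is made here for the check):

```python
import math

# Verify that y(x) = sqrt(2*x), i.e. y**2/2 = x with C = 0,
# really satisfies the ODE dy/dx = 1/y.
def y(x):
    return math.sqrt(2 * x)

def deriv(f, a, h=1e-6):
    # central finite difference approximation of f'(a)
    return (f(a + h) - f(a - h)) / (2 * h)

x0 = 3.0
residual = abs(deriv(y, x0) - 1.0 / y(x0))
print(residual)  # essentially zero: the separated solution checks out
```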

When stuff hits the fan

Blindly treating differentials like fractions works well enough when you're in first year and working with functions of a single variable. But treating $dy/dx$ like a fraction is a gateway drug to treating $\partial y/\partial x$ like a fraction.

Here's an obvious example. Let $F(x,y) = 0$ be an equation that defines $y$ implicitly as a function of $x$. For example, suppose that \[ F(x,y) = x + y. \]

Then blindly treating the differentials as fractions, we have \[ \frac{dy}{dx} = \frac{{\partial F}}{\partial x}\frac{\partial y}{\partial F} = \frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} = \frac{1}{1} = 1. \]

But obviously, $y = -x$ and so $dy/dx = -1$.

In fact, you can show that the correct formula is: \[ \frac{dy}{dx} = -\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} \]

Note the minus sign, which the naive cancellation misses entirely: treating differentials like fractions is problematic even for the simplest of problems.

Why it doesn't work

The reason it works in 1D is that there is only one object being varied (the $dx$) and one object whose variation you are concerned about ($dy$). In 2D, such as the case of $F(x,y)$, the variation of $F$ depends on how we choose to vary both $x$ and $y$.

So while it makes sense to think of $dy/dx$ as dividing by a number, $dx$, it doesn't make sense to think of derivatives of $F$ as obtained by dividing by a vector $[dx, dy]$.

You will learn that there is a difference between $\partial F/\partial x$, which is the variation of $F$ as $x$ changes but $y$ is fixed, and the total variation of $F$ with respect to $x$, $dF/dx$. The latter is \begin{equation} \label{dFdx} \frac{dF}{dx} = \frac{dx}{dx} \frac{\partial F}{\partial x} + \frac{dy}{dx} \frac{\partial F}{\partial y}. \end{equation}

The difference is that by considering $y = y(x)$, when we vary $x$ we must also take into account the change in $y$. Thus $F$ changes in response to variations of both $x$ and $y$, but there is only a single quantity being varied ($x$). If we are interested in the curve $F(x,y) = 0$, then we can set \eqref{dFdx} to zero, and solving gives the proper value of $dy/dx$.
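To make the distinction concrete, here is a sketch with $F(x,y) = xy$ and $y(x) = x^2$ (example functions chosen here, not from the text). The partial derivative holds $y$ fixed; the total derivative lets $y$ move with $x$, and the two differ:

```python
import math

# Partial vs total derivative with F(x, y) = x * y along y(x) = x**2.
def deriv(f, a, h=1e-6):
    # central finite difference approximation of f'(a)
    return (f(a + h) - f(a - h)) / (2 * h)

x0 = 1.5
y0 = x0 ** 2  # value of y(x) at x0

partial_F_x = deriv(lambda x: x * y0, x0)        # partial F/partial x: y frozen at y0, equals y0
total_dF_dx = deriv(lambda x: x * (x ** 2), x0)  # dF/dx along the curve, equals 3*x0**2

# The total derivative satisfies the expansion
# dF/dx = partial F/partial x + (dy/dx) * partial F/partial y:
chain = partial_F_x + deriv(lambda x: x ** 2, x0) * deriv(lambda y: x0 * y, y0)
print(partial_F_x, total_dF_dx, abs(total_dF_dx - chain))
```

The two derivatives genuinely disagree ($y_0$ versus $3x_0^2$ here), which is precisely why a single symbol $dF$ divided by a single $dx$ cannot capture both notions.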

If you fall into the trap of constantly thinking of differentials as equivalent to ordinary numbers, then what is the difference between $dx$, $\Delta x$, $\delta x$, and $\partial x$? Similarly, what is the difference between $dF$, $\Delta F$, $\delta F$, and $\partial F$? Is $\delta x = dx$? Is $\partial x = dx$? If not, do we say that $dx > \partial x$ or $dx < \partial x$? These are all very silly questions, because the infinitesimals are ill-defined by themselves and should not be thought of as real numbers.


Treating differentials like fractions is a gateway drug to further misunderstanding.

And in more than one dimension, it's just plain wrong.