Differential Equations · Numerical Methods
When exact solutions vanish — walk the slope field one step at a time.
Section 01
In your ODE course you've been solving equations analytically — separation of variables, integrating factors, exact equations. You find a formula like $y(t) = Ce^{-2t}$ and you're done. Clean. Exact.
But here's the uncomfortable truth: most first-order ODEs have no closed-form solution. The real universe is full of equations that resist every one of those techniques.
Try every method you know; none of them cracks such an equation into a neat formula. So what do engineers and physicists do? They stop asking "what is the formula for $y(t)$?" and start asking a completely different question:
The Numerical Question
"If I start at $(t_0, y_0)$, what is the value of $y$ at specific future times?"
That shift — from finding a function to computing a table of values — is the entire premise of numerical methods. Euler's Method is the oldest and simplest of these.
Key Point
Euler's method is NOT limited to linear equations. It works on any ODE of the form $\frac{dy}{dt} = f(t,y)$, linear or nonlinear, regardless of whether you can solve it analytically. Linear equations tend to appear in introductory examples only because the arithmetic stays tidy — not because the method requires it.
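As a minimal sketch in Python: the method only ever evaluates $f$ pointwise, so a nonlinear right-hand side costs nothing extra. The $\sin(ty)$ equation and the helper name here are illustrative choices, not from the lesson:

```python
import math

def euler_step(f, t, y, h):
    """One Euler step: y_{n+1} = y_n + h * f(t_n, y_n)."""
    return t + h, y + h * f(t, y)

# A nonlinear ODE with no elementary closed-form solution (illustrative choice):
f_nonlinear = lambda t, y: math.sin(t * y)

t, y = 0.0, 1.0
for _ in range(4):                  # four steps of h = 0.25 reach t = 1
    t, y = euler_step(f_nonlinear, t, y, 0.25)
print(t, y)                         # an estimate of y(1), no formula required
```

Nothing in the loop cares whether $f$ is linear; it only needs a number back from each call.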
Section 02
Forget the formula for a moment. Think about slope fields — those diagrams where you draw little arrows at each point $(t, y)$ showing the slope of the solution curve there.
You have an ODE $\dfrac{dy}{dt} = f(t, y)$. At every point $(t, y)$ in the plane, the right-hand side $f(t,y)$ is just a number — the slope of the solution curve passing through that point.
Euler's idea is beautifully simple: follow the arrows.
The Walking Metaphor
Imagine you're standing at your starting point $(t_0, y_0)$ in a foggy field. You can't see the path ahead, but you have a compass that tells you the slope of the ground right where you stand. So you: (1) look at the compass, (2) walk a small step in that direction, (3) check the compass again, (4) repeat.
The "fog" is the fact that we can't find the formula. The "compass" is $f(t,y)$ — it always tells us the slope at our current position. We can't see far ahead, but we can always take one small step.
At each step we're doing linear extrapolation — we pretend the curve is a straight line for a tiny interval and step along that line. The true curve is not a straight line, so we drift slightly. The key insight: smaller steps mean less drift per step, and the approximation becomes arbitrarily good as $h \to 0$.
Fig 1 — Euler steps (teal) vs. true solution (gold dashed). Error shown in rose. Slope field arrows in background.
Section 03
Start from the definition of the derivative. At position $(t_n, y_n)$, the slope of the solution curve is
$$\left.\frac{dy}{dt}\right|_{(t_n,\,y_n)} = f(t_n, y_n).$$
A slope is $\Delta y / \Delta t$. If we take a horizontal step of size $h$ (the step size), we expect a vertical change of
$$\Delta y \approx h\, f(t_n, y_n).$$
So our new estimated position is
$$t_{n+1} = t_n + h, \qquad y_{n+1} = y_n + h\, f(t_n, y_n).$$
That's the whole algorithm. Repeat $N$ times to reach $t_0 + Nh$.
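In code, the whole algorithm is a single loop. A sketch, with an illustrative helper name:

```python
def euler(f, t0, y0, h, n_steps):
    """Tabulate Euler's method: repeatedly apply y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    table = [(t, y)]
    for _ in range(n_steps):
        y = y + h * f(t, y)   # update y using the slope at the CURRENT point
        t = t + h             # then advance t
        table.append((t, y))
    return table

# y' = y, y(0) = 1, h = 0.5, two steps reach t = 1
pts = euler(lambda t, y: y, 0.0, 1.0, 0.5, 2)
print(pts[-1])   # (1.0, 2.25): Euler's estimate of e ≈ 2.71828
```

The table of $(t_n, y_n)$ pairs is exactly the "table of values" the numerical question asks for.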
This formula is also a first-order Taylor expansion of the true solution around $t_n$:
$$y(t_n + h) = y(t_n) + h\, y'(t_n) + \frac{h^2}{2}\, y''(t_n) + \cdots$$
Euler's method keeps only the first two terms. The discarded terms are the source of the error — they scale like $h^2$ per step, which means the total error scales like $h$ (first-order method). Halve your step size, halve your total error.
Concave-Up Curves
For equations like $y'=y$ where the curve is concave up (accelerating upward), Euler always underestimates. We use the slope at the start of each interval, but the true slope grows across the interval. Geometrically: we draw a tangent line at the bottom of a bowl, and the curve curves away above us.
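This underestimate is easy to check numerically. A sketch, assuming the illustrative `euler` helper below; the true solution of $y' = y$, $y(0) = 1$ at $t = 1$ is $e$:

```python
import math

def euler(f, t0, y0, h, n):
    """Final Euler value after n steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        t, y = t + h, y + h * f(t, y)   # slope at the start of each interval
    return y

# Euler on the concave-up curve y' = y: always below the true value e
for h in (0.5, 0.1, 0.01):
    approx = euler(lambda t, y: y, 0.0, 1.0, h, round(1.0 / h))
    print(h, approx, approx < math.e)   # the comparison is True for every h
```

Every printed approximation sits below $e \approx 2.71828$, because each tangent line is drawn at the bottom of the bowl.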
Section 04
Choose an ODE — including nonlinear ones — then drag the sliders to see how step size and number of steps affect the approximation. Watch the error stats in real time.
EULER'S METHOD EXPLORER
Euler y(final)
—
True y(final)
—
Absolute error
—
Section 05
Click an example to see the full step-by-step solution with a comparison table.
Example 01 — Linear
Exponential Growth
y' = y, y(0)=1, h=0.5
Example 02 — Linear
Non-autonomous
y' = t + y, y(0)=1, h=0.1
Example 03 — Nonlinear
Logistic-style
y' = y(1−y), y(0)=0.1, h=0.5
Example 04 — Nonlinear
Separable Nonlinear
y' = −2ty, y(0)=1, h=0.25
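Example 04 is a good self-check because its exact solution is known: $y' = -2ty$, $y(0) = 1$ is separable with solution $y(t) = e^{-t^2}$. A sketch comparing Euler against it (helper name is mine):

```python
import math

def euler(f, t0, y0, h, n):
    """Final Euler value after n steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        t, y = t + h, y + h * f(t, y)
    return y

# Example 04: y' = -2ty, y(0) = 1; exact solution y(t) = exp(-t^2)
f = lambda t, y: -2.0 * t * y
approx = euler(f, 0.0, 1.0, 0.25, 4)   # four steps of h = 0.25 reach t = 1
exact = math.exp(-1.0)                 # e^{-1} ≈ 0.3679
print(approx, exact, abs(approx - exact))
```

With $h = 0.25$ the Euler value at $t = 1$ works out to exactly $0.41015625$, an overestimate here because this curve bends the other way.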
Section 06
On an exam, Euler's Method problems all follow the same skeleton. Here is how to approach any problem systematically — before you compute a single number.
The problem gives you an IVP. Extract: the right-hand side $f(t,y)$, the initial values, the step size, and where you're headed. Write these down explicitly before computing anything.
$N = (t_{\text{target}} - t_0)\,/\,h$. This should be a whole number — if it isn't, double-check that you read $h$ correctly.
Don't just chain computations mentally. A table catches arithmetic errors and makes partial credit possible. Every column has a clear meaning.
This is the most common mistake. Euler uses the slope at the start of each interval. You compute $f(t_n, y_n)$ with the values you already have, then advance.
Update $y$ first, then increment $t$. Both updates use the current row's values, never the new ones.
Sanity-check the direction: if $f(t,y)$ is positive along your path (e.g. $y' = y$ with $y > 0$), the solution should be increasing; if it's negative, $y$ should fall. An answer moving the wrong way usually means a sign error.
Common Exam Mistakes
① Using $(t_{n+1}, y_{n+1})$ in $f$ instead of $(t_n, y_n)$. ② Forgetting to increment $t$ each step. ③ Wrong number of steps — recount if your answer looks off. ④ Rounding too aggressively mid-computation — carry at least 4 decimal places until the final answer.
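The tabular workflow above can be sketched in Python. The `euler_table` name and column layout are illustrative, not from the lesson; note how the code structurally avoids mistakes ① and ②:

```python
def euler_table(f, t0, y0, h, n_steps):
    """Rows: (n, t_n, y_n, slope f(t_n, y_n), increment h * slope)."""
    rows, t, y = [], t0, y0
    for n in range(n_steps):
        slope = f(t, y)          # slope at the START of the interval (mistake ①)
        rows.append((n, round(t, 4), round(y, 4), round(slope, 4), round(h * slope, 4)))
        y += h * slope           # update y first...
        t += h                   # ...then increment t (mistake ②)
    rows.append((n_steps, round(t, 4), round(y, 4), None, None))
    return rows

# Example 02: y' = t + y, y(0) = 1, h = 0.1, three steps
for row in euler_table(lambda t, y: t + y, 0.0, 1.0, 0.1, 3):
    print(row)
```

Rounding is applied only for display; the raw `t` and `y` carry full precision between steps, which is exactly the point of mistake ④.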
Section 07
At each single step, we discard the $h^2$ and higher terms of the Taylor series. The error introduced in one step is called the local truncation error:
$$\tau_n = \frac{h^2}{2}\, y''(\xi_n) = O(h^2), \qquad \xi_n \in (t_n, t_{n+1}).$$
Over $N = (b - a)/h$ steps, these errors accumulate. The global truncation error (total drift from the true solution) scales as
$$E_{\text{global}} \approx N \cdot O(h^2) = \frac{b - a}{h} \cdot O(h^2) = O(h).$$
Halve $h$: you take twice as many steps, but each step's local error drops by a factor of four ($h^2 \to h^2/4$). Net result: the global error halves. This is why Euler is called a first-order method.
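That halving can be observed directly. A sketch on $y' = y$, whose true value at $t = 1$ is $e$ (helper name is mine):

```python
import math

def euler_final(f, t0, y0, h, n):
    """Final Euler value after n steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        t, y = t + h, y + h * f(t, y)
    return y

# Global error at t = 1 for y' = y, y(0) = 1, halving h each time
errors = []
for k in range(4, 12):
    n = 2 ** k
    errors.append(abs(euler_final(lambda t, y: y, 0.0, 1.0, 1.0 / n, n) - math.e))

ratios = [a / b for a, b in zip(errors, errors[1:])]
print([round(r, 3) for r in ratios])   # each ratio ≈ 2: halving h halves the error
```

Taking $\log_2$ of these ratios gives the slope of the log-log line in Fig 2: approximately 1, confirming first-order convergence.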
Fig 2 — Global error vs. step size on log-log scale. Slope ≈ 1 confirms first-order convergence.
Euler is order 1. The Runge-Kutta 4 (RK4) method — the workhorse of scientific computing — evaluates the slope at four points per step and achieves order 4 (global error $\sim h^4$). Halve $h$, error drops by factor 16. The price: 4× the function evaluations per step. That tradeoff — accuracy vs. cost — is the central question of numerical ODE theory.
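A side-by-side sketch makes the tradeoff concrete. The `rk4_step` below is the classic four-stage RK4 tableau; the helper names are illustrative:

```python
import math

def euler_step(f, t, y, h):
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    """Classic RK4: a weighted average of four slope samples per step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: y            # y' = y; true value at t = 1 is e
for n in (10, 20):
    e_err = abs(integrate(euler_step, f, 0.0, 1.0, 1.0 / n, n) - math.e)
    r_err = abs(integrate(rk4_step, f, 0.0, 1.0, 1.0 / n, n) - math.e)
    print(n, e_err, r_err)    # RK4's error is orders of magnitude smaller
```

RK4 pays four function evaluations per step where Euler pays one, yet for the same $h$ its error is vanishingly small next to Euler's: the accuracy-vs-cost tradeoff in miniature.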
Why Euler Still Matters
Even though RK4 dominates in practice, Euler's method is the conceptual foundation. Every higher-order method is essentially "take a smarter average of slopes." Understanding Euler deeply means you understand all of them structurally.