Differential Equations · Numerical Methods

Euler's Method

When exact solutions vanish — walk the slope field one step at a time.


The Problem

In your ODE course you've been solving equations analytically — separation of variables, integrating factors, exact equations. You find a formula like $y(t) = Ce^{-2t}$ and you're done. Clean. Exact.

But here's the uncomfortable truth: most first-order ODEs have no closed-form solution. The real universe is full of equations like:

$$\frac{dy}{dt} = t^2 + \sin(y), \quad \text{or} \quad \frac{dy}{dt} = e^{-t^2} \cdot y$$

Try all the techniques you know — none of them crack it into a neat formula. What do engineers and physicists do? They stop asking "what is the formula for $y(t)$?" and start asking a completely different question:

The Numerical Question

"If I start at $(t_0, y_0)$, what is the value of $y$ at specific future times?"

That shift — from finding a function to computing a table of values — is the entire premise of numerical methods. Euler's Method is the oldest and simplest of these.

Key Point

Euler's method is NOT limited to linear equations. It works on any ODE of the form $\frac{dy}{dt} = f(t,y)$, linear or nonlinear, regardless of whether you can solve it analytically. Linear equations tend to appear in introductory examples only because the arithmetic stays tidy — not because the method requires it.
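To make the point concrete, here is a minimal sketch applying Euler's method to the nonlinear equation $\frac{dy}{dt} = t^2 + \sin(y)$ from above. The initial condition $y(0) = 1$ and the step size are chosen purely for illustration; the loop never cares whether $f$ is linear.

```python
import math

# Euler's method on the nonlinear ODE dy/dt = t^2 + sin(y).
# No closed-form solution is needed -- only the ability to
# evaluate the right-hand side f(t, y) at a point.
def f(t, y):
    return t**2 + math.sin(y)

t, y, h = 0.0, 1.0, 0.01      # illustrative IVP: y(0) = 1, step size 0.01
for _ in range(100):          # 100 steps of size 0.01 marches t from 0 to 1
    y += h * f(t, y)          # linear or nonlinear, the update is identical
    t += h

print(f"y(1.0) is approximately {y:.4f}")
```

The update line is the same one you would write for $y' = y$; nonlinearity changes nothing about the algorithm, only about whether an analytic check is available.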


Core Intuition

Forget the formula for a moment. Think about slope fields — those diagrams where you draw little arrows at each point $(t, y)$ showing the slope of the solution curve there.

You have an ODE $\dfrac{dy}{dt} = f(t, y)$. At every point $(t, y)$ in the plane, the right-hand side $f(t,y)$ is just a number — the slope of the solution curve passing through that point.

Euler's idea is beautifully simple: follow the arrows.

The Walking Metaphor

Imagine you're standing at your starting point $(t_0, y_0)$ in a foggy field. You can't see the path ahead, but you have a compass that tells you the slope of the ground right where you stand. So you: (1) look at the compass, (2) walk a small step in that direction, (3) check the compass again, (4) repeat.

The "fog" is the fact that we can't find the formula. The "compass" is $f(t,y)$ — it always tells us the slope at our current position. We can't see far ahead, but we can always take one small step.

Why This Works (And Why It's Approximate)

At each step we're doing linear extrapolation — we pretend the curve is a straight line for a tiny interval and step along that line. The true curve is not a straight line, so we drift slightly. The key insight: smaller steps mean less drift per step, and the approximation becomes arbitrarily good as $h \to 0$.

Fig 1 — Euler steps (teal) vs. true solution (gold dashed). Error shown in rose. Slope field arrows in background.


The Formula

Deriving It From Scratch

Start from the definition of derivative. At position $(t_n, y_n)$, the slope of the solution curve is:

$$\frac{dy}{dt}\bigg|_{t=t_n} = f(t_n,\, y_n)$$

A slope is $\Delta y / \Delta t$. If we take a horizontal step of size $h$ (the step size), we expect a vertical change of:

$$\Delta y \;=\; \frac{dy}{dt} \cdot \Delta t \;=\; f(t_n,\, y_n)\cdot h$$

So our new estimated position is:

$$\boxed{y_{n+1} = y_n + h\cdot f(t_n,\, y_n)}$$ $$t_{n+1} = t_n + h$$

That's the whole algorithm. Repeat $N$ times to reach $t_0 + Nh$.
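The two update equations translate directly into code. A minimal sketch (function names are my own; the sanity check uses $y' = y$, $y(0) = 1$, $h = 0.5$, where one step gives $1 + 0.5 \cdot 1 = 1.5$ by hand):

```python
def euler_step(f, t, y, h):
    """One Euler update: y_{n+1} = y_n + h*f(t_n, y_n), then t_{n+1} = t_n + h."""
    return t + h, y + h * f(t, y)

def euler(f, t0, y0, h, n_steps):
    """Repeat the update n_steps times, returning the full trajectory."""
    ts, ys = [t0], [y0]
    for _ in range(n_steps):
        t, y = euler_step(f, ts[-1], ys[-1], h)
        ts.append(t)
        ys.append(y)
    return ts, ys

# Sanity check on y' = y, y(0) = 1, h = 0.5:
ts, ys = euler(lambda t, y: y, 0.0, 1.0, 0.5, 2)
print(ys)  # [1.0, 1.5, 2.25]
```

Note that the whole method is the single line inside `euler_step`; everything else is bookkeeping.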

Connection to Taylor Series

This formula is also a first-order Taylor expansion of the true solution around $t_n$:

$$y(t_n + h) \;=\; y(t_n) + h\cdot y'(t_n) + \underbrace{\frac{h^2}{2}y''(t_n) + \cdots}_{\text{terms we discard}}$$

Euler's method keeps only the first two terms. The discarded terms are the source of the error — they scale like $h^2$ per step, which means the total error scales like $h$ (first-order method). Halve your step size, halve your total error.

Concave-Up Curves

For equations like $y'=y$ where the curve is concave up (accelerating upward), Euler always underestimates. We use the slope at the start of each interval, but the true slope grows across the interval. Geometrically: we draw a tangent line at the bottom of a bowl, and the curve curves away above us.
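You can watch the underestimate happen numerically. This sketch marches $y' = y$, $y(0) = 1$ from $t = 0$ to $t = 1$ with $h = 0.1$ (my choice of step size) and compares against the exact value $e$:

```python
import math

# y' = y is concave up: the tangent at the left endpoint of each
# interval lies below the curve, so Euler systematically undershoots.
y, h = 1.0, 0.1
for _ in range(10):       # 10 steps of 0.1 marches t from 0 to 1
    y += h * y            # slope taken at the start of the interval
print(y)                  # 1.1**10, about 2.5937
print(math.e)             # true value, about 2.7183 -- Euler lands below
```

Each step multiplies $y$ by $1.1$, so the final value is $1.1^{10} \approx 2.594$, noticeably under $e \approx 2.718$.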


Interactive Demo

Choose an ODE — including nonlinear ones — then drag the sliders to see how step size and number of steps affect the approximation. Watch the error stats in real time.

[Euler's Method Explorer: reports Euler y(final), true y(final), and the absolute error.]


Worked Examples

Click an example to see the full step-by-step solution with a comparison table.

Example 01 — Linear: Exponential Growth. y' = y, y(0)=1, h=0.5

Example 02 — Linear: Non-autonomous. y' = t + y, y(0)=1, h=0.1

Example 03 — Nonlinear: Logistic-style. y' = y(1−y), y(0)=0.1, h=0.5

Example 04 — Nonlinear: Separable. y' = −2ty, y(0)=1, h=0.25
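As a sketch of what one of these solutions looks like in code, here is Example 04 carried to $t = 1$ (four steps of $h = 0.25$). The exact solution of this separable equation is $y(t) = e^{-t^2}$, which gives us something to compare against:

```python
import math

# Example 04: y' = -2ty, y(0) = 1, h = 0.25, target t = 1.
# Exact solution (by separation of variables): y(t) = exp(-t^2).
f = lambda t, y: -2.0 * t * y
t, y, h = 0.0, 1.0, 0.25
for n in range(4):
    print(f"n={n}  t={t:.2f}  y={y:.6f}  f={f(t, y):.6f}")
    y += h * f(t, y)      # slope at the current point
    t += h
print(f"Euler y(1) = {y:.6f}")            # 0.410156
print(f"exact e^(-1) = {math.exp(-1):.6f}")  # 0.367879
```

Every intermediate value here is an exact binary fraction, so you can reproduce the table by hand: $y_1 = 1$, $y_2 = 0.875$, $y_3 = 0.65625$, $y_4 = 0.41015625$.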


Problem Strategy

On an exam, Euler's Method problems all follow the same skeleton. Here is how to approach any problem systematically — before you compute a single number.

  1. Read off $f(t,y)$, $t_0$, $y_0$, $h$, and the target $t$

    The problem gives you an IVP. Extract: the right-hand side $f(t,y)$, the initial values, the step size, and where you're headed. Write these down explicitly before computing anything.

  2. Compute how many steps $N$ you need

    $N = (t_{\text{target}} - t_0)\,/\,h$. This should be a whole number — if it isn't, double-check that you read $h$ correctly.

  3. Set up a table: $n$, $t_n$, $y_n$, $f(t_n, y_n)$, $h\cdot f$

    Don't just chain computations mentally. A table catches arithmetic errors and makes partial credit possible. Every column has a clear meaning.

  4. At each row: evaluate $f$ at the current $(t_n, y_n)$ — never the next

    This is the most common mistake. Euler uses the slope at the start of each interval. You compute $f(t_n, y_n)$ with the values you already have, then advance.

  5. Advance: $y_{n+1} = y_n + h\cdot f(t_n, y_n)$, then $t_{n+1} = t_n + h$

    The update of $y$ comes first, then increment $t$. Both use the current row's values.

  6. Check: does the answer make physical/geometric sense?

    If $f(t_n, y_n)$ is positive along your path, the solution should be rising; if it is negative, $y$ should fall. If your answer moves in the wrong direction, you likely plugged in a wrong sign somewhere.
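The six steps above can be mechanized. This sketch (my own helper, not a standard routine) extracts $N$, checks it is a whole number, and builds the table row by row, always evaluating $f$ at the current point; it is shown on Example 02 ($y' = t + y$, $y(0) = 1$, $h = 0.1$) taken two steps:

```python
def euler_table(f, t0, y0, h, t_target):
    """Build the exam-style table: columns n, t_n, y_n, f(t_n, y_n), h*f."""
    n_steps = round((t_target - t0) / h)
    # Step 2's sanity check: N must be a whole number of steps.
    assert abs(n_steps * h - (t_target - t0)) < 1e-9, "N is not a whole number"
    rows, t, y = [], t0, y0
    for n in range(n_steps):
        slope = f(t, y)                         # step 4: current point only
        rows.append((n, t, y, slope, h * slope))
        y, t = y + h * slope, t + h             # step 5: advance y, then t
    rows.append((n_steps, t, y, None, None))    # final row holds the answer
    return rows

for row in euler_table(lambda t, y: t + y, 0.0, 1.0, 0.1, 0.2):
    print(row)
```

Running it gives $y_1 = 1.1$ and $y_2 = 1.22$, matching what the hand table for Example 02 would produce.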

Common Exam Mistakes

Using $(t_{n+1}, y_{n+1})$ in $f$ instead of $(t_n, y_n)$. Forgetting to increment $t$ each step. Taking the wrong number of steps — recount if your answer looks off. Rounding too aggressively mid-computation — carry at least 4 decimal places until the final answer.


Error & Step Size

Local vs. Global Error

At each single step, we discard the $h^2$ and higher terms of the Taylor series. The error introduced in one step is called the local truncation error:

$$\text{Local error per step} \;\sim\; \frac{h^2}{2}\,y''(t_n)$$

Over $N = (b - a)/h$ steps, these errors accumulate. The global truncation error (total drift from the true solution) scales as:

$$\text{Global error} \;\sim\; C \cdot h \quad \text{(first-order method)}$$

Halve $h$ and you take twice as many steps — but each step's local error shrinks by a factor of four, since it scales like $h^2$. Twice the steps at a quarter the error apiece: the global error halves. This is why Euler is called a first-order method.
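The first-order behavior is easy to check empirically. This sketch measures the global error at $t = 1$ for $y' = y$, $y(0) = 1$ (where the true answer is $e$) while halving the step size; the errors should roughly halve each time:

```python
import math

def euler_final(h):
    """Euler estimate of y(1) for y' = y, y(0) = 1."""
    y, n = 1.0, round(1.0 / h)
    for _ in range(n):
        y += h * y
    return y

for h in (0.1, 0.05, 0.025):
    err = abs(math.e - euler_final(h))
    print(f"h={h:<6} global error={err:.5f}")
    # successive errors shrink by a factor of about 2 -- first order
```

On a log-log plot these points would fall on a line of slope roughly 1, which is exactly what Fig 2 shows.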

Fig 2 — Global error vs. step size on log-log scale. Slope ≈ 1 confirms first-order convergence.

Better Methods (What Comes Next)

Euler is order 1. The Runge-Kutta 4 (RK4) method — the workhorse of scientific computing — evaluates the slope at four points per step and achieves order 4 (global error $\sim h^4$). Halve $h$, error drops by factor 16. The price: 4× the function evaluations per step. That tradeoff — accuracy vs. cost — is the central question of numerical ODE theory.
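To make the tradeoff tangible, here is a sketch of a classical RK4 step next to Euler on $y' = y$ over $[0, 1]$ with $h = 0.1$ (the test problem and step size are my choices). RK4 evaluates the slope four times per step and averages them with weights $\frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$:

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta 4 step: four slope samples, one update."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: y
h, y_euler, y_rk4 = 0.1, 1.0, 1.0
for n in range(10):
    t = n * h
    y_euler += h * f(t, y_euler)       # 1 slope evaluation per step
    y_rk4 = rk4_step(f, t, y_rk4, h)   # 4 slope evaluations per step

print(abs(math.e - y_euler))  # error around 0.12
print(abs(math.e - y_rk4))    # error several orders of magnitude smaller
```

Same step count, 4x the work per step, and the RK4 error is tiny by comparison — the accuracy-versus-cost tradeoff in miniature.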

Why Euler Still Matters

Even though RK4 dominates in practice, Euler's method is the conceptual foundation. Every higher-order method is essentially "take a smarter average of slopes." Understanding Euler deeply means you understand all of them structurally.