What Is an ODE, Really?
Before any formula, ask the physical question: what does it mean to "describe how something changes"? That's the whole job of a differential equation. Not a formula for where something is — a rule for where it's going.
The most general first-order ODE looks like:

\[ \frac{dy}{dx} = F(x, y) \]
Read this as: "the slope of \(y\) at position \(x\), given current value \(y\), is exactly \(F(x, y)\)." Notice \(F\) can depend on both \(x\) and \(y\). That's what makes ODEs rich — the slope of the curve can depend on where the curve currently is.
The word ordinary just means there is only one independent variable (here, \(x\) or \(t\)). When you have multiple independent variables you get partial differential equations — a separate story.
The Slope Pipeline
Here's the conceptual assembly line that connects raw data to a solved function: slopes as geometry → the tangent line as a local approximation → slope fields → integration with an initial condition.
Slopes as Geometry
A derivative is just the slope of a tangent line at a point. When you write \(\frac{dy}{dx} = 2x\), you are painting a rule: at every point \((x, y)\), the tangent to the solution must have slope \(2x\).
Think of it like a vector field of arrows. Each arrow points in the direction the solution curve must travel when it passes through that location. The solution curve is the one that stays tangent to every arrow it touches.
The Tangent Line as a Local Approximation
The derivative gives us the best linear approximation of the function near any point. If at \(x_0\) the function value is \(y_0\), and the ODE says the slope is \(m = F(x_0, y_0)\), then locally:

\[ y \approx y_0 + m\,(x - x_0) \]
This is Euler's method in disguise — step forward a tiny bit using the local slope, re-evaluate the slope, step again. That's how computers numerically integrate ODEs. Each step says: "right now the slope is this, so let me walk a tiny bit in that direction."
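The tangent-line step can be sketched in a few lines. This is a minimal Python sketch; the ODE \(dy/dx = 2x\) and the starting point \((1, 1)\) are illustrative choices, since the exact solution through that point is \(y = x^2\):

```python
# One tangent-line (single Euler) step for dy/dx = 2x.
# The exact solution through (1, 1) is y = x^2, so we can measure the error.
def F(x, y):
    return 2 * x

x0, y0 = 1.0, 1.0
dx = 0.01
y_approx = y0 + F(x0, y0) * dx   # y ≈ y0 + m·Δx
y_exact = (x0 + dx) ** 2          # value of x^2 at the new point
print(abs(y_exact - y_approx))    # error is O(dx^2), here about 1e-4
```

The error of a single step shrinks quadratically with \(\Delta x\), which is exactly the per-step behavior quoted for Euler's method later in this section.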
Slope Fields — The ODE Made Visible
A slope field (or direction field) is the geometric heart of any first-order ODE. At a grid of points \((x, y)\), we draw a short tick mark whose angle matches the slope \(F(x, y)\) dictated by the ODE. Solution curves must stay tangent to these marks everywhere they pass.
Click anywhere on the field to draw a solution curve through that point (Euler's method, 500 steps).
Notice something crucial: no matter what ODE you choose, you see a family of non-crossing curves. Each click seeds a different initial condition — the curve is uniquely determined once you pick a starting point. This is the Existence and Uniqueness theorem made visual.
Integration: Rebuilding the Function from Its Slopes
Differentiation destroys information — specifically, it destroys the constant in a function. Both \(t^2\) and \(t^2 + 7\) have derivative \(2t\). Integration reverses differentiation, but it can't recover what was destroyed without help. That help is the initial condition.
Why \(\int\) is the Inverse of \(\frac{d}{dt}\)
The Fundamental Theorem of Calculus says exactly this. If \(\frac{dr}{dt} = g(t)\), then:

\[ r(t) = r(t_0) + \int_{t_0}^{t} g(\tau)\,d\tau \]
This integral accumulates all the tiny slope \(\times\) time contributions from \(t_0\) to \(t\). Each infinitesimal \(g(\tau)\,d\tau\) is a tiny nudge — integration sums them all up to get the total displacement in \(r\).
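The accumulation idea can be made concrete with a Riemann-style loop. A minimal sketch, assuming \(g(t) = 2t\) and \(r(0) = 0\), so the exact answer is \(r(T) = T^2\):

```python
# Accumulate slope × time contributions: r(T) ≈ r(0) + Σ g(τ)·Δτ.
def g(t):
    return 2 * t

T, n = 3.0, 100_000
dt = T / n
r = 0.0
for i in range(n):
    tau = i * dt          # left endpoint of each sub-interval
    r += g(tau) * dt      # one tiny nudge g(τ)·dτ
print(r)  # ≈ 9.0 (= T^2)
```

As `n` grows, the sum converges to the exact integral, which is the same limit the gold-bar Riemann visualization below illustrates.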
Step-by-step: Solving \(\frac{dr}{dt} = 2t\)
Step 1 — Separate variables. Move all \(r\)-stuff left, all \(t\)-stuff right:

\[ dr = 2t\,dt \]
Step 2 — Integrate both sides. The left side integrates trivially (the antiderivative of \(dr\) is just \(r\)):

\[ \int dr = \int 2t\,dt \quad\Rightarrow\quad r = t^2 + C \]
Step 3 — Apply initial condition. Suppose \(r(0) = 3\):

\[ 3 = 0^2 + C \quad\Rightarrow\quad C = 3 \quad\Rightarrow\quad r(t) = t^2 + 3 \]
The \(+C\) encodes the entire family of solutions. The initial condition selects exactly one member of that family. Think of the family as a stack of identical curves shifted vertically — the initial condition pins down which level you live on.
Gold bars = Riemann sum approximation of \(\int_0^T g(t)\,dt\). As subdivisions → ∞, the sum → exact area → exact solution.
Initial Conditions: Selecting One Curve from the Family
Every first-order ODE has infinitely many solutions — one for every value of the constant \(C\). An initial condition is a single data point that tells us which solution we're on.
For \(\frac{dy}{dx} = -y\), the general solution is:

\[ y = Ce^{-x} \]
Every choice of \(C\) gives a valid exponential decay. Specifying \(y(0) = 2\) forces \(C = 2\). The table below shows how the solution changes with the initial condition:
| Initial Condition \(y(0)\) | Constant \(C\) | Solution |
|---|---|---|
| \(-3\) | \(-3\) | \(y = -3e^{-x}\) |
| \(0\) | \(0\) | \(y = 0\) (trivial/equilibrium) |
| \(1\) | \(1\) | \(y = e^{-x}\) |
| \(2\) | \(2\) | \(y = 2e^{-x}\) |
| \(5\) | \(5\) | \(y = 5e^{-x}\) |
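Each row of the table can be checked directly: \(y(x) = Ce^{-x}\) hits the initial condition at \(x = 0\) and satisfies the ODE everywhere. A small verification sketch (the test point \(x = 0.7\) and step \(h = 10^{-6}\) are arbitrary choices):

```python
import math

# Check each table row: y(x) = C·e^{-x} satisfies dy/dx = -y and y(0) = C.
for C in (-3, 0, 1, 2, 5):
    y = lambda x: C * math.exp(-x)
    assert y(0) == C                       # initial condition selects C directly
    x, h = 0.7, 1e-6
    dy_dx = (y(x + h) - y(x - h)) / (2*h)  # numerical slope (central difference)
    assert abs(dy_dx - (-y(x))) < 1e-6     # matches the ODE dy/dx = -y
print("all rows check out")
```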
The ODE Zoo: Common Forms and Their Physics
Different structures of \(F(x, y)\) give very different behaviors. Here are the canonical first-order ODEs every engineer must recognize on sight:
1 — Separable: \(\frac{dy}{dx} = g(x)\cdot h(y)\)
Variables can be cleanly separated to opposite sides. The slope factors into a part that depends only on \(x\) and a part only on \(y\):

\[ \frac{dy}{h(y)} = g(x)\,dx \]
Example: \(\frac{dy}{dx} = xy\) separates to \(\frac{dy}{y} = x\,dx \Rightarrow \ln|y| = \frac{x^2}{2} + C \Rightarrow y = Ae^{x^2/2}\). The closely related \(\frac{dy}{dx} = -xy\) gives \(y = Ae^{-x^2/2}\), the Gaussian bell curve.
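The separable solution can be cross-checked numerically. A sketch assuming the Gaussian-producing variant \(\frac{dy}{dx} = -xy\) with \(y(0) = 1\), stepped with Euler's method and compared against the closed form \(y = e^{-x^2/2}\):

```python
import math

# Integrate dy/dx = -x·y numerically and compare to y = e^{-x^2/2}.
def F(x, y):
    return -x * y

x, y, dx = 0.0, 1.0, 1e-4
while x < 2.0:
    y += F(x, y) * dx   # Euler step with the separable ODE's slope
    x += dx

exact = math.exp(-2.0**2 / 2)   # closed-form value at x = 2
print(y, exact)                  # the two agree to several decimal places
```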
2 — Linear: \(\frac{dy}{dx} + P(x)\,y = Q(x)\)
The unknown \(y\) appears to the first power only. Solved with an integrating factor \(\mu(x) = e^{\int P(x)\,dx}\), which transforms the left side into an exact derivative:

\[ \frac{d}{dx}\bigl(\mu(x)\,y\bigr) = \mu(x)\,Q(x) \]
RC circuit equation: \(\frac{dV}{dt} + \frac{1}{RC}V = \frac{V_s}{RC}\) is exactly this form. The integrating factor is \(e^{t/RC}\).
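Carrying the integrating factor through for a constant source gives the familiar charging curve \(V(t) = V_s + (V_0 - V_s)e^{-t/RC}\). A sketch with illustrative component values (R = 1 kΩ, C = 1 µF, a 5 V step from a discharged capacitor; these numbers are assumptions, not from the text):

```python
import math

# Closed-form solution of dV/dt + V/(RC) = Vs/(RC), from the integrating
# factor e^{t/RC}: V(t) = Vs + (V0 - Vs)·e^{-t/RC}.
R, Cap, Vs, V0 = 1e3, 1e-6, 5.0, 0.0   # illustrative values
tau = R * Cap                            # time constant RC = 1 ms

def V(t):
    return Vs + (V0 - Vs) * math.exp(-t / tau)

print(V(tau))      # after one time constant: ≈ 63% of the way to Vs
print(V(5 * tau))  # after 5τ the capacitor is essentially fully charged
```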
3 — Autonomous: \(\frac{dy}{dt} = F(y)\)
The slope depends only on the current value \(y\), not on time \(t\) explicitly. Population dynamics, radioactive decay, RC circuits with no driving signal — all autonomous. Equilibria exist where \(F(y^*) = 0\); stability depends on the sign of \(F'(y^*)\).
4 — Exact: \(M\,dx + N\,dy = 0\) with \(\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}\)
The ODE hides a potential function \(\Phi(x,y)\) such that \(d\Phi = 0\). The solution is simply \(\Phi(x,y) = C\) — a level curve. This connects ODEs to conservative vector fields from Physics 2.
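The exactness condition is easy to test numerically. A sketch using the hypothetical example \(M = 2xy\), \(N = x^2\), which comes from the potential \(\Phi(x, y) = x^2 y\) (example and test point chosen for illustration):

```python
# Exactness test via numerical partials: ∂M/∂y should equal ∂N/∂x.
def M(x, y): return 2 * x * y
def N(x, y): return x ** 2

x, y, h = 1.3, 0.4, 1e-6
dM_dy = (M(x, y + h) - M(x, y - h)) / (2 * h)   # ≈ 2x
dN_dx = (N(x + h, y) - N(x - h, y)) / (2 * h)   # ≈ 2x
print(abs(dM_dy - dN_dx))  # ~0: the ODE 2xy·dx + x^2·dy = 0 is exact
```

Because the partials match, solution curves are the level sets \(x^2 y = C\).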
Phase Line: Stability Without Solving
For an autonomous ODE \(\frac{dy}{dt} = F(y)\), you can understand the behavior of solutions without integrating at all, just by analyzing \(F(y)\).
Plot \(F(y)\) vs \(y\). Wherever \(F(y) > 0\), solutions move upward (increasing \(y\)). Wherever \(F(y) < 0\), solutions move downward. Wherever \(F(y) = 0\), solutions freeze — these are equilibria.
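This sign analysis can be automated. A minimal sketch, assuming the logistic equation \(F(y) = y(1 - y)\) as the example (the function names and the finite-difference step are my choices):

```python
# Phase-line analysis without solving: equilibria of dy/dt = F(y) are the
# roots of F; an equilibrium y* is stable when F'(y*) < 0.
def F(y):
    return y * (1 - y)   # logistic right-hand side, equilibria at 0 and 1

def classify(y_star, h=1e-6):
    slope = (F(y_star + h) - F(y_star - h)) / (2 * h)   # numerical F'(y*)
    return "stable" if slope < 0 else "unstable"

print(classify(0.0))  # F'(0) = 1  > 0, so solutions flee y = 0
print(classify(1.0))  # F'(1) = -1 < 0, so solutions settle at y = 1
```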
Left panel: \(F(y)\) vs \(y\). Right panel: solution curves \(y(t)\) starting from different initial values. Green dots = stable equilibria. Red dots = unstable equilibria.
Euler's Method: Numerical ODE Solving from Scratch
Euler's method is the most direct translation of the ODE definition into an algorithm. If \(\frac{dy}{dx} = F(x, y)\) and we know \(y_0 = y(x_0)\), we step forward:

\[ y_{n+1} = y_n + F(x_n, y_n)\,\Delta x, \qquad x_{n+1} = x_n + \Delta x \]
At each step: evaluate the slope at the current point, walk a tiny distance \(\Delta x\) in that direction, repeat. This is just the tangent-line approximation applied over and over.
The error per step is \(\mathcal{O}(\Delta x^2)\), and the global error over a fixed interval is \(\mathcal{O}(\Delta x)\) — so halving the step size halves the error. Better methods (Runge-Kutta) get \(\mathcal{O}(\Delta x^4)\) error.
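The convergence orders can be observed directly. A sketch comparing Euler and classical fourth-order Runge-Kutta on \(\frac{dy}{dt} = -y\), \(y(0) = 1\), over \([0, 1]\) (the test problem is my choice; exact answer \(e^{-1}\)):

```python
import math

# Halving the step should halve Euler's global error but cut RK4's by ~16x.
def euler(f, y, t, h, n):
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y, t, h, n):
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1)
e1 = abs(euler(f, 1.0, 0.0, 0.10, 10) - exact)
e2 = abs(euler(f, 1.0, 0.0, 0.05, 20) - exact)
r1 = abs(rk4(f, 1.0, 0.0, 0.10, 10) - exact)
r2 = abs(rk4(f, 1.0, 0.0, 0.05, 20) - exact)
print(e1 / e2)   # ≈ 2:  first-order convergence, O(Δx)
print(r1 / r2)   # ≈ 16: fourth-order convergence, O(Δx^4)
```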
Teal = Euler approximation. Gold = exact solution. Watch the error shrink as step count increases.