HSD Chapter 2
Notes from Chapter 2 of Hirsch, Smale, Devaney (HSD)
Basics of Systems of ODEs
A system of differential equations is a collection of \(n\) interrelated differential equations: \[ \begin{aligned} x'_1 &= f_1(t, x_1, x_2, \ldots, x_n) \\ x'_2 &= f_2(t, x_1, x_2, \ldots, x_n) \\ \vdots \\ x'_n &= f_n(t, x_1, x_2, \ldots, x_n) \end{aligned} \]
Each of these functions is assumed to be \(C^\infty\), which means that we can take partial derivatives of any order. This assumption is mostly for convenience. Using vector notation, we can rewrite the system compactly as \(X' = F(t, X)\), where \(F(t, X)\) is simply
\[ F(t, X) = \begin{bmatrix} f_1(t, x_1, x_2, \ldots, x_n) \\ f_2(t, x_1, x_2, \ldots, x_n) \\ \vdots \\ f_n(t, x_1, x_2, \ldots, x_n) \end{bmatrix}. \]
A solution of this system is a vector-valued function \(X(t)\) such that \(X'(t) = F(t, X(t))\). If none of the \(f_i\) depends on \(t\), the system is called autonomous, and we can drop the time dependence and write \(X' = F(X)\). Otherwise, it is a non-autonomous system. As in the previous chapter, the equilibrium points of an autonomous system are the points \(X_0\) satisfying \(F(X_0) = \mathbf{0}\).
Second Order Differential Equations
A second order ODE has the form \(x'' = f(t, x, x')\). A popular example is Newton’s equation, \(mx'' = f(x)\); another is the forced harmonic oscillator, \(mx'' + bx' + kx = f(t)\). The simplest general second order ODE is \(x'' + ax' + bx = 0\), where \(a\) and \(b\) are constants.
The upshot is that we can convert a second order ODE into a system of first order equations by defining a new variable \(y = x'\). The second order ODE \(x'' + ax' + bx = 0\) then becomes the system:
\[ \begin{aligned} x' &= y \\ y' &= -bx -ay, \end{aligned} \]
which we might generalize as
\[ \begin{aligned} x' &= f(x,y) \\ y' &= g(x,y). \end{aligned} \]
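This reduction is easy to carry out numerically. Below is a minimal sketch (not from the text) that applies a hand-rolled classical Runge–Kutta (RK4) stepper to the concrete case \(x'' + x = 0\) with \(x(0) = 0\), \(x'(0) = 1\), whose exact solution is \(x(t) = \sin t\):

```python
import numpy as np

def rk4_step(f, t, X, h):
    """One classical Runge-Kutta step for X' = f(t, X)."""
    k1 = f(t, X)
    k2 = f(t + h / 2, X + h / 2 * k1)
    k3 = f(t + h / 2, X + h / 2 * k2)
    k4 = f(t + h, X + h * k3)
    return X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Concrete case: x'' + x = 0, i.e. a = 0, b = 1 in x'' + ax' + bx = 0.
a, b = 0.0, 1.0

def F(t, X):
    x, y = X
    return np.array([y, -b * x - a * y])  # x' = y, y' = -bx - ay

X = np.array([0.0, 1.0])  # (x(0), x'(0)) = (0, 1), so x(t) = sin(t)
t, h = 0.0, 0.01
while t < np.pi / 2:
    X = rk4_step(F, t, X, h)
    t += h

print(X[0])  # close to sin(pi/2) = 1
```

The integrator never sees a second derivative: the substitution \(y = x'\) turns the problem into a first order system that any standard ODE solver can handle.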
Collectively, in a 2D system, \([f(x,y), g(x,y)]\) forms the vector \(F(x,y)\), with \(x\) and \(y\) components. As an example, we might have the system:
\[ \begin{aligned} x' &= y, \\ y' &= -x. \end{aligned} \]
We can draw the direction field of this system, which produces a 2D vector \(F(x,y) = [f(x,y), g(x,y)]\) for each \((x,y)\) position on the plane. In other words, at each point \((x,y)\), we place a vector showing the direction of the system. So far, we haven’t solved the ODE; but this gives us a visual representation of how the system behaves at each point.
Code

```python
import matplotlib.pyplot as plt
import numpy as np

X, Y = np.meshgrid(np.arange(-2, 2, 0.1), np.arange(-2, 2, 0.1))
dx = Y
dy = -X
mag = np.sqrt(dx**2 + dy**2)
mag[mag == 0] = 1  # guard against division by zero at the origin
plt.quiver(X, Y, dx / mag, dy / mag, color='r')
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.title('Direction Field')
plt.grid()
plt.show()
```

Unsurprisingly, the solution is:
\[ \begin{aligned} x(t) &= a \sin(t) \\ y(t) &= a \cos(t). \end{aligned} \]
We can quickly verify that this is the case: \[ \begin{aligned} x'(t) &= a\cos(t) = y(t) \\ y'(t) &= -a\sin(t) = -x(t). \end{aligned} \]
If we pick an \(a\), we get a circle of radius \(|a|\) centered at the origin. Of course, when \(a=0\), the solution is the constant function \(X(t) = (0,0)\), which is the equilibrium point.
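We can also check the circular orbits numerically. A sketch (again using a hand-rolled RK4 stepper, which is an assumption, not the book's method): integrate \(x' = y\), \(y' = -x\) from \((x, y) = (0, a)\) and track \(\sqrt{x^2 + y^2}\), which should stay pinned at \(a\):

```python
import numpy as np

def rk4_step(f, X, h):
    # One classical Runge-Kutta step for the autonomous system X' = f(X).
    k1 = f(X)
    k2 = f(X + h / 2 * k1)
    k3 = f(X + h / 2 * k2)
    k4 = f(X + h * k3)
    return X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

F = lambda X: np.array([X[1], -X[0]])  # x' = y, y' = -x

a = 1.5                      # radius, set by the initial condition
X = np.array([0.0, a])       # (x(0), y(0)) = (a sin 0, a cos 0) = (0, a)
h = 0.01
max_dev = 0.0
for _ in range(2000):        # integrate for 20 time units
    X = rk4_step(F, X, h)
    max_dev = max(max_dev, abs(np.hypot(X[0], X[1]) - a))

print(max_dev)  # stays tiny: the orbit really is the circle of radius a
```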
Planar Linear Systems
The system above can be written in matrix form
\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}. \]
In general, any 2D planar linear system can be written as
\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}, \]
where we call the matrix of constants \(A\). The origin is always an equilibrium point. Other equilibria can be found by solving the system
\[ \begin{aligned} ax + by &= 0 \\ cx + dy &= 0. \end{aligned} \]
One way to check for nonzero solutions is to compute the determinant. Recalling that the determinant measures how a linear transformation scales volume (area, in 2D), we see that \(A\) has a nontrivial null space, and hence the system has nontrivial solutions, exactly when \(\det(A) = 0\). If \(\det(A) \ne 0\), the only equilibrium is the origin.
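A quick numerical illustration with NumPy (the matrices below are made-up examples, not from the text):

```python
import numpy as np

# det != 0: the only solution of AX = 0 is the origin.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.linalg.det(A))      # 1.0, so the origin is the unique equilibrium

# det == 0: there is a whole line of equilibria.
B = np.array([[1.0, 2.0], [2.0, 4.0]])
print(np.linalg.det(B))      # 0.0
V = np.array([2.0, -1.0])    # a null-space vector: B @ V = 0
print(B @ V)                 # [0. 0.] -- every multiple of V is an equilibrium
```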
Let’s say that \(\det(A) = 0\). Then there are nonzero solutions to \(AX=\mathbf{0}\), i.e., there are non-origin equilibrium points (in fact, a whole line of equilibria through the origin). More generally, to understand the trajectories of the system, we start with some “special” vectors (yep, eigenvectors). If we have a vector \(V_0\) such that \(AV_0 = \lambda_0 V_0\), then we claim that
\[ X(t) = \exp(\lambda_0 t) V_0 \]
is a solution. Here, \(\lambda_0\) is the eigenvalue associated with the eigenvector \(V_0\). The claim can be verified by computing
\[ \begin{aligned} X'(t) &= \lambda_0 \exp(\lambda_0 t)V_0 \\ &= \exp(\lambda_0 t) (\lambda_0 V_0)\\ &= \exp(\lambda_0 t) (A V_0)\\ &= A (\exp(\lambda_0 t) V_0)\\ &= A X(t) \end{aligned} \]
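The same verification can be run numerically. A sketch, where the matrix is a made-up symmetric example (its eigenvalues work out to \(3\) and \(-1\)) and NumPy's `eig` supplies the eigenpairs:

```python
import numpy as np

# Made-up symmetric example; its eigenvalues are 3 and -1.
A = np.array([[1.0, 2.0], [2.0, 1.0]])
lams, vecs = np.linalg.eig(A)
lam0, V0 = lams[0], vecs[:, 0]

# X(t) = exp(lam0 t) V0 should satisfy X'(t) = A X(t) for every t.
for t in (0.0, 0.5, 1.0):
    X = np.exp(lam0 * t) * V0
    X_prime = lam0 * np.exp(lam0 * t) * V0  # derivative of X(t)
    assert np.allclose(X_prime, A @ X)

print("X'(t) = A X(t) holds along the eigenvector solution")
```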
If we scale the eigenvector by a constant \(\alpha\), we get another solution, \(X(t) = \alpha\exp(\lambda_0 t) V_0\). For a 2D system with two linearly independent eigenvectors, we therefore get two families of solutions. Each of these eigenvector solutions is a straight-line solution.
To see why, note that \(X(0)=\alpha V_0\) and, for each \(t\), \(X(t)=\exp(\lambda_0 t)\alpha V_0\) is a scalar multiple of the fixed vector \(V_0\). Therefore the point \(X(t)\) stays on the line through the origin in the direction of \(V_0\). In fact, since \(\exp(\lambda_0 t)>0\) for all real \(t\), the trajectory stays on the same ray (half-line) from the origin passing through \(\alpha V_0\).
The distance from the origin is \[ \lVert X(t)\rVert = \exp(\lambda_0 t)\,|\alpha|\,\lVert V_0\rVert. \] So if \(\lambda_0>0\), then \(\lVert X(t)\rVert \to \infty\) as \(t\to\infty\), and \(X(t)\to (0,0)\) as \(t\to -\infty\). If \(\lambda_0<0\), the opposite happens: trajectories along this eigenvector ray decay into the origin forward in time. Finally, if \(\lambda_0=0\), then \(X(t)=\alpha V_0\) is constant, which is exactly the equilibrium-line case that occurs when \(\det(A)=0\) (since then \(0\) is an eigenvalue and \(AV_0=0\) for vectors \(V_0\) in the null space).