Lecture 1

This is the summary of Lecture 1, 25/10/2011.

Why Dynamical Systems?

 * useful approximations for real-life applications
 * solvable, allowing the automation of solutions

Characterization of a Dynamical System

 * a system that evolves with time
 * the state of the system: the set of information we need to unambiguously characterize the evolution of the system in time (e.g. position and velocity for a mechanical system)
 * uniqueness of the evolution from a given state

Linear Dynamical Systems
Linear systems carry the idea of the superposition principle. The flow $$\Phi_{t}$$ satisfies:

$$\Phi_{t}(x_{0}+y_{0})=\Phi_{t}(x_{0})+\Phi_{t}(y_{0})\,$$ $$\Phi_{t+t'}(x_{0})=\Phi_{t}\circ\Phi_{t'}(x_{0})$$

The question now is how to solve these two functional equations. Using the previous results we have:

$$\Phi_{0}(x_{0})= x_{0}\,$$ $$\Phi_{t}(0)= 0\,$$ $$\Phi_{t}(2x_{0})= 2\Phi_{t}(x_{0})\,$$ $$\Phi_{t}(px_{0})= p\Phi_{t}(x_{0})\,$$ $$\Phi_{t}(\frac{p}{q}x_{0})= \frac{p}{q}\Phi_{t}(x_{0})\,$$ $$\Phi_{t}(\lambda x_{0})= \lambda\Phi_{t}(x_{0})\,$$

From the above results (the step from rational multiples $$\frac{p}{q}$$ to arbitrary real multiples $$\lambda$$ uses continuity of $$\Phi_{t}$$) we can conclude that $$\Phi_{t}(x_{0})= Y(t)x_{0}\,$$

And thus, $$Y(t+t')= Y(t)Y(t')\,$$

Since $$Y(t)$$ is a linear operator, i.e. a matrix, this is a functional equation in matrix multiplication, and (assuming continuity in $$t$$) there exists $$A$$ such that $$Y(t)=e^{At} \Rightarrow \Phi_{t}(x_{0})= e^{At}x_{0}$$

By definition we have $$e^{At}=\sum_{n=0}^\infty \frac{(At)^{n}}{n!}$$

For this series to converge, it suffices to find $$\epsilon,k$$ such that

$$\left \| \frac{(At)^{n}}{n!} \right \| < kn^{-(1+\epsilon)} $$

As there is a norm for matrices, and this norm is submultiplicative,

$$\left \| (At)^{n} \right \| \leq \left \| At \right \|^{n},$$

so, applying the ratio test to $$u_{n}=\frac{\left \| At \right \|^{n}}{n!}$$, we take

$$q_{n}=\frac{u_{n+1}}{u_{n}}=\frac{\left \| At \right \|}{n+1}$$

and, as this converges to zero when $$n\rightarrow\infty$$, the series converges.
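The convergence argument can be checked numerically. The sketch below (assuming NumPy is available; the matrix `A` and time `t` are arbitrary illustrative values) shows that the ratios $$q_{n}$$ shrink and that the series terms vanish:

```python
import numpy as np

# Illustrative matrix and time (not from the lecture).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 1.5

# Any submultiplicative matrix norm works; here the spectral norm.
norm_At = np.linalg.norm(A * t, 2)

# Ratios q_n = |At| / (n + 1) from the ratio test: they tend to zero.
q = [norm_At / (n + 1) for n in range(50)]

# Partial sums of e^{At} = sum_n (At)^n / n!: the terms die out quickly.
partial = np.zeros_like(A)
term = np.eye(2)          # the n = 0 term, (At)^0 / 0!
for n in range(60):
    partial = partial + term
    term = term @ (A * t) / (n + 1)   # next term of the series
```

After 60 terms the current `term` is negligible, so the partial sums have converged.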

We can calculate a general differential form for dynamical systems by differentiating $$\Phi_{t}=e^{At}$$ with respect to time:

$$\frac{d\Phi}{dt}=A\Phi \quad\text{but}\quad \Phi_{t}(x_{0})=x(t) \Rightarrow \frac{dx}{dt}= Ax$$

and linear dynamical systems can ONLY be of this form.

For discrete time, we have similar results, yielding:

$$x(N+1)=Ax(N)\,$$ $$\Phi_{N}(x_{0})=A^{N}x_{0}\,$$
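A quick sketch (NumPy assumed; the matrix, initial state, and horizon are arbitrary illustrations) confirms that iterating the recursion reproduces the closed form $$\Phi_{N}(x_{0})=A^{N}x_{0}$$:

```python
import numpy as np

# Illustrative 2x2 system matrix and initial state.
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
x0 = np.array([1.0, 2.0])
N = 10

# Iterate x(n + 1) = A x(n) for N steps...
x = x0.copy()
for _ in range(N):
    x = A @ x

# ...and compare with the closed form Phi_N(x0) = A^N x0.
x_closed = np.linalg.matrix_power(A, N) @ x0
```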

So, starting from general phenomenological considerations about dynamical systems and linear systems, we derived mathematical properties (the semigroup and superposition principles). From there, we derived a fundamental theorem for linear dynamical systems:

$$\Phi_{t}(x)=e^{At}x \Rightarrow \dot{x}=Ax$$

Examples
$$\begin{cases} \dot{x}=ax\\ x(0)=x_{0} \end{cases}\Rightarrow x(t)=e^{at}x_{0}$$

But we know that, in this case, $$x_{0}\in \Re$$, so we can draw a phase portrait, illustrating how the system evolves in time according to the initial condition given.

Next, consider a decoupled two-dimensional system:

$$\begin{cases} \dot{x_{1}}=a_{11}x_{1} \Rightarrow x_{1}(t)=e^{a_{11}t}x_{10}\\ \dot{x_{2}}=a_{22}x_{2} \Rightarrow x_{2}(t)=e^{a_{22}t}x_{20} \end{cases}$$

If we want to represent this system in matrix form, we obtain

$$\dot{X}=\begin{bmatrix}a_{11} & 0\\ 0 & a_{22} \end{bmatrix}X = AX$$ $$X(t)=e^{At}X_{0} = \begin{bmatrix}e^{a_{11}t} & 0\\ 0 & e^{a_{22}t} \end{bmatrix}X_{0}$$

because $$e^{At}=\sum_{n=0}^\infty \frac{(At)^{n}}{n!} = \sum_{n=0}^\infty \frac{\begin{bmatrix}a_{11} & 0\\ 0 & a_{22} \end{bmatrix}^{n}t^{n}}{n!} = \begin{bmatrix} \sum_{n=0}^\infty \frac{(a_{11}t)^{n}}{n!} & 0\\ 0 & \sum_{n=0}^\infty \frac{(a_{22}t)^{n}}{n!} \end{bmatrix} = \begin{bmatrix} e^{a_{11}t} & 0\\ 0 & e^{a_{22}t} \end{bmatrix}$$
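This diagonal computation is easy to verify numerically; a minimal sketch assuming NumPy, with illustrative values for $$a_{11}$$, $$a_{22}$$ and $$t$$:

```python
import numpy as np

# Diagonal A: the series exponential reduces to elementwise
# exponentials on the diagonal. Values are illustrative.
a11, a22, t = -1.0, 2.0, 0.7
A = np.diag([a11, a22])

# Partial sums of e^{At} = sum_n (At)^n / n!
expAt = np.zeros((2, 2))
term = np.eye(2)
for n in range(40):
    expAt += term
    term = term @ (A * t) / (n + 1)

# Closed form: diag(e^{a11 t}, e^{a22 t})
closed = np.diag([np.exp(a11 * t), np.exp(a22 * t)])
```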

And, finally, we can state the general case for $$A$$ diagonal:

$$\begin{cases} \dot{x_{1}}=a_{11}x_{1} \\ \vdots \\ \dot{x_{n}}=a_{nn}x_{n} \end{cases}$$ $$A=\begin{bmatrix}a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} $$

And the general solution is of the form:

$$X(t)=\begin{bmatrix}e^{a_{11}t} & 0 & \cdots & 0 \\ 0 & e^{a_{22}t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & e^{a_{nn}t} \end{bmatrix} X_{0}$$

If all $$a_{kk}$$ are real and negative, the phase portrait is a stable node, with $$X$$ converging to zero as time passes. If all $$a_{kk}$$ are real and positive, this is an unstable node, diverging in all directions. In the mixed case, we have a saddle with stable and unstable directions. The phase portrait is stable in the directions of the eigenvectors with negative eigenvalues, and unstable in the directions of the eigenvectors with positive eigenvalues (if all eigenvalues are real with multiplicity 1).

If A is a semisimple matrix, there is an invertible matrix P such that $$A=P^{-1}DP\,$$ with $$D$$ diagonal.

In this case, we can use $$Y=PX \Rightarrow \frac{dY}{dt}=P\frac{dX}{dt}=PAX=PAP^{-1}Y=DY$$

Therefore, we can rewrite the equation for Y as $$Y(t)=e^{Dt}Y(0)\,$$ $$X(t)=P^{-1}Y=P^{-1}\begin{bmatrix}e^{\lambda_{1}t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_{2}t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & e^{\lambda_{n}t} \end{bmatrix} PX_{0}$$ where $$\lambda_{1},\ldots,\lambda_{n}$$ are the eigenvalues of $$A$$ (the diagonal entries of $$D$$), so $$e^{At}=P^{-1}e^{Dt}P\,$$
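The diagonalization route can be checked on a concrete example. The sketch below (NumPy assumed) uses a symmetric matrix, which is guaranteed semisimple; `eigh` gives $$A=QDQ^{T}$$, matching the text's $$A=P^{-1}DP$$ with $$P=Q^{T}$$:

```python
import numpy as np

# Symmetric (hence semisimple) illustrative matrix and time.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
t = 0.9

evals, Q = np.linalg.eigh(A)   # A = Q diag(evals) Q^T
P = Q.T                         # so A = P^{-1} D P
eDt = np.diag(np.exp(evals * t))
expAt_diag = np.linalg.inv(P) @ eDt @ P   # e^{At} = P^{-1} e^{Dt} P

# Series definition of e^{At} for comparison.
expAt_series = np.zeros((2, 2))
term = np.eye(2)
for n in range(40):
    expAt_series += term
    term = term @ (A * t) / (n + 1)
```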

But sometimes A is not semisimple. For example, take the following system: $$\begin{cases} \dot{x_{1}}=\lambda x_{1}+ x_{2}\\ \dot{x_{2}}=\lambda x_{2} \end{cases} \Rightarrow A=\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix} \,$$ $$A=D+N \Rightarrow A= \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\,$$

where N is a nilpotent matrix ($$N^{p}=0$$ for some $$p$$). This way, we compute $$e^{At}= \sum_{n=0}^\infty \frac{A^{n}t^{n}}{n!} = \sum_{n=0}^\infty \frac{(D+N)^{n}t^{n}}{n!}\,$$

And since $$D=\lambda I$$ commutes with $$N$$,

$$e^{At}=e^{Dt}e^{Nt} = \begin{bmatrix} e^{\lambda t} & 0 \\ 0 & e^{\lambda t} \end{bmatrix} \sum_{k=0}^\infty \frac{N^{k}t^{k}}{k!}\,$$

But the sum in N only has to go up to the power $$p$$ at which $$N^{p}=0$$ (in this case, $$N^{2}=0$$):

$$e^{At}=\begin{bmatrix} e^{\lambda t} & 0 \\ 0 & e^{\lambda t} \end{bmatrix} (I + Nt) = \begin{bmatrix} e^{\lambda t} & 0 \\ 0 & e^{\lambda t} \end{bmatrix} \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} = e^{\lambda t} \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \,$$

and we have

$$\begin{cases} x_{1}(t)=e^{\lambda t}(x_{1}(0)+tx_{2}(0))\\ x_{2}(t)=e^{\lambda t}x_{2}(0) \end{cases} \,$$
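The closed form for this non-semisimple case can be verified against the series definition; a sketch assuming NumPy, with illustrative values for $$\lambda$$ and $$t$$:

```python
import numpy as np

# Jordan-block example A = D + N, with illustrative lam and t.
lam, t = -0.5, 2.0
A = np.array([[lam, 1.0],
              [0.0, lam]])

# Closed form derived in the text: e^{At} = e^{lam t} [[1, t], [0, 1]]
closed = np.exp(lam * t) * np.array([[1.0, t],
                                     [0.0, 1.0]])

# Series definition of e^{At} for comparison.
expAt = np.zeros((2, 2))
term = np.eye(2)
for n in range(50):
    expAt += term
    term = term @ (A * t) / (n + 1)
```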

If A has complex eigenvalues, we can write it as $$A=\begin{bmatrix} \mu & -\omega \\ \omega & \mu \end{bmatrix} \Rightarrow e^{At}=e^{\mu t}\begin{bmatrix} \cos{\omega t} & -\sin{\omega t} \\ \sin{\omega t} & \cos{\omega t} \end{bmatrix}\,$$
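The rotation-scaling closed form can be checked the same way (NumPy assumed; $$\mu$$, $$\omega$$ and $$t$$ are illustrative values):

```python
import numpy as np

# Complex-eigenvalue block, eigenvalues mu +/- i omega.
mu, omega, t = -0.3, 2.0, 1.2
A = np.array([[mu, -omega],
              [omega, mu]])

# Closed form: e^{mu t} times a rotation by omega t.
closed = np.exp(mu * t) * np.array(
    [[np.cos(omega * t), -np.sin(omega * t)],
     [np.sin(omega * t),  np.cos(omega * t)]])

# Series definition of e^{At} for comparison.
expAt = np.zeros((2, 2))
term = np.eye(2)
for n in range(60):
    expAt += term
    term = term @ (A * t) / (n + 1)
```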

If the real part of the eigenvalues is negative, the system is stable. If it is positive, it is unstable.

Therefore, writing $$T=\operatorname{tr} A$$ and $$\Delta=\det A$$, we can summarize all the cases for 2D dynamical systems:


 * stable node: 2 negative real eigenvalues - $$T<0$$, $$\Delta>0$$, $$T^{2}-4\Delta>0$$
 * unstable node: 2 positive real eigenvalues - $$T>0$$, $$\Delta>0$$, $$T^{2}-4\Delta>0$$
 * saddle: 1 positive and 1 negative real eigenvalue - $$\Delta < 0$$
 * stable focus: complex eigenvalues with negative real part - $$T^{2}-4\Delta < 0$$ and $$T<0$$
 * unstable focus: complex eigenvalues with positive real part - $$T^{2}-4\Delta < 0$$ and $$T>0$$
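The trace-determinant classification above can be packaged as a small helper; a sketch assuming NumPy, which ignores borderline cases ($$T=0$$ or $$\det A=0$$):

```python
import numpy as np

def classify_2d(A):
    """Classify the fixed point of a 2D linear system (generic cases only)."""
    T = np.trace(A)              # sum of the eigenvalues
    Delta = np.linalg.det(A)     # product of the eigenvalues
    disc = T ** 2 - 4 * Delta    # discriminant: real vs complex eigenvalues
    if Delta < 0:
        return "saddle"
    if disc >= 0:
        return "stable node" if T < 0 else "unstable node"
    return "stable focus" if T < 0 else "unstable focus"
```

For example, `classify_2d(np.diag([-1.0, -2.0]))` returns `"stable node"`.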



Another useful example to illustrate how to proceed with higher-order differential equations is the linear pendulum.

General Situation for stability
If we have a matrix $$A_{n\times n}$$ and we want to evaluate its stability, all we need to check is the eigenvalue with the largest real part. If it is negative, then the system is stable. If it is positive, we have some sort of instability, be it a saddle, an unstable node, or a focus.
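This stability check is a one-liner in practice; a minimal sketch assuming NumPy:

```python
import numpy as np

def is_stable(A):
    # Stable iff the eigenvalue with the largest real part is negative.
    return np.max(np.linalg.eigvals(A).real) < 0
```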

Discrete time systems
With small substitutions, the previous results still apply. The general form is $$X_{N+1}=AX_{N}=A^{N+1} X_{0} \,$$

In this case, the stability is given not by the eigenvalue with the largest real part, but by the eigenvalue with the largest modulus. This way, if $$|\lambda|<1$$ for all eigenvalues, the system is stable.
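The discrete-time criterion is the spectral radius; a minimal sketch assuming NumPy:

```python
import numpy as np

def is_stable_discrete(A):
    # Stable iff the spectral radius (largest eigenvalue modulus) is below 1.
    return np.max(np.abs(np.linalg.eigvals(A))) < 1
```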

Non-homogeneous equations
$$\begin{cases} \dot{X}=AX+B(t)\\ X(0)=X_{0} \end{cases} \,$$

We solve for X using the technique of variation of constants, with the result

$$X(t)=e^{At}X_{0}+ \int\limits_{0}^{t}e^{A(t-s)}B(s)ds\,$$

If the system is stable, the term $$e^{At}X_{0}$$ tends to zero with time, and so we can see the result of the integral as the steady-state response of the system to the external input $$B(t)$$.
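As a scalar sanity check (NumPy assumed; the values of `a`, `b`, `x0` and `t` are arbitrary), with constant input $$B(s)=b$$ the variation-of-constants formula reduces to $$x(t)=e^{at}x_{0}+\frac{b}{a}(e^{at}-1)$$, which matches a direct quadrature of the integral term:

```python
import numpy as np

# Scalar instance of x' = a x + b with illustrative values.
a, b, x0, t = -2.0, 3.0, 1.0, 2.0

# Closed form: x(t) = e^{at} x0 + (b/a)(e^{at} - 1)
x_closed = np.exp(a * t) * x0 + (b / a) * (np.exp(a * t) - 1)

# Trapezoidal quadrature of the integral of e^{a(t-s)} b over [0, t].
s = np.linspace(0.0, t, 100001)
vals = np.exp(a * (t - s)) * b
ds = s[1] - s[0]
integral = ds * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
x_numeric = np.exp(a * t) * x0 + integral
```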

An interesting further reading about this subject is the article on Non-Autonomous Linear Differential Equations.