Switched systems

Switched systems are modelled by first-order differential (state) equations with multiple right-hand sides, that is,

\dot{\bm x} = \bm f_q(\bm x), \qquad q \in \{1,2, \ldots, m\}, \tag{1} where m right-hand sides are possible and the subscript q determines which right-hand side function is “active” at a given moment.

The question is, what dictates the evolution of the integer variable q? In other words, what drives the switching? It turns out that the switching can be time-driven or state-driven.

In both cases, the right-hand sides can also depend on the control input \bm u.

Major results for switched systems have been achieved without referring to the framework of hybrid systems. But now that we have built such a general framework, it proves useful to view switched systems as a special class of hybrid systems. The aspects in which they are special will be discussed in the following, but here let us state that, in contrast to full hybrid systems, switched systems are a bit less rich on the discrete side.

Time-driven

The evolution of the state variable complies with the following model \dot{\bm x} = \bm f_{q(t)}(\bm x), where q(t) is some function of time. The values of q(t) can be under our control or beyond our control, deterministic or stochastic.

A hybrid automaton for a time-driven switched system is shown in Fig 1.

Figure 1: An automaton for a switched system with time-driven switching

The transition from one mode to another is triggered by the integer variable q(t) attaining the appropriate value.

Since the switching signal is unrelated to the (continuous) state of the system, the invariants of the modes usually cover the whole state space \mathcal X.
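For a concrete feel of time-driven switching, a minimal simulation sketch follows. The two subsystems and the one-second switching period are my own illustrative choices, not taken from the text; passing the switching times to the solver via tstops is just one convenient way to handle the discontinuities in time.

using DifferentialEquations

# Two illustrative (assumed) subsystems; q(t) picks which one is active
subsystems = (x -> [x[2], -x[1]],         # mode 1
              x -> [x[2], -4x[1]])        # mode 2
q(t) = isodd(floor(Int, t)) ? 2 : 1       # time-driven switching every 1 second

F(x, p, t) = subsystems[q(t)](x)          # switched right-hand side
prob = ODEProblem(F, [1.0, 0.0], (0.0, 6.0))
sol = solve(prob, Tsit5(), tstops = 0.0:1.0:6.0)   # make the solver step exactly on the switching times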

State-dependent switching

The model is \dot{\bm x} = \begin{cases} \bm f_1(\bm x), & \mathrm{if}\, \bm x \in \mathcal{X}_1,\\ \vdots\\ \bm f_m(\bm x), & \mathrm{if}\, \bm x \in \mathcal{X}_m. \end{cases}

Let’s consider just two domains \mathcal X_1 and \mathcal X_2. A hybrid automaton for a state-driven switched system is shown in Fig 2.

Figure 2: An automaton for a switched system with state-driven switching

The transition to the other mode is triggered by the continuous state of the system crossing the boundary between the two domains. The boundary is defined by the function s(\bm x) (called the switching function), which is zero on the boundary, see Fig 3.

Figure 3: State-dependent switching

Through examples we now illustrate the possible behaviors of the system when the flow transverses the boundary, when it pulls away from the boundary, and when it pushes towards the boundary.

Example 1 (The flow transverses the boundary) We consider the two right-hand sides of the state equation \bm f_1(\bm x) = \begin{bmatrix}1\\ x_1^2 + 2x_2^2\end{bmatrix} and \bm f_2(\bm x) = \begin{bmatrix}1\\ 2x_1^2+3x_2^2-2\end{bmatrix} and the switching function s(x_1,x_2) = (x_1+0.05)^2 + (x_2+0.15)^2 - 1.

The state portrait that also shows the switching function is generated using the following code.

Show the code
s(x₁,x₂) = (x₁+0.05)^2 + (x₂+0.15)^2 - 1.0   # switching function: zero on a circle

f₁(x₁,x₂) = x₁^2 + 2x₂^2                     # second component of the field inside (s ≤ 0)
f₂(x₁,x₂) = 2x₁^2 + 3x₂^2 - 2.0              # second component of the field outside (s > 0)

# Switched right-hand side: both fields have first component equal to 1
f(x₁,x₂) = s(x₁,x₂) <= 0.0 ? [1, f₁(x₁,x₂)] : [1, f₂(x₁,x₂)]

N = 100
x₁ = range(0, stop = 0.94, length = N)       # grid for drawing the switching curve

using CairoMakie
fig = Figure(size = (600, 600), fontsize = 20)
ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
streamplot!(ax, (x₁,x₂) -> Point2f(f(x₁,x₂)), 0..1.5, 0..1.5, colormap = :magma)
lines!(ax, x₁, sqrt.(1 .- (x₁ .+ 0.05).^2) .- 0.15, color = :red, linewidth = 5)  # switching curve s = 0
x10 = 0.5
x20 = sqrt(1 - (x10 + 0.05)^2) - 0.15        # a point x₀ on the switching curve
Makie.scatter!(ax, [x10], [x20], color = :blue, markersize = 30)
fig

The state portrait also shows a particular initial state \bm x_0 using a blue dot. Note that the projections of both vector fields \bm f_1 and \bm f_2 evaluated at \bm x_0 onto the normal (the gradient) of the switching function at \bm x_0 are positive, that is \left.\left(\nabla s\right)^\top \bm f_1\right|_{\bm x_0} > 0, \quad \left.\left(\nabla s\right)^\top \bm f_2\right|_{\bm x_0} > 0.

This is consistent with the observation that the flow goes through the boundary.
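As a quick numerical sanity check of these signs, one can evaluate the gradient of s and the two inner products at \bm x_0 directly. The helper below is my own sketch, not part of the original code; it reuses s, f₁ and f₂ defined above.

∇s(x₁,x₂) = [2(x₁ + 0.05), 2(x₂ + 0.15)]   # gradient of the switching function
x10 = 0.5
x20 = sqrt(1 - (x10 + 0.05)^2) - 0.15       # the blue dot x₀ on the boundary
g = ∇s(x10, x20)
g' * [1, f₁(x10, x20)]                      # projection of f₁ onto ∇s, expected > 0
g' * [1, f₂(x10, x20)]                      # projection of f₂ onto ∇s, expected > 0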

We can also plot a particular solution of the ODE using the following code.

Show the code
using DifferentialEquations
F(u, p, t) = f(u[1], u[2])        # wrap the switched field in the solver's signature
u0 = [0.0, 0.4]                   # initial state inside the circle (s < 0)
tspan = (0.0, 1.0)
prob = ODEProblem(F, u0, tspan)
sol = solve(prob, Tsit5(), reltol = 1e-8, abstol = 1e-8)

using Plots
Plots.plot(sol, lw = 3, xaxis = "Time", yaxis = "x", label = false)

Strictly speaking, this solution does not satisfy the differential equation on the boundary of the two domains (the derivative of x_2 does not exist there). This is visually recognized in the above plot as the sharp corner in the solution. But other than that, the solution is perfectly “reasonable” – for a while the system evolves according to one state equation, then at one particular moment it starts evolving according to the other state equation. That is it. Not much more to see here.

Example 2 (The flow pulls away from the boundary) We now consider another pair of the right-hand sides. \bm f_1(\bm x) = \begin{bmatrix}-1\\ x_1^2 + 2x_2^2\end{bmatrix} and \bm f_2(\bm x) = \begin{bmatrix}1\\ 2x_1^2+3x_2^2-2\end{bmatrix}.

The switching function is the same as in the previous example.

The state portrait is below.

Show the code
# Compared with Example 1, the first component of the field inside the circle is now -1
f(x₁,x₂) = s(x₁,x₂) <= 0.0 ? [-1, f₁(x₁,x₂)] : [1, f₂(x₁,x₂)]

fig = Figure(size = (600, 600), fontsize = 20)
ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
streamplot!(ax, (x₁,x₂) -> Point2f(f(x₁,x₂)), 0..1.5, 0..1.5, colormap = :magma)
lines!(ax, x₁, sqrt.(1 .- (x₁ .+ 0.05).^2) .- 0.15, color = :red, linewidth = 5)
x10 = 0.8
x20 = sqrt(1 - (x10 + 0.05)^2) - 0.15
Makie.scatter!(ax, [x10], [x20], color = :blue, markersize = 30)
fig

We focus on the blue dot again. The projections of the two vector fields onto the normal of the switching function satisfy \left.\left(\nabla s\right)^\top \bm f_1\right|_{\bm x_0} \leq 0, \quad \left.\left(\nabla s\right)^\top \bm f_2\right|_{\bm x_0} \geq 0.

The only interpretation of this situation is that no unique solution starts at \bm x_0 – the state can leave into either of the two domains. Again, not much more to see here.

Example 3 (The flow pushes towards the boundary) And one last pair of the right-hand sides: \bm f_1(\bm x) = \begin{bmatrix}1\\ x_1^2 + 2x_2^2\end{bmatrix} and \bm f_2(\bm x) = \begin{bmatrix}-1\\ 2x_1^2+3x_2^2-2\end{bmatrix}.

The state-portrait is below.

Show the code
# Now the first components are +1 inside the circle and -1 outside: both fields push towards the boundary
f(x₁,x₂) = s(x₁,x₂) <= 0.0 ? [1, f₁(x₁,x₂)] : [-1, f₂(x₁,x₂)]

fig = Figure(size = (600, 600), fontsize = 20)
ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
streamplot!(ax, (x₁,x₂) -> Point2f(f(x₁,x₂)), 0..1.5, 0..1.5, colormap = :magma)
lines!(ax, x₁, sqrt.(1 .- (x₁ .+ 0.05).^2) .- 0.15, color = :red, linewidth = 5)
x10 = 0.5
x20 = sqrt(1 - (x10 + 0.05)^2) - 0.15
Makie.scatter!(ax, [x10], [x20], color = :blue, markersize = 30)
fig

The projections of the two vector fields onto the normal of the switching function satisfy \left.\left(\nabla s\right)^\top \bm f_1\right|_{\bm x_0} \geq 0, \quad \left.\left(\nabla s\right)^\top \bm f_2\right|_{\bm x_0} \leq 0.

But this is interesting! Once the trajectory hits the switching curve and tries to penetrate it further, it is pushed back to the switching curve. As it tries to penetrate it further, it is pushed back to the switching curve again. And so on. But then, how does the state evolve from \bm x_0?

Hint: solve the ODE numerically with some finite step size. The solution will exhibit zig-zagging or chattering along the switching curve, away from the blue point. Now, keep shrinking the step size. The solution will ultimately “slide” smoothly along the switching curve. Perhaps this was your guess. One thing should worry you, however: such a “sliding” solution satisfies neither of the two state equations!
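A minimal sketch of this experiment is below: fixed-step forward Euler applied to the switched field f of Example 3. The helper function and the step sizes are my own choices, not from the original code.

# Fixed-step forward Euler on the switched field f defined in Example 3
function euler_steps(f, x0, h, T)
    xs = [x0]
    for _ in 1:round(Int, T/h)
        x = xs[end]
        push!(xs, x + h*f(x[1], x[2]))
    end
    return xs
end

traj = euler_steps(f, [0.0, 0.4], 0.02, 1.0)     # coarse step: visible chattering
# traj = euler_steps(f, [0.0, 0.4], 1e-4, 1.0)   # fine step: apparent sliding
X = reduce(hcat, traj)
lines!(ax, X[1, :], X[2, :], color = :blue)      # overlay on the state portrait above
fig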

We will make this more rigorous in a moment, but right now we just wanted to tease the intuition.

Conditions for existence and uniqueness of solutions of ODEs

In order to analyze the situations such as the previous example, we need to recapitulate some elementary facts about the existence and uniqueness of solutions of ordinary differential equations (ODEs). And then we are going to add some new stuff.

Consider the ODE

\dot x(t) = f(x(t),t).

We ask the following two questions:

  • Under which conditions does a solution exist?
  • Under which conditions is the solution unique?

To answer both, the function f() must be analyzed.

But before we answer the two questions, we must ask another one that is even more fundamental:

  • What does it mean that a function x(t) is a solution of the ODE?

However trivial this question may seem, an answer can escalate rather quickly – there are actually several concepts of a solution of an ordinary differential equation.

Classical solution (Peano, also Cauchy-Peano)

We assume that f(x(t),t) is continuous with respect to both x and t. Then existence of a solution is guaranteed locally (on some finite interval), but uniqueness is not.

Not guaranteed does not mean impossible

Uniqueness is not excluded in all cases; it is just not guaranteed.

A solution is guaranteed to be continuously differentiable ( x\in\mathrm C^1 ). Such a function x(t) satisfies the ODE \dot x(t) = f(x(t),t) \; \forall t, which is why such a solution is called classical.

Example 4 An example of a solution that exists only on a finite interval is \dot x(t) = x^2(t),\; x(0) = 1,
for which the solution is x(t) = \frac{1}{1-t} . The solution blows up at t=1 .

Example 5 An example of nonuniqueness is provided by \dot x(t) = \sqrt{x(t)}, \; x(0) = 0.

One possible solution is x(t) = \frac{1}{4}t^2. Another is x(t) = 0. Yet another is given, for any t_0 \geq 0, by x(t) = 0 for t \leq t_0 and x(t) = \frac{1}{4}(t-t_0)^2 for t \geq t_0. This is related to the Leaky bucket example.
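As a quick check that the first candidate indeed solves the equation for t \geq 0: \dot x(t) = \frac{\mathrm{d}}{\mathrm{d}t}\left(\tfrac{1}{4}t^2\right) = \tfrac{t}{2} = \sqrt{\tfrac{1}{4}t^2} = \sqrt{x(t)}, and x(0) = 0.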

Strengthening the requirement of continuity (Picard-Lindelöf)

Since continuity of f(x(t),t) was not enough to guarantee uniqueness, we need to impose a stricter condition on f(). Namely, we require Lipschitz continuity of f() with respect to x, while still requiring continuity with respect to t.
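For reference, Lipschitz continuity with respect to x (uniformly in t) means that there is a constant L \geq 0 such that \|f(x,t) - f(y,t)\| \leq L\,\|x-y\| for all x, y and all t under consideration.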

Now it is not only existence but also uniqueness of a solution that is guaranteed.

Uniqueness not guaranteed does not mean it is impossible

As with the Peano conditions, here too the condition is only sufficient, not necessary – even if the function f is not Lipschitz continuous, a unique solution may exist.

Since the condition is stricter than mere continuity, whatever held there holds here too. In particular, the solution is guaranteed to be continuously differentiable.

If the function is only locally Lipschitz, the solution is guaranteed on some finite interval. If the function is (globally) Lipschitz, the solution is guaranteed on an unbounded interval. (The function f(x) = x^2 from Example 4 is locally but not globally Lipschitz, which is consistent with its finite escape time.)

Extending the set of solutions (Carathéodory)

In contrast with the classical solution, we can allow the solution x(t) to fail to satisfy the ODE on a set of times of measure zero (for example, at isolated points in time). This is called a Carathéodory (or extended) solution.

A Carathéodory solution x(t) is more than just continuous (even more than uniformly continuous) but less than continuously differentiable (aka \mathcal C^1) – it is absolutely continuous. An absolutely continuous function is a solution of the integral equation (indeed, an equation) x(t) = x(t_0) + \int_{t_0}^t f(x(\tau),\tau)\mathrm{d}\tau,
where the integral is understood in the Lebesgue sense (instead of Riemann).

Having referred to absolute continuity and the Lebesgue integral, we risk the discussion quickly becoming rather technical. But all we want to say is that f can be “some kind of discontinuous” with respect to t. In particular, it must be measurable with respect to t, which again seems to start escalating… But it suffices to say that this includes the case when f(x,t) is piecewise continuous with respect to t (as in sampled-data control with a ZOH).
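As a small illustration of this sampled-data setting, here is a sketch in which the right-hand side is piecewise continuous in t because of a zero-order-hold input; the scalar system, the input values and the 0.2 s period are my own assumptions, not from the text.

using DifferentialEquations

u_zoh(t) = iseven(floor(Int, t/0.2)) ? 1.0 : -1.0   # piecewise-constant (ZOH) input
F(x, p, t) = -x + u_zoh(t)                          # f is discontinuous in t, continuous in x
prob = ODEProblem(F, 0.0, (0.0, 1.0))
sol = solve(prob, Tsit5(), tstops = 0.0:0.2:1.0)    # the solution is absolutely continuous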

Needless to say, for a continuous f, the solutions x are just classical (smooth).

If the function f is discontinuous with respect to x, some more concepts of a solution need to be invoked so that existence and uniqueness can be analyzed.

Example 6 (Some more examples of nonexistence and nonuniqueness of solutions) The system with a discontinuous RHS \begin{aligned} \dot x_1 &= -2x_1 - 2x_2\operatorname{sgn}(x_1),\\ \dot x_2 &= x_2 + 4x_1\operatorname{sgn}(x_1) \end{aligned} can be reformulated as a switched system \begin{aligned} \dot{\bm x} &= \begin{bmatrix}-2 & 2\\-4 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}, \quad s(\bm x)\leq 0,\\ \dot{\bm x} &= \begin{bmatrix}-2 & -2\\4 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}, \quad s(\bm x)> 0, \end{aligned} where the switching function is s(\bm x) = x_1.

Show the code
s(x) = x[1]                               # switching function: the boundary is the line x₁ = 0

f₁(x) = [-2x[1] + 2x[2], x[2] - 4x[1]]    # active for x₁ ≤ 0 (sgn(x₁) = -1)
f₂(x) = [-2x[1] - 2x[2], x[2] + 4x[1]]    # active for x₁ > 0 (sgn(x₁) = +1)

f(x) = s(x) <= 0.0 ? f₁(x) : f₂(x)

using CairoMakie
fig = Figure(size = (600, 600), fontsize = 20)
ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
streamplot!(ax, x -> Point2f(f(x)), -1.5..1.5, -1.5..1.5, colormap = :magma)
vlines!(ax, 0; ymin = -1.1, ymax = 1.1, color = :red)   # the switching line x₁ = 0
fig

Sliding mode dynamics (on simple boundaries)

The previous example provided yet another illustration of the phenomenon of sliding, or a sliding mode. We say that there is an attractive sliding mode at \bm x_\mathrm{s} if there is a trajectory that ends at \bm x_\mathrm{s}, but no trajectory that starts at \bm x_\mathrm{s}.

Generalized solutions (Filippov)

It is now high time to introduce yet another concept of a solution – a concept that will make it possible to model the sliding mode dynamics in a more rigorous way. Remember that when the state \bm x(t) slides along the boundary, it qualifies as a solution to neither of the two state equations in any sense we have discussed so far. But now comes the concept of a Filippov solution.

\bm x() is a Filippov solution on [t_0,t_1] if for almost all t \dot{\bm{x}}(t) \in \overline{\operatorname{co}}\{\bm f(\bm x(t),t)\}, where \overline{\operatorname{co}} denotes the closed convex hull – here, of the set of values approached by \bm f in an arbitrarily small neighborhood of \bm x(t). On a switching boundary, this is just the convex hull of the vector fields on the two sides.

Example 7 Consider the model in the previous example. The switching surface, along which the solution slides, is given by \mathcal{S}^+ = \{\bm x \mid x_1=0 \land x_2\geq 0\}.

Now, the Filippov solution must satisfy the differential inclusion \dot{\bm x}(t) \in \overline{\operatorname{co}}\{\bm A_1\bm x(t), \bm A_2\bm x(t)\}, that is, \dot{\bm x}(t) = \alpha_1(t) \bm A_1\bm x(t) + \alpha_2(t) \bm A_2\bm x(t) for some \alpha_1(t), \alpha_2(t) \geq 0, \alpha_1(t) + \alpha_2(t) = 1.

Note, however, that not all the weights keep the solution on \mathcal S^+. We must impose some restriction, namely that \dot x_1 = 0 for \bm x(t) \in \mathcal S^+. This leads to \alpha_1(t) [-2x_1 + 2x_2] + \alpha_2(t) [-2x_1 - 2x_2] = 0

Since x_1 = 0 on \mathcal S^+, this reduces to 2x_2\left(\alpha_1(t) - \alpha_2(t)\right) = 0, that is, \alpha_1(t) = \alpha_2(t) for x_2 > 0. Combining this with \alpha_1(t) + \alpha_2(t) = 1 gives \alpha_1(t) = \alpha_2(t) = 1/2, which in this simple case perhaps agrees with our intuition (the average of the two vector fields).

The dynamics on the sliding mode is modelled by \dot x_1 = 0, \quad \dot x_2 = x_2, \quad \bm x \in \mathcal{S}^+.
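Just to illustrate, the convex-combination weight can also be found numerically. The helper below is my own sketch (not from the text); it reuses the two linear vector fields of the example and the fact that \nabla s = [1, 0] for s(\bm x) = x_1.

f₁(x) = [-2x[1] + 2x[2], x[2] - 4x[1]]    # the two fields from the example above
f₂(x) = [-2x[1] - 2x[2], x[2] + 4x[1]]

# Solve α(∇s⋅f₁) + (1-α)(∇s⋅f₂) = 0 with ∇s = [1, 0], i.e. make ẋ₁ vanish on the boundary
function filippov_weight(x)
    a, b = f₁(x)[1], f₂(x)[1]
    return b / (b - a)
end

x = [0.0, 1.0]                            # a point on 𝒮⁺
α = filippov_weight(x)                    # expected: 0.5
α*f₁(x) + (1 - α)*f₂(x)                   # sliding vector field: [0, x₂]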

Possible nonuniqueness on intersection of boundaries
