# Chapter 8 - Output Feedback

In this chapter we show how to use output feedback to modify the dynamics of the system through the use of observers. We introduce the concept of observability and show that if a system is observable, it is possible to recover the state from measurements of the inputs and outputs to the system. We then show how to design a controller with feedback from the observer state. A general controller with two degrees of freedom is obtained by adding feedforward. We illustrate by outlining a controller for a nonlinear system that also employs gain scheduling.


## Chapter Summary

This chapter describes how to estimate the state of a system through measurements of its inputs and outputs:

1. A linear system with dynamics

$$
\begin{aligned}
\dot{x} &= Ax + Bu, &\quad x &\in \mathbb{R}^{n},\ u \in \mathbb{R},\\
y &= Cx + Du, & y &\in \mathbb{R},
\end{aligned}
$$

is said to be observable if we can determine the state of the system through measurements of the input $u(t)$ and the output $y(t)$ over a time interval $[0,T]$.
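
As a small numerical illustration (not from the text), when $u = 0$ the output and its derivatives satisfy $y = Cx$, $\dot{y} = CAx$, and so on, so the state can be recovered by solving a linear system. The matrices and initial state below are hypothetical examples:

```python
import numpy as np

# Hypothetical two-state example with u = 0: y = C x and dy/dt = C A x,
# so the state can be recovered from the output and its derivative.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

x0 = np.array([0.5, -1.5])   # "unknown" state to recover
y0 = C @ x0                  # measured output y(0)
dy0 = C @ A @ x0             # measured derivative dy/dt(0)

# Solve [C; CA] x = [y; dy] for x
Wo = np.vstack([C, C @ A])
x_recovered = np.linalg.solve(Wo, np.concatenate([y0, dy0]))
print(x_recovered)  # recovers [0.5, -1.5]
```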

2. The observability matrix for a linear system is given by

$$
W_{o} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}.
$$

A linear system is observable if and only if the observability matrix $W_{o}$ is full rank. Systems that are not observable have "hidden" states that cannot be determined by looking at the inputs and outputs.
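
The rank test can be carried out directly with NumPy; this is a minimal sketch, with arbitrary example matrices:

```python
import numpy as np

# Example system (hypothetical values): two states, one scalar output
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Stack C, CA, ..., CA^(n-1) to form the observability matrix W_o
n = A.shape[0]
Wo = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# The system is observable iff W_o has full rank n
observable = np.linalg.matrix_rank(Wo) == n
print(observable)  # True for this example
```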

3. A linear system of the form

$$
\begin{aligned}
\frac{dz}{dt} &= \begin{bmatrix}
-a_{1} & 1 & 0 & \cdots & 0\\
-a_{2} & 0 & 1 & & 0\\
\vdots & & & \ddots & \\
-a_{n-1} & 0 & 0 & & 1\\
-a_{n} & 0 & 0 & \cdots & 0
\end{bmatrix} z +
\begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{n-1} \\ b_{n} \end{bmatrix} u,\\
y &= \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix} z + Du,
\end{aligned}
$$

is said to be in observable canonical form. A system in this form is always observable and has a characteristic polynomial given by

$$
\det(sI-A) = s^{n} + a_{1}s^{n-1} + \cdots + a_{n-1}s + a_{n}.
$$

An observable linear system can be transformed into observable canonical form through the use of a coordinate transformation $z=Tx$.
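
The relationship between the coefficients $a_i$ and the characteristic polynomial can be checked numerically; the coefficients below are a hypothetical example:

```python
import numpy as np

# Hypothetical coefficients of s^3 + a1 s^2 + a2 s + a3 (roots at -1, -2, -3)
a = [6.0, 11.0, 6.0]
n = len(a)

# Build the observable-canonical-form dynamics matrix:
# first column is -a_i, ones on the superdiagonal
A = np.zeros((n, n))
A[:, 0] = -np.array(a)
A[:n-1, 1:] = np.eye(n - 1)

# np.poly returns the characteristic polynomial coefficients of A,
# which should be [1, a1, a2, a3] up to rounding
coeffs = np.poly(A)
print(np.round(coeffs, 6))
```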

4. An observer is a dynamical system that estimates the state of another system through measurement of inputs and outputs. For a linear system, the observer given by

$$
\frac{d\hat{x}}{dt} = A\hat{x} + Bu + L(y - C\hat{x})
$$

generates an estimate of the state that converges to the actual state if $A-LC$ has eigenvalues with negative real parts. If a system is observable, then there exists an observer gain $L$ such that the observer error is governed by a linear differential equation with an arbitrary characteristic polynomial. Hence the eigenvalues of the error dynamics for an observable linear system can be placed arbitrarily through the use of an appropriate observer gain.
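
A quick simulation shows the estimate converging; this sketch uses a hypothetical double-integrator example with a hand-picked gain, integrated by forward Euler:

```python
import numpy as np

# Hypothetical example: double integrator with position measurement
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Observer gain chosen so that A - LC has eigenvalues -2 and -3
# (for this A and C, the char. polynomial of A - LC is s^2 + l1 s + l2)
L = np.array([[5.0], [6.0]])
assert np.allclose(sorted(np.linalg.eigvals(A - L @ C).real), [-3.0, -2.0])

# Simulate x and xhat with forward Euler; the estimation error decays
dt, T = 0.001, 10.0
x = np.array([[1.0], [-1.0]])   # true initial state (unknown to observer)
xhat = np.zeros((2, 1))         # observer starts at zero
u = 0.0                         # zero input for simplicity
for _ in range(int(T / dt)):
    y = C @ x
    x = x + dt * (A @ x + B * u)
    xhat = xhat + dt * (A @ xhat + B * u + L @ (y - C @ xhat))

print(np.linalg.norm(x - xhat))  # small: the estimate has converged
```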

5. A state feedback controller and linear observer can be combined to form a stabilizing controller for a reachable and observable linear system by using the estimate of the state in the feedback control law. The resulting controller is given by

$$
\begin{aligned}
\frac{d\hat{x}}{dt} &= A\hat{x} + Bu + L(y - C\hat{x}),\\
u &= -K\hat{x} + K_{r}r.
\end{aligned}
$$

6. A discrete-time linear process with noise is given by

$$
\begin{aligned}
x(k+1) &= Ax(k) + Bu(k) + v(k), &\quad x &\in \mathbb{R}^{n},\ u \in \mathbb{R},\\
y(k) &= Cx(k) + Du(k) + w(k), & y &\in \mathbb{R},
\end{aligned}
$$

where $v$ is a vector, white, Gaussian random process with mean 0 and covariance $R_{v}$, and $w$ is a white, Gaussian random process with mean 0 and variance $R_{w}$. We take the initial condition to be random with mean 0 and covariance $P_{0}$. The optimal estimator is given by

$$
\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + L\bigl(y(k) - C\hat{x}(k)\bigr),
$$

where the observer gain satisfies

$$
\begin{aligned}
P(k+1) &= AP(k)A^{T} + R_{v} - AP(k)C^{T}\bigl(R_{w} + CP(k)C^{T}\bigr)^{-1}CP(k)A^{T},\\
P(0) &= P_{0},\\
L &= AP(k)C^{T}\bigl(R_{w} + CP(k)C^{T}\bigr)^{-1}.
\end{aligned}
$$

This estimator is an example of a Kalman filter.
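
The Riccati recursion for $P(k)$ can be iterated directly; for a stable filter it converges to a steady-state covariance and a constant gain $L$. This is a minimal sketch with hypothetical noise covariances for a discrete-time double integrator:

```python
import numpy as np

# Hypothetical example: discrete-time double integrator, position measured
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Rv = 0.01 * np.eye(2)    # process noise covariance (assumed value)
Rw = np.array([[0.1]])   # measurement noise variance (assumed value)
P = np.eye(2)            # P(0) = P0

# Iterate the Riccati recursion; P(k) converges to a steady-state value,
# giving the constant gain of the steady-state Kalman filter
for _ in range(500):
    Lgain = A @ P @ C.T @ np.linalg.inv(Rw + C @ P @ C.T)
    P = A @ P @ A.T + Rv - Lgain @ C @ P @ A.T

print(Lgain)  # steady-state observer gain
```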

The following exercises cover some of the topics introduced in this chapter. Exercises marked with a * appear in the printed text.