
CDC00-REG1324 On the Relationship Between LMIs and AREs: Applications to Absolute Stability Criteria, Robustness Analysis and Optimal Control
Jing Li, Hua O. Wang†, Linda Bushnell

Laboratory for Intelligent and Nonlinear Control (LINC), Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, U.S.A.

Abstract

The topic of this paper is the relationship between LMIs (linear matrix inequalities) and AREs (algebraic Riccati equations). In addition to surveying some previous results, many new results on this relationship are given and proven in this paper. In doing so, the paper provides insight into nonlinear system stability, system robustness and optimal control. These results should be helpful to system analysts and designers.

1 Introduction
The topic of this paper is the relationship between LMIs (linear matrix inequalities) and AREs (algebraic Riccati equations). In recent years, LMIs have emerged as a powerful mathematical tool for control engineers. However, many researchers express their results as LMI conditions without realizing that these results can also be (or have already been) obtained through an ARE approach. Conversely, many classical results expressed in terms of AREs can also be written in the form of LMIs. With the development of fast optimization algorithms for LMIs, these classical results are more easily solved through their counterparts expressed as LMIs. Clarification of the relationship between LMIs and AREs is therefore necessary and helpful. However, no paper systematically explores the relationship between AREs and LMIs. This does not imply that researchers are uninterested in this relationship; on the contrary, many results on the relation between AREs and LMIs have been published in the past few decades, but they are sparsely distributed across papers in different research areas. Part of this paper therefore serves as a survey of previous results, while at the same time many new results are given and proven.
† Corresponding author. E-mail: hua@ee.duke.edu; Tel: (919) 660-5273; Fax: (919) 660-5293. Dr. Bushnell is also with the U.S. Army Research Office, PO Box 12211, RTP, NC 27709. This research was supported in part by the Army Research Office under grant no. DAAG55-98-D-0002.


Another motivation of this paper is our primary effort to link different parts of control theory together. This effort is a continuation of much previous work by other researchers. Consider the following paragraph, taken from [7], which was written 30 years ago:

"There are two main areas in control theory where infinite time least squares minimization problems have been developed. On the one hand there is the standard regulator problem of optimal control theory, and on the other hand there are the Lyapunov functions which lead, via the so-called Kalman-Yakubovich-Popov lemma, to the circle criterion and the Popov criterion in stability theory for feedback systems."

However, the relations between these two areas are still not entirely clear. One of the reasons is the different mathematical tools used in these areas. In the stability problem, most of the conditions are expressed as QMIs (quadratic matrix inequalities) or LMIs. Many stability concepts, such as positive realness, passivity and nonexpansivity, have been proposed to investigate the problem; these concepts have led to many important conclusions such as the circle criterion, the Popov criterion and the small gain theorem. In the linear regulator problem, on the other hand, the solution of an ARE is the central object of interest. Through a discussion of the relationship between LMIs and AREs, this paper is able to provide insight into nonlinear system stability, system robustness and optimal control. It will be shown that, under a certain setup, the stability conclusions derived from different criteria are the same. We also convert some of the stability criteria into LMI conditions that can be solved efficiently by interior-point methods. These results should be helpful to system analysts and designers.

Section 2 presents a list of results concerning the equivalences between different AREs and LMIs. Some results in this section are collected from the literature and their proofs are omitted; interested readers are encouraged to look them up in the references. Sections 3, 4 and 5 discuss the application of these equivalence results to absolute stability theory, robust control and optimal control, respectively. Through these discussions, the paper achieves, to some extent, our objective of establishing the connection between the two research areas mentioned above. Concluding remarks are collected in Section 6.

In this paper, the notation $M > 0$ stands for a positive definite symmetric matrix, and $L(A, P) = A^T P^T + PA$ is defined as a mapping from $\mathbb{R}^{n \times n} \times \mathbb{R}^{n \times n}$ to $\mathbb{R}^{n \times n}$. Similarly, $L(A^T, Q^T) = AQ^T + QA^T$. $P^{-T}$ denotes $(P^{-1})^T$.

2 Equivalences Between ARE and LMI
This section presents a list of results that originate from various branches of control theory. These results concern the equivalence between AREs and LMIs, i.e. the solvability of an ARE is equivalent to the feasibility of a corresponding LMI.

2.1 Kalman-Yakubovich-Popov Lemma

The following lemma is the famous Kalman-Yakubovich-Popov lemma (or positive real lemma). It originates from Popov's criterion, which gives a frequency-domain condition for the stability of a feedback system with a memoryless nonlinearity. With the emergence of powerful LMI solvers, the KYP lemma can also be viewed as a tool that converts frequency-domain constraints into linear optimization problems (LMIs).

Lemma 1 Let $G(s) = C(sI - A)^{-1}B + D$ be a $p \times p$ transfer function matrix, where $A$ is Hurwitz, $(A, B)$ is controllable and $(A, C)$ is observable. Then $G(s)$ is strictly positive real if and only if there exist a positive definite symmetric matrix $P$, a matrix $W$, a matrix $L$ and a real number $\varepsilon > 0$ which satisfy
$$L(A, P) = -L^T L - \varepsilon P \qquad (1)$$
$$PB = C^T - L^T W \qquad (2)$$
$$W^T W = D + D^T \qquad (3)$$

Based on the KYP lemma, the following equivalence between an LMI and an ARE can be established [11].

Lemma 2 The following statements are equivalent:

1. The LMI
$$P = P^T > 0 \qquad (4)$$
$$\begin{pmatrix} L(A, P) & PB - C^T \\ B^T P - C & -D^T - D \end{pmatrix} \le 0 \qquad (5)$$
is feasible with variable $P$.

2. The Riccati equation
$$L(A, P) + (PB - C^T)(D + D^T)^{-1}(PB - C^T)^T = 0 \qquad (6)$$
has a solution $P = P^T > 0$.
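As a quick numerical illustration of Lemma 2 (this example is not part of the paper), the sketch below checks feasibility of the LMI (4)-(5) with CVXPY for a small assumed system and, as a cross-check suggested by the KYP lemma, verifies strict positive realness of $G(j\omega)$ on a frequency grid. The system data, tolerances and solver choice are assumptions.

```python
# Hedged sketch: LMI (4)-(5) feasibility for an assumed system, plus a
# frequency-domain strict positive realness check implied by the KYP lemma.
import numpy as np
import cvxpy as cp

# Assumed illustrative system data, A Hurwitz.
A = np.array([[-2.0, 1.0],
              [ 0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.5]])
D = np.array([[1.0]])
n, p = A.shape[0], C.shape[0]

# LMI (4)-(5): P = P^T > 0 and the block matrix negative semidefinite.
P = cp.Variable((n, n), symmetric=True)
lmi = cp.bmat([[A.T @ P + P @ A, P @ B - C.T],
               [B.T @ P - C,     -(D + D.T)]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(n), lmi << -1e-8 * np.eye(n + p)])
prob.solve(solver=cp.SCS)
print("LMI (4)-(5) status:", prob.status)

# Cross-check: G(jw) + G(jw)^* > 0 on a frequency grid.
for w in np.logspace(-2, 2, 5):
    G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
    print(f"w = {w:8.3f}  min eig(G + G*) = {np.linalg.eigvalsh(G + G.conj().T).min():.3f}")
```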

2.2 Bounded Real Lemma

The following lemma establishes the equivalence between the bounded real lemma in LMI form and its corresponding ARE form. The bounded real lemma is usually used to characterize nonexpansivity (the H∞ norm).

Lemma 3 Let $G(s) = C(sI - A)^{-1}B + D$, where $A$ is Hurwitz, $(A, B)$ is controllable and $(A, C)$ is observable. Then $\|G\|_\infty \le 1$ is equivalent to each of the following statements:

1. (Bounded real lemma) The feasibility of the following LMI with variable $P = P^T > 0$:
$$P > 0 \qquad (7)$$
$$\begin{pmatrix} L(A, P) + C^T C & PB + C^T D \\ B^T P + D^T C & D^T D - I \end{pmatrix} \le 0 \qquad (8)$$

2. The solvability of the ARE
$$L(A, P) + C^T C + (PB + C^T D)(I - D^T D)^{-1}(PB + C^T D)^T = 0 \qquad (9)$$
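The following sketch (not from the paper) uses a γ-scaled variant of the LMI (7)-(8) to bound the H∞ norm of an assumed system and compares the bound with a frequency-grid estimate. The plant data are illustrative assumptions.

```python
# Hedged sketch: a gamma-scaled bounded real LMI used to bound ||G||_inf,
# compared against a frequency-grid estimate; all data are assumptions.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, p = A.shape[0], C.shape[0]

P = cp.Variable((n, n), symmetric=True)
g2 = cp.Variable(nonneg=True)                     # gamma^2 enters the LMI linearly
lmi = cp.bmat([[A.T @ P + P @ A + C.T @ C, P @ B + C.T @ D],
               [B.T @ P + D.T @ C,         D.T @ D - g2 * np.eye(p)]])
prob = cp.Problem(cp.Minimize(g2), [P >> 0, lmi << 0])
prob.solve(solver=cp.SCS)
print("LMI bound on ||G||_inf     :", float(np.sqrt(g2.value)))

ws = np.logspace(-2, 2, 2000)
grid = max(np.linalg.norm(C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D, 2) for w in ws)
print("grid estimate of ||G||_inf :", grid)
```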

2.3 Optimal Regulator

The following equivalence between an LMI and an ARE originates from optimal control theory. This equivalence will be further explored in Section 5.

Theorem 4 Let $G(s) = C(sI - A)^{-1}B + D$, where $(A, B)$ is stabilizable and $(A, C)$ is detectable. Suppose that $W$ and $R$ are positive definite matrices. Then the following statements are equivalent:

1. The ARE
$$L(A, P) + C^T W C - PBR^{-1}B^T P = 0 \qquad (10)$$
has a solution $P = P^T > 0$.

2. The feasibility of the inequality
$$L(A, P) + C^T W C + K^T B^T P + PBK + K^T R K \le 0 \qquad (11)$$
with variables $P = P^T > 0$ and $K$.

3. The feasibility of the LMI
$$\begin{pmatrix} L(A^T, Q) + BY + Y^T B^T & -QC^T & -Y^T \\ -CQ & -W^{-1} & 0 \\ -Y & 0 & -R^{-1} \end{pmatrix} \le 0 \qquad (12)$$

Before proving the above result, we first restate a lemma from [17].

Lemma 5 Assume $R \ge 0$ and that there is a matrix $\tilde P = \tilde P^T$ such that $A^T \tilde P + \tilde P A + \tilde P R \tilde P + S \le 0$. If $(A, R)$ is stabilizable, then there exists a unique minimal solution $\hat P$ to the ARE $A^T P + PA + PRP + S = 0$. Furthermore, $\hat P \le \tilde P$ for all $\tilde P$ such that $A^T \tilde P + \tilde P A + \tilde P R \tilde P + S \le 0$, and $\lambda(A + R\hat P) \subset \mathbb{C}^-$.

Proof:

To prove the equivalence between (11) and (12), define $Q = P^{-1}$ and $Y = KQ$. Multiplying (11) on the left and on the right by $Q$ gives
$$L(A^T, Q) + QC^T W C Q + Y^T B^T + BY + Y^T R Y \le 0$$
It is then obvious from the Schur complement that (11) and (12) are equivalent.

To prove the equivalence between (10) and (11), first assume that (10) holds, i.e. there is a matrix $P = P^T > 0$ satisfying $L(A, P) + C^T W C - PBR^{-1}B^T P = 0$. Since $(A, B)$ is stabilizable, there exists a matrix $K_0$ such that $A_0 = A + BK_0$ is stable. Let $P_1 = P_1^T$ be the unique solution of the Lyapunov equation $P_1 A_0 + A_0^T P_1 + K_0^T R K_0 + C^T W C + I = 0$. Define $\tilde K_0 = R^{1/2} K_0 + R^{-1/2} B^T P$; then
$$(P_1 - P)A_0 + A_0^T (P_1 - P) + \tilde K_0^T \tilde K_0 + L(A, P) + C^T W C - PBR^{-1}B^T P + I = 0$$
Thus $L(A_0, P_1 - P) < 0$. Since $A_0$ is stable, $P_1 > P > 0$, and (11) is feasible with $P = P_1$ and $K = K_0$.

Now assume that (11) holds. The expression $(A + BK)^T P + P(A + BK) + C^T W C + K^T R K$ attains its minimum over a free variable $K$ at $K = -R^{-1}B^T P$. So if $L(A + BK, P) + C^T W C + K^T R K \le 0$, then $L(A, P) + C^T W C - PBR^{-1}B^T P \le 0$ also holds. From Lemma 5, we know that (10) holds.
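A numerical check of Theorem 4 (illustrative, not from the paper): the ARE (10) is solved with SciPy and the LMI (12) with CVXPY, and each yields a stabilizing state feedback. The plant, weights and solvers are assumptions.

```python
# Hedged sketch of Theorem 4: ARE (10) via SciPy versus LMI (12) via CVXPY;
# both should give stabilizing gains.  All numerical data are assumptions.
import numpy as np
import cvxpy as cp
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # unstable but stabilizable with B below
B = np.array([[0.0],
              [1.0]])
C = np.eye(2)
W = np.eye(2)
R = np.array([[1.0]])
n, m = B.shape
p = C.shape[0]

# Statement 1: the ARE (10).
P = solve_continuous_are(A, B, C.T @ W @ C, R)
K_are = -np.linalg.solve(R, B.T @ P)
print("ARE: eig(A + B K) =", np.linalg.eigvals(A + B @ K_are))

# Statement 3: the LMI (12) in the variables Q = P^{-1}, Y = KQ.
Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
M = cp.bmat([[A @ Q + Q @ A.T + B @ Y + Y.T @ B.T, -Q @ C.T,          -Y.T],
             [-C @ Q,                              -np.linalg.inv(W),  np.zeros((p, m))],
             [-Y,                                   np.zeros((m, p)), -np.linalg.inv(R)]])
prob = cp.Problem(cp.Minimize(0), [Q >> 1e-6 * np.eye(n), M << 0])
prob.solve(solver=cp.SCS)
K_lmi = Y.value @ np.linalg.inv(Q.value)
print("LMI: eig(A + B K) =", np.linalg.eigvals(A + B @ K_lmi))
```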

2.4 H∞ Control

The following equivalence between an LMI and an ARE originates from H∞ control theory, and it will be further explored in Section 4. Consider the LTI system
$$\begin{aligned} \dot x(t) &= Ax(t) + B_1 w(t) + B_2 u(t) \\ z(t) &= C_1 x(t) + D_{12} u(t) \\ y(t) &= C_2 x(t) + D_{21} w(t) \end{aligned} \qquad (13)$$
Under some mild assumptions, we seek a linear controller such that the closed-loop transfer function $T_{zw}$ satisfies $\|T_{zw}\|_\infty < \gamma$. One of the classical results is given in [1].

Lemma 6 There exists an admissible controller such that $\|T_{zw}\|_\infty < \gamma$ if and only if the following three conditions hold:

i. $H_\infty := \begin{pmatrix} A & \gamma^{-2}B_1B_1^T - B_2B_2^T \\ -C_1^TC_1 & -A^T \end{pmatrix} \in \mathrm{dom(Ric)}$ and $X_\infty = \mathrm{Ric}(H_\infty) \ge 0$, i.e. $X_\infty$ is the solution of the ARE $L(A, X) + X(\gamma^{-2}B_1B_1^T - B_2B_2^T)X + C_1^TC_1 = 0$;

ii. $J_\infty := \begin{pmatrix} A^T & \gamma^{-2}C_1^TC_1 - C_2^TC_2 \\ -B_1B_1^T & -A \end{pmatrix} \in \mathrm{dom(Ric)}$ and $Y_\infty = \mathrm{Ric}(J_\infty) \ge 0$, i.e. $Y_\infty$ is the solution of the ARE $L(A^T, Y) + Y(\gamma^{-2}C_1^TC_1 - C_2^TC_2)Y + B_1B_1^T = 0$;

iii. $\rho(X_\infty Y_\infty) < \gamma^2$.

In [4], a similar result based on matrix inequalities is given.

Lemma 7 There exists an admissible controller such that $\|T_{zw}\|_\infty < \gamma$ if and only if the following inequalities are feasible:

i. $L(A, P) + P(\gamma^{-2}B_1B_1^T - B_2B_2^T)P + C_1^TC_1 < 0$;

ii. $L(A^T, Q) + Q(\gamma^{-2}C_1^TC_1 - C_2^TC_2)Q + B_1B_1^T < 0$;

iii. $P > 0$ and $Q > 0$, under the coupling constraint $\rho(PQ) < \gamma^2$.

These two results have been proven equivalent in [4] by using Lemma 10 given in the Appendix (it is necessary to perturb some of the equations and then apply the continuity of the solution under small perturbations).
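To make the Ric(·) machinery of Lemma 6 concrete, the sketch below (not from the paper) computes the stabilizing Riccati solutions from the stable invariant subspace of the Hamiltonian matrices (ordered real Schur form) and evaluates conditions i-iii for an assumed toy plant.

```python
# Hedged sketch: evaluating the three conditions of Lemma 6 for assumed plant data
# by computing Ric(H) from the stable invariant subspace of the Hamiltonian.
import numpy as np
from scipy.linalg import schur

def ric(A, R, Q):
    """Stabilizing solution of A^T X + X A + X R X + Q = 0 via the Hamiltonian
    H = [[A, R], [-Q, -A^T]]; returns None if the stable subspace is not n-dimensional."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    T, Z, sdim = schur(H, output="real", sort="lhp")   # stable eigenvalues ordered first
    if sdim != n:
        return None
    X1, X2 = Z[:n, :n], Z[n:, :n]
    return X2 @ np.linalg.inv(X1)

# Assumed illustrative data: one disturbance w, one control u, scalar z and y channels.
A  = np.array([[0.0, 1.0], [-1.0, -0.5]])
B1 = np.array([[0.2], [0.5]])
B2 = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 0.0]])
C2 = np.array([[0.0, 1.0]])
gamma = 2.0

X = ric(A,   gamma**-2 * B1 @ B1.T - B2 @ B2.T, C1.T @ C1)    # condition i
Y = ric(A.T, gamma**-2 * C1.T @ C1 - C2.T @ C2, B1 @ B1.T)    # condition ii
assert X is not None and Y is not None, "a Hamiltonian has imaginary-axis eigenvalues"
print("X >= 0 :", np.all(np.linalg.eigvalsh((X + X.T) / 2) >= -1e-9))
print("Y >= 0 :", np.all(np.linalg.eigvalsh((Y + Y.T) / 2) >= -1e-9))
print("rho(XY) < gamma^2 :", max(abs(np.linalg.eigvals(X @ Y))) < gamma**2)   # condition iii
```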

3 Absolute Stability
In this section, the results above are applied to nonlinear control theory in the framework of absolute stability. It is shown in Sec. 3.2 that the positive real lemma implies the bounded real lemma. It is also shown in Sec. 3.3 that, in this framework, the circle criterion is an extension of the small gain theorem. These two results are illustrations of the connection between different branches of control theory. There are several other results, but they are omitted due to lack of space.

[Figure 1: Lure's problem. A negative feedback interconnection of the linear system G(s) := (A, B, C, 0) with a memoryless nonlinear element ψ(t, y).]

3.1 Lure's Problem

The Lure problem is formulated as in Fig. 1, in which $G(s)$ stands for the system
$$\dot x = Ax + Bu \qquad (14)$$
$$y = Cx \qquad (15)$$
where $x \in \mathbb{R}^n$, $u, y \in \mathbb{R}^p$, $(A, B)$ is controllable, $(A, C)$ is observable, and $\psi(t, y)$ belongs to the sector $[K_{\min}, K_{\max}]$. The definition of absolute stability is given as follows.

Definition 1 The Lure system is absolutely stable if the origin is globally uniformly asymptotically stable for any nonlinearity in the given sector.

Lemma 8 (Circle criterion) For the Lure problem, the system is absolutely stable if $G_T(s) = G(s)[I + K_{\min} G(s)]^{-1}$ is Hurwitz and $Z_T(s) = [I + K_{\max} G(s)][I + K_{\min} G(s)]^{-1}$ is strictly positive real.
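As an aside (not in the paper), the two conditions of Lemma 8 can be checked numerically for a SISO example; the sketch below performs a frequency-grid check with assumed plant and sector data.

```python
# Hedged sketch: frequency-grid check of the circle-criterion conditions of Lemma 8
# for an assumed SISO plant and sector [Kmin, Kmax].
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Kmin, Kmax = 0.0, 3.0
n = A.shape[0]

# G_T(s) = G(s)[I + Kmin G(s)]^{-1} corresponds to closing the loop with gain Kmin,
# i.e. state matrix A - B Kmin C for the realization (A, B, C, 0).
A_T = A - B @ (Kmin * C)
print("G_T Hurwitz:", np.all(np.linalg.eigvals(A_T).real < 0))

# Z_T(s) = [I + Kmax G(s)][I + Kmin G(s)]^{-1}: strict positive realness is checked
# (necessarily only on a finite grid) via Re Z_T(jw) > 0.
def G(s):
    return C @ np.linalg.solve(s * np.eye(n) - A, B)

ok = True
for w in np.logspace(-3, 3, 4000):
    Z = (np.eye(1) + Kmax * G(1j * w)) @ np.linalg.inv(np.eye(1) + Kmin * G(1j * w))
    ok &= np.linalg.eigvalsh(Z + Z.conj().T).min() > 0
print("Re Z_T(jw) > 0 on the grid:", bool(ok))
```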

3.2 From Positive Real Lemma to Bounded Real Lemma

Suppose all the necessary requirements are satisfied for the system $G(s) = D + C(sI - A)^{-1}B$. It can be proven [6] that if $G(s)$ is positive real, then
$$\left\| (I - G)(I + G)^{-1} \right\|_\infty < 1$$
A realization of $(I - G)(I + G)^{-1}$ can be written as
$$A_1 = A - B(I + D)^{-1}C \qquad (16)$$
$$B_1 = \sqrt{2}\, B(I + D)^{-1} \qquad (17)$$
$$C_1 = -\sqrt{2}\,(I + D)^{-1}C \qquad (18)$$
$$D_1 = (I + D)^{-1}(I - D) \qquad (19)$$
From (1)-(3), we know that
$$L(A_1, P) = -L^T L - \varepsilon P - C^T(I + D^T)^{-1}(C - W^T L) - (C^T - L^T W)(I + D)^{-1}C$$
$$PB_1 = \sqrt{2}\,(C^T - L^T W)(I + D)^{-1}$$
$$B_1^T P = \sqrt{2}\,(I + D^T)^{-1}(C - W^T L)$$
Now we apply a series of congruence transformations to
$$\begin{pmatrix} L(A_1, P) & PB_1 & C_1^T \\ B_1^T P & -I & D_1^T \\ C_1 & D_1 & -I \end{pmatrix} \qquad (20)$$
where the transformation matrices $T_1$, $T_2$, $T_3$ are defined as
$$T_1 = \begin{pmatrix} I & 0 & 0 \\ 0 & I + D & 0 \\ 0 & 0 & I + D^T \end{pmatrix}, \quad
T_2 = \begin{pmatrix} I & 0 & 0 \\ \tfrac{\sqrt{2}}{2}(I + D)^{-1}C & I & 0 \\ 0 & 0 & I \end{pmatrix}, \quad
T_3 = \begin{pmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & I & I \end{pmatrix}$$
Then (20) becomes
$$\begin{pmatrix} E_{11} & E_{12} & E_{13} \\ E_{12}^T & E_{22} & E_{23} \\ E_{13}^T & E_{23}^T & E_{33} \end{pmatrix} \qquad (21)$$
where
$$E_{11} = -L^T L - \varepsilon P - \tfrac{1}{2} C^T C$$
$$E_{12} = -\sqrt{2}\, L^T W - \tfrac{\sqrt{2}}{2}\, C^T (D + D^T)$$
$$E_{13} = -\tfrac{\sqrt{2}}{2}\, C^T (I + D^T)$$
$$E_{22} = -2 W^T W - D^T D - D D^T - D^2 - (D^T)^2$$
$$E_{23} = -(D^T)^2 - D D^T - W^T W$$
$$E_{33} = -I - D D^T - W^T W$$
It is noted that (21) equals
$$-\begin{pmatrix} \tfrac{\sqrt{2}}{2} C^T \\ D + D^T \\ I + D \end{pmatrix}
\begin{pmatrix} \tfrac{\sqrt{2}}{2} C & D + D^T & I + D^T \end{pmatrix}
- \begin{pmatrix} \varepsilon P & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
- \begin{pmatrix} L^T \\ \sqrt{2}\, W^T \\ 0 \end{pmatrix}
\begin{pmatrix} L & \sqrt{2}\, W & 0 \end{pmatrix}$$
which is negative definite. This is exactly the bounded real lemma.
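The claim above can be illustrated numerically (this example is not from the paper): for an assumed strictly positive real $G(s)$, the Cayley transform $(I - G)(I + G)^{-1}$ is checked to be contractive on a frequency grid.

```python
# Hedged sketch: for an assumed strictly positive real G(s), verify on a grid that
# the Cayley transform (I - G)(I + G)^{-1} has gain below one.
import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])
n = A.shape[0]

def G(s):
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

worst, spr_ok = 0.0, True
for w in np.logspace(-3, 3, 4000):
    Gw = G(1j * w)
    spr_ok &= np.linalg.eigvalsh(Gw + Gw.conj().T).min() > 0      # Re G(jw) > 0
    S = (np.eye(1) - Gw) @ np.linalg.inv(np.eye(1) + Gw)          # Cayley transform
    worst = max(worst, np.linalg.norm(S, 2))
print("G strictly positive real on the grid       :", bool(spr_ok))
print("max_w sigma_max((I - G)(I + G)^{-1}) found :", worst)
```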

3.3 From Small Gain Theorem to Circle criterion

Consider the Lure problem and suppose the nonlinearity satisfies the sector condition $[0, \tfrac{1}{\gamma}]$. From the small gain theorem, the system will be stable if the $H_\infty$ norm of $G(s)$ is less than $\gamma$. By the bounded real lemma, this gives
$$\begin{pmatrix} L(A, P) & PB & C^T \\ B^T P & -\gamma I & 0 \\ C & 0 & -\gamma I \end{pmatrix} < 0 \qquad (22)$$
Applying the congruence transformation with $T = \begin{pmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & -I & I \end{pmatrix}$ to (22) yields
$$\begin{pmatrix} L(A, P) & PB - C^T & C^T \\ B^T P - C & -2\gamma I & \gamma I \\ C & \gamma I & -\gamma I \end{pmatrix} < 0 \qquad (23)$$
which implies (by taking the leading principal submatrix) that
$$\begin{pmatrix} L(A, P) & PB - C^T \\ B^T P - C & -2\gamma I \end{pmatrix} < 0 \qquad (24)$$
From (24) and Lemma 2, we know that $\gamma I + G(s)$, and hence $I + \tfrac{1}{\gamma} G(s)$, is positive real, which is the same condition given by the circle criterion.
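A numerical illustration of this implication (not from the paper): the sketch below finds P satisfying (22) with CVXPY for an assumed plant and sector bound and verifies that the same P satisfies (24).

```python
# Hedged sketch: solve the scaled bounded real LMI (22) and check that the same P
# satisfies the positive real LMI (24), as argued above; all data are assumptions.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
gamma = 2.0                       # assumed sector [0, 1/gamma]; here ||G||_inf < gamma
n, p = A.shape[0], C.shape[0]

P = cp.Variable((n, n), symmetric=True)
lmi22 = cp.bmat([[A.T @ P + P @ A, P @ B,              C.T],
                 [B.T @ P,         -gamma * np.eye(p), np.zeros((p, p))],
                 [C,               np.zeros((p, p)),   -gamma * np.eye(p)]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(n), lmi22 << -1e-8 * np.eye(n + 2 * p)])
prob.solve(solver=cp.SCS)

Pv = P.value
lmi24 = np.block([[A.T @ Pv + Pv @ A, Pv @ B - C.T],
                  [B.T @ Pv - C,      -2 * gamma * np.eye(p)]])
print("LMI (22) status:", prob.status)
print("max eig of (24) with the same P:", np.linalg.eigvalsh(lmi24).max())
```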

4 Control of Linear System with Uncertainty
In this section, traditional robust control is investigated. Sec. 4.1 reviews previous work by Zhou et al. [10] on the equivalence between the small gain theorem and robust H∞ performance expressed in QMI form. In Sec. 4.2, it is shown that classical robust control results can also be written in a simple LMI form without conservatism. This offers new ways of looking at these traditional robust control results and of solving the corresponding problems more efficiently.

4.1 Robust H∞ Performance
Definition 2 The system
$$\dot x = A(\Delta)x + B(\Delta)w, \qquad (25)$$
$$z = C(\Delta)x + D(\Delta)w \qquad (26)$$
is said to satisfy the strongly robust $H_\infty$ performance criterion if $\|D(\Delta)\| < 1$ for all $\Delta$ and there exists a constant symmetric matrix $P > 0$ such that
$$A^T(\Delta)P + PA(\Delta) + (PB(\Delta) + C^T(\Delta)D(\Delta))R^{-1}(B^T(\Delta)P + D^T(\Delta)C(\Delta)) + C^T(\Delta)C(\Delta) < 0 \qquad (27)$$
where $R = I - D^T(\Delta)D(\Delta) > 0$ and $\Delta$ stands for the uncertainty in the system.

Consider a simple class of uncertain systems where the system is as in Fig. 2 with
$$M(s) = \left[\begin{array}{c|cc} A & B_0 & B_1 \\ \hline C_0 & 0 & 0 \\ C_1 & 0 & 0 \end{array}\right] \qquad (28)$$

[Figure 2: Uncertainty description. The uncertainty block Δ(t) is connected in feedback around M(s) through the channel (ζ, η), with exogenous input w and performance output z.]

Here $\Delta = \mathrm{diag}[\delta_1(t), \delta_2(t), \ldots, \delta_m(t)]$, and the uncertainty is assumed to be normalized so that $\|\Delta\| \le 1$. It has been proven in [10] that the small gain theorem and robust H∞ performance are equivalent for this system.
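As an aside (not in the paper), Definition 2 can be tested over a finite family of frozen uncertainty values by writing (27), via a Schur complement, as an LMI in a common P; the sketch below does this for two assumed vertex systems.

```python
# Hedged sketch: search for a common P > 0 certifying (27) at two assumed "vertex"
# systems; the Schur-complement LMI form of (27) is an added step, not from the paper.
import numpy as np
import cvxpy as cp

def vertex(delta):
    # Assumed affine uncertainty model with |delta| <= 1.
    A = np.array([[0.0, 1.0], [-2.0 - delta, -1.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[0.5, 0.0]])
    D = np.array([[0.1 * delta]])
    return A, B, C, D

n, p, q = 2, 1, 1
P = cp.Variable((n, n), symmetric=True)
cons = [P >> 1e-6 * np.eye(n)]
for delta in (-1.0, 1.0):
    A, B, C, D = vertex(delta)
    lmi = cp.bmat([[A.T @ P + P @ A, P @ B,      C.T],
                   [B.T @ P,         -np.eye(p), D.T],
                   [C,               D,          -np.eye(q)]])
    cons.append(lmi << -1e-8 * np.eye(n + p + q))
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print("common P for both vertices:", prob.status)
```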

4.2 State Feedback Control of Linear Uncertain System
In [9], the following result has been proven.

Lemma 9 Consider the system
$$\dot x(t) = (A + \Delta A(t))x(t) + (B + \Delta B(t))u(t) \qquad (29)$$
where $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times p}$. Suppose the matching conditions
$$[\Delta A(t) \;\; \Delta B(t)] = DF(t)[E_1 \;\; E_2] \qquad (30)$$
are satisfied. Here $F(t) \in \mathbb{R}^{k \times j}$ is the uncertainty, which is bounded as
$$F(t) \in \mathcal{F} := \{F(t) : \|F(t)\| \le 1\} \qquad (31)$$
Suppose $\mathrm{rank}(E_2) = m \le p$. Define $\Sigma_2 \in \mathbb{R}^{m \times p}$ such that $\mathrm{rank}(\Sigma_2) = m$ and $E_2^T E_2 = \Sigma_2^T \Sigma_2$, let $\Sigma$ be chosen such that $\Sigma_2 \Sigma^T = 0$ and $\Sigma \Sigma^T = I$ ($\Sigma = 0$ if $m = p$), and let $\Phi := \Sigma_2^T (\Sigma_2 \Sigma_2^T)^{-2} \Sigma_2$. The system (29) is quadratically stabilizable via linear control if there exists $\varepsilon > 0$ such that the ARE
$$L(A - B\Phi E_2^T E_1, P) + P\Big(DD^T - B\Phi B^T - \tfrac{1}{\varepsilon} B\Sigma^T\Sigma B^T\Big)P + E_1^T (I - E_2 \Phi E_2^T) E_1 + \varepsilon I = 0 \qquad (32)$$
has a symmetric matrix solution $P > 0$. In this case, a stabilizing state feedback control law is given by
$$u(t) = -\Big(\big(\tfrac{1}{2\varepsilon}\Sigma^T\Sigma + \Phi\big)B^T P + \Phi E_2^T E_1\Big)x(t) \qquad (33)$$
Conversely, if the uncertain system (29) is quadratically stabilizable via linear control, then there exists $\bar\varepsilon > 0$ such that for all $\varepsilon \in (0, \bar\varepsilon)$, the ARE (32) admits a unique symmetric matrix solution $P > 0$ such that $A - B\Phi E_2^T E_1 + (DD^T - B\Phi B^T - \tfrac{1}{\varepsilon} B\Sigma^T\Sigma B^T)P$ is asymptotically stable.

However, this procedure is rather complicated. In this subsection, we want to give a simpler criterion. Rewrite equation (29) as
$$\dot x(t) = (A + DF(t)E_1)x(t) + (B + DF(t)E_2)u(t) \qquad (34)$$

[Figure 3: State feedback control of the uncertain linear system. Block diagrams of the original loop (A, B, D, 1/s, E1, E2, K, F(t)) and of the transformed loop in which F(t) appears as a feedback around the LTI part.]

The control objective is to find a linear state feedback control law which stabilizes the closed-loop system. After a simple transformation, the system can be treated as an LTI system $G(s) = (E_2 K + E_1)(sI - A - BK)^{-1}D$ in feedback with a time-varying gain; the transformation is shown in Fig. 3. If $\|G\|_\infty < 1$, then the closed-loop system will be stable. From the bounded real lemma, this can be written as
$$\begin{pmatrix} L(A + BK, P) & PD & (E_1 + E_2 K)^T \\ D^T P & -I & 0 \\ E_1 + E_2 K & 0 & -I \end{pmatrix} < 0 \qquad (35)$$

Defining $Q = P^{-1}$ and $Y = KP^{-1}$, we can convert this to the design LMI
$$\begin{pmatrix} QA^T + AQ + Y^T B^T + BY & D & (E_1 Q + E_2 Y)^T \\ D^T & -I & 0 \\ E_1 Q + E_2 Y & 0 & -I \end{pmatrix} < 0 \qquad (36)$$
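A synthesis sketch based on the design LMI (36) (the plant and uncertainty data are assumptions, not from the paper): solve for Q and Y with CVXPY, recover K = YQ^{-1}, and check the closed-loop H∞ condition on a frequency grid.

```python
# Hedged sketch: robust state-feedback synthesis from the design LMI (36) with
# assumed data; K = Y Q^{-1}, then ||(E2 K + E1)(sI - A - BK)^{-1} D||_inf is estimated.
import numpy as np
import cvxpy as cp

A  = np.array([[0.0, 1.0], [1.0, -1.0]])
B  = np.array([[0.0], [1.0]])
D  = np.array([[0.1], [0.2]])
E1 = np.array([[1.0, 0.0]])
E2 = np.array([[0.5]])
n, m = B.shape
q, d = E1.shape[0], D.shape[1]

Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
M = cp.bmat([[Q @ A.T + A @ Q + Y.T @ B.T + B @ Y, D,                 (E1 @ Q + E2 @ Y).T],
             [D.T,                                 -np.eye(d),         np.zeros((d, q))],
             [E1 @ Q + E2 @ Y,                      np.zeros((q, d)), -np.eye(q)]])
prob = cp.Problem(cp.Minimize(0),
                  [Q >> 1e-6 * np.eye(n), M << -1e-8 * np.eye(n + d + q)])
prob.solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(Q.value)

Acl = A + B @ K
hinf = max(np.linalg.norm((E2 @ K + E1) @ np.linalg.solve(1j * w * np.eye(n) - Acl, D), 2)
           for w in np.logspace(-2, 3, 2000))
print("eig(A + BK):", np.linalg.eigvals(Acl))
print("grid estimate of ||G||_inf:", hinf)
```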

Remark: If $F(t)$ also satisfies the sector bound $(0, 1)$, we can apply the circle criterion to the system, and quadratic stability can then be established.

Remark: In the case where there is no uncertainty in the input matrix ($\Delta B(t) = 0$, i.e. $E_2 = 0$), it follows from [9] that condition (35) becomes both necessary and sufficient.

We now prove that condition (35) is equivalent to Lemma 9 when $E_2$ has full rank.

Proof:

Necessary part: Suppose that the ARE (32) is solvable. Then we need to find a feedback gain $F_1$ such that
$$L(A + BF_1, P) + PDD^T P + (E_1 + E_2 F_1)^T (E_1 + E_2 F_1) < 0 \qquad (37)$$
where $\Sigma$, $\Phi$ and $\varepsilon$ are defined as in Lemma 9. From the assumptions of Lemma 9, it is obvious that the following facts hold: $\Sigma_2 \Sigma^T = 0$, $\Sigma\Sigma^T = I$, $\Sigma E_2^T E_2 = 0$, $E_2^T E_2 \Sigma^T = 0$, $E_2 \Phi E_2^T E_2 = E_2$ and $\Phi E_2^T E_2 \Phi = \Phi$. Choose $F_1 = -\big((\tfrac{1}{2\varepsilon}\Sigma^T\Sigma + \Phi)B^T P + \Phi E_2^T E_1\big)$ and substitute it into the left-hand side of (37); we get
$$L\Big(A - B\Phi E_2^T E_1 - \tfrac{1}{2\varepsilon} B\Sigma^T\Sigma B^T P - B\Phi B^T P,\; P\Big) + PDD^T P
+ \Big(E_1 - \tfrac{1}{2\varepsilon} E_2\Sigma^T\Sigma B^T P - E_2\Phi B^T P - E_2\Phi E_2^T E_1\Big)^T
  \Big(E_1 - \tfrac{1}{2\varepsilon} E_2\Sigma^T\Sigma B^T P - E_2\Phi B^T P - E_2\Phi E_2^T E_1\Big)$$
Noting that (32) holds, (37) will hold if
$$\tfrac{1}{2\varepsilon}\Big(-E_1^T E_2 \Sigma^T\Sigma B^T P - PB\Sigma^T\Sigma E_2^T E_1 + E_1^T E_2 \Phi E_2^T E_2 \Sigma^T\Sigma B^T P + PB\Sigma^T\Sigma E_2^T E_2 \Phi E_2^T E_1\Big) < \varepsilon I \qquad (38)$$
But if $E_2$ is of full rank, the left-hand side of (38) is equal to zero.

Sufficient part: Suppose (37) holds; then
$$A^T P + PA + F^T (B^T P + E_2^T E_1) + (PB + E_1^T E_2)F + F^T E_2^T E_2 F + E_1^T E_1 + PDD^T P < 0$$
Since $E_2$ is of full rank, $\Phi = (E_2^T E_2)^{-1}$ and $\Sigma = 0$. From the result in subsection 2.3, we know that the following ARE is solvable:
$$A^T P + PA - (PB + E_1^T E_2)\Phi(B^T P + E_2^T E_1) + E_1^T E_1 + PDD^T P = 0$$
From Lemma 11, we conclude that there exists a solution to the ARE
$$L(A - B\Phi E_2^T E_1, P_1) + P_1 (DD^T - B\Phi B^T) P_1 + E_1^T (I - E_2 \Phi E_2^T) E_1 + \varepsilon I = 0$$
5 Optimal Control
In this section, classical optimal control is related to LMIs through a sub-optimal control approach. The result verifies one of our previous results on the equivalence between LMIs and AREs. Consider the linear system
$$\dot x = Ax + Bu$$
$$y = Cx$$
where $(A, B)$ is stabilizable and $(A, C)$ is detectable. The performance index is defined as
$$J = \int_0^\infty \big(y^T(t) W y(t) + u^T(t) R u(t)\big)\, dt \qquad (39)$$
where $W$ and $R$ are positive definite matrices.

We can also prove the equivalence of (10) and (11) using optimal control theory. From LQR theory, we know that the state feedback control $u(t) = -R^{-1}B^T P x(t)$ makes the performance index achieve its minimum $J = \tfrac{1}{2} x^T(0) P x(0)$, where $P$ is the positive definite solution of the Riccati equation. Since the closed-loop system is stable, we can assume a Lyapunov function of quadratic form $V(x) = x^T \hat P x$. Since we can always scale $\hat P$, we can assume $J \le V(x(0))$; that is, $V(x(0))$ gives an upper bound on $J$. Because the system is linear, $J \le V(x(0))$ if and only if
$$\frac{dV}{dt} \le -\big(y^T(t) W y(t) + u^T(t) R u(t)\big) \qquad (40)$$
Suppose the state feedback is of the form $u(t) = Kx(t)$; then the closed-loop system is $\dot x(t) = (A + BK)x(t)$. Enforcing (40) gives the LMI
$$\begin{pmatrix} AQ + QA^T + BY + Y^T B^T & QC^T & Y^T \\ CQ & -W^{-1} & 0 \\ Y & 0 & -R^{-1} \end{pmatrix} \le 0 \qquad (41)$$
where $Q = \hat P^{-1}$ and $Y = KQ$ (cf. the proof of Theorem 4).
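A sketch of the LMI (41) in use (not from the paper): the guaranteed cost bound $x^T(0) Q^{-1} x(0)$ is minimized through an added Schur-complement constraint and the resulting gain is compared with the LQR solution. All plant and weight data are assumptions.

```python
# Hedged sketch: sub-optimal LQR via the LMI (41).  The minimization of the cost
# bound x0^T Q^{-1} x0 is an added illustration; plant and weights are assumptions.
import numpy as np
import cvxpy as cp
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
W = np.eye(2)
R = np.array([[1.0]])
x0 = np.array([[1.0], [0.0]])
n, m = B.shape
p = C.shape[0]

Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
t = cp.Variable((1, 1))
lmi41 = cp.bmat([[A @ Q + Q @ A.T + B @ Y + Y.T @ B.T, Q @ C.T,           Y.T],
                 [C @ Q,                               -np.linalg.inv(W), np.zeros((p, m))],
                 [Y,                                    np.zeros((m, p)), -np.linalg.inv(R)]])
bound = cp.bmat([[t, x0.T],
                 [x0, Q]])                        # enforces t >= x0^T Q^{-1} x0 for Q > 0
prob = cp.Problem(cp.Minimize(t[0, 0]),
                  [Q >> 1e-6 * np.eye(n), lmi41 << 0, bound >> 0])
prob.solve(solver=cp.SCS)
K_lmi = Y.value @ np.linalg.inv(Q.value)

P = solve_continuous_are(A, B, C.T @ W @ C, R)
print("LMI cost bound x0' Q^{-1} x0:", float(t.value))
print("LQR optimal cost x0' P x0   :", float(x0.T @ P @ x0))
print("K (LMI):", K_lmi, "  K (LQR):", -np.linalg.solve(R, B.T @ P))
```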

Remark: This idea can be further extended to model predictive control. LMI tools would be used for the optimization at each step, and the controller gain would be calculated based on the optimized solution of the LMIs.

6 Conclusions
In this paper, we summarize and explore some of the facts about the relation between LMIs and AREs, namely, where the ARE solution sits within the convex feasible set of the corresponding LMI. Understanding this not only gives us the link between the classical linear control approach and the LMI control design approach, but also helps us to design controllers with better performance and more robustness.

A Appendix

Lemma 10 Suppose the inequality $L(A, X) + X(\gamma^{-2}B_1B_1^T - B_2B_2^T)X + C_1^TC_1 < 0$ has a solution $X_0 = X_0^T \in \mathbb{R}^{n \times n}$. Then the Hamiltonian matrix $H_\infty$ has no eigenvalue on the imaginary axis. If moreover $X_0 > 0$, the ARE $L(A, X) + X(\gamma^{-2}B_1B_1^T - B_2B_2^T)X + C_1^TC_1 = 0$ has a stabilizing solution $X_\infty$ satisfying $0 \le X_\infty < X_0$.

The following lemma, given in [7], concerns the solution of AREs.

Lemma 11 Suppose matrices $A$, $W = W^T > 0$ and $Q = Q^T$ are given. If the ARE
$$PWP = PA + A^T P + Q$$
has a positive definite symmetric solution $P$, then for any $0 < W_1 \le W$ and $Q_1 \ge Q$, the ARE
$$P_1 W_1 P_1 = P_1 A + A^T P_1 + Q_1$$
has a positive definite symmetric solution $P_1 \ge P$.

References
[1] J. C. Doyle, K. Glover, P. P. Khargonekar and B. A. Francis, "State-space solutions to standard H2 and H∞ control problems," IEEE Trans. on Automatic Control, vol. 34, no. 8, 1989.
[2] B. A. Francis, "The linear multivariable regulator problem," SIAM J. Control and Optimization, vol. 15, pp. 486-505, 1977.
[3] W. M. Haddad and D. S. Bernstein, "Explicit construction of quadratic Lyapunov functions for the small gain, passivity, circle and Popov theorems and their applications to robust stability," in Proc. IEEE 30th CDC, 1991, pp. 2618-2623.
[4] P. Gahinet and P. Apkarian, "A linear matrix inequality approach to H∞ control," Int. J. Robust and Nonlinear Control, vol. 4, pp. 421-448, 1994.
[5] W. M. Haddad and D. S. Bernstein, "Parameter-dependent Lyapunov functions and the Popov criterion in robust analysis and synthesis," IEEE Trans. on Automatic Control, vol. 40, pp. 536-543, 1995.
[6] J. T. Wen, "Time domain and frequency domain conditions for strict positive realness," IEEE Trans. on Automatic Control, vol. 33, no. 10, pp. 988-992, 1988.
[7] J. C. Willems, "Least squares stationary optimal control and the algebraic Riccati equation," IEEE Trans. on Automatic Control, vol. 16, pp. 621-634, 1971.
[8] J. C. Willems, "Dissipative dynamical systems, Part I and II," Arch. Rational Mechanics and Analysis, vol. 45, no. 5, pp. 321-393, 1972.
[9] P. P. Khargonekar, I. R. Petersen and K. Zhou, "Robust stabilization of uncertain linear systems: quadratic stabilizability and H∞ control theory," IEEE Trans. on Automatic Control, vol. 35, no. 3, pp. 356-361, 1990.
[10] K. Zhou, P. P. Khargonekar, J. Stoustrup and H. H. Niemann, "Robust stability and performance of uncertain systems in state space," in Proc. 31st IEEE CDC, 1992, pp. 662-668.
[11] S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA, 1994.
[12] P. Gahinet, A. Nemirovski, A. J. Laub and M. Chilali, LMI Control Toolbox, The MathWorks, 1995.
[13] M. Green and D. J. N. Limebeer, Linear Robust Control, Prentice Hall, 1995.
[14] H. K. Khalil, Nonlinear Systems, Prentice Hall, 1996.
[15] D. E. Kirk, Optimal Control Theory: An Introduction, Prentice Hall, 1970.
[16] A. A. Stoorvogel, The H∞ Control Problem: A State-Space Approach, Prentice Hall, 1992.
[17] K. Zhou, J. C. Doyle and K. Glover, Robust and Optimal Control, Prentice Hall, 1996.



