Analysis of instantaneous control for the Burgers equation

https://doi.org/10.1016/S0362-546X(01)00750-7

Introduction

In this research we are concerned with instantaneous control applied to the Burgers equation. As a model we take the distributed optimal control problem

(P)   min J(y,u) = \frac{1}{2}\int_0^T\int_\Omega |y-z|^2\,dx\,dt + \frac{\alpha}{2}\int_0^T\int_{\Omega_c} |u|^2\,dx\,dt

subject to

y_t - \nu y'' + y y' = u\,\chi_{\Omega_c} \quad\text{in } (0,T)\times\Omega, \qquad y = 0 \quad\text{on } (0,T)\times\partial\Omega, \qquad y(0) = \varphi \quad\text{in } \Omega.

Here the control target is to match the given desired state z in the L²-sense by adjusting the body force u in a control volume Ω_c ⊆ Ω = (0,1) with minimal energy and work (also measured in the L²-sense). The first term in the cost functional measures the physical objective, the second one the size of the control, where the parameter α > 0 plays the role of a weight. Instantaneous control is a suboptimal control approach which yields controls whose quality compares to that of the optimal ones, but which can be computed at significantly lower numerical cost. The main aim of this work is to analyze the stability properties of this suboptimal control approach in terms of the parameters involved in the method.
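To fix ideas, the following minimal Python sketch (not taken from the paper; the grid, the rectangle-rule quadrature, and all names are illustrative assumptions) evaluates the two terms of the cost functional for grid functions y, z, u given on an equidistant space-time grid, with an indicator vector for the control volume Ω_c.

```python
import numpy as np

def cost_functional(y, z, u, chi_c, dx, dt, alpha):
    """Discrete analogue of J(y, u): tracking term 1/2 ||y - z||^2 over
    (0,T) x Omega plus control term alpha/2 ||u||^2 over (0,T) x Omega_c.

    y, z, u : arrays of shape (nt, nx) on an equidistant space-time grid
    chi_c   : array of shape (nx,), indicator of the control volume Omega_c
    dx, dt  : mesh sizes in space and time
    alpha   : control weight, alpha > 0
    """
    tracking = 0.5 * np.sum((y - z) ** 2) * dx * dt
    control = 0.5 * alpha * np.sum(chi_c * u ** 2) * dx * dt
    return tracking + control
```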

If the control goal is primarily to match the desired state, without considering the minimization of energy and work, linear body force feedback control can be used instead of optimal control [9], [15], [16]. The idea behind instantaneous control in this context is two-fold: to construct a feedback law which matches the desired state while at the same time keeping the control costs moderate. The method works as follows. The uncontrolled Burgers equation is discretized with respect to time. Then, at selected time slices an instantaneous version of the cost functional is approximately minimized subject to a stationary quasi-Burgers equation, whose structure depends on the chosen discretization method. The control obtained is used to steer the system to the next time slice, where the procedure is repeated.
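For illustration only, assume the time discretization is a plain implicit Euler step of size Δt (the structure of the stationary quasi-Burgers equation in the paper depends on the scheme actually chosen there). At the time slice t_{k+1} one then approximately minimizes the instantaneous functional

J_k(y,u) = \frac{1}{2}\int_\Omega |y - z(t_{k+1})|^2\,dx + \frac{\alpha}{2}\int_{\Omega_c} |u|^2\,dx

subject to the stationary quasi-Burgers equation

\frac{y - y^k}{\Delta t} - \nu y'' + y y' = u\,\chi_{\Omega_c} \quad\text{in }\Omega, \qquad y = 0 \quad\text{on }\partial\Omega,

where y^k denotes the state accepted at the previous time slice; the approximate minimizer u is then used to advance the system.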

Instantaneous control is closely related to receding horizon control (rhc) or model predictive control (mpc) with finite time horizon [8], [19], [20]. Its difference from (rhc) and (mpc) can best be explained in terms of a chess analogy proposed by Bewley et al. in [3]: (rhc) and (mpc) compute the next n, say, optimal moves and apply the first one to steer the system to the next time slice. Instantaneous control computes an approximation to the next optimal move by applying exactly one step of the gradient method to the instantaneous control problem and applies the approximate control to advance to the next time slice. In terms of the chess analogy, applying instantaneous control corresponds to playing Blitz. Clearly, the instantaneous control technique cannot be expected to stabilize arbitrary dynamical systems, compare the citations above. However, as we shall show for the Burgers equation, it is capable of enhancing the stability properties of dissipative systems. By its nature, instantaneous control is a discrete-in-time feedback control approach and, as will be shown, can be interpreted as the stable time discretization of a closed-loop control law [13].

The optimal control problem (P) was studied by the second author in [24], where the augmented Lagrangian SQP technique was used to solve (P). Related contributions were presented in [1], where optimality systems and the existence of optimal controls for the control of the instationary Navier–Stokes equations in two space dimensions are discussed. A gradient technique was proposed in [10] to solve (P) with the Navier–Stokes equations as subsidiary condition; similar approaches to the numerical solution of the related optimal control problem can be found in [2].

Instantaneous control was, among other applications, used to control the Burgers equation with stochastic forcing in [6]. Thereafter this control technique was successfully applied to compute suboptimal controls for a great variety of fluid-mechanical control problems [4], [5], [11], [14], [17], [18] and, more recently, also as a real-time control approach to the cooling of steel [22]. However, as far as the authors know, stability investigations for its application to the control of infinite-dimensional systems have not yet been provided. A stability analysis of the approach for finite-dimensional systems with distributed controls can be found in [13]. In the present work we extend these results to the Burgers equation. The analysis for the method applied to the instationary Navier–Stokes equations is provided by the first author in [12].

The paper is organized as follows. Section 2 is devoted to the study of (P), which serves as the model problem for the validation of instantaneous control. We briefly discuss existence and second-order sufficiency for solutions of this control problem. Section 3 is devoted to the derivation of the instantaneous control approach and the formulation of the algorithm. In Section 4 we relate the discrete-in-time control approach to a continuous closed-loop control law and prove stability of both the continuous and the discrete controller. Finally, in Section 5 we present numerical examples which illustrate the theoretical results.

Throughout this work c and C denote global generic constants whose dependencies are mentioned when necessary.

Optimal control

For given T > 0 let Q = (0,T)×Ω, where Ω = (0,1) ⊂ ℝ. We set V = H_0^1(Ω), H = L^2(Ω) and identify the Hilbert space H with its dual H'. On H we use the common inner product and endow the Hilbert space V with the inner product

(\phi,\psi)_V = (\phi',\psi')_H \quad\text{for } \phi,\psi \in V.

Moreover, by L^2(0,T;V) we denote the space of measurable abstract functions φ: [0,T] → V which are square integrable, i.e.,

\int_0^T \|\phi(t,\cdot)\|_V^2\,dt < \infty.

We denote, for instance, by u(t) the function u(t,·) considered as a function of x only when t is fixed.
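A brief remark not spelled out in the snippet: since Ω = (0,1) is bounded, the Poincaré inequality ensures that this V inner product induces a norm equivalent to the usual H¹ norm, so the choice is legitimate:

\|\phi\|_H \le c_P\,\|\phi'\|_H \quad\text{for all } \phi \in H_0^1(\Omega)
\quad\Longrightarrow\quad
\|\phi'\|_H \le \|\phi\|_{H^1} \le (1 + c_P^2)^{1/2}\,\|\phi'\|_H .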

The control acts on a

Instantaneous control strategy

We now describe the moving-horizon approach that is the main subject of our investigation.

The uncontrolled Burgers equation is discretized with respect to time. At every time step the resulting stationary tracking-type control problem is approximately solved by applying exactly one gradient step with step size ρ > 0. The control obtained is used to steer the state to the next time slice and, in the long run, hopefully towards the desired state. By patching together the controls one obtains a
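The following Python sketch illustrates this loop in the simplest possible setting. It is not the paper's implementation: the explicit finite-difference step, the resulting gradient formula, and all names are assumptions made only for illustration. In particular, for this explicit scheme the gradient of the instantaneous functional at the initial guess u = 0 is available in closed form, so no adjoint solve is shown.

```python
import numpy as np

def burgers_step(y, u, chi_c, dx, dt, nu):
    """One explicit finite-difference step of the controlled Burgers equation
    with homogeneous Dirichlet boundary data (illustrative scheme only)."""
    y_new = y.copy()
    y_new[1:-1] = (y[1:-1]
                   + dt * (nu * (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2
                           - y[1:-1] * (y[2:] - y[:-2]) / (2.0 * dx)
                           + chi_c[1:-1] * u[1:-1]))
    return y_new

def instantaneous_control(y0, z, chi_c, dx, dt, nu, rho, nt):
    """At every time slice apply exactly one gradient step with step size rho
    to the instantaneous functional 0.5*||y_next(u) - z_k||^2 + 0.5*alpha*||u||^2,
    starting from the initial guess u = 0, then use the control obtained to
    advance the state.

    Since the explicit step above depends affinely on u, the L^2 gradient at
    u = 0 is dt * chi_c * (y_next(0) - z_k); the alpha-term contributes nothing
    at u = 0."""
    y = y0.copy()
    controls = []
    for k in range(nt):
        predictor = burgers_step(y, np.zeros_like(y), chi_c, dx, dt, nu)
        grad0 = dt * chi_c * (predictor - z[k])    # gradient at the initial guess u = 0
        u = -rho * grad0                           # exactly one gradient step
        y = burgers_step(y, u, chi_c, dx, dt, nu)  # steer the state to the next slice
        controls.append(u)                         # patch the controls together
    return np.array(controls), y
```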

The control laws

Instantaneous control allows for an interpretation as a nonlinear discrete-in-time suboptimal closed-loop control method, which turns out to be the stable time discretization of a continuous closed-loop controller. We now outline the reconstruction of the continuous controller from the discrete-in-time control approach given by Algorithm 1. Furthermore, we present existence and uniqueness results for the continuous controller, and investigate the asymptotic behavior of both the continuous
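To indicate why such an interpretation is plausible, here is a heuristic sketch (not the paper's derivation): writing the instantaneous problem in reduced form, its gradient is ∇Ĵ(u) = αu + B*λ(u), where λ denotes the adjoint state and B the control operator u ↦ uχ_{Ω_c}. One gradient step with step size ρ from the initial guess u = 0 then yields

u = 0 - \rho\,\nabla\hat J(0) = -\rho\,B^{*}\lambda(0),

a quantity that depends on the current state only through the adjoint variable, i.e. a discrete-in-time feedback law.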

Numerical experiments

In this section we present numerical examples that demonstrate the stability properties of the continuous controller developed in (20) and of the discrete controller given by (29). We begin by comparing the results of the optimal control problem with those of instantaneous control. The computer programs are written in MATLAB version 5.3 and are executed on a Pentium III 550 MHz personal computer.

In the test runs we use piecewise linear finite elements with 50 degrees of freedom for the spatial
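For orientation, the following sketch reconstructs the kind of spatial discretization described here, namely mass and stiffness matrices for piecewise linear finite elements with 50 degrees of freedom on (0,1); the actual programs in the paper are written in MATLAB and may differ in details.

```python
import numpy as np

def p1_matrices(n_dof):
    """Mass matrix M and stiffness matrix K for piecewise linear finite
    elements on (0, 1) with homogeneous Dirichlet boundary conditions,
    using n_dof interior nodes on an equidistant mesh."""
    h = 1.0 / (n_dof + 1)
    M = (np.diag(np.full(n_dof, 2.0 * h / 3.0))
         + np.diag(np.full(n_dof - 1, h / 6.0), 1)
         + np.diag(np.full(n_dof - 1, h / 6.0), -1))
    K = (np.diag(np.full(n_dof, 2.0 / h))
         + np.diag(np.full(n_dof - 1, -1.0 / h), 1)
         + np.diag(np.full(n_dof - 1, -1.0 / h), -1))
    return M, K

M, K = p1_matrices(50)   # 50 spatial degrees of freedom as in the test runs
```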


References (24)

  • C.E. García et al., Model predictive control: Theory and practice—a survey, Automatica (1989)
  • F. Abergel et al., On some control problems in fluid mechanics, Theoret. Comput. Fluid Dyn. (1990)
  • M. Berggren, Numerical solution of a flow control problem: Vorticity reduction by dynamic boundary action, SIAM J. Sci. Comput. (1998)
  • T.R. Bewley, P. Moin, R. Temam, DNS-based predictive control of turbulence: an optimal benchmark for feedback...
  • H. Choi, Suboptimal control of turbulent flow using control theory, Proceedings of the International Symposium on...
  • H. Choi, M. Hinze, K. Kunisch, Instantaneous control of backward-facing-step flows, Applied Numer. Math., 31 (1999)...
  • H. Choi et al., Feedback control for unsteady flow and its application to the stochastic Burgers equation, J. Fluid Mech. (1993)
  • R. Dautray et al., Functional and variational methods (1990)
  • M.D. Gunzburger, S. Manservisi, Analysis and approximation for linear feedback control for tracking the velocity in...
  • M.D. Gunzburger et al., Analysis and approximation of the velocity tracking problem for Navier–Stokes flows with distributed control, SIAM J. Numer. Anal. (2000)
  • D.C. Hill, Drag reduction strategies, CTR Annual Research Briefs, 1993. Center for Turbulence Research, Stanford...
  • M. Hinze, Optimal and instantaneous control of the instationary Navier–Stokes equations, Habilitationsschrift, 1999,...