121

On-line Traffic Signalization using Robust Feedback Control

Yu, Tungsheng 23 January 1998 (has links)
The traffic signal affects the life of virtually everyone every day. Effective signal systems can reduce delays, stops, fuel consumption, pollutant emissions, and accidents. The problems caused by rapid growth in traffic congestion call for more effective traffic signalization using robust feedback control methodology. On-line traffic-responsive signalization is based on real-time traffic conditions and selects cycle, split, phase, and offset for the intersection according to detector data. A robust traffic feedback control begins with assembling traffic demands, traffic facility supply, and a feedback control law for the existing traffic operating environment. This information serves as the input to the traffic control process, which in turn provides an output in terms of the desired performance under varying conditions. Traffic signalization belongs to a class of hybrid systems, since differential equations model the continuous behavior of the traffic flow dynamics and finite-state machines model the discrete state changes of the controller. A complicating aspect, due to the state-space constraint that queue lengths are necessarily nonnegative, is that the continuous-time system dynamics is actually the projection of a smooth system of ordinary differential equations. This also leads to discontinuities in the boundary dynamics of a sort common in queueing problems. The project is concerned with the design of a feedback controller to minimize accumulated queue lengths in the presence of unknown inflow disturbances, both at an isolated intersection and in a traffic network with several signalized intersections. A dynamical system has finite L₂-gain if it is dissipative in an appropriate sense; the H∞-control problem therefore becomes one of designing a controller such that the resulting closed-loop system is dissipative, and correspondingly a storage function exists. The major contributions of this thesis are: 1) state-space models for both isolated multi-phase intersections and a class of queueing networks; 2) H∞ problem formulations for control systems with persistent disturbances; 3) a treatment of the projection dynamics that accounts for the constraints on the state variables; 4) a formal study of the problem as a hybrid system; and 5) traffic-actuated feedback control laws for multi-phase intersections. Although we have presented a mathematical robust feedback solution for traffic signalization, some distance remains before physical implementation. Robust adaptive control is an interesting research direction for future traffic signalization. / Ph. D.
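For context, the dissipativity characterization of finite L₂-gain that this abstract invokes can be written in the following generic form; the system structure, storage function V, and gain bound γ are standard nonlinear H∞ notation, not the thesis's specific traffic model.

```latex
% Generic dissipation inequality characterizing finite L2-gain <= gamma
% from disturbance w to penalized output z (standard notation, assumed).
\begin{aligned}
  \dot{x} &= f(x) + g_1(x)\,w + g_2(x)\,u, \qquad z = h(x,u),\\
  \exists\, V \ge 0:\quad V(x(T)) - V(x(0)) &\le
  \int_0^T \bigl(\gamma^2\,\lVert w(t)\rVert^2 - \lVert z(t)\rVert^2\bigr)\,dt
  \qquad \text{for all } T \ge 0 \text{ and all } w .
\end{aligned}
```

The H∞ design task is then to choose the feedback u = k(x) so that such a storage function V exists for the closed loop.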
122

Finite-time partial stability, stabilization, semistabilization, and optimal feedback control

L'afflitto, Andrea 08 June 2015 (has links)
Asymptotic stability is a key notion of system stability for controlled dynamical systems as it guarantees that the system trajectories are bounded in a neighborhood of a given isolated equilibrium point and converge to this equilibrium over the infinite horizon. In some applications, however, asymptotic stability is not the appropriate notion of stability. For example, for systems with a continuum of equilibria, every neighborhood of an equilibrium contains another equilibrium and a nonisolated equilibrium cannot be asymptotically stable. Alternatively, in stabilization of spacecraft dynamics via gimballed gyroscopes, it is desirable to find state- and output-feedback control laws that guarantee partial-state stability of the closed-loop system, that is, stability with respect to part of the system state. Furthermore, we may additionally require finite-time stability of the closed-loop system, that is, convergence of the system's trajectories to a Lyapunov stable equilibrium in finite time. The Hamilton-Jacobi-Bellman optimal control framework provides necessary and sufficient conditions for the existence of state-feedback controllers that minimize a given performance measure and guarantee asymptotic stability of the closed-loop system. In this research, we provide extensions of the Hamilton-Jacobi-Bellman optimal control theory to develop state-feedback control laws that minimize nonlinear-nonquadratic performance criteria and guarantee semistability, partial-state stability, finite-time stability, and finite-time partial state stability of the closed-loop system.
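As background for the abstract's reference to the Hamilton-Jacobi-Bellman framework, a minimal sketch of the steady-state HJB condition for a nonlinear-nonquadratic infinite-horizon problem is given below; the dynamics f, G, cost integrand L, and value function V are generic placeholders rather than the thesis's constructions.

```latex
% Steady-state HJB condition for an infinite-horizon nonlinear-nonquadratic
% problem (generic placeholders, assumed for illustration).
\begin{aligned}
  \dot{x} &= f(x) + G(x)\,u, \qquad
  J(x_0, u(\cdot)) = \int_0^{\infty} L\bigl(x(t), u(t)\bigr)\,dt,\\
  0 &= \min_{u}\Bigl[\, L(x,u) + V'(x)\bigl(f(x) + G(x)\,u\bigr) \Bigr],
  \qquad V(0) = 0,\; V(x) > 0 \text{ for } x \neq 0 .
\end{aligned}
```

Under standard assumptions the minimizing u = φ(x) is the optimal stabilizing state feedback; the thesis's extensions replace asymptotic stability in this setting with semistability, partial-state, and finite-time variants.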
123

Dynamique des populations : contrôle stochastique et modélisation hybride du cancer / Population dynamics: stochastic control and hybrid modelling of cancer

Claisse, Julien 04 July 2014 (has links) (PDF)
The aim of this thesis is to develop the theory of stochastic control and its applications in population dynamics. From a theoretical point of view, we study finite-horizon stochastic control problems for diffusion processes, nonlinear branching processes, and branching-diffusion processes. In each case, we proceed by the dynamic programming method, taking care to establish rigorously a conditioning argument analogous to the strong Markov property for the controlled processes. The dynamic programming principle then allows us to prove that the value function is a solution (classical or viscosity) of the corresponding Hamilton-Jacobi-Bellman equation. In the smooth case, we also identify an optimal Markovian control through a verification theorem. On the applications side, we are interested in the mathematical modelling of cancer and of its therapeutic strategies. More precisely, we build a hybrid model of tumour growth that accounts for the fundamental role of acidity in the evolution of the disease. The targets of therapy appear explicitly as parameters of the model, so that it can be used as a basis for evaluating therapeutic strategies.
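As a pointer to the conditioning argument mentioned above, the dynamic programming principle for a finite-horizon control problem can be stated schematically as follows; the value function v, admissible controls α, and running reward f are illustrative notation, not the thesis's exact setting.

```latex
% Schematic dynamic programming principle on [t, T] for a controlled
% Markov process X^{t,x,\alpha}; theta is any stopping time in [t, T].
v(t,x) \;=\; \sup_{\alpha}\,
  \mathbb{E}\Bigl[\int_t^{\theta} f\bigl(s, X_s^{t,x,\alpha}, \alpha_s\bigr)\,ds
  \;+\; v\bigl(\theta, X_{\theta}^{t,x,\alpha}\bigr)\Bigr].
```

Sending θ ↓ t formally yields the Hamilton-Jacobi-Bellman equation that the value function solves in the classical or viscosity sense.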
124

Stochastic Infinity-Laplacian equation and One-Laplacian equation in image processing and mean curvature flows : finite and large time behaviours

Wei, Fajin January 2010 (has links)
The existence of pathwise stationary solutions of this stochastic partial differential equation (SPDE for short) is demonstrated. In Part II, a connection between a certain kind of state-constrained controlled forward-backward stochastic differential equations (FBSDEs) and Hamilton-Jacobi-Bellman (HJB) equations is demonstrated. A special case provides a probabilistic representation of some geometric flows, including the mean curvature flows. Part II also includes a probabilistic proof of the finite-time existence of the mean curvature flows.
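For readers unfamiliar with the FBSDE-PDE connection this abstract relies on, a generic decoupled forward-backward system and its nonlinear Feynman-Kac interpretation are sketched below; the coefficients b, σ, f, g are illustrative, and the state constraints treated in the thesis are not shown.

```latex
% Generic decoupled FBSDE on [t, T] (illustrative coefficients).
\begin{aligned}
  X_s &= x + \int_t^s b(r, X_r)\,dr + \int_t^s \sigma(r, X_r)\,dW_r,\\
  Y_s &= g(X_T) + \int_s^T f(r, X_r, Y_r, Z_r)\,dr - \int_s^T Z_r\,dW_r .
\end{aligned}
```

Under suitable conditions Y_t = u(t, X_t), where u solves a semilinear (HJB-type) parabolic equation with terminal condition u(T, ·) = g; this is the kind of probabilistic representation that, in special cases, yields geometric flows such as mean curvature flow.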
125

Modélisation asymptotique pour la simulation aux grandes échelles de la combustion turbulente prémélangée / Asymptotic modelling for large-eddy simulation of premixed turbulent combustion

Khouider, Boualem January 2002 (has links)
Thesis originally distributed as part of a pilot project of the Presses de l'Université de Montréal / Centre d'édition numérique UdeM (1997-2008), with the author's authorization.
126

Optimal Bounded Control and Relevant Response Analysis for Random Vibrations

Iourtchenko, Daniil V 25 May 2001 (has links)
In this dissertation, certain problems of stochastic optimal control and relevant analysis of random vibrations are considered. The dynamic programming approach is used to find an optimal control law for a linear single-degree-of-freedom system subjected to Gaussian white-noise excitation. To minimize the system's mean response energy, a control force bounded in magnitude is applied. This approach reduces the problem of finding the optimal control law to that of finding a solution to the Hamilton-Jacobi-Bellman (HJB) partial differential equation. A solution to this partial differential equation (PDE) is obtained by a newly developed 'hybrid' solution method. The application of a bounded-in-magnitude control law always introduces a certain type of nonlinearity into the system's stochastic equation of motion. Such systems may be analyzed by the Energy Balance method, which is introduced and developed in this dissertation. A comparison of analytical results obtained by the Energy Balance method and by the stochastic averaging method with numerical results is provided. The comparison indicates that the Energy Balance method is more accurate than the well-known stochastic averaging method.
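A minimal sketch of the HJB equation behind such a bounded-control problem is given below, assuming a unit-mass single-degree-of-freedom oscillator with displacement x, velocity v, control bound R, white-noise intensity D, and response energy as the running cost; the exact model and cost in the dissertation may differ.

```latex
% HJB equation for minimizing expected accumulated response energy of an
% SDOF oscillator under a magnitude-bounded control (assumed formulation).
\begin{aligned}
  0 &= \partial_t V + v\,\partial_x V
      + \min_{|u|\le R}\Bigl[\bigl(-2\zeta\omega v - \omega^2 x + u\bigr)\partial_v V\Bigr]
      + \tfrac{D}{2}\,\partial_{vv} V
      + \tfrac{1}{2}\bigl(\omega^2 x^2 + v^2\bigr),\\
  u^{\ast} &= -R\,\operatorname{sgn}\bigl(\partial_v V\bigr)
  \quad \text{(dry-friction-like bang-bang law).}
\end{aligned}
```

The bang-bang minimizer is what introduces the nonlinearity into the closed-loop stochastic equation of motion that the Energy Balance method is then used to analyze.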
127

Controle H-infinito não linear e a equação de Hamilton Jacobi-Isaacs. / Nonlinear H-infinity control and the Hamilton-Jacobi-Isaacs equation.

Ferreira, Henrique Cezar 10 December 2008 (has links)
O objetivo desta tese é investigar aspectos práticos que facilitem a aplicação da teoria de controle H∞ não linear em projetos de sistemas de controle. A primeira contribuição deste trabalho é a proposta do uso de funções de ponderação com dinâmica no projeto de controladores H∞ não lineares. Essas funções são usadas no projeto de controladores H∞ lineares para rejeição de perturbações, ruídos, atenuação de erro de rastreamento, dentre outras especificações. O maior obstáculo para a aplicação prática da teoria de controle H∞ não linear é a dificuldade para resolver simultaneamente as duas equações de Hamilton-Jacobi-Isaacs relacionadas ao problema de realimentação de estados e injeção da saída. Não há métodos sistemáticos para resolver essas duas equações diferenciais parciais não lineares, equivalentes às equações de Riccati da teoria de controle H∞ linear. A segunda contribuição desta tese é um método para obter a injeção da saída transformando a equação de Hamilton-Jacobi-Isaacs em uma sequência de equações diferenciais parciais lineares, que são resolvidas usando o método de Galerkin. Controladores H∞ não lineares para um sistema de levitação magnética são obtidos usando o método clássico de expansão em série de Taylor e o método proposto, para comparação. / The purpose of this thesis is to investigate practical aspects that facilitate the application of nonlinear H∞ theory in control systems design. Firstly, it is shown that dynamic weighting functions can be used to improve the performance and robustness of the nonlinear H∞ controller, as in the design of H∞ controllers for linear plants. The biggest bottleneck to practical application of nonlinear H∞ control theory has been the difficulty in solving the Hamilton-Jacobi-Isaacs equations associated with the design of a state feedback and an output injection gain. There is no systematic numerical approach for solving these first-order, nonlinear partial differential equations, which reduce to Riccati equations in the linear context. In this work, successive approximation and Galerkin approximation methods are combined to derive an algorithm that produces an output injection gain. Designs of nonlinear H∞ controllers obtained by the well-established Taylor approximation and by the proposed Galerkin approximation method, applied to a magnetic levitation system, are presented for comparison purposes.
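For reference, the state-feedback Hamilton-Jacobi-Isaacs equation of standard nonlinear H∞ theory is sketched below for dynamics ẋ = f(x) + g₁(x)w + g₂(x)u with penalized output z = (h(x), u) and attenuation level γ; this is textbook notation, not the thesis's Galerkin construction.

```latex
% State-feedback Hamilton-Jacobi-Isaacs equation (standard form, assumed).
\begin{aligned}
  0 &= \frac{\partial V}{\partial x} f(x)
     + \frac{1}{4}\,\frac{\partial V}{\partial x}
       \Bigl[\tfrac{1}{\gamma^{2}}\, g_1(x) g_1^{\top}(x) - g_2(x) g_2^{\top}(x)\Bigr]
       \frac{\partial V}{\partial x}^{\!\top}
     + h^{\top}(x)\, h(x),\\
  u^{\ast} &= -\tfrac{1}{2}\, g_2^{\top}(x)\,\frac{\partial V}{\partial x}^{\!\top},
  \qquad
  w^{\ast} = \tfrac{1}{2\gamma^{2}}\, g_1^{\top}(x)\,\frac{\partial V}{\partial x}^{\!\top}.
\end{aligned}
```

The output-injection design addressed in the thesis involves a companion HJI equation, which the proposed method replaces by a sequence of linear partial differential equations solved with the Galerkin method.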
128

Optimizing Reflected Brownian Motion: A Numerical Study

Zihe Zhou (7483880) 17 October 2019 (has links)
This thesis focuses on optimizing a generic objective function based on reflected Brownian motion (RBM). We investigate several approaches, including a partial differential equation approach, in which we write the objective function in terms of a Hamilton-Jacobi-Bellman equation using the dynamic programming principle, and a gradient descent approach, in which we use two different gradient estimators. We provide extensive numerical results for the gradient descent approach and discuss the difficulties and future research opportunities for this problem.
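As a concrete illustration of the gradient-descent approach described above, here is a minimal, hypothetical Python sketch: it estimates a time-averaged holding cost of one-dimensional RBM by Euler simulation with reflection at zero and descends on a drift parameter using a central finite-difference gradient estimator. The cost function, drift parametrization, and all step sizes are assumptions, not the thesis's setup, and the finite-difference estimator is only one of many possible gradient estimators.

```python
# Illustrative sketch only (not the thesis's code): gradient descent with a
# central finite-difference gradient estimator on a Monte-Carlo objective
# driven by one-dimensional reflected Brownian motion (RBM).
import numpy as np

rng = np.random.default_rng(0)

def rbm_cost(theta, T=20.0, dt=0.02, n_paths=1000, holding=1.0):
    """Time-averaged holding cost of RBM with drift -theta, reflected at 0,
    plus a quadratic control penalty (assumed cost structure)."""
    n_steps = int(T / dt)
    x = np.zeros(n_paths)
    running = np.zeros(n_paths)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = np.maximum(x - theta * dt + dw, 0.0)   # Euler step, then reflect at 0
        running += holding * x * dt
    return running.mean() / T + 0.5 * theta ** 2   # holding cost + control cost

def fd_gradient(f, theta, h=0.05):
    """Central finite-difference estimator of df/dtheta (noisy, since the two
    evaluations do not reuse common random numbers)."""
    return (f(theta + h) - f(theta - h)) / (2.0 * h)

theta, step = 0.5, 0.2
for _ in range(20):
    theta = max(theta - step * fd_gradient(rbm_cost, theta), 0.0)
print("approximate minimizing drift:", theta)
```

Reusing common random numbers across the two perturbed evaluations would reduce the variance of the finite-difference estimate; that is one of the practical issues such a numerical study has to confront.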
129

Hamilton-Jacobi Theory and Superintegrable Systems

Armstrong, Craig Keith January 2007 (has links)
Hamilton-Jacobi theory provides a powerful method for extracting the equations of motion out of some given systems in classical mechanics. On occasion it allows some systems to be solved by the method of separation of variables. If a system with n degrees of freedom has 2n - 1 constants of the motion that are polynomial in the momenta, then that system is called superintegrable. Such a system can usually be solved in multiple coordinate systems if the constants of the motion are quadratic in the momenta. All superintegrable two dimensional Hamiltonians of the form H = p_x^2 + p_y^2 + V(x,y), with constants that are quadratic in the momenta were classified by Kalnins et al [5], and the coordinate systems in which they separate were found. We discuss Hamilton-Jacobi theory and its development from a classical viewpoint, as well as superintegrability. We then proceed to use the theory to find equations of motion for some of the superintegrable Hamiltonians from Kalnins et al [5]. We also discuss some of the properties of the Poisson algebra of those systems, and examine the orbits.
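For context, the time-independent Hamilton-Jacobi equation for the class of Hamiltonians quoted in the abstract takes the following textbook form, with W Hamilton's characteristic function and E the energy; this is standard material, included only as a pointer.

```latex
% Time-independent Hamilton-Jacobi equation for H = p_x^2 + p_y^2 + V(x,y).
\Bigl(\frac{\partial W}{\partial x}\Bigr)^{2}
 + \Bigl(\frac{\partial W}{\partial y}\Bigr)^{2} + V(x,y) \;=\; E .
```

Separability means that, in suitable coordinates, W splits additively into one-variable pieces; for the superintegrable systems classified in [5] with quadratic constants of the motion, such separation is typically available in more than one coordinate system.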
130

Approche probabiliste des particules collantes et système de gaz sans pression / A probabilistic approach to sticky particles and the pressureless gas system

Moutsinga, Octave 16 June 2003 (has links) (PDF)
For each time $t$, we construct the dynamics of sticky particles whose mass is initially distributed according to a distribution function $F_0$, with velocity $u_0$, from the convex envelope $H(\cdot,t)$ of the function $m \in (0,1) \mapsto \int_a^m \bigl( F_0^{-1}(z) + t\,u_0(F_0^{-1}(z)) \bigr)\,dz$. Here, $F_0^{-1}$ is one of the two inverse functions of $F_0$. We show that the two stochastic processes $X_t^-(m) = \partial_m^- H(m,t)$ and $X_t^+(m) = \partial_m^+ H(m,t)$, defined on the probability space $([0,1], \mathcal{B}, \lambda)$, are indistinguishable, and they model the trajectories of the particles. The process $X_t := X_t^- = X_t^+$ is a solution of the stochastic differential equation (SDE) $\frac{dX_t}{dt} = \mathbb{E}[\,u_0(X_0) \mid X_t\,]$ such that $P(X_0 \leq x) = F_0(x)$ for all $x$. The inverse $M_t := M(\cdot,t)$ of the function $m \mapsto \partial_m H(m,t)$ is the mass distribution function at time $t$; it is also the distribution function of the random variable $X_t$. We show the existence of a flow $(\phi(x,t,M_s,u_s))_{s<t}$ such that $X_t = \phi(X_s,t,M_s,u_s)$, where $u_s(x) = \mathbb{E}[\,u_0(X_0) \mid X_s = x\,]$ is the velocity function of the particles at time $s$. If $\frac{dF_0^n}{dx}$ converges weakly to $\frac{dF_0}{dx}$, then the sequence of flows $\phi(\cdot,\cdot,F_0^n,u_0)$ converges uniformly, on every compact set, to $\phi(\cdot,\cdot,F_0,u_0)$. We then recover and extend certain results from partial differential equations, namely that the function $(x,t) \mapsto M(x,t)$ is the entropy solution of a scalar conservation law with initial datum $F_0$, and that the family $\bigl(\rho(dx,t) = P(X_t \in dx),\ u(x,t) = \mathbb{E}[\,u_0(X_0) \mid X_t = x\,]\bigr)_{t>0}$ is a weak solution of the pressureless gas system with initial data $\frac{dF_0}{dx}$, $u_0$. This thesis also contains other solutions of the stochastic differential equation (SDE) above.
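The pressureless gas system referred to at the end of the abstract is, in its usual conservation form, the following pair of equations; this is standard notation, shown only for context.

```latex
% Pressureless gas (sticky-particle) system with the abstract's initial data.
\begin{aligned}
  \partial_t \rho + \partial_x(\rho u) &= 0,\\
  \partial_t(\rho u) + \partial_x(\rho u^{2}) &= 0,
  \qquad \rho(\cdot,0) = \frac{dF_0}{dx}, \quad u(\cdot,0) = u_0 .
\end{aligned}
```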
