121

Design factors for the communication architecture of distributed discrete event simulation systems

Hoaglund, Catharine McIntire 01 January 2006 (has links)
The purpose of the thesis was to investigate the influence that communication architecture decisions have on the performance of a simulation system with distributed components. In particular, the objective was to assess the relative importance of factors affecting the reliability and variability of an external data interface to the performance of the simulation, as compared to factors within the simulation itself.
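To picture the kind of factor study described above, here is a minimal sketch of a two-level factorial screening over communication-architecture factors; the factor names, levels, and toy response model are illustrative assumptions, not data or factors from the thesis.

```python
# Hypothetical two-level factorial screening of communication-architecture
# factors against a factor internal to the simulation.  Factor names, levels,
# and the toy response model are illustrative assumptions, not from the thesis.
from itertools import product

factors = {
    "link_latency_ms": (1.0, 50.0),    # external data-interface factor
    "msg_loss_rate":   (0.0, 0.05),    # external data-interface factor
    "event_density":   (100, 1000),    # factor internal to the simulation
}

def simulated_runtime(link_latency_ms, msg_loss_rate, event_density):
    """Toy response surface standing in for an actual distributed-simulation run."""
    retransmit_penalty = 1.0 / (1.0 - msg_loss_rate)
    return 0.01 * event_density + 0.2 * link_latency_ms * retransmit_penalty

names = list(factors)
runs = []
for levels in product(*(factors[n] for n in names)):
    settings = dict(zip(names, levels))
    runs.append((settings, simulated_runtime(**settings)))

# Main effect of each factor: mean response at its high level minus at its low level.
for n in names:
    hi = [r for s, r in runs if s[n] == factors[n][1]]
    lo = [r for s, r in runs if s[n] == factors[n][0]]
    print(f"{n:16s} main effect on run time: {sum(hi)/len(hi) - sum(lo)/len(lo):8.2f}")
```

Ranking factors by main effect on the simulated run time is one simple way to compare external-interface factors against factors internal to the simulation.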
122

Rollback Reduction Techniques Through Load Balancing in Optimistic Parallel Discrete Event Simulation

Sarkar, Falguni 05 1900 (has links)
Discrete event simulation is an important tool for modeling and analysis. Some simulation applications, such as telecommunication network performance, VLSI logic circuit design, and battlefield simulation, require an enormous amount of computing resources. One way to satisfy this demand for computing power is to decompose the simulation system into several logical processes (lps) and run them concurrently. In any parallel discrete event simulation (PDES) system, the events are ordered according to their time of occurrence, and for the simulation to be correct, this ordering has to be preserved. There are three approaches to maintaining this ordering. In a conservative system, no lp executes an event unless it is certain that all events with earlier time-stamps have been executed; such systems are prone to deadlock. In an optimistic system, on the other hand, simulation progresses disregarding this ordering while saving the system states regularly. Whenever a causality violation is detected, the system rolls back to a state saved earlier and restarts processing after correcting the error. In a third approach, all the lps participate in the computation of a safe time-window, and all events with time-stamps within this window are processed concurrently.

In optimistic simulation systems, the global virtual time (GVT) is the minimum of the time-stamps of all the events existing in the system. The system cannot roll back to a state prior to GVT, so all such states can be discarded. GVT is used for memory management, load balancing, termination detection, and committing of events; however, GVT computation introduces additional overhead. In optimistic systems, a large number of rollbacks can degrade performance considerably. We have studied the effect of load balancing in reducing the number of rollbacks in such systems. We have designed three load balancing algorithms and implemented two of them on a network of workstations; the third has been analyzed probabilistically. The reason for choosing a network of workstations is its low cost and the availability of efficient message-passing software such as PVM and MPI. All of these load balancing algorithms piggyback on the existing GVT computation algorithms and try to balance the speed of simulation across the lps.

We have also designed an optimal GVT computation algorithm for hypercubes and studied its performance with respect to other GVT computation algorithms by simulating a hypercube in our network cluster. Finally, we use the topological properties of a star network to design an algorithm for computing a safe time-window for parallel discrete event simulation. We have analyzed and simulated the behavior of an open queuing network resembling such an architecture, and our algorithm is extended for hierarchical stars and for recursive window computation.
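To make the rollback, GVT, and fossil-collection mechanics concrete, here is a minimal single-process sketch of an optimistic logical process. It is an illustrative assumption about how such an lp can be organized (anti-messages to other lps and the distributed GVT computation itself are omitted), not code from the dissertation.

```python
import copy

class OptimisticLP:
    """Minimal optimistic logical process: state saving before every event,
    rollback on stragglers, fossil collection below GVT (sketch only)."""

    def __init__(self, initial_state):
        self.state = initial_state
        self.clock = 0.0
        self.pending = []     # list of (timestamp, event callable), kept time-ordered
        self.history = []     # (timestamp, event, state snapshot taken *before* the event)

    def schedule(self, timestamp, event):
        if timestamp < self.clock:
            self._rollback(timestamp)          # straggler: causality violation
        self.pending.append((timestamp, event))
        self.pending.sort(key=lambda te: te[0])

    def _rollback(self, timestamp):
        # Undo every executed event whose timestamp is >= the straggler's,
        # restore the state saved before the earliest undone event, and
        # re-queue the undone events for re-execution.
        while self.history and self.history[-1][0] >= timestamp:
            t, event, snapshot = self.history.pop()
            self.state = snapshot
            self.pending.append((t, event))
        self.pending.sort(key=lambda te: te[0])
        self.clock = self.history[-1][0] if self.history else 0.0

    def fossil_collect(self, gvt):
        # No rollback can ever reach a time earlier than GVT, so states saved
        # before GVT can be discarded to reclaim memory.
        self.history = [h for h in self.history if h[0] >= gvt]

    def step(self):
        if not self.pending:
            return False
        timestamp, event = self.pending.pop(0)
        self.history.append((timestamp, event, copy.deepcopy(self.state)))
        self.clock = timestamp
        event(self.state)                      # the event mutates self.state
        return True
```

A driver would repeatedly call `step()`, deliver incoming messages through `schedule()`, and periodically pass an externally computed GVT to `fossil_collect()`.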
123

Controle ótimo de sistemas com saltos Markovianos e ruído multiplicativo com custos linear e quadrático indefinido. / Indefinite quadratic with linear costs optimal control of Markov jump with multiplicative noise systems.

Paulo, Wanderlei Lima de 01 November 2007 (has links)
Esta tese trata do problema de controle ótimo estocástico de sistemas com saltos Markovianos e ruído multiplicativo a tempo discreto, com horizontes de tempo finito e infinito. A função custo é composta de termos quadráticos e lineares nas variáveis de estado e de controle, com matrizes peso indefinidas. Como resultado principal do problema com horizonte finito, é apresentada uma condição necessária e suficiente para que o problema de controle seja bem posto, a partir da qual uma solução ótima é derivada. A condição e a lei de controle são escritas em termos de um conjunto acoplado de equações de Riccati interconectadas a um conjunto acoplado de equações lineares recursivas. Para o caso de horizonte infinito, são apresentadas as soluções ótimas para os problemas de custo médio a longo prazo e com desconto, derivadas a partir de uma solução estabilizante de um conjunto de equações algébricas de Riccati acopladas generalizadas (GCARE). A existência da solução estabilizante é uma condição suficiente para que tais problemas sejam do tipo bem posto. Além disso, são apresentadas condições para a existência das soluções maximal e estabilizante do sistema GCARE. Como aplicações dos resultados obtidos, são apresentadas as soluções de um problema de otimização de carteiras de investimento com benchmark e de um problema de gestão de ativos e passivos de fundos de pensão do tipo benefício definido, ambos os casos com mudanças de regime nas variáveis de mercado. / This thesis considers the finite-horizon and infinite-horizon stochastic optimal control problem for discrete-time Markov jump linear systems with multiplicative noise. The performance criterion is formed by a linear combination of a quadratic part and a linear part in the state and control variables, and the weighting matrices of the quadratic part are allowed to be indefinite. For the finite-horizon problem, the main result is a necessary and sufficient condition under which the problem is well posed, from which an optimal control law is derived. This condition and the optimal control law are written in terms of a set of coupled generalized Riccati difference equations interconnected with a set of coupled linear recursive equations. For the infinite-horizon problem, a set of generalized coupled algebraic Riccati equations (GCARE) is studied. In this case, a sufficient condition for the existence of the maximal solution and a necessary and sufficient condition for the existence of the mean-square stabilizing solution of the GCARE are presented; the existence of the stabilizing solution is a sufficient condition for these problems to be well posed. Moreover, solutions to the discounted and long-run average cost problems are presented. The results are applied to solve a portfolio optimization problem with benchmark and a defined-benefit pension fund asset-liability management problem, both with regime switching in the market variables.
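For reference, in the standard quadratic-cost case (without the multiplicative noise and the linear cost terms treated in the thesis), the coupled Riccati difference equations and control law for a discrete-time Markov jump linear system with modes i = 1, ..., N and transition probabilities p_ij take the following form; the indefinite-cost, multiplicative-noise GCARE setting of the thesis generalizes these recursions.

```latex
\mathcal{E}_i\bigl(X(k+1)\bigr) = \sum_{j=1}^{N} p_{ij}\, X_j(k+1),
\qquad
K_i(k) = -\Bigl(R_i + B_i^{\top}\mathcal{E}_i\bigl(X(k+1)\bigr)B_i\Bigr)^{-1}
          B_i^{\top}\mathcal{E}_i\bigl(X(k+1)\bigr)A_i,

X_i(k) = Q_i + A_i^{\top}\mathcal{E}_i\bigl(X(k+1)\bigr)A_i
       - A_i^{\top}\mathcal{E}_i\bigl(X(k+1)\bigr)B_i
         \Bigl(R_i + B_i^{\top}\mathcal{E}_i\bigl(X(k+1)\bigr)B_i\Bigr)^{-1}
         B_i^{\top}\mathcal{E}_i\bigl(X(k+1)\bigr)A_i,

u(k) = K_{\theta(k)}(k)\, x(k),
```

where θ(k) is the Markov mode at time k.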
124

Finite memory estimation and control of finite probabilistic systems.

Platzman, L. K. (Loren Kerry), 1951- January 1977 (has links)
Bibliography: leaves 196-200. / Thesis (Ph. D.)--Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science, 1977. / Microfiche copy available in the Institute Archives and Barker Engineering Library. / by Loren Kerry Platzman. / Ph.D.
125

Fault tolerant optimal control

Chizeck, Howard Jay January 1982 (has links)
Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING / Bibliography: leaves 898-903. / by Howard Jay Chizeck. / Sc.D.
126

New Stable Inverses of Linear Discrete Time Systems and Application to Iterative Learning Control

Ji, Xiaoqiang January 2019 (has links)
Digital control needs discrete-time models, but conversion from continuous time, fed by a zero-order hold, to discrete time introduces sampling zeros which are outside the unit circle, i.e. non-minimum phase (NMP) zeros, in the majority of systems. Also, some systems are already NMP in continuous time. In both cases, the inverse problem of finding the input required to maintain a desired output tracking produces an unstable causal control action: the control action grows exponentially every time step, and the error between time steps also grows exponentially. This prevents many control approaches from making use of inverse models. The problem statement for the existing stable inverse theorem is presented in this work; it aims at finding a bounded nominal state-input trajectory by solving a two-point boundary value problem obtained by decomposing the internal dynamics of the system. This results in a causal part specified from minus infinite time and a non-causal part specified from plus infinite time. By solving for the nominal bounded internal dynamics, exact output tracking is achieved on the original finite time interval. The new stable inverse concepts presented and developed here address this instability problem in a different way, based on modified versions of the problem statement, and in a way that is more practical for implementation. The statements of how the different inverse problems are posed are presented, as well as their calculation and implementation. In order to produce zero tracking error at the addressed time steps, two modified statements are given: the initial delete and the skip step. The development presented here involves: (1) detection of the signature of instability in both the nonhomogeneous difference equation and the matrix form of finite-time problems; (2) a new factorization of the system in matrix form that separates the maximum-phase part from the minimum-phase part, analogous to a transfer function factorization, and more generally models the behavior of finite-time zeros and poles; (3) bounded stable inverse solutions ranging from the minimum Euclidean norm solution satisfying different optimization objective functions, to the solution having no projection on the transient solution terms excited by initial conditions. Iterative Learning Control (ILC) iterates with a real-world control system repeatedly performing the same task. It adjusts the control action based on the error history from the previous iteration, aiming to converge to zero tracking error. ILC has been widely used in various applications due to its high precision in trajectory tracking, e.g. semiconductor manufacturing sensors that repeatedly perform scanning maneuvers. Designing effective feedback controllers for non-minimum phase (NMP) systems can be challenging, and applying ILC to NMP systems is particularly problematic. When the initial delete stable inverse thinking is incorporated into ILC, the control action obtained in the limit as the iterations tend to infinity is a function of the tracking error produced by the command in the initial run. It is shown here that this dependence is very small, so that one can reasonably use any initial run. By picking an initial input that goes to zero approaching the final time step, the influence becomes particularly small; by simply commanding zero in the first run, the resulting converged control minimizes the Euclidean norm of the underdetermined control history.
Three main classes of ILC laws are examined, and it is shown that all of them converge to the identical control history, so the converged result is not a function of the ILC law. These conclusions apply to ILC that aims to track a given finite-time trajectory, and also to ILC that in addition aims to cancel the effect of a disturbance that repeats each run. Having these stable inverses opens up opportunities for many control design approaches. (1) ILC was the original motivation for the new stable inverses. Besides the scenario using the initial delete above, consider ILC that performs local learning in a trajectory by using a quadratic cost control in general, but phasing into the skip step stable inverse for a portion of the trajectory that needs high-precision tracking. (2) One-step-ahead control uses a model to compute the control action at the current time step to produce the output desired at the next time step. Before it can be useful, it must be phased in to honor actuator saturation limits, and, being a true inverse, it requires that the system have a stable inverse. One can generalize this to p-step-ahead control, updating the control action every p steps instead of every step. The skip step inverse determines how small p can be while still giving a stable implementation, and it can be quite small, so only a few steps of the future desired trajectory are needed. (3) The statement in (2) can be reformulated as linear Model Predictive Control that updates every p steps instead of every step. This offers the ability to converge to zero tracking error at every time step of the skip step inverse, instead of the usual aim of converging to a quadratic cost solution. (4) Indirect discrete-time adaptive control combines one-step-ahead control with the projection algorithm to perform real-time identification updates. It has had limited application because it requires a stable inverse.
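A compact way to see the minimum-norm property mentioned above is through the lifted (finite-time matrix) form of the system: when initial time steps are deleted, the map from the input history to the addressed outputs becomes underdetermined, and the Moore-Penrose pseudoinverse gives the bounded, minimum-Euclidean-norm input. The sketch below is illustrative only, using a toy second-order NMP model rather than any system from the thesis; it builds the lifted matrix from Markov parameters, drops the first addressed output ("initial delete"), and runs a simple gradient-type ILC iteration from zero initial input.

```python
import numpy as np

# Toy NMP discrete-time model (an assumption for illustration, not from the thesis).
A = np.array([[1.7, -0.72], [1.0, 0.0]])   # stable poles at 0.9 and 0.8
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 1.5]])                  # zero at -1.5: non-minimum phase

N = 30                                      # finite-time horizon
# Lifted map P: y(1..N) as a function of u(0..N-1), built from Markov parameters C A^k B.
markov = [C @ np.linalg.matrix_power(A, k) @ B for k in range(N)]
P = np.zeros((N, N))
for row in range(N):
    for col in range(row + 1):
        P[row, col] = markov[row - col][0, 0]

y_des = np.sin(np.linspace(0, np.pi, N))    # desired output trajectory

# "Initial delete": do not address the first output sample, leaving an
# underdetermined (N-1)-by-N problem whose minimum-norm inverse is bounded.
P_del = P[1:, :]
u_minnorm = np.linalg.pinv(P_del) @ y_des[1:]

# Gradient (transpose-based) ILC law u_{j+1} = u_j + gain * P^T e_j, started from
# zero input; starting from zero keeps u in the row space of P_del, where the
# zero-error fixed point is the minimum-norm solution.
u = np.zeros(N)
gain = 1.0 / np.linalg.norm(P_del, 2) ** 2
for _ in range(500):
    e = y_des[1:] - P_del @ u
    u = u + gain * P_del.T @ e

print("min-norm inverse tracking error:", np.linalg.norm(y_des[1:] - P_del @ u_minnorm))
print("ILC error after 500 iterations: ", np.linalg.norm(e))
print("max |u| of min-norm inverse:    ", np.abs(u_minnorm).max())
```

The same lifted-matrix picture underlies the skip step variant, which addresses only every p-th output sample instead of deleting the initial ones.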
127

Definition, analysis, and an approach for discrete-event simulation model interoperability

Wu, Tai-Chi, January 2005 (has links)
Thesis (Ph.D.) -- Mississippi State University. Department of Industrial and Systems Engineering. / Title from title screen. Includes bibliographical references.
128

A simulation framework for the analysis of reusable launch vehicle operations and maintenance

Dees, Patrick Daniel 26 July 2012 (has links)
During development of a complex system, feasibility initially overshadows other concerns, in some cases leading to a design which may not be viable long-term. In particular, for Reusable Launch Vehicles, Operations & Maintenance comprises the majority of the vehicle's life cycle cost (LCC), whose stochastic nature precludes direct analysis. Through the use of simulation, however, probabilistic methods can provide estimates of the economic behavior of such a system as it evolves over time. Here the problem of operations optimization is examined through the use of discrete event simulation. The resulting tool, built from the lessons learned in the literature review, simulates an RLV or a fleet of vehicles undergoing maintenance, along with the maintenance sites visited, as the campaign evolves over a period of time. The goal of this work is to develop a method for uncovering an optimal operations scheme by investigating the effect of maintenance technician skillset distributions on important metrics such as the achievable annual flight rate and the maintenance man-hours spent on each vehicle per flight. Using these metrics, the availability of technicians for each subsystem is optimized to levels which produce the greatest revenue from flights and the minimum expenditure on maintenance.
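The simulation loop described above can be pictured with a very small event-queue skeleton: a vehicle alternates between flight and maintenance events, maintenance duration depends on how many technicians with the right skillset are available, and the run tallies annual flight rate and maintenance man-hours per flight. All names and numbers below are illustrative assumptions, not values from the framework described in the thesis.

```python
import heapq
import random

random.seed(1)

TECHS = {"propulsion": 4, "avionics": 3, "structures": 3}        # assumed skillset mix
HOURS_PER_FLIGHT = {"propulsion": 120.0, "avionics": 80.0, "structures": 100.0}
TURNAROUND_OVERHEAD_H = 24.0
HORIZON_H = 8760.0                                               # one simulated year

events = [(0.0, 0, "RLV-1", "launch")]                           # (time, seq, vehicle, kind) min-heap
seq = 1
flights = 0
maint_hours = 0.0

while events:
    t, _, vehicle, kind = heapq.heappop(events)
    if t > HORIZON_H:
        break
    if kind == "launch":
        flights += 1
        heapq.heappush(events, (t + 2.0, seq, vehicle, "maintenance"))
        seq += 1
    elif kind == "maintenance":
        # Each subsystem's workload is shared by its available technicians;
        # the vehicle is released when the slowest subsystem finishes.
        durations = []
        for subsystem, hours in HOURS_PER_FLIGHT.items():
            work = hours * random.uniform(0.8, 1.2)              # stochastic workload
            durations.append(work / TECHS[subsystem])
            maint_hours += work
        done = t + max(durations) + TURNAROUND_OVERHEAD_H
        heapq.heappush(events, (done, seq, vehicle, "launch"))
        seq += 1

print(f"annual flights: {flights}, maintenance man-hours per flight: {maint_hours / flights:.1f}")
```

Sweeping the assumed technician counts per subsystem and re-running the simulation is one way to explore the skillset-distribution trade-off the abstract describes.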
129

Stabilization of Discrete-time Systems With Bounded Control Inputs

Jamak, Anes January 2000 (has links)
In this paper we examine the stabilization of LTI discrete-time systems with control input constraints in the form of saturation nonlinearities. This kind of constraint is usually introduced to model the effect of actuator limitations. Since global controllability cannot be assumed in the presence of constrained control, the controllable regions and their characterizations are analyzed first. We present an efficient algorithm for finding controllable regions in terms of their boundary hyperplanes (inequality constraints). A previously open question about the exact number of irredundant boundary hyperplanes is also resolved here. The main result of this research is a time-optimal nonlinear controller which stabilizes the system on its controllable region. We give an algorithm for on-line computation of the control which is also implementable for high-order systems. Simulation results show superior response even in the presence of disturbances.
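As a one-dimensional illustration of why the controllable region is bounded for an unstable plant under input saturation (the thesis treats the general multivariable case, with hyperplane characterizations and a time-optimal controller), consider the scalar example below; the plant, gain, and bound are illustrative assumptions.

```python
import numpy as np

# Scalar unstable plant x(k+1) = a*x(k) + u(k) with |u| <= u_max.
# For a > 1 the null controllable region is |x| < u_max / (a - 1): outside it,
# even the maximal admissible control cannot overcome the open-loop growth.
a = 1.5
u_max = 1.0
boundary = u_max / (a - 1.0)            # = 2.0 for these numbers

def sat(u):
    """Actuator saturation nonlinearity."""
    return np.clip(u, -u_max, u_max)

def simulate(x0, steps=40):
    x = float(x0)
    for _ in range(steps):
        u = sat(-a * x)                  # saturated deadbeat-like state feedback
        x = a * x + u
    return x

print("null controllable region: |x| <", boundary)
print("x0 = 1.9 -> x(40) =", simulate(1.9))   # inside the region: driven to zero
print("x0 = 2.1 -> x(40) =", simulate(2.1))   # outside: diverges despite full control effort
```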
