21.
Pole Assignment and Robust Control for Multi-Time-Scale Systems. Chang, Cheng-Kuo. 05 July 2001
Abstract
In this dissertation, the eigenvalue analysis and decentralized robust controller design of uncertain multi-time-scale systems with parametric perturbations are considered. Because the eigenvalues of multi-time-scale systems cluster in different regions of the complex plane, we can use the singular perturbation method to separate such a system into subsystems. These subsystems are independent of each other, so we can analyze the eigenvalue properties and design a controller for each subsystem separately, then combine these controllers into a decentralized controller.
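The separation rests on the standard singularly perturbed state-space form; the following is a textbook sketch in generic notation, not the dissertation's exact model:

```latex
% Generic two-time-scale (singularly perturbed) model, 0 < eps << 1:
\begin{aligned}
  \dot{x} &= A_{11}x + A_{12}z + B_{1}u \\
  \varepsilon\dot{z} &= A_{21}x + A_{22}z + B_{2}u
\end{aligned}
```

Setting $\varepsilon = 0$ (with $A_{22}$ nonsingular) gives the slow subsystem $\dot{x}_s = A_0 x_s + B_0 u$ with $A_0 = A_{11} - A_{12}A_{22}^{-1}A_{21}$ and $B_0 = B_1 - A_{12}A_{22}^{-1}B_2$, while in the stretched time scale $\tau = t/\varepsilon$ the fast subsystem is $dz_f/d\tau = A_{22}z_f + B_2u$; the two can then be treated separately.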
The eigenvalue positions dominate the stability and performance of a dynamic system. However, under parametric perturbations we cannot obtain the precise positions of the eigenvalues. Sufficient conditions for eigenvalue clustering of multi-time-scale systems are therefore discussed, with the uncertainties treated as both unstructured and structured perturbations. A design algorithm is provided for a decentralized controller that assigns the poles to the desired regions; the specified regions are the half-plane and the circular disk.
Furthermore, the concepts of decentralized control and optimal control are used to design linear quadratic regulator (LQR) and linear quadratic Gaussian (LQG) controllers for the perturbed multi-time-scale systems, so that the system achieves optimal robust performance.
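As an illustration of the per-subsystem design step, the sketch below computes an LQR gain for one decoupled slow subsystem via the continuous-time algebraic Riccati equation; the matrices and weights are hypothetical placeholders, not values from the dissertation:

```python
# Sketch: LQR design for one decoupled (slow) subsystem after time-scale
# separation. All matrices below are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A_slow = np.array([[0.0, 1.0],
                   [-2.0, -1.0]])      # hypothetical slow-subsystem dynamics
B_slow = np.array([[0.0],
                   [1.0]])             # hypothetical input matrix
Q = np.eye(2)                          # state weighting
R = np.array([[1.0]])                  # input weighting

# Solve A'P + PA - P B R^{-1} B' P + Q = 0 for P, then K = R^{-1} B' P.
P = solve_continuous_are(A_slow, B_slow, Q, R)
K = np.linalg.solve(R, B_slow.T @ P)   # optimal state-feedback gain, u = -K x
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A_slow - B_slow @ K))
```

A fast-subsystem gain is designed the same way, and the two are combined into the composite decentralized controller described above.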
The bound of the singular perturbation parameter influences the robust stability of multi-time-scale systems. Finally, a sufficient condition for the upper bound of the singular perturbation parameter is presented using the Lyapunov method and matrix norms. The condition is also extended to pole assignment in the specified region of each subsystem.
Illustrative examples are presented after each topic. They show the applicability of the proposed theorems, and the results are satisfactory.
22.
Rapid Architecture Alternative Modeling (RAAM): a framework for capability-based analysis of system of systems architectures. Iacobucci, Joseph Vincent. 04 April 2012
The current national security environment and fiscal tightening make
it necessary for the Department of Defense to transition away from a
threat based acquisition mindset towards a capability based approach
to acquire portfolios of systems. This requires that groups of
interdependent systems regularly interact and work together as
systems of systems to deliver desired capabilities. Technological
advances, especially in the areas of electronics, computing, and
communications, also mean that these systems of systems are tightly
integrated and more complex to acquire, operate, and manage. In
response to this, the Department of Defense has turned to system
architecting principles along with capability based analysis. However,
because of the diversity of the systems, technologies, and
organizations involved in creating a system of systems, the design
space of architecture alternatives is discrete and highly
non-linear. The design space is also very large due to the hundreds of
systems that can be used, the numerous variations in the way systems
can be employed and operated, and also the thousands of tasks that are
often required to fulfill a capability. This makes it very difficult
to fully explore the design space. As a result, capability based
analysis of system of systems architectures often only considers a
small number of alternatives. This places a severe limitation on the
development of capabilities that are necessary to address the needs of
the war fighter.
The research objective for this manuscript is to develop a Rapid
Architecture Alternative Modeling (RAAM) methodology to enable
traceable Pre-Milestone A decision making during the conceptual phase
of design of a system of systems. Rather than following current trends
that place an emphasis on adding more analysis which tends to increase
the complexity of the decision making problem, RAAM improves on
current methods by reducing both runtime and model creation
complexity. RAAM draws upon principles from computer science, system
architecting, and domain specific languages to enable the automatic
generation and evaluation of architecture alternatives. For example,
both mission dependent and mission independent metrics are
considered. Mission dependent metrics are determined by the
performance of systems accomplishing a task, such as Probability of
Success. In contrast, mission independent metrics, such as
acquisition cost, are determined solely by the systems in the
portfolio, not by mission performance. RAAM also leverages advances in parallel
computing to significantly reduce runtime by defining executable
models that are readily amenable to parallelization. This allows the
use of cloud computing infrastructures such as Amazon's Elastic
Compute Cloud and the PASTEC cluster operated by the Georgia Institute
of Technology Research Institute (GTRI). Also, the amount of data that
can be generated when fully exploring the design space can quickly
exceed the typical capacity of computational resources at the
analyst's disposal. To counter this, specific algorithms and
techniques are employed. Streaming algorithms and recursive
architecture alternative evaluation algorithms are used that reduce
computer memory requirements. Lastly, a domain specific language is
created to provide a reduction in the computational time of executing
the system of systems models. A domain specific language is a small,
usually declarative language that offers expressive power focused on a
particular problem domain by establishing an effective means to
communicate the semantics from the RAAM framework. These techniques
make it possible to include diverse multi-metric models within the
RAAM framework in addition to system and operational level trades.
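To make the generation-and-evaluation idea concrete, the sketch below exhaustively enumerates a toy capability model; the tasks, systems, and numbers are hypothetical, and real RAAM analyses use a domain specific language with parallel, memory-frugal evaluation rather than this naive loop. Probability of success illustrates a mission dependent metric and portfolio acquisition cost a mission independent one:

```python
# Hypothetical sketch of exhaustive architecture-alternative enumeration in
# the spirit of RAAM; all tasks, systems, and numbers are invented.
from itertools import product

# Candidate systems per task: (system, probability of task success).
task_candidates = {
    "find":   [("uav_a", 0.90), ("sat_b", 0.80)],
    "track":  [("uav_a", 0.85), ("radar_c", 0.95)],
    "engage": [("jet_d", 0.92), ("missile_e", 0.88)],
}
# Mission independent data: acquisition cost of each system.
system_cost = {"uav_a": 3.0, "sat_b": 5.0, "radar_c": 4.0,
               "jet_d": 8.0, "missile_e": 2.0}

alternatives = []
for choice in product(*task_candidates.values()):
    portfolio = {name for name, _ in choice}
    p_success = 1.0
    for _, p in choice:                  # mission dependent metric,
        p_success *= p                   # assuming independent tasks
    cost = sum(system_cost[s] for s in portfolio)  # mission independent metric
    alternatives.append((sorted(portfolio), p_success, cost))

# "Top-n" view: keep only the best few alternatives by success probability.
for portfolio, p, c in sorted(alternatives, key=lambda a: -a[1])[:3]:
    print(portfolio, f"P(success)={p:.2f}", f"cost={c:.1f}")
```

Even this toy problem yields 2 x 2 x 2 alternatives; with hundreds of candidate systems and thousands of tasks the same product grows combinatorially, which is why the streaming, recursive-evaluation, and top-n techniques described here are needed.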
A canonical example was used to explore the uses of the
methodology. The canonical example contains all of the features of a
full system of systems architecture analysis study but uses fewer
tasks and systems. Using RAAM with the canonical example it was
possible to consider both system and operational level trades in the
same analysis. Once the methodology had been tested with the canonical
example, a Suppression of Enemy Air Defenses (SEAD) capability model
was developed. Due to the sensitive nature of analyses on that
subject, notional data was developed. The notional data has similar
trends and properties to realistic Suppression of Enemy Air Defenses
data. RAAM was shown to be traceable and provided a mechanism for a
unified treatment of a variety of metrics. The SEAD capability model
demonstrated lower computer runtimes and reduced model creation
complexity as compared to methods currently in use. To determine the
usefulness of the implementation of the methodology on current
computing hardware, RAAM was tested with system of systems architecture
studies of different sizes. This was necessary since a system of systems
may be called upon to accomplish thousands of tasks. It has been
clearly demonstrated that RAAM is able to enumerate and evaluate the
types of large, complex design spaces usually encountered in
capability based design, oftentimes providing the ability to
efficiently search the entire decision space. The core algorithms for
generation and evaluation of alternatives scale linearly with expected
problem sizes. The SEAD capability model outputs prompted the
discovery of a new issue: the data storage and manipulation requirements
for an analysis. Two strategies were developed to counter large data
sizes: the use of portfolio views and top-n analysis. This proved
the usefulness of the RAAM framework and methodology during
Pre-Milestone A capability based analysis.
23.
Asynchronous stochastic learning curve effects in a large scale production system / Lu, Roberto Francisco-Yi. January 2008
Thesis (Ph. D.)--University of Washington, 2008. / Vita. Includes bibliographical references (leaves 126-133).
24.
An architecture framework for composite services with process-personalization. Sadasivam, Rajani Shankar. January 2007
Thesis (Ph. D.)--University of Alabama at Birmingham, 2007. / Title from PDF title page (viewed Feb. 4, 2010). Additional advisors: Barrett R. Bryant, Chittoor V. Ramamoorthy, Jeffrey H. Kulick, Gary J. Grimes, Gregg L. Vaughn, Murat N. Tanju. Includes bibliographical references (p. 161-183).
25.
MULTIRATE INTEGRATION OF TWO-TIME-SCALE DYNAMIC SYSTEMS. Keepin, William North. January 1980
Simulation of large physical systems often leads to initial value problems in which some of the solution components contain high frequency oscillations and/or fast transients, while the remaining solution components are relatively slowly varying. Such a system is referred to as two-time-scale (TTS), which is a partial generalization of the concept of stiffness. When using conventional numerical techniques for integration of TTS systems, the rapidly varying components dictate the use of small stepsizes, with the result that the slowly varying components are integrated very inefficiently. This could mean that the computer time required for integration is excessive. To overcome this difficulty, the system is partitioned into "fast" and "slow" subsystems, containing the rapidly and slowly varying components of the solution respectively. Integration is then performed using small stepsizes for the fast subsystem and relatively large stepsizes for the slow subsystem. This is referred to as multirate integration, and it can lead to substantial savings in computer time required for integration of large systems having relatively few fast solution components. This study is devoted to multirate integration of TTS initial value problems which are partitioned into fast and slow subsystems. Techniques for partitioning are not considered here. Multirate integration algorithms based on explicit Runge-Kutta (RK) methods are developed. Such algorithms require a means for communication between the subsystems. Internally embedded RK methods are introduced to aid in computing interpolated values of the slow variables, which are supplied to the fast subsystem. The use of averaging in the fast subsystem is discussed in connection with communication from the fast to the slow subsystem. Theoretical support for this is presented in a special case. A proof of convergence is given for a multirate algorithm based on Euler's method. Absolute stability of this algorithm is also discussed. Four multirate integration routines are presented. Two of these are based on a fixed-step fourth order RK method, and one is based on the variable step Runge-Kutta-Merson scheme. The performance of these routines is compared to that of several other integration schemes, including Gear's method and Hindmarsh's EPISODE package. For this purpose, both linear and nonlinear examples are presented. It is found that multirate techniques show promise for linear systems having eigenvalues near the imaginary axis. Such systems are known to present difficulty for Gear's method and EPISODE. A nonlinear TTS model of an autopilot is presented. The variable step multirate routine is found to be substantially more efficient for this example than any other method tested. Preliminary results are also included for a pressurized water reactor model. Indications are that multirate techniques may prove fruitful for this model. Lastly, an investigation of the effects of the step-size ratio (between subsystems) is included. In addition, several suggestions for further work are given, including the possibility of using multistep methods for integration of the slow subsystem.
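To make the scheme concrete, here is a minimal sketch of a fixed-step multirate integrator based on Euler's method (the simplest case for which the dissertation proves convergence); the partitioned test problem and all step-size choices are invented for illustration. The slow variable takes one macro step of size H while the fast variable takes m substeps of size h = H/m, receiving linearly interpolated values of the slow variable as the communication between subsystems:

```python
import numpy as np

def multirate_euler(f_slow, f_fast, x0, z0, t_end, H, m):
    """Multirate forward Euler: slow state x uses macro step H; fast state z
    uses m substeps of size h = H/m, with x linearly interpolated over each
    macro step to supply slow values to the fast subsystem."""
    h = H / m
    t, x, z = 0.0, x0, z0
    ts, xs, zs = [t], [x], [z]
    while t < t_end - 1e-12:
        x_new = x + H * f_slow(t, x, z)        # one slow (macro) step
        zf, tf = z, t
        for _ in range(m):                     # m fast substeps
            alpha = (tf - t) / H               # interpolation weight in [0,1)
            x_interp = (1 - alpha) * x + alpha * x_new
            zf = zf + h * f_fast(tf, x_interp, zf)
            tf += h
        t, x, z = t + H, x_new, zf
        ts.append(t); xs.append(x); zs.append(z)
    return np.array(ts), np.array(xs), np.array(zs)

# Illustrative TTS test problem: slow mode near -1, fast mode near -100.
f_slow = lambda t, x, z: -x + 0.1 * z
f_fast = lambda t, x, z: -100.0 * (z - x)
ts, xs, zs = multirate_euler(f_slow, f_fast, 1.0, 0.0, 1.0, H=0.02, m=40)
print(xs[-1], zs[-1])
```

Here the slow variable is advanced with steps 40 times larger than the fast variable's, which is the source of the computational savings when the fast subsystem is small relative to the whole system.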
26.
Accelerating Convergence of Large-scale Optimization Algorithms. Ghadimi, Euhanna. January 2015
Several recent engineering applications in multi-agent systems, communication networks, and machine learning deal with decision problems that can be formulated as optimization problems. For many of these problems, new constraints limit the usefulness of traditional optimization algorithms. In some cases, the problem size is much larger than what can be conveniently dealt with using standard solvers. In other cases, the problems have to be solved in a distributed manner by several decision-makers with limited computational and communication resources. By exploiting problem structure, however, it is possible to design computationally efficient algorithms that satisfy the implementation requirements of these emerging applications. In this thesis, we study a variety of techniques for improving the convergence times of optimization algorithms for large-scale systems.

In the first part of the thesis, we focus on multi-step first-order methods. These methods add memory to the classical gradient method and account for past iterates when computing the next one. The result is a computationally lightweight acceleration technique that can yield significant improvements over gradient descent. In particular, we focus on the Heavy-ball method introduced by Polyak. Previous studies have quantified the performance improvements over the gradient method through a local convergence analysis of twice continuously differentiable objective functions. However, the convergence properties of the method on more general convex cost functions have not been known. The first contribution of this thesis is a global convergence analysis of the Heavy-ball method for a variety of convex problems whose objective functions are strongly convex and have Lipschitz continuous gradient. The second contribution is to tailor the Heavy-ball method to network optimization problems. In such problems, a collection of decision-makers collaborate to find the decision vector that minimizes the total system cost. We derive the optimal step-sizes for the Heavy-ball method in this scenario, and show how the optimal convergence times depend on the individual cost functions and the structure of the underlying interaction graph. We present three engineering applications where our algorithm significantly outperforms the tailor-made state-of-the-art algorithms.

In the second part of the thesis, we consider the Alternating Direction Method of Multipliers (ADMM), an alternative powerful method for solving structured optimization problems. The method has recently attracted large interest from several engineering communities. Despite its popularity, its optimal parameters have been unknown. The third contribution of this thesis is to derive optimal parameters for the ADMM algorithm when applied to quadratic programming problems. Our derivations quantify how the Hessian of the cost functions and constraint matrices affect the convergence times. By exploiting this information, we develop a preconditioning technique that allows us to accelerate the performance even further. Numerical studies of model-predictive control problems illustrate significant performance benefits of a well-tuned ADMM algorithm. The fourth and final contribution of the thesis is to extend our results on optimal scaling and parameter tuning of the ADMM method to a distributed setting. We derive optimal algorithm parameters and suggest heuristic methods that can be executed by individual agents using local information.
The resulting algorithm is applied to the distributed averaging problem and shown to yield substantial performance improvements over the state-of-the-art algorithms.
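Polyak's Heavy-ball iteration augments gradient descent with a momentum term, $x_{k+1} = x_k - \alpha \nabla f(x_k) + \beta (x_k - x_{k-1})$. The sketch below applies it to a small strongly convex quadratic using the classical tuning for the strongly convex, L-smooth case, $\alpha = 4/(\sqrt{L}+\sqrt{\mu})^2$ and $\beta = ((\sqrt{L}-\sqrt{\mu})/(\sqrt{L}+\sqrt{\mu}))^2$; this is a textbook illustration with made-up problem data, not the thesis's derivation or its network setting:

```python
import numpy as np

# Illustrative strongly convex quadratic f(x) = 0.5 x'Ax - b'x,
# with mu = smallest and L = largest eigenvalue of A (made-up data).
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
grad = lambda x: A @ x - b
mu, L = 1.0, 100.0

# Classical Heavy-ball tuning for the strongly convex, L-smooth case.
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

x_prev = x = np.zeros(3)
for _ in range(200):
    # x_{k+1} = x_k - alpha * grad(x_k) + beta * (x_k - x_{k-1})
    x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x

print("distance to optimum:", np.linalg.norm(x - np.linalg.solve(A, b)))
```

With condition number L/mu = 100, the momentum term shrinks the error by roughly a factor of 0.82 per iteration, versus roughly 0.98 for plain gradient descent with its optimal step-size, which illustrates why adding memory to the gradient method pays off.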
27.
Stochastic modeling and prognostic analysis of complex systems using condition-based real-time sensor signals. Bian, Linkan. 14 March 2013
This dissertation presents a stochastic framework for modeling the degradation processes of components in complex engineering systems using sensor-based signals. Chapters 1 and 2 discuss the challenges and the existing literature in monitoring and predicting the performance of complex engineering systems. Chapter 3 presents the degradation model with the absorbing failure threshold for a single unit and the RLD estimation using the first-passage-time approach. Subsequently, we develop the estimate of the RLD using the first-passage-time approach for two cases: informative prior distributions and non-informative prior distributions. A case study is presented using real-world data from rolling element bearing applications. Chapter 4 presents a stochastic methodology for modeling degradation signals from components functioning under dynamically evolving environmental conditions. We utilize in-situ sensor signals related to the degradation process, as well as the environmental conditions, to predict and continuously update, in real time, the distribution of a component's residual lifetime. Two distinct models are presented. The first considers future environmental profiles that evolve in a deterministic manner, while the second assumes the environment evolves as a continuous-time Markov chain. Chapters 5 and 6 generalize the failure-dependent models and develop a general model that examines the interactions among the degradation processes of interconnected components/subsystems. In particular, we model how the degradation level of one component affects the degradation rates of other components in the system. Hereafter, we refer to this type of component-to-component interaction caused by their stochastic dependence as degradation-rate-interaction (DRI). Chapter 5 focuses on the scenario in which these changes occur in a discrete manner, whereas Chapter 6 focuses on the scenario in which DRIs occur in a continuous manner. We demonstrate that incorporating the effects of component interactions significantly improves the prediction accuracy of RLDs. Finally, we outline concluding remarks and a future work plan in Chapter 7.
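As a sketch of the first-passage-time idea in Chapter 3 (assuming, for illustration only, a Brownian-motion-with-drift degradation model, which is one common choice and not necessarily the dissertation's), the residual-life distribution can be estimated by simulating degradation paths from the current observed level and recording when each first crosses the absorbing failure threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the dissertation):
drift, sigma = 0.5, 0.2      # degradation rate and volatility
threshold = 5.0              # absorbing failure threshold
s0 = 1.0                     # current observed degradation level
dt, n_paths, n_steps = 0.02, 2000, 2500

# Simulate degradation paths forward from the current level.
steps = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = s0 + np.cumsum(steps, axis=1)

# Residual life of each path = first time it crosses the threshold.
crossed = paths >= threshold
residual_life = np.where(crossed.any(axis=1),
                         (crossed.argmax(axis=1) + 1) * dt, np.nan)
rl = residual_life[~np.isnan(residual_life)]
print(f"estimated mean residual life: {rl.mean():.2f}")
print(f"(for this model, E[T] = (threshold - s0)/drift = {(threshold - s0)/drift:.2f})")
```

In a Bayesian version, the drift and volatility would carry prior distributions (informative or non-informative) that are updated as new sensor signals arrive, which is the real-time updating described above.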
28.
Nonlinear dynamical systems and control for large-scale, hybrid, and network systems. Hui, Qing. January 2008
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Haddad, Wassim; Committee Member: Feron, Eric; Committee Member: JVR, Prasad; Committee Member: Taylor, David; Committee Member: Tsiotras, Panagiotis
29.
Pervasive hypermedia. Anderson, Kenneth M. January 1997
Thesis (Ph. D., Information and Computer Science)--University of California, Irvine, 1997. / Includes bibliographical references.
30.
The need for a national systems center : an ad-hoc committee report. January 1978
by Michael Athans. / "November 30, 1978." Caption title. / Travel support provided in part by National Science Foundation Grant NSF/ENG77-07777