  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
561

Robust design using sequential computer experiments

Gupta, Abhishek 30 September 2004
Modern engineering design tends to use computer simulations such as Finite Element Analysis (FEA) to replace physical experiments when evaluating a quality response, e.g., the stress level in a phone packaging process. The use of computer models has certain advantages over running physical experiments, such as lower cost, the ease of trying out different design alternatives, and greater impact on product design. However, due to the complexity of FEA codes, it can be computationally expensive to evaluate the quality response function over a large number of combinations of design and environmental factors. Traditional experimental design and response surface methodology, which were developed for physical experiments in the presence of random errors, are not very effective in dealing with deterministic FEA simulation outputs. In this thesis, we utilize a spatial statistical method (i.e., the Kriging model) for analyzing deterministic computer simulation-based experiments. Subsequently, we devise a sequential strategy that allows us to explore the whole response surface in an efficient way. The overall number of computer experiments is remarkably reduced compared with traditional response surface methodology. The proposed methodology is illustrated using an electronic packaging example.
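The Kriging surrogate at the heart of such sequential strategies can be sketched in a few lines. The sketch below is a minimal 1-D interpolator with a Gaussian correlation function, not the thesis' implementation; the correlation parameter `theta`, the nugget value, and the test function are assumed for illustration.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (A and b are modified in place)."""
    n = len(b)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def kriging_fit(xs, ys, theta=10.0, nugget=1e-10):
    """Simple Kriging interpolator for deterministic 1-D simulation output.
    Correlation R_ij = exp(-theta*(x_i - x_j)^2); a tiny nugget keeps R well
    conditioned without visibly breaking interpolation."""
    n = len(xs)
    R = [[math.exp(-theta * (xs[i] - xs[j]) ** 2) + (nugget if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    w = solve(R, list(ys))  # weights w = R^{-1} y

    def predict(x):
        r = [math.exp(-theta * (x - xi) ** 2) for xi in xs]
        return sum(ri * wi for ri, wi in zip(r, w))
    return predict

# The predictor reproduces the training runs (deterministic code output)
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(2 * math.pi * x) for x in xs]
f = kriging_fit(xs, ys)
```

Because simulation output is deterministic, the surrogate interpolates the observed runs exactly (up to the nugget), which is precisely why random-error-based response surface methods fit poorly here.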
562

Aircraft control using nonlinear dynamic inversion in conjunction with adaptive robust control

Fisher, James Robert 17 February 2005
This thesis describes the application of Yao's adaptive robust control to an aircraft control system. This control law is implemented as a means to maintain stability and tracking performance of the aircraft in the face of failures and changing aerodynamic response. The control methodology is implemented as an outer-loop controller for an aircraft under nonlinear dynamic inversion control. Adaptive robust control combines the robustness of sliding mode control to all types of uncertainty with the ability of adaptive control to remove steady-state errors. A performance measure is developed to reflect the more subjective qualities a pilot would look for while flying an aircraft. Using this measure, comparisons of the adaptive robust control technique with the sliding mode and adaptive control methodologies are made for various failure conditions. Each control methodology is implemented on a full-envelope, high-fidelity simulation of the F-15 IFCS aircraft as well as on a lower-fidelity full-envelope F-5A simulation. Adaptive robust control is found to exhibit the best performance in terms of the introduced measure for several different failure types and amplitudes.
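The combination the abstract describes, a robust feedback term plus an adaptive term that cancels parametric uncertainty, can be illustrated on a scalar plant. The sketch below is a textbook-style toy, not the aircraft controller; the plant x' = θx + u, the gains k and γ, and the horizon are all assumed values.

```python
def simulate_arc(theta_true=1.5, x0=2.0, dt=1e-3, T=5.0):
    """Toy adaptive robust regulation of  x' = theta*x + u  with theta unknown.
    u combines a proportional (sliding-mode-like) robust term with an adaptive
    cancellation term theta_hat*x; adaptation removes the steady-state effect
    of the unknown parameter."""
    x, theta_hat = x0, 0.0
    k, gamma = 4.0, 10.0                    # feedback gain, adaptation rate (assumed)
    for _ in range(int(T / dt)):
        u = -k * x - theta_hat * x          # robust feedback + adaptive cancellation
        theta_hat += gamma * x * x * dt     # Lyapunov-based adaptation law
        x += (theta_true * x + u) * dt      # Euler integration of the plant
    return x, theta_hat
```

With V = x²/2 + (θ − θ̂)²/(2γ), the adaptation law gives V' ≤ −k x², so the state is driven to zero despite the unknown θ.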
563

Auditory Based Modification of MFCC Feature Extraction for Robust Automatic Speech Recognition

Chiou, Sheng-chiuan 01 September 2009
The human auditory perception system is much more noise-robust than any state-of-the-art automatic speech recognition (ASR) system. It is expected that the noise-robustness of speech feature vectors may be improved by employing more human auditory functions in the feature extraction procedure. Forward masking is a phenomenon of human auditory perception in which a weaker sound is masked by a preceding, stronger masker. In this work, two human auditory mechanisms, synaptic adaptation and temporal integration, are implemented by filter functions and incorporated into MFCC feature extraction to model forward masking. A filter optimization algorithm is proposed to optimize the filter parameters. The performance of the proposed method is evaluated on the Aurora 3 corpus, and the training/testing procedure follows the standard setting provided by the Aurora 3 task. The synaptic adaptation filter achieves a relative improvement of 16.6% over the baseline. The temporal integration and modified temporal integration filters achieve relative improvements of 21.6% and 22.5% respectively. Combining synaptic adaptation with each of the temporal integration filters results in further improvements of 26.3% and 25.5%. Applying the filter optimization to the synaptic adaptation filter and the two temporal integration filters yields improvements of 18.4%, 25.2%, and 22.6% respectively. The performance of the combined-filter models is also improved, with relative improvements of 26.9% and 26.3%.
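The two mechanisms can be mimicked with simple first-order filters over per-frame energies. The sketch below is a loose illustration of the idea, not the thesis' filter functions; the coefficients `alpha`, `beta`, and `weight` are assumed.

```python
def temporal_integration(frames, alpha=0.7):
    """First-order IIR smoothing of per-frame energies: a crude stand-in for
    the temporal-integration filter (alpha is an assumed coefficient)."""
    out, state = [], 0.0
    for e in frames:
        state = alpha * state + (1.0 - alpha) * e
        out.append(state)
    return out

def synaptic_adaptation(frames, beta=0.9, weight=0.5):
    """Subtract an exponentially decaying trace of preceding frames, so a
    strong masker suppresses a following weak sound (forward masking)."""
    out, trace = [], 0.0
    for e in frames:
        out.append(e - weight * trace)
        trace = beta * trace + (1.0 - beta) * e
    return out
```

Feeding a loud frame followed by a quiet one through `synaptic_adaptation` attenuates the quiet frame, the qualitative signature of forward masking the thesis builds into the MFCC front end.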
564

Robust clustering algorithms

Gupta, Pramod 05 April 2011
One of the most widely used techniques for data clustering is agglomerative clustering. Such algorithms have long been used across many different fields, ranging from computational biology to social sciences to computer vision, in part because they are simple and their output is easy to interpret. However, many of these algorithms lack any performance guarantees when the data is noisy, incomplete, or has outliers, which is the case for most real-world data. It is well known that standard linkage algorithms perform extremely poorly in the presence of noise. In this work we propose two new robust algorithms for bottom-up agglomerative clustering and give formal theoretical guarantees for their robustness. We show that our algorithms can be used to cluster accurately in cases where the data satisfies a number of natural properties and where the traditional agglomerative algorithms fail. We also extend our algorithms to an inductive setting with similar guarantees, in which we randomly choose a small subset of points from a much larger instance space, generate a hierarchy over this sample, and then insert the rest of the points into it to obtain a hierarchy over the entire instance space. We then perform a systematic experimental analysis of various linkage algorithms, comparing their performance on a variety of real-world data sets, and show that our algorithms handle various forms of noise much better than other hierarchical algorithms.
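For reference, the bottom-up baseline the abstract argues against can be written compactly. The sketch below is plain single-linkage agglomerative clustering, the noise-sensitive baseline, not one of the robust algorithms proposed in the thesis.

```python
def single_linkage(points, k):
    """Bottom-up (agglomerative) clustering with single linkage: repeatedly
    merge the two closest clusters until only k remain.  Single linkage
    measures cluster distance by the closest pair of points, which is exactly
    why a few noisy points can chain unrelated clusters together."""
    clusters = [[p] for p in points]

    def dist(a, b):
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > k:
        # find and merge the closest pair of clusters
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

On clean, well-separated data this recovers the intended grouping; a single outlier placed between two groups is enough to make it chain them, the failure mode the robust variants are designed to avoid.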
565

Uncertainty management in the design of multiscale systems

Sinha, Ayan 07 April 2011
In this thesis, a framework is laid for holistic uncertainty management for simulation-based design of multiscale systems. The work is founded on uncertainty management for microstructure-mediated design (MMD) of material and product, which is a representative example of a system spanning multiple length and time scales, i.e., a multiscale system. The characteristics and challenges of uncertainty management for multiscale systems are introduced in the context of integrated material and product design. This integrated approach gives rise to different kinds of uncertainty, i.e., natural uncertainty (NU), model parameter uncertainty (MPU), model structure uncertainty (MSU) and propagated uncertainty (PU). We use the Inductive Design Exploration Method (IDEM) to reach feasible sets of robust solutions against MPU, NU and PU. MMD of material and product is performed using IDEM for an autonomous underwater vehicle (AUV) employing in-situ metal matrix composites as the material, identifying robust ranged solution sets. The multiscale system results in decision nodes for MSU consideration at hierarchical levels, termed multilevel design. The effectiveness of using game theory to model strategic interaction between the different levels, facilitating decision making that mitigates MSU in multilevel design, is illustrated using the compromise decision support problem (cDSP) technique. Information economics is identified as a research gap for holistic uncertainty management in simulation-based multiscale systems, i.e., for the reduction or mitigation of uncertainty considering both the current design decision and the scope for further simulation model refinement in order to reach better robust solutions. This necessitates development of an improvement potential (IP) metric based on the value of information, which indicates the scope for improvement in a designer's decision-making ability against modeled uncertainty (MPU) in simulation models in a multilevel design problem.
To address the research gap, the integration of robust design (using IDEM), information economics (using IP) and game-theoretic constructs (using cDSP) is proposed. Metamodeling techniques and the expected value of information are critically reviewed to facilitate efficient integration. Robust design using IDEM and cDSP are integrated to improve MMD of material and product and address all four types of uncertainty simultaneously. Further, IDEM, cDSP and IP are integrated to assist system-level designers in allocating resources for simulation model refinement in order to satisfy performance and robustness requirements. The approach for managing MPU, MSU, NU and PU while mitigating MPU is presented using the MMD of material and product. The approach presented in this thesis can be utilized by system-level designers for managing all four types of uncertainty and reducing model parameter uncertainty in any multiscale system.
566

Contributions to variable selection for mean modeling and variance modeling in computer experiments

Adiga, Nagesh 17 January 2012
This thesis consists of two parts. The first part reviews Variable Search, a variable selection procedure for mean modeling. The second part deals with variance modeling for robust parameter design in computer experiments.

In the first chapter, the Variable Search (VS) technique developed by Shainin (1988) is reviewed. VS has received considerable attention from experimenters in industry. It uses the experimenters' knowledge about the process, in terms of good and bad settings and their importance. In this technique, a few experiments are first conducted at the best and worst settings of the variables to ascertain that they are indeed different from each other. Experiments are then conducted sequentially in two stages, namely swapping and capping, to determine the significance of the variables one at a time. Finally, after all the significant variables have been identified, the model is fit and the best settings are determined. The VS technique has not been analyzed thoroughly; here, each stage of the method is analyzed mathematically. Each stage is formulated as a hypothesis test, and its performance is expressed in terms of the model parameters. The performance of the VS technique as a whole is expressed as a function of the performances of the individual stages. On this basis, its performance can be compared with that of traditional techniques.

The second and third chapters deal with variance modeling for robust parameter design in computer experiments. Computer experiments based on engineering models may be used to explore process behavior when physical experiments (e.g. fabrication of nanoparticles) are costly or time consuming. Robust parameter design (RPD) is a key technique for improving process repeatability. The absence of replicates in computer experiments (e.g. with a Space Filling Design (SFD)) is a challenge in locating an RPD solution. Recently, there have been studies (e.g. Bates et al. (2005), Chen et al. (2006), Dellino et al. (2010, 2011), Giovagnoli and Romano (2008)) of RPD issues in computer experiments. The transmitted variance model (TVM) proposed by Shoemaker and Tsui (1993) for physical experiments can also be applied in computer simulations. The approaches above rely heavily on the estimated mean model, because they obtain expressions for the variance directly from mean models or use them to generate replicates. Variance modeling based on some form of replicates relies on the estimated mean model to a lesser extent. To the best of our knowledge, there is no rigorous research on the variance modeling needed for RPD in computer experiments. We develop procedures for identifying variance models. First, we explore procedures for deciding groups of pseudo replicates for variance modeling; a formal variance change-point procedure is developed to rigorously determine the replicate groups. Next, the variance model is identified and estimated through a three-step variable selection procedure. Properties of the proposed method are investigated under various conditions through analytical and empirical studies. In particular, the impact of correlated responses on performance is discussed.
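The idea behind pseudo replicates, treating runs whose inputs lie close together as if they were replicates so a variance can be estimated without true replication, can be illustrated simply. The sketch below chunks sorted inputs into fixed-size groups and reports within-group variances; it is a simplified illustration, not the change-point procedure developed in the thesis, and the group size is an assumed value.

```python
def pseudo_replicate_variances(xs, ys, group_size=3):
    """Sort runs by input, chunk into small groups, and use the within-group
    sample variance of the response as a local variance estimate."""
    pairs = sorted(zip(xs, ys))
    groups = [pairs[i:i + group_size] for i in range(0, len(pairs), group_size)]
    results = []
    for g in groups:
        yv = [y for _, y in g]
        m = sum(yv) / len(yv)
        var = (sum((y - m) ** 2 for y in yv) / (len(yv) - 1)
               if len(yv) > 1 else 0.0)
        cx = sum(x for x, _ in g) / len(g)  # group center, for plotting/modeling
        results.append((cx, var))
    return results
```

A variance model can then be fit to these (center, variance) pairs; the thesis' change-point procedure replaces the fixed chunking with a formal test for where the variance regime changes.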
567

Framework for robust design: a forecast environment using intelligent discrete event simulation

Beisecker, Elise K. 29 March 2012
The US Navy is shifting to power projection from the sea, which stresses the capabilities of its current fleet and exposes the need for a new surface connector. The design of complex systems in the presence of changing requirements, rapidly evolving technologies, and operational uncertainty continues to be a challenge. Furthermore, the design of future naval platforms must take into account the interoperability of a variety of heterogeneous systems and their role in a larger system-of-systems context. To date, methodologies to address these complex interactions and optimize the system at the macro level have lacked a clear direction and structure and have largely been applied in an ad-hoc fashion. Traditional optimization has centered on individual vehicles with little regard for the impact on the overall system. A key enabler in designing a future connector is the ability to rapidly analyze technologies and perform trade studies using a system-of-systems level approach. The objective of this work is a process that can quantitatively assess the impacts of new capabilities and vessels at the system-of-systems level. This new methodology must be able to investigate diverse, disruptive technologies acting on multiple elements within the system-of-systems architecture. Illustrated through a test case for a Medium Exploratory Connector (MEC), the method must be capable of capturing the complex interactions between elements and the architecture and must be able to assess the impacts of new systems. Following a review of current methods, six gaps were identified, including the need to break the problem into subproblems in order to incorporate a heterogeneous, interacting fleet, dynamic loading, and dynamic routing. For the robust selection of design requirements, analysis must be performed across multiple scenarios, which requires the method to include parametric scenario definition.
The identified gaps are investigated and methods are recommended to address them, enabling overall operational analysis across scenarios. Scenarios are fully defined by a scheduled set of demands, distances between locations, and physical characteristics, all of which can be treated as input variables. Introducing matrix manipulation into discrete event simulations enables the abstraction of sub-processes at an object level and reduces the effort required to integrate new assets. Incorporating these linear algebra principles enables resource management for individual elements and abstraction of decision processes. Although the run time is slightly greater than with traditional if-then formulations, the gain in data handling ability enables the abstraction of the loading and routing algorithms. The loading and routing problems are abstracted, and solution options are developed and compared. Realistic loading of vessels and other assets is needed to capture the cargo delivery capability of the modeled mission. The dynamic loading algorithm is based on the traditional knapsack formulation: a linear program is formulated using the lift and area of the connector as constraints. The schedule of demands from the scenarios supplies additional constraints and the reward equation. Available cargo is distributed among cargo sources, so an assignment-problem formulation is added to the linear program, requiring the cargo selected to load on a single connector to be available from a single load point. Dynamic routing allows a reconfigurable supply chain to maintain robust and flexible operation in response to changing customer demands and operating environment. Algorithms based on vehicle routing and computer packet routing are compared across five operational scenarios, testing the algorithms' ability to route connectors without introducing additional wait time.
Predicting the wait times at interfaces from the connectors en route, combined with reconsidering which interface to use upon arrival, performed consistently, especially when stochastic load times were introduced, and is expandable to large-scale applications. This algorithm selects the quickest load-unload location pairing based on the connectors routed to those locations and the interfaces selected for those connectors. A future connector could have the ability to unload at multiple locations if a single load exceeds the demand at one unload location. The capability for multiple unload locations is treated as a special case in the calculation of the unload location in the routing. To determine the unload locations to visit, a traveling salesman formulation is added to the dynamic loading algorithm: balancing the cost to travel to and unload at locations against the additional cargo that could be delivered, the order and locations to visit are selected. Predicting the workload at load and unload locations to route vessels, with reconsideration to handle disturbances, can accommodate multiple unload locations and yields a robust and flexible routing algorithm. The incorporation of matrix manipulation, dynamic loading, and dynamic routing enables robust investigation of the design requirements for a new connector. The robust process uses shortfall, capturing delay and undelivered cargo, and fuel usage as measures of performance. The design parameters for the MEC, including the number of connectors available and vessel characteristics such as speed and size, were analyzed across four ways of sampling the noise space: a single scenario, a selected number of scenarios, full coverage of the noise space, and the feasible noise space. The feasible noise space is defined using uncertainty around scenarios of interest. The number available, maximum lift, maximum area, and SES speed were consistently design drivers.
There was a trade-off between the number available and size, along with speed. When looking at the feasible space, the relationship between size and number available was strong enough to reverse the preference on number available, toward fewer, larger ships. The secondary design impacts come from factors that directly affect the time per trip, such as the time between repairs and the time to repair. As the noise sampling moved from four scenarios to full coverage to the feasible space, the option to use interfaces was replaced in importance by the time to load at those locations, and the time to unload at the beach gained importance. The change in impact can be attributed to the reduction in the number of needed trips under the feasible space: the four scenarios had higher average demand than the feasible-space sampling, making loading options more important. The selection of the noise sampling thus had an impact on the design requirements selected for the MEC, indicating the importance of developing a method to investigate future naval assets across multiple scenarios at a system-of-systems level.
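The knapsack core of the dynamic loading step can be sketched without the LP machinery. The sketch below is a plain 0/1 knapsack over a single lift constraint, a simplification of the thesis' formulation, which also carries area and assignment constraints; the cargo list and capacity in the test are made-up values.

```python
def load_connector(cargo, max_lift):
    """Select cargo items for one connector by 0/1 knapsack dynamic
    programming over integer lift capacity.  cargo: list of (reward, lift)."""
    best = [0] * (max_lift + 1)
    keep = [[False] * (max_lift + 1) for _ in cargo]
    for i, (reward, lift) in enumerate(cargo):
        # iterate capacity downward so each item is used at most once
        for c in range(max_lift, lift - 1, -1):
            if best[c - lift] + reward > best[c]:
                best[c] = best[c - lift] + reward
                keep[i][c] = True
    # trace back which items were selected
    chosen, c = [], max_lift
    for i in range(len(cargo) - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= cargo[i][1]
    return best[max_lift], chosen
```

In the thesis' setting the reward comes from the scheduled demands, and the extra assignment constraint forces each connector's load to come from a single load point; both are dropped here for clarity.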
568

High precision motion control based on a discrete-time sliding mode approach

Li, Yufeng January 2001
No description available.
569

Gain Scheduled Missile Control Using Robust Loop Shaping / Parameterstyrd missilstyrning med hjälp av robust kretsformning

Johansson, Henrik January 2002
Robust control design has become a major research area during the last twenty years, and there are nowadays several robust design methods available. One example of such a method is the robust loop shaping method developed by K. Glover and D. C. McFarlane in the late 1980s. The idea of this method is to shape the singular values of the loop gain into a desired form with compensators. This step is called loop shaping, and it is followed by a robust stabilization procedure, which aims to give the closed-loop system the largest possible stability margin. In this thesis, the robust loop shaping method is used to design a gain-scheduled controller for a missile. The report consists of three parts: the first part introduces the robust loop shaping theory and a gain scheduling approach; the second part discusses the missile and its characteristics; in the third part a controller is designed and a short analysis of the closed-loop system is performed. A scheduled controller is implemented in a nonlinear environment, in which performance and robustness are tested. Robust loop shaping is easy to use, and simulations show that the resulting controller is able to cope with model perturbations without considerable loss in performance. The missile must be able to operate over a large speed interval, and it is shown that a single controller does not stabilize the missile everywhere. The gain-scheduled controller, however, is able to do so, which is shown by means of simulations.
570

Robust nonlinear control design for a missile using backstepping / Robust olinjär missilstyrning med hjälp av backstepping

Dahlgren, Johan January 2002
This thesis work was performed at SAAB Bofors Dynamics. The purpose was to derive a robust control design for a nonlinear missile using backstepping. A particularly interesting question was how different design choices affect robustness. Backstepping is a relatively new design method for nonlinear systems which leads to globally stabilizing control laws. By making wise decisions in the design, the resulting closed loop can attain significant robustness. The method also makes it possible to benefit from naturally stabilizing aerodynamic forces and moments. It is based on Lyapunov theory, and the control laws and a Lyapunov function are derived simultaneously; this Lyapunov function is used to guarantee stability. In this thesis, the control laws for the missile are first derived using backstepping. The missile dynamics are described with aerodynamic coefficients with corresponding uncertainties. The robustness of the design with respect to the aerodynamic uncertainties is then studied in detail, and one way to analyze how stability is affected by errors in the coefficients is presented. To improve the robustness and remove static errors, dynamics are introduced into the control laws by adding an integrator. One conclusion reached is that it is hard to determine immediately how a certain design choice affects the robustness; instead, it is only once algebraic expressions for the closed-loop system have been obtained that the effects of a given design choice can be analyzed. The designed control laws are evaluated by simulations, which show satisfactory results.
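The recursive construction backstepping uses, stabilize a subsystem with a virtual control and then drive the error between the real state and that virtual control to zero, can be seen on a double integrator. The sketch below is a textbook two-state example, far simpler than the missile dynamics; the gains and initial conditions are assumed values.

```python
def backstepping_sim(x1=1.0, x2=-0.5, k1=2.0, k2=2.0, dt=1e-3, T=8.0):
    """Backstepping regulation of the chain  x1' = x2,  x2' = u.
    The virtual control alpha = -k1*x1 stabilizes x1; the error z = x2 - alpha
    is then driven to zero.  With V = (x1^2 + z^2)/2 the law below gives
    V' = -k1*x1^2 - k2*z^2 <= 0, so V certifies stability."""
    for _ in range(int(T / dt)):
        alpha = -k1 * x1
        z = x2 - alpha
        alpha_dot = -k1 * x2           # derivative of the virtual control
        u = -x1 - k2 * z + alpha_dot   # backstepping control law
        x1 += x2 * dt                  # Euler integration of the plant
        x2 += u * dt
    return x1, x2
```

As the abstract notes, the control law and the Lyapunov function emerge together: each term of `u` is chosen precisely to make the corresponding cross term in V' cancel or become negative.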
