481. Modelling and control of a high performance electro-hydraulic test bench. Xu, Yaozhong, 11 June 2013.
Hydraulic systems are widely used in industry for position or force control. However, because of their nonlinearities, it is difficult to obtain a precise model that remains valid over a large range of frequencies and movements. The work in this dissertation focuses on a high performance hydraulic test bench built around three main hydraulic components: two high performance servovalves, a double-rod actuator, and a specific intermediate block connecting the servovalves to the actuator. The rig was designed for testing aerospace or automotive components under real conditions (e.g. wear and ageing effects). The main objectives of this dissertation are, first, the development of a virtual prototype based on a precise model derived from physical principles and experimental work, and, second, the synthesis of several nonlinear control laws for this actuation system over a large operating range with good robustness to perturbations. The proposed Bond Graph model shows very good agreement with experimental results not only at low frequencies but also at high frequencies, and its high-frequency accuracy is further improved by introducing the dynamic effects of the intermediate block. In addition, multivariable and monovariable control strategies, based respectively on backstepping and on the model-free method, are developed and implemented on the test bench. All the proposed control strategies have been validated by simulations and experiments. Results show that they achieve better tracking precision and robustness than conventional control techniques.
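The abstract does not give the control laws; as a rough illustration of the backstepping idea, here is a minimal sketch for a generic second-order strict-feedback system. The plant nonlinearity f, the gains k1 and k2, and the reference trajectory are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Toy strict-feedback plant: x1' = x2, x2' = f(x) + u
# (a crude stand-in for piston position/velocity dynamics).
def f(x1, x2):
    return -2.0 * x2 * abs(x2) - 5.0 * x1  # assumed nonlinearity

k1, k2, dt = 20.0, 20.0, 1e-4
x1, x2 = 0.0, 0.0
log = []
for i in range(int(2.0 / dt)):
    t = i * dt
    # Reference trajectory and its derivatives (illustrative).
    r, rd, rdd = np.sin(t), np.cos(t), -np.sin(t)
    # Step 1: position error and virtual velocity command.
    z1 = x1 - r
    alpha = rd - k1 * z1           # virtual control for x2
    alphad = rdd - k1 * (x2 - rd)  # time derivative of alpha
    # Step 2: velocity error and actual control law.
    z2 = x2 - alpha
    u = -f(x1, x2) + alphad - z1 - k2 * z2
    # Euler integration of the plant.
    x1 += dt * x2
    x2 += dt * (f(x1, x2) + u)
    log.append((t, z1))

print("final tracking error: %.2e" % log[-1][1])
```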

482. Second-order Least Squares Estimation in Generalized Linear Mixed Models. Li, He, 06 April 2011.
Maximum likelihood is a ubiquitous method for estimating generalized linear mixed models (GLMMs). However, it entails computational difficulties and relies on a normality assumption for the random effects. We propose a second-order least squares (SLS) estimator based on the first two marginal moments of the response variables. The proposed estimator is computationally feasible and requires fewer distributional assumptions than the maximum likelihood estimator. To overcome the numerical difficulty of minimizing an objective function that involves multiple integrals, a simulation-based SLS estimator is also proposed. We show that the SLS estimators are consistent and asymptotically normally distributed under fairly general conditions in the GLMM framework.
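As a rough illustration of the second-order least squares idea (matching the first two marginal moments), here is a toy sketch for a linear random-intercept model, where both moments have closed forms. This is a simplification, not the thesis's GLMM estimator; all names and values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 200, 5                        # subjects, repeated measures
beta0, sig_b, sig_e = 2.0, 1.0, 0.5  # true parameters
b = rng.normal(0, sig_b, size=(n, 1))
y = beta0 + b + rng.normal(0, sig_e, size=(n, m))

def sls_objective(theta):
    beta, log_sb, log_se = theta
    sb2, se2 = np.exp(2 * log_sb), np.exp(2 * log_se)
    # First two marginal moments of the random-intercept model:
    # E[y_ij] = beta,  Cov(y_i) = sb2 * J + se2 * I.
    V = sb2 * np.ones((m, m)) + se2 * np.eye(m)
    resid = y - beta
    # Distance between empirical and model-implied moments.
    q1 = np.sum(resid.mean(axis=0) ** 2)
    S = resid.T @ resid / n
    q2 = np.sum((S - V) ** 2)
    return q1 + q2

fit = minimize(sls_objective, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
beta_hat, sb_hat, se_hat = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
print(beta_hat, sb_hat, se_hat)  # should land near 2.0, 1.0, 0.5
```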
Missing data are almost inevitable in longitudinal studies, and problems arise if the missing-data mechanism is related to the response process. This thesis extends the proposed estimators to handle response data that are missing at random, either by adapting the inverse probability weighting method or by applying multiple imputation.
In practice, some covariates are not directly observed but are measured with error. It is well known that simply substituting a proxy variable for the unobserved covariate generally leads to biased and inconsistent estimates. We propose an instrumental variable method for consistent estimation of GLMMs with covariate measurement error. The proposed approach needs no parametric assumption on the distribution of the unobserved covariates, which makes it less restrictive than methods that either assume a parametric distribution for the covariates or estimate that distribution from extra information.
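For intuition only, a minimal instrumental-variable sketch in a linear setting shows why an instrument removes measurement-error bias (the thesis treats GLMMs; the variable names and noise levels below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 10_000, 1.5
x_true = rng.normal(size=n)          # unobserved covariate
w = x_true + rng.normal(0, 0.8, n)   # error-prone proxy
z = x_true + rng.normal(0, 0.8, n)   # instrument: correlated with
                                     # x_true, independent error
y = beta * x_true + rng.normal(0, 1, n)

# Naive regression of y on the proxy w: attenuated estimate.
beta_naive = (w @ y) / (w @ w)
# IV estimate: cov(z, y) / cov(z, w) is consistent for beta.
beta_iv = (z @ y) / (z @ w)
print(beta_naive, beta_iv)  # roughly 0.9 (biased) vs 1.5
```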
In the presence of outliers, there is a concern that the SLS estimators may be vulnerable because they use second-order moments. We investigate the robustness of the SLS estimators through their influence functions and show that the proposed estimators have a bounded influence function and a redescending property, so they are robust to outliers.
The finite-sample performance and properties of the SLS estimators are studied and compared with other popular estimators in the literature through simulation studies and real-world data examples.

483. Robust routing optimization in resilient networks: polyhedral model and complexity issues. Zotkiewicz, Mateusz, 04 January 2011.
This thesis considers robust routing design problems in resilient networks. The first part discusses the computational complexity of such problems. The following cases are considered:
- path protection and path restoration
- failure-dependent and failure-independent restoration
- cases with and without stub-release
- single-link failures and multiple-link failures (shared risk link groups)
- non-bifurcated (unsplittable) and bifurcated flows

For each of the related optimization cases, a mixed-integer programming formulation (in the non-bifurcated cases) or a linear programming formulation (in all bifurcated cases) is presented, and its computational complexity is investigated. For the NP-hard cases, original NP-hardness proofs are provided; for the polynomial cases, compact linear programming formulations (which prove polynomiality) are discussed. Pricing problems related to each of the NP-hard problems are also discussed. The second part of the thesis deals with various routing strategies in networks where uncertainty is modeled polyhedrally. In such networks two extremes are possible: the simplest strategy to implement, and simultaneously the least effective, is robust stable routing, while the most effective strategy, dynamic routing, is virtually impossible to implement in real-world networks. The major aim of this part of the thesis is therefore to present novel routing strategies that merge the simplicity of robust stable routing with the efficiency of dynamic routing. A sketch of the stable-routing extreme is given below.
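As a minimal sketch of robust stable routing under a polyhedral demand set (a toy three-node instance with assumed capacities, demand polytope, and path lists, none of it from the thesis): one fixed flow split must serve every demand vector in the polytope, and since link loads are linear in the demands, it suffices to enforce capacity at the polytope's vertices.

```python
import numpy as np
from scipy.optimize import linprog

# Triangle network 1-2-3, undirected links with capacities.
cap = {"12": 10.0, "13": 10.0, "23": 10.0}
# Demand polytope for d12 (node 1 to 2) and d13 (node 1 to 3):
# d12 <= 8, d13 <= 8, d12 + d13 <= 12. Its nonzero vertices:
vertices = [(8.0, 4.0), (4.0, 8.0), (8.0, 0.0), (0.0, 8.0)]

# Variables x = [a12, a13, t]: a12 = fraction of d12 on the direct
# path 1-2 (rest on 1-3-2), a13 = fraction of d13 on 1-3 (rest on
# 1-2-3), t = worst-case link utilization to be minimized.
A_ub, b_ub = [], []
for d12, d13 in vertices:
    # load(1-2) = a12*d12 + (1-a13)*d13 <= t*c12
    A_ub.append([d12, -d13, -cap["12"]]); b_ub.append(-d13)
    # load(1-3) = (1-a12)*d12 + a13*d13 <= t*c13
    A_ub.append([-d12, d13, -cap["13"]]); b_ub.append(-d12)
    # load(2-3) = (1-a12)*d12 + (1-a13)*d13 <= t*c23
    A_ub.append([-d12, -d13, -cap["23"]]); b_ub.append(-d12 - d13)

res = linprog(c=[0, 0, 1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1), (0, 1), (0, None)])
a12, a13, t = res.x
print(f"splits a12={a12:.2f}, a13={a13:.2f}; worst-case utilization {t:.2f}")
```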

484. Mechatronic design under uncertainties. Zhang, Kai, 22 October 2013.
Flexible structures are increasingly used in applications such as aerospace and automotive systems. Since they are lightly damped and susceptible to vibrations, active vibration control is desirable. In practice, in addition to achieving effective vibration reduction, one must also consider the required control energy (to avoid energy insufficiency) and the control input (to avoid control saturation and reduce the effects of measurement noise). Moreover, because flexible structures have an infinite number of resonant modes and only the first few can be included in the system model and controller design, there are always neglected high-frequency dynamics, which can induce spillover instability. Parametric uncertainties on the modal parameters can further degrade control performance and even destabilize the closed-loop system. In this context, a quantitative robust control methodology for active vibration control of flexible structures is proposed in this thesis. Phase and gain control policies are first proposed to enforce frequency-dependent phase and gain requirements on the controller, realized through output-feedback H∞ control design. H∞ control based on these phase and gain control policies can trade off the complete set of control objectives and yields a qualitatively robust controller. In particular, LPV H∞ control is used to reduce the required control energy for LPV systems. The generalized polynomial chaos (gPC) framework, combined with finite element analysis, is employed for uncertainty quantification; it allows the effects of structural property uncertainties on the natural frequencies to be investigated and their probabilistic description to be obtained. Then, in the presence of parametric and dynamic uncertainties, μ/ν analysis and a randomized algorithm based on the Monte Carlo method are used to quantitatively ensure closed-loop stability and performance robustness in both deterministic and probabilistic senses. The proposed quantitative robust control methodology thus combines techniques from automatic control and mechanical engineering, reducing the gap between them for robust vibration control of flexible structures. Its effectiveness is verified by numerical simulations and by experimental validation on LTI and LPV non-collocated piezoelectric cantilever beams.
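As a minimal illustration of the uncertainty-quantification step (not the thesis's finite element model): a single degree-of-freedom oscillator with an assumed Gaussian stiffness, propagated with probabilists' Gauss-Hermite quadrature, the collocation rule that underlies gPC for Gaussian inputs.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Single-DOF oscillator: natural frequency f = sqrt(k/m) / (2*pi).
m = 2.0                       # mass [kg], assumed
k_mean, k_std = 8.0e4, 4.0e3  # stiffness [N/m], assumed Gaussian

def freq(xi):                 # xi is a standard normal germ
    k = k_mean + k_std * xi
    return np.sqrt(k / m) / (2 * np.pi)

# Probabilists' Gauss-Hermite rule integrates against exp(-x^2/2);
# dividing by sqrt(2*pi) turns it into an expectation over N(0,1).
nodes, weights = hermegauss(8)
w = weights / np.sqrt(2 * np.pi)
mean_f = np.sum(w * freq(nodes))
var_f = np.sum(w * (freq(nodes) - mean_f) ** 2)
print(f"quadrature: mean {mean_f:.2f} Hz, std {np.sqrt(var_f):.3f} Hz")

# Monte Carlo check.
xi = np.random.default_rng(0).normal(size=200_000)
print(f"Monte Carlo: mean {freq(xi).mean():.2f} Hz, std {freq(xi).std():.3f} Hz")
```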

486. The mechanistic interpretation of communities and ecosystems (L'interprétation mécaniste des communautés et des écosystèmes). Degré, Karl, 01 1900.
The concepts of ecosystem and community are central to ecological explanations and to several environmental debates. However, since their introduction, the ontological status of these concepts has been controversial (that is, does an ecosystem have an existence of its own, over and above the existence of its constituent parts?). While some favor an epistemic interpretation of these concepts, others maintain that these ecological units are genuine parts of the natural world. In this thesis, I argue that the best way to approach this ontological question is to understand communities and ecosystems as mechanisms. More precisely, I propose that ecosystems and communities are sets of entities and activities organized so as to regularly exhibit precise phenomena (Machamer, Darden and Craver 2000). The entities are the components of these mechanisms (e.g., species, ecological niches), while the activities are the causal relations that produce changes (e.g., photosynthesis, predation). Finally, the phenomena these mechanisms produce are the emergent properties specific to the level of organization of communities and ecosystems (e.g., pH, biomass). Using the manipulationist approach developed by James Woodward (2003, 2004, 2010), I argue that it is possible to identify the causal components of ecosystems and communities as well as their relations to one another. Since communities and ecosystems can be manipulated empirically and counterfactually in various ways (Penn 2003; Swenson and Wilson 2000), I conclude that we can robustly assert (Wimsatt 2007) that these entities really exist.

487. Including severe uncertainty into environmentally benign life cycle design using information-gap decision theory. Duncan, Scott Joseph, 15 January 2008.
Due to increasing interest in sustainable development, today's engineer is often tasked with designing systems that are environmentally benign over their entire life cycles. Unfortunately, environmental assessments commonly suffer from significant uncertainty due to lack of information, particularly for time-distant life cycle aspects. Under severe uncertainty, traditional uncertainty formalisms require more information than is available. However, a recently devised formalism, information-gap decision theory (IGDT), requires no more than a nominal estimate; error bounds on that estimate are left unknown. The IGDT decision strategy accordingly favors the design that is robust to the most estimation error while still guaranteeing no worse than some "good enough" critical level of performance. In some cases, IGDT can identify a preferable design option without requiring more information or more complex uncertainty analysis.
In this dissertation, IGDT is investigated and shown to enhance decision support for environmentally benign design and manufacturing (EBDM) problems. First, the applicability of the theory to EBDM problems is characterized: conditions that warrant an info-gap analysis are reviewed, the insight it can reveal about design robustness is demonstrated, and practical limitations to its use are identified. Second, a new mathematical technique is presented that expands the capability to analyze robustness to multiple info-gap uncertainties simultaneously; the technique elicits scaling factors more rigorously than before and allows one to express imprecise beliefs about info-gap scaling. Two example problems affected by info-gaps are investigated: oil filter selection and remanufacturing process selection. It is shown that limited information about uncertainty can, in some cases, indeed enable one to identify a most preferable design without requiring more information.
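A minimal numeric sketch of the info-gap robustness calculation (the two designs, the nominal estimate, and the critical level are invented for illustration; the dissertation's oil filter and remanufacturing models are not reproduced here): robustness is the largest uncertainty horizon alpha such that the worst case within that horizon still meets the critical performance level.

```python
import numpy as np

# Two hypothetical designs whose performance depends on an uncertain
# parameter u with nominal estimate u0 but unknown error bounds.
u0 = 10.0
designs = {
    "A": lambda u: 120.0 - 4.5 * u,  # higher nominal performance
    "B": lambda u: 90.0 - 2.0 * u,   # less sensitive to u
}
r_crit = 60.0  # "good enough" critical performance level

def robustness(perf, alphas=np.linspace(0, 1, 2001)):
    """Largest horizon alpha with worst-case performance >= r_crit,
    using a fractional-error info-gap model u in [u0(1-a), u0(1+a)]."""
    best = 0.0
    for a in alphas:
        lo, hi = u0 * (1 - a), u0 * (1 + a)
        worst = min(perf(lo), perf(hi))  # perf is monotone in u here
        if worst >= r_crit:
            best = a
        else:
            break
    return best

for name, perf in designs.items():
    print(name, "nominal:", perf(u0), "robustness:", robustness(perf))
# A wins on the nominal estimate, but B tolerates more estimation error.
```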

488. A methodology for the robustness-based evaluation of systems-of-systems alternatives using regret analysis. Poole, Benjamin Hancock, 01 July 2008.
After surveying the state of the art in the evaluation of alternatives in the defense acquisition process, a methodology for evaluating the robustness of systems-of-systems alternatives is proposed. In this methodology, robustness is defined as the integral of an alternative's regret over the likelihood-weighted plausible scenario space. Surrogate modeling techniques are used to overcome shortcomings of conventional regret analysis, including the discrete nature of scenario cases and static results. The new methodology, called Global Regret Analysis, was tested on an example problem based on the air campaign over Iraq in Operation Desert Storm. The results indicate that the methodology can measure the robustness of different systems-of-systems alternatives against a wide range of possible scenarios. The methodology was then demonstrated on the US Air Force's persistent, precision strike mission; the demonstration showed the ability of Global Regret Analysis to overcome the problems of evaluating systems-of-systems alternatives with a single scenario or a small number of scenarios. The methodology was also compared to a variety of existing methods and found to have strengths for a wide range of evaluation applications. The possibility of applying Global Regret Analysis to military mission planning and opportunities for future work are also discussed.
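A minimal sketch of the core metric as the abstract defines it: likelihood-weighted regret integrated over a scenario space, here estimated by averaging over Monte Carlo samples. The scenario distribution, the performance models, and the alternative names are all invented stand-ins for the surrogate models.

```python
import numpy as np

rng = np.random.default_rng(0)
# Sample a 2-D scenario space (e.g. threat density, sortie range);
# the likelihood weighting is the sampling distribution itself.
scenarios = rng.normal(loc=[1.0, 0.5], scale=[0.3, 0.2], size=(50_000, 2))

# Hypothetical performance (higher is better) of three alternatives,
# standing in for surrogate models fit to campaign simulations.
def perf(alt, s):
    threat, reach = s[:, 0], s[:, 1]
    return {
        "alt1": 10.0 - 4.0 * threat + 1.0 * reach,
        "alt2": 9.0 - 2.0 * threat + 0.5 * reach,
        "alt3": 8.0 - 1.0 * threat - 0.5 * reach,
    }[alt]

alts = ["alt1", "alt2", "alt3"]
P = np.stack([perf(a, scenarios) for a in alts])  # shape (3, N)
regret = P.max(axis=0) - P                        # per-scenario regret
global_regret = regret.mean(axis=1)               # MC estimate of the
                                                  # likelihood-weighted integral
for a, g in zip(alts, global_regret):
    print(f"{a}: expected regret {g:.3f}")
# The most robust alternative minimizes the global (expected) regret.
```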

489. An ensemble speaker and speaking environment modeling approach to robust speech recognition. Tsao, Yu, 18 November 2008.
In this study, an ensemble speaker and speaking environment modeling (ESSEM) approach is proposed to characterize environments and thereby enhance the robustness of automatic speech recognition (ASR) systems under adverse conditions. The ESSEM process comprises two stages: an offline phase and an online phase. In the offline phase, we prepare an ensemble speaker and speaking environment space formed by a collection of super-vectors; each super-vector consists of the entire set of means from all Gaussian mixture components of a set of hidden Markov models that characterizes a particular environment. In the online phase, using the ensemble environment space prepared offline, we estimate the super-vector for a new testing environment based on a stochastic matching criterion (a toy version of this online step is sketched below). A series of techniques is proposed to further improve the original ESSEM approach in both phases. For the offline phase, we focus on methods that enhance the construction and coverage of the environment space: we first present environment clustering and environment partitioning algorithms to structure the environment space, and we then propose a discriminative training algorithm to enhance discrimination across environment super-vectors and thereby broaden the coverage of the ensemble environment space. For the online phase, we study methods that increase the efficiency and precision of estimating the target super-vector for the testing condition. To enhance efficiency, we incorporate dimensionality-reduction techniques to reduce the complexity of the original environment space. To improve precision, we first study different forms of mapping function and propose a weighted N-best information technique; we then propose cohort selection, environment space adaptation, and multiple cluster matching algorithms to facilitate environment characterization. We evaluate the proposed ESSEM framework on the Aurora-2 connected digit recognition task. Experimental results verify that the original ESSEM approach already provides a clear improvement over a baseline system without environment compensation, and the performance of ESSEM can be further enhanced by the proposed offline and online algorithms: with the optimal offline and online configuration, ESSEM achieves a significant 16.08% word error rate reduction over our best baseline system on the Aurora-2 task.
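A toy sketch of the online super-vector estimation (heavily simplified: one observed mean per mixture component via hard frame-to-component assignments, and a plain least-squares stand-in for the maximum-likelihood stochastic matching in the thesis; the dimensions and data are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
E, D = 6, 8  # ensemble environments, super-vector dimension
# Offline: ensemble of environment super-vectors (stacked Gaussian
# means), here random stand-ins for HMM means trained per environment.
S = rng.normal(size=(E, D))

# Unseen test environment: a hidden mixture of ensemble environments.
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0])
s_test = w_true @ S

# Online: frames hard-assigned to components; per-component sample
# means form an observed (noisy) super-vector.
frames_per_comp = 50
obs = s_test + rng.normal(0, 0.2, size=(frames_per_comp, D)).mean(0)

# Estimate combination weights by least squares, then reconstruct
# the target super-vector as a point in the ensemble space.
w_hat, *_ = np.linalg.lstsq(S.T, obs, rcond=None)
s_hat = w_hat @ S
print("weight error:", np.abs(w_hat - w_true).max())
print("super-vector error:", np.abs(s_hat - s_test).max())
```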

490. The design and implementation of a robust, cost-conscious peer-to-peer lookup service. Harvesf, Cyrus Mehrabaun, 17 November 2008.
Peer-to-peer (p2p) technology provides an excellent platform for the delivery of rich content and media that scales with the rapid growth of the Internet. This work presents a lookup service design and implementation that provides provable fault tolerance and operates in a cost-conscious manner over the Internet.
Using a distributed hash table (DHT) as a foundation, we propose a replica placement that improves object availability and reachability to implement a robust lookup service. We present a framework that describes tree-based routing DHTs and formally prove several properties for DHTs of this type. Specifically, we prove that our replica placement, which we call MaxDisjoint, creates a provable number of disjoint routes from any source node to a replica set. We evaluate this technique through simulation and demonstrate that it creates disjoint routes more effectively than existing replica placements. Furthermore, we show that disjoint routes have a marked impact on routing robustness, which we measure as the probability of lookup success.
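For intuition, here is a toy model of prefix-matching (tree-based) routing in which replicas whose identifiers have pairwise distinct first digits yield routes that share no intermediate node, which is the flavor of guarantee MaxDisjoint formalizes. The id scheme and route model below are simplified assumptions, not the dissertation's framework.

```python
from itertools import combinations

DIGITS = 4  # base-4 ids of 4 digits

def route(src: str, dst: str):
    """Idealized prefix-correcting route: hop i agrees with dst on the
    first i digits and with src on the rest (Pastry/Tapestry style)."""
    hops, cur = [src], src
    for i in range(DIGITS):
        nxt = dst[: i + 1] + src[i + 1 :]
        if nxt != cur:
            hops.append(nxt)
            cur = nxt
    return hops

src = "3021"
# Replicas chosen so their first digits are pairwise distinct,
# a MaxDisjoint-like placement in this toy model.
replicas = ["0312", "1200", "2033"]

routes = {r: route(src, r) for r in replicas}
for r, h in routes.items():
    print(r, " -> ".join(h))
for a, b in combinations(replicas, 2):
    shared = set(routes[a][1:-1]) & set(routes[b][1:-1])
    print(f"intermediates shared by routes to {a} and {b}: {shared or 'none'}")
```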
To mitigate the costs incurred by multi-hop DHT routing, we develop an organization-based id assignment scheme that bounds the transit costs of prefix-matching routes. To further reduce costs, we use MaxDisjoint placement to create multiple routes of varying costs. This technique reduces cost in two ways: (1) replication may create local copies of an object that can be accessed at zero transit cost, and (2) MaxDisjoint replication creates multiple, bounded-cost, disjoint routes, of which the minimal-cost route can be used to resolve the lookup. We model the trade-off between the storage cost and the routing-cost benefit of replication to find the optimal degree to which an object should be replicated. We evaluate our approach using a lookup service implementation and show that it dramatically reduces cost over existing DHT implementations. Furthermore, we show that our technique can manage objects of varying popularity more cost-effectively than caching.
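A back-of-the-envelope sketch of the storage-versus-routing trade-off: the cost model is invented (a per-replica storage cost, a lookup rate, and lookups resolved via the cheapest of k independently priced routes), but it shows how an optimal replication degree emerges.

```python
import numpy as np

storage_cost = 2.0     # cost per replica stored (assumed units)
lookup_rate = 500.0    # lookups per object per billing period
max_route_cost = 0.05  # transit cost of a single route, worst case

def expected_cost(k, n_mc=200_000, seed=0):
    """Total cost with k replicas: storage plus expected transit of the
    cheapest of k routes with uniform(0, max_route_cost) costs."""
    rng = np.random.default_rng(seed)
    route_costs = rng.uniform(0, max_route_cost, size=(n_mc, k))
    return storage_cost * k + lookup_rate * route_costs.min(axis=1).mean()

costs = {k: expected_cost(k) for k in range(1, 11)}
for k, c in costs.items():
    print(k, round(c, 2))
print("optimal replication degree:", min(costs, key=costs.get))
```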
By improving its robustness and cost effectiveness, we aim to increase the pervasiveness of p2p in practice and unlock the potential of this powerful technology.