651

[en] ROBUST REGULATION OF A MONOPOLIST / [pt] REGULAÇÃO ROBUSTA DE UM MONOPOLISTA

CARLOS ANTONIO BURGA IDROGO 08 January 2016 (has links)
[en] This work studies the problem of a regulator who faces a monopolist with unknown costs. Unlike previous studies, we drop the strong assumption that the regulator knows the true probability distribution of the monopolist's costs. Instead, we assume that the regulator holds a prior distribution and that his uncertainty is represented by the set of mean-preserving spreads of this prior. The regulator is uncertainty averse, i.e., he maximizes expected social welfare under the worst distribution in this set. Optimal regulation is state dependent and guarantees that expected social welfare is not affected by the regulator's uncertainty. The regulator cannot give incentives as strong as those given when the distribution is known, which means that the concern for robustness reduces the power of contracts.
652

Cross entropy-based analysis of spacecraft control systems

Mujumdar, Anusha Pradeep January 2016 (has links)
Space missions increasingly require sophisticated guidance, navigation and control algorithms, whose development relies on verification and validation (V&V) techniques to ensure mission safety and success. A crucial element of V&V is the assessment of control system robust performance in the presence of uncertainty. In addition to estimating average performance under uncertainty, it is critical to determine the worst case performance. Industrial V&V approaches typically employ mu-analysis in the early control design stages, and Monte Carlo simulations on high-fidelity full engineering simulators at advanced stages of the design cycle. While highly capable, such techniques present a critical gap between the pessimistic worst case estimates found using analytical methods and the optimistic outlook often presented by Monte Carlo runs. Conservative worst case estimates are problematic because they can demand a controller redesign, which is not justified if the poor performance is unlikely to occur. Gaining insight into the probability associated with the worst case performance is valuable in bridging this gap. Due to the complexity of industrial-scale systems, V&V techniques must be capable of efficiently analysing non-linear models in the presence of significant uncertainty, and they must be computationally tractable. It is desirable that such techniques demand little engineering effort before each analysis, so that they can be applied widely in industrial systems. Motivated by these factors, this thesis proposes and develops an efficient algorithm based on the cross entropy simulation method. The proposed algorithm efficiently estimates the probabilities associated with various performance levels, from nominal performance up to degraded performance values, resulting in a curve of probabilities associated with the various performance values.
Such a curve is termed the probability profile of performance (PPoP), and is introduced as a tool that offers insight into a control system's performance, principally the probability associated with the worst case performance. The cross entropy-based robust performance analysis is implemented here on various industrial systems in European Space Agency-funded research projects. The implementation on autonomous rendezvous and docking models for the Mars Sample Return mission constitutes the core of the thesis. The proposed technique is implemented on high-fidelity models of the Vega launcher, as well as on a generic long coasting launcher upper stage. In summary, this thesis (a) develops an algorithm based on the cross entropy simulation method to estimate the probability associated with the worst case, (b) proposes the cross entropy-based PPoP tool to gain insight into system performance, (c) presents results of the robust performance analysis of three space industry systems using the proposed technique in conjunction with existing methods, and (d) proposes an integrated template for conducting robust performance analysis of linearised aerospace systems.
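As a hedged illustration of the core idea, the cross entropy simulation method the abstract builds on can be sketched for a one-dimensional performance model. The Gaussian sampling family, all numeric parameters, and the scalar `perf` function below are illustrative assumptions, not taken from the thesis:

```python
import math
import random

def cross_entropy_tail_prob(perf, gamma, n=2000, rho=0.1, iters=20, seed=0):
    """Estimate P(perf(X) >= gamma) for X ~ N(0, 1) with the cross entropy
    (CE) method: tilt the sampling density within the Gaussian family toward
    the rare region, then return an importance-sampling estimate."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        xs.sort(key=perf)
        elite = xs[int((1 - rho) * n):]           # best rho fraction
        level = perf(elite[0])                    # elite performance threshold
        mu = sum(elite) / len(elite)              # CE update = MLE on elites
        var = sum((x - mu) ** 2 for x in elite) / len(elite)
        sigma = max(0.3, var ** 0.5)              # floor avoids degeneracy
        if level >= gamma:                        # elites already reach gamma
            break
    def nominal(x):
        return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    def tilted(x):
        return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    xs = [rng.gauss(mu, sigma) for _ in range(n)]
    # likelihood-ratio weighted fraction of final samples beyond gamma
    return sum(nominal(x) / tilted(x) for x in xs if perf(x) >= gamma) / n
```

For `perf(x) = x` and `gamma = 3`, the true tail probability is about 1.35e-3; plain Monte Carlo with 2000 samples would see only a handful of hits, while the tilted sampler concentrates nearly all samples in the region of interest. Sweeping `gamma` over a grid of performance levels would yield a curve of probabilities in the spirit of the PPoP described above.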
653

Využití numerické lineární algebry k urychlení výpočtu odhadů MCD / Exploiting numerical linear algebra to accelerate the computation of the MCD estimator

Sommerová, Kristýna January 2018 (has links)
This work deals with speeding up the computation of the MCD estimator for the mean and the covariance matrix of normally distributed multivariate data contaminated with outliers. First, the main idea of the estimator and its well-known approximation by the FastMCD algorithm are discussed. The main focus is on possibilities for speeding up the iteration step known as the C-step while maintaining the quality of the estimates. This proved to be problematic, if not impossible. The work therefore aims at creating a new implementation based on the C-step and the Jacobi method for eigenvalues. The proposed JacobiMCD algorithm is compared to FastMCD in terms of floating-point operation count and results. In conclusion, JacobiMCD is not fully equivalent to FastMCD, but it hints at the possibility of its use on larger problems. The numerical experiments suggest that the computation can indeed be quicker by an order of magnitude, while the quality of the results is close to that of FastMCD in some settings.
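The C-step at the heart of FastMCD can be sketched for bivariate data as follows. The 2x2 pure-Python linear algebra, the data dimensions, and the stopping test are illustrative assumptions, not the thesis's implementation; the key property is that each C-step never increases the determinant of the subset covariance:

```python
import random

def cov2(pts):
    """Mean and 2x2 covariance (MLE) of a list of (x, y) points."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts) / n
    syy = sum((p[1] - my) ** 2 for p in pts) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
    return (mx, my), (sxx, sxy, syy)

def c_step(data, subset, h):
    """One C-step: fit mean/cov on the subset, keep the h points of smallest
    Mahalanobis distance. Returns (det of current fit, new subset)."""
    (mx, my), (sxx, sxy, syy) = cov2([data[i] for i in subset])
    det = sxx * syy - sxy * sxy
    d = []
    for i, (x, y) in enumerate(data):
        dx, dy = x - mx, y - my
        # squared Mahalanobis distance via the explicit 2x2 inverse
        d.append(((syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det, i))
    d.sort()
    return det, [i for _, i in d[:h]]

def fast_mcd_sketch(data, h, iters=20, seed=1):
    """Iterate C-steps from one random start until the determinant stops
    decreasing (FastMCD uses many starts; one suffices for a sketch)."""
    rng = random.Random(seed)
    subset = rng.sample(range(len(data)), h)
    dets = []
    for _ in range(iters):
        det, subset = c_step(data, subset, h)
        dets.append(det)
        if len(dets) > 1 and dets[-1] >= dets[-2] - 1e-12:
            break
    return subset, dets
```

On data with a tight cluster of distant outliers, the converged subset drops the outliers and the recorded determinants form a non-increasing sequence, which is the monotonicity property the C-step is known for.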
654

A Unified Robust Minimax Framework for Regularized Learning Problems

Zhou, Hongbo 01 May 2014 (has links)
Regularization techniques have become a principled tool for model-based statistics and artificial intelligence research. However, in most situations, these regularization terms are not well interpreted, especially in how they relate to the loss function and the data matrix in a given statistical model. In this work, we propose a robust minimax formulation to interpret the relationship between the data and the regularization terms for a large class of loss functions. We show that various regularization terms essentially correspond to different distortions of the original data matrix. This supplies a unified framework for understanding various existing regularization terms, designing novel regularization terms based on perturbation analysis techniques, and inspiring novel generic algorithms. To show how to apply minimax-related concepts to real-world learning tasks, we develop a new fault-tolerant classification framework to combat class noise in general multi-class classification problems; further, by studying the relationship between the majorizable function class and the minimax framework, we develop an accurate, efficient, and scalable algorithm for solving a large family of learning formulations. In addition, this work has been extended to tackle several important matrix-decomposition-related learning tasks, and we have validated it on various real-world applications, including structure-from-motion (with missing data) and latent structure dictionary learning. This work, composed of a unified formulation, a scalable algorithm, and promising applications to many real-world learning problems, contributes to the understanding of the hidden robustness in many learning models. As we show, many classical statistical machine learning models can be unified using this formulation, and accurate, efficient, and scalable algorithms become available from our research.
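The minimax reading of regularization can be illustrated with a known identity (not code from the dissertation; the notation is an assumption): the worst-case least-squares residual over a Frobenius-norm-bounded perturbation of the data matrix equals a ridge-like penalized residual, max over ||D||_F <= rho of ||y - (X + D)w|| = ||y - Xw|| + rho*||w||, attained by a rank-one perturbation. A minimal numerical check:

```python
import math
import random

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def worst_case_residual(X, y, w, rho):
    """Evaluate ||y - (X + D*)w|| at the maximizing rank-one perturbation
    D* = -rho * u v^T with u = r/||r||, v = w/||w||, r = y - Xw.
    The value equals ||r|| + rho*||w||, i.e. a ridge-penalized residual."""
    r = [yi - xi for yi, xi in zip(y, matvec(X, w))]
    u = [ri / norm(r) for ri in r]
    v = [wi / norm(w) for wi in w]
    D = [[-rho * ui * vj for vj in v] for ui in u]
    pert = [[x + d for x, d in zip(rx, dx)] for rx, dx in zip(X, D)]
    return norm([yi - pi for yi, pi in zip(y, matvec(pert, w))])
```

Any other perturbation of the same Frobenius norm yields a residual no larger, which is exactly the sense in which the penalty term "is" a distortion of the data matrix.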
655

[en] DESIGN AND ROBUST CONTROL OF A SELF-BALANCING PERSONAL ROBOTIC TRANSPORTER VEHICLE / [pt] PROJETO E CONTROLE ROBUSTO DE UM TRANSPORTADOR PESSOAL ROBÓTICO AUTO EQUILIBRANTE

CESAR RAUL MAMANI CHOQUEHUANCA 07 April 2011 (has links)
[en] A Self-Balancing Personal Transporter (SBPT) is a robotic platform with two wheels that operates from the balance of the individual who rides it, resembling the classic inverted pendulum. In this thesis, an SBPT is designed, built and controlled. Among the features of the developed SBPT are relatively high speed, agility, a compact aluminum structure, zero turning radius, and high load capacity when compared to other SBPTs on the market. Unlike traditional motor vehicles, the SBPT uses electric power, so it produces no pollutant emissions and no noise pollution. It is powered by two DC motors with output powers between 0.7 HP and 1.6 HP. To measure the tilt angle and its rate of change, a three-axis accelerometer and a single-axis gyroscope are used. Turning commands are sent to the SBPT through a slide potentiometer attached to the handlebars. Kane's method is used to obtain the system's dynamic equations, which are then used in Matlab simulations. The controller, programmed in eLua, reads the signals from the accelerometer, gyroscope and slide potentiometer, processes them, and sends PWM output signals to the speed controllers of the drive motors. This thesis studies three control implementations: PID, Fuzzy and Robust control, with the tilt-angle error and its rate of change as control variables. It is found that the Fuzzy and Robust controllers are more efficient than the PID at stabilizing the system on inclined planes and rough terrain.
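As a hedged sketch of the PID variant, consider a linearized tilt model theta'' = a*theta + b*u under a discrete PID on the tilt-angle error. The gains, plant constants, and Euler integration below are illustrative assumptions; the thesis implements its controllers in eLua on embedded hardware:

```python
def simulate_pid(kp=50.0, ki=20.0, kd=10.0, a=10.0, b=1.0,
                 theta0=0.1, dt=0.01, steps=1000):
    """Euler simulation of an unstable linearized tilt model theta'' =
    a*theta + b*u stabilized by a discrete PID on the error (setpoint 0)."""
    theta, omega = theta0, 0.0       # tilt angle and angular rate
    integ, prev_e = 0.0, -theta0
    history = []
    for _ in range(steps):
        e = -theta                   # error = setpoint(0) - tilt angle
        integ += e * dt
        deriv = (e - prev_e) / dt    # finite-difference derivative term
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        alpha = a * theta + b * u    # angular acceleration of the plant
        omega += alpha * dt
        theta += omega * dt
        history.append(theta)
    return history
```

With these assumed gains the closed-loop characteristic polynomial s^3 + kd*s^2 + (kp - a)*s + ki is Hurwitz, so a small initial tilt decays toward zero without large excursions.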
656

Une méthode d'optimisation hybride pour une évaluation robuste de requêtes / A Hybrid Method to Robust Query Processing

Moumen, Chiraz 29 May 2017 (has links)
The quality of an execution plan generated by a query optimizer is highly dependent on the quality of the estimates produced by the cost model. Unfortunately, these estimates are often imprecise. A body of work has been done to improve estimate accuracy; however, obtaining accurate estimates remains very challenging, since it requires prior and detailed knowledge of the data properties and run-time characteristics. Motivated by this issue, two main optimization approaches have been proposed. The first relies on single-point estimates to choose an optimal execution plan. At run-time, statistics are collected and compared with the estimates; if an estimation error is detected, a re-optimization is triggered for the rest of the plan. At each invocation, the optimizer uses specific values for the parameters required for cost calculations. This approach can thus induce several plan re-optimizations, resulting in poor performance. To avoid this, a second approach considers the possibility of estimation errors at optimization time. This is modelled by the use of multi-point estimates for each error-prone parameter, the aim being to anticipate the reaction to a possible plan sub-optimality. Methods in this approach seek to generate robust plans, which are able to provide good performance under several run-time conditions. These methods often assume that it is possible to find a robust plan for all expected run-time conditions; this assumption remains unjustified. Moreover, the majority of these methods keep an execution plan unchanged until termination, which can lead to poor performance if robustness is violated at run-time. Based on these findings, we propose in this thesis a hybrid optimization method with two objectives: the production of robust execution plans, particularly when the uncertainty in the estimates is high, and the correction of robustness violations during execution. The method makes use of intervals of estimates around error-prone parameters to produce execution plans that are likely to perform reasonably well under different run-time conditions, so-called robust plans. Robust plans are then augmented with what we call check-decide operators. These operators collect statistics at run-time and check the robustness of the current plan. If robustness is violated, check-decide operators are able to decide on modifications to the rest of the plan that correct the violation, without needing to recall the optimizer. The results of performance studies of our method indicate that it provides significant improvements in the robustness of query processing.
657

Robust multivariate mixture regression models

Li, Xiongya January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Weixing Song / In this dissertation, we propose a new robust estimation procedure for two multivariate mixture regression models and apply this novel method to functional mapping of dynamic traits. In the first part, a robust estimation procedure for the mixture of classical multivariate linear regression models is discussed, assuming that the error terms follow a multivariate Laplace distribution. An EM algorithm is developed based on the fact that the multivariate Laplace distribution is a scale mixture of the multivariate standard normal distribution. The performance of the proposed algorithm is thoroughly evaluated by simulation and comparison studies. In the second part, a similar idea is extended to the mixture of linear mixed regression models, assuming that the random effect and the regression error jointly follow a multivariate Laplace distribution. Simulation studies indicate that the finite-sample performance of the proposed estimation procedure outperforms, or is at least comparable to, that of the existing robust t procedure in the literature. Unlike the t procedure, there is no need to determine the degrees of freedom, so the new robust estimation procedure is computationally more efficient. The ascent property of both EM algorithms is also proved. In the third part, the proposed robust method is applied to identify quantitative trait loci (QTL) within a functional mapping framework for dynamic traits of agricultural or biomedical interest. A robust multivariate Laplace mapping framework is proposed to replace the normality assumption. Simulation studies show that the proposed method is comparable to the robust multivariate t-distribution approach developed in the literature and outperforms the normal procedure. As an illustration, the proposed method is also applied to a real data set.
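The scale-mixture fact the EM algorithm rests on can be sketched as a sampler: a multivariate Laplace vector is a Gaussian vector rescaled by the square root of an exponential mixing variable. The unit-mean exponential convention below is an assumption (conventions for the multivariate Laplace vary in the literature); with E[W] = 1 the covariance of X = sqrt(W) * A Z is A A^T:

```python
import math
import random

def sample_mv_laplace(A, n, seed=0):
    """Draw n samples of a multivariate Laplace vector via its scale-mixture
    representation X = sqrt(W) * A @ Z with W ~ Exp(1) and Z ~ N(0, I).
    Since E[W] = 1, Cov(X) = A A^T."""
    rng = random.Random(seed)
    d = len(A)
    out = []
    for _ in range(n):
        w = rng.expovariate(1.0)                     # exponential scale
        z = [rng.gauss(0, 1) for _ in range(d)]      # standard normal
        out.append([math.sqrt(w) * sum(A[i][j] * z[j] for j in range(d))
                    for i in range(d)])
    return out
```

In the EM algorithm this representation is what turns each M-step into a weighted least-squares update, with weights coming from the conditional expectation of the latent scale; the sampler above only demonstrates the distributional identity.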
658

Experimental Designs for Generalized Linear Models and Functional Magnetic Resonance Imaging

January 2014 (has links)
abstract: In this era of fast computational machines and new optimization algorithms, there have been great advances in experimental design. We focus our research on design issues in generalized linear models (GLMs) and functional magnetic resonance imaging (fMRI). The first part of our research tackles the challenging problem of constructing exact designs for GLMs that are robust against parameter, link and model uncertainties, by improving an existing algorithm and providing a new one based on continuous particle swarm optimization (PSO) and spectral clustering. The proposed algorithm is sufficiently versatile to accommodate most popular design selection criteria, and we concentrate on providing robust designs for GLMs using the D- and A-optimality criteria. The second part of our research provides an algorithm that is a faster alternative to a recently proposed genetic algorithm (GA) for constructing optimal designs for fMRI studies. Our algorithm is built upon a discrete version of the PSO. / Dissertation/Thesis / Doctoral Dissertation Statistics 2014
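A vanilla continuous PSO of the kind the first part builds on can be sketched as follows. The sphere objective stands in for a design criterion such as D-optimality, and all swarm parameters are illustrative assumptions, not the dissertation's tuned values:

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Vanilla continuous PSO: each particle moves under inertia plus
    cognitive (personal-best) and social (global-best) pulls."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval
```

For an actual design problem, `f` would evaluate the (negated log-) D- or A-criterion of a candidate design; the discrete PSO for fMRI designs mentioned in the abstract replaces the continuous update with moves over a discrete design space.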
659

PID Controller Tuning and Adaptation of a Buck Converter

January 2016 (has links)
abstract: Buck converters are electronic devices that change a voltage from one level to a lower one, and they are present in many everyday applications. However, due to factors like aging, degradation or failures, these devices require a system identification process to track and diagnose their parameters. The system identification process should be performed on-line so as not to affect the normal operation of the device. Identifying the parameters of the system is essential for designing and tuning an adaptive proportional-integral-derivative (PID) controller. Three techniques were used to design the PID controller. Phase- and gain-margin design remains one of the easiest methods for designing controllers. Pole-zero cancellation is another technique, based on pole placement. Although these controllers can be easily designed, they did not provide the best response compared to the Frequency Loop Shaping (FLS) technique. Since FLS showed better frequency and time responses than the other two controllers, it was selected for the adaptation of the system. An on-line system identification process was performed for the buck converter using indirect adaptation and the least squares algorithm. The estimation error and the parameter error were computed to determine the rate of convergence of the system. The indirect adaptation required about 2000 points to converge to the true parameters before the controller could be designed. These results were compared to the adaptation executed using a robust stability condition (RSC) and a switching controller. Two scenarios were studied, each consisting of five plants that defined the percentage of deterioration of the capacitor and inductor within the buck converter. The switching logic did not always select the optimal controller in the first scenario because the frequency responses of the different plants were not significantly different. However, the second scenario consisted of plants with more noticeably different frequency responses, and the switching logic selected the optimal controller every time within about 500 points. Additionally, a disturbance was introduced at the plant input to observe its effect on the switching controller; for reasonably low disturbances, no change was detected in the proper selection of controllers. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2016
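The on-line least-squares identification step can be sketched with recursive least squares (RLS) on a first-order discrete model y[k] = a*y[k-1] + b*u[k-1]. The model order, initial covariance, and noise-free simulation below are illustrative assumptions; the thesis identifies a buck-converter model of its own:

```python
import random

def rls_identify(us, ys, lam=1.0):
    """Recursive least squares for theta = [a, b] in the model
    y[k] = a*y[k-1] + b*u[k-1], with forgetting factor lam."""
    th = [0.0, 0.0]
    P = [[1000.0, 0.0], [0.0, 1000.0]]       # large initial covariance
    for k in range(1, len(ys)):
        phi = [ys[k - 1], us[k - 1]]          # regressor vector
        # gain K = P*phi / (lam + phi^T P phi); P symmetric so phi^T P = (P phi)^T
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]
        err = ys[k] - (th[0] * phi[0] + th[1] * phi[1])   # prediction error
        th = [th[0] + K[0] * err, th[1] + K[1] * err]
        # covariance update P <- (P - K phi^T P) / lam
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return th
```

On noise-free data generated from known parameters, the estimates converge essentially exactly; the estimation and parameter errors tracked in the abstract would be `err` and the gap between `th` and the true values at each step.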
660

Robustesse et visualisation de production de mélanges / Robustness and visualization of blend's production

Aguilera Cabanas, Jorge Antonio 28 October 2011 (has links)
The oil blending process (BP) consists in determining the optimal proportions of a set of available components to blend so that the final product fulfills a set of specifications on its properties. Two important characteristics of the blending problem are the hard bounds on the blend's properties and the uncertainty pervading the process. In this work, a real-time optimization method is proposed for producing robust blends while minimizing the blend quality giveaway and the recipe's cost. The method is based on Robust Optimization techniques and on the assumption that the component properties blend linearly. The intrinsic blending polytopes are exploited in order to measure, visualize and characterize the infeasibility of the BP. A fine analysis of modifications to the component bounds is conducted to guide the process towards the "best" robust blend. A set of indices and visualizations provides helpful support for the decision maker.
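Under the linear-blending assumption stated in the abstract, a worst-case robustness check against interval uncertainty in the component properties is immediate. The component values, uncertainties, and specification below are illustrative, not from the thesis:

```python
def worst_case_property(recipe, prop_nominal, prop_uncertainty):
    """Worst-case (highest) value of a linearly blended property when each
    component property p_i may deviate by up to +u_i from its nominal value,
    for a property with an upper specification (e.g. sulfur content)."""
    return sum(x * (p + u)
               for x, p, u in zip(recipe, prop_nominal, prop_uncertainty))

def is_robust(recipe, prop_nominal, prop_uncertainty, max_spec):
    """A recipe is robust for this property if even the worst-case blend
    value stays within the hard upper bound."""
    return worst_case_property(recipe, prop_nominal, prop_uncertainty) <= max_spec
```

For a 60/40 blend of components with nominal sulfur 0.3 and 0.8 and uncertainties 0.05 and 0.1, the nominal blend value is 0.50 but the worst case is 0.57: robust against a 0.60 specification, not against 0.55. This is the quality giveaway the robust recipe must pay for.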
