  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Some robust optimization methods for inverse problems.

January 2009 (has links)
Wang, Yiran. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 70-73). / Abstract also in Chinese.

Chapter 1 --- Introduction --- p.6
Chapter 1.1 --- Overview of the subject --- p.6
Chapter 1.2 --- Motivation --- p.8
Chapter 2 --- Inverse Medium Scattering Problem --- p.11
Chapter 2.1 --- Mathematical Formulation --- p.11
Chapter 2.1.1 --- Absorbing Boundary Conditions --- p.12
Chapter 2.1.2 --- Applications --- p.14
Chapter 2.2 --- Preliminary Results --- p.17
Chapter 2.2.1 --- Weak Formulation --- p.17
Chapter 2.2.2 --- About the Unique Determination --- p.21
Chapter 3 --- Unconstrained Optimization: Steepest Descent Method --- p.25
Chapter 3.1 --- Recursive Linearization Method Revisited --- p.25
Chapter 3.1.1 --- Fréchet differentiability --- p.26
Chapter 3.1.2 --- Initial guess --- p.28
Chapter 3.1.3 --- Landweber iteration --- p.30
Chapter 3.1.4 --- Numerical Results --- p.32
Chapter 3.2 --- Steepest Descent Analysis --- p.35
Chapter 3.2.1 --- Single Wave Case --- p.36
Chapter 3.2.2 --- Multiple Wave Case --- p.39
Chapter 3.3 --- Numerical Experiments and Discussions --- p.43
Chapter 4 --- Constrained Optimization: Augmented Lagrangian Method --- p.51
Chapter 4.1 --- Method Review --- p.51
Chapter 4.2 --- Problem Formulation --- p.54
Chapter 4.3 --- First Order Optimality Condition --- p.56
Chapter 4.4 --- Second Order Optimality Condition --- p.60
Chapter 4.5 --- Modified Algorithm --- p.62
Chapter 5 --- Conclusions and Future Work --- p.68
Bibliography --- p.70
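The Landweber iteration named in Chapter 3.1.3 is, at its core, a gradient step on the data residual. As a hedged sketch, here it is applied to a toy linear problem A q = y; the thesis applies the same update to the linearized inverse medium scattering operator, and the matrix and data below are purely illustrative:

```python
# Landweber update q_{k+1} = q_k + tau * A^T (y - A q_k),
# which converges for 0 < tau < 2 / ||A||^2. Toy 2x2 example.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def landweber(A, y, tau=0.1, iters=500):
    q = [0.0] * len(A[0])
    At = [list(col) for col in zip(*A)]  # transpose of A
    for _ in range(iters):
        residual = [yi - ri for yi, ri in zip(y, matvec(A, q))]
        step = matvec(At, residual)
        q = [qi + tau * si for qi, si in zip(q, step)]
    return q

A = [[2.0, 0.0], [0.0, 1.0]]
y = [2.0, 3.0]            # exact solution is q = [1, 3]
q = landweber(A, y)       # converges toward [1.0, 3.0]
```

In the ill-posed setting of the thesis, the iteration count itself acts as the regularization parameter: stopping early prevents amplification of noise in the data.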
492

Design and Optimisation Methods for Structures produced by means of Additive Layer Manufacturing processes

Costa, Giulio 22 October 2018 (has links)
The recent development of Additive Layer Manufacturing (ALM) technologies has opened new opportunities in design: complicated shapes and topologies, whether resulting from dedicated optimisation processes or from the designer's decisions, are now attainable. Generally, a Topology Optimisation (TO) step is performed when designing ALM structures, a task facilitated today by commercial software packages such as Altair OptiStruct or Simulia TOSCA. Nevertheless, the freedom granted by ALM is only apparent, and major issues still hinder a full and widespread exploitation of this technology.

The first important shortcoming concerns the integration of the result of a TO calculation into a suitable CAD environment. The optimised geometry is available only in a discretised form, i.e. in terms of the Finite Elements (FE) retained in the computational domain at the end of the TO analysis. The boundary of the optimised geometry is therefore not described by a geometric entity, so the resulting topology is not compatible with the CAD software that constitutes the designer's natural environment. A time-consuming CAD reconstruction phase is needed, during which the designer is obliged to make a considerable number of arbitrary decisions; consequently, the resulting CAD-compatible topology often no longer meets the optimisation constraints.

The second major restriction relates to ALM-specific technological requirements, which should be integrated directly into the optimisation problem formulation rather than afterwards: treating ALM specificity only as a post-processing of the TO task would imply such deep modifications of the component that the optimised configuration could be completely overturned.

This PhD thesis proposes a general methodology to overcome these drawbacks. An innovative TO algorithm has been developed that provides a topology description based on purely geometric, intrinsically CAD-compliant entities. In this framework, NURBS and B-Spline geometric entities are naturally considered, and FE analyses are used only to evaluate the physical responses of the problem at hand. In particular, a NURBS/B-Spline geometric entity of dimension D+1 is used to solve the TO problem of dimension D: the extra coordinate represents a pseudo-density field assigned to the generic element stiffness matrix, according to the classical penalisation scheme employed in density-based TO methods.

The effectiveness of this approach has been tested on 2D and 3D benchmarks taken from the literature. The use of NURBS entities in the TO formulation significantly speeds up the CAD reconstruction phase for 2D structures and shows great potential for 3D TO problems. Further, it is shown that geometric constraints, such as minimum and maximum length scales, can be effectively and consistently handled by means of the proposed approach. Moreover, special geometric constraints not available in commercial tools, e.g. on the local curvature radius of the boundary, can also be formulated thanks to the NURBS description. The robustness of the proposed methodology has been tested on other mechanical quantities of outstanding interest in engineering, such as buckling loads and natural frequencies.

Finally, despite the intrinsically CAD-compliant nature of the NURBS-based TO algorithm, support tools have been developed to perform curve and surface fitting in a very general framework. The automatic curve fitting has been completely developed, with an original algorithm for choosing the best values of the NURBS curve parameters, both discrete and continuous. The fundamentals of the method are also discussed for the more complicated surface-fitting problem, and ideas and suggestions for further research are provided.
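The abstract's central construction, a B-Spline of dimension D+1 whose extra coordinate carries a pseudo-density that penalises element stiffness, can be sketched in one dimension. This is a hedged illustration with made-up knots, control values, and penalisation exponent, not the thesis's implementation:

```python
# A 1D B-Spline stores a pseudo-density field; element stiffness is
# then penalised with the classical SIMP-style scheme E = rho**p * E0.
# Knot vector, control densities, and exponent p are illustrative.

def bspline_basis(i, p, t, knots):
    # Cox-de Boor recursion for the basis function N_{i,p}(t)
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, t, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

def pseudo_density(t, control_rho, degree, knots):
    return sum(c * bspline_basis(i, degree, t, knots)
               for i, c in enumerate(control_rho))

def penalised_stiffness(rho, E0=1.0, p=3):
    return max(rho, 1e-3) ** p * E0

knots = [0, 0, 0, 0.5, 1, 1, 1]        # clamped quadratic B-Spline
control_rho = [1.0, 0.2, 0.2, 1.0]     # four control-point densities
rho_mid = pseudo_density(0.5, control_rho, 2, knots)   # 0.2
E_mid = penalised_stiffness(rho_mid)                   # 0.2**3 = 0.008
```

Because the density field is controlled by a handful of control points on a geometric entity, the optimised boundary is CAD-compatible by construction, which is the point the abstract makes about shortening the reconstruction phase.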
493

Formal methods for resilient control

Sadraddini, Sadra 20 February 2018 (has links)
Many systems operate in uncertain, possibly adversarial environments, and their successful operation is contingent upon satisfying specific requirements, achieving optimal performance, and recovering from unexpected situations. Examples are prevalent in many engineering disciplines, such as transportation, robotics, energy, and biological systems. This thesis studies the design of correct, resilient, and optimal controllers for discrete-time complex systems from elaborate, possibly vague, specifications.

The first part of the contributions of this thesis is a framework for optimal control of non-deterministic hybrid systems from specifications described by signal temporal logic (STL), which can express a broad spectrum of interesting properties. The method is optimization-based and has several advantages over existing techniques. When satisfying the specification is impossible, the degree of violation, characterized by STL quantitative semantics, is minimized. The computational limitations are discussed.

The second part focuses on specific types of systems and specifications for which controllers can be synthesized efficiently. A class of monotone systems is introduced for which formal synthesis is scalable and almost complete, and it is shown that hybrid macroscopic traffic models fall into this class. Novel techniques in modular verification and synthesis are employed for distributed optimal control, and their usefulness is shown for large-scale traffic management. Apart from monotone systems, a method is introduced for robust constrained control of networked linear systems with communication constraints, with case studies on longitudinal control of vehicular platoons.

The third part is about learning-based control with formal guarantees. Two approaches are studied. First, a formal perspective on adaptive control is provided in which the model is represented by a parametric transition system and the specification is captured by an automaton. A correct-by-construction framework is developed such that the controller infers the actual parameters and plans accordingly for all possible future transitions and inferences. The second approach is based on hybrid model identification using input-output data: assuming some limited knowledge of the range of system behaviors, theoretical performance guarantees are provided for implementing the controller designed for the identified model on the original unknown system.
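The STL quantitative semantics mentioned above assigns each specification a signed robustness value: positive means satisfied, negative means violated, and the magnitude is the degree that the framework minimizes when satisfaction is impossible. A minimal sketch for two basic temporal operators over a sampled signal (operators and signal are illustrative):

```python
# STL robustness for "globally, x < c" and "eventually, x > c"
# over a finite sampled signal: min / max of the margin.

def rob_always_lt(signal, c):
    return min(c - x for x in signal)

def rob_eventually_gt(signal, c):
    return max(x - c for x in signal)

x = [0.1, 0.4, 0.8, 0.3]
r1 = rob_always_lt(x, 1.0)      # 1.0 - 0.8 = 0.2  -> satisfied
r2 = rob_eventually_gt(x, 0.9)  # 0.8 - 0.9 = -0.1 -> violated by 0.1
```

Nesting these min/max rules over time windows yields the robustness of richer STL formulas, which is what makes the degree of violation a well-defined optimization objective.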
494

Application of predictive control techniques in a distillation column

Martin, Paulo Alexandre 25 March 2011 (has links)
This work presents all the steps for implementing predictive control techniques in a distillation column. The thesis first introduces the operation and purpose of the distillation process. Linearized continuous-time models of the distillation column are obtained from experimental tests of the column at different operating points. Based on these models, several model-based predictive controller topologies are implemented, and a real-time optimizer is integrated with the predictive controllers to reduce the plant's operational cost. Simulated and experimental results are presented for all the predictive controller topologies studied.
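The receding-horizon principle behind such predictive controllers can be sketched on a scalar linear plant x[k+1] = a*x[k] + b*u[k]: at every step a cost over the prediction is minimised and only the first input is applied. All numbers below are illustrative placeholders, not identified models of the actual column:

```python
# One-step model predictive control on a scalar linear plant:
# at each step, minimise (x[k+1] - r)**2 + lam * u**2 analytically
# and apply the resulting move (receding horizon of length 1).

def mpc_step(x, r, a=0.9, b=0.5, lam=0.01):
    # minimiser of (a*x + b*u - r)**2 + lam*u**2 with respect to u
    return b * (r - a * x) / (b * b + lam)

a, b = 0.9, 0.5
x, r = 0.0, 1.0           # start at 0, regulate toward setpoint 1
for _ in range(50):
    u = mpc_step(x, r, a, b)
    x = a * x + b * u     # plant update; x settles near r
```

The input penalty lam trades tracking accuracy against control effort, which is the same trade-off a real-time optimizer adjusts when it reshapes the controller's targets to reduce operating cost.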
495

Simulation-Based Design Under Uncertainty for Compliant Microelectromechanical Systems

Wittwer, Jonathan W. 11 March 2005 (has links)
The high cost of experimentation and product development in the field of microelectromechanical systems (MEMS) has led to a greater emphasis on simulation-based design for increasing first-pass design success and reliability. The use of compliant or flexible mechanisms can help eliminate friction, wear, and backlash, but compliant MEMS are sensitive to variations in material properties and geometry. This dissertation proposes approaches for design-stage uncertainty analysis, model validation, and robust optimization of nonlinear compliant MEMS to account for critical process uncertainties including residual stress, layer thicknesses, edge bias, and material stiffness. Methods for simulating and mitigating the effects of non-idealities such as joint clearances, semi-rigid supports, non-ideal loading, and asymmetry are also presented. Approaches are demonstrated and experimentally validated using bistable micromechanisms and thermal microactuators as examples.
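A design-stage uncertainty analysis of the kind proposed here can be sketched as Monte Carlo propagation of a fabrication tolerance through a compliance model. The cantilever formula k = 3EI/L^3 with I = w*t^3/12 and all dimensions and tolerances below are illustrative placeholders, not values from the dissertation:

```python
import random

# Propagate variation in layer thickness t through a cantilever
# stiffness model k = E*w*t**3 / (4*L**3). Because k scales with t**3,
# a small thickness tolerance produces a much larger stiffness spread.

def stiffness(E, w, t, L):
    return E * w * t**3 / (4 * L**3)

random.seed(0)
E, w, L = 160e9, 20e-6, 200e-6          # nominal (illustrative) values
nominal_k = stiffness(E, w, 2e-6, L)
samples = [stiffness(E, w, random.gauss(2e-6, 0.1e-6), L)
           for _ in range(20000)]        # 5% thickness tolerance
mean_k = sum(samples) / len(samples)
spread = (max(samples) - min(samples)) / nominal_k
```

Robust optimization then chooses nominal dimensions so that performance targets hold across this spread rather than only at the nominal point.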
496

Robust approaches in portfolio optimization with stochastic dominance

Kozmík, Karel January 2019 (has links)
We use the modern approach of stochastic dominance in portfolio optimization, where we want the portfolio to dominate a benchmark. Since the distribution of returns is often only estimated from data, we look for the worst-case distribution that differs from the empirical distribution by at most a predefined value. First, we define in what sense a distribution is the worst for first- and second-order stochastic dominance; for second-order stochastic dominance, we use two different formulations of the worst case. We derive the robust stochastic dominance test for all the mentioned approaches and find the worst-case distribution as the optimal solution of a non-linear maximization problem. Then we derive programs that maximize an objective function over the portfolio weights with robust stochastic dominance in the constraints, considering robustness either in returns or in probabilities for both first- and second-order stochastic dominance. To the best of our knowledge, no such programs have been derived before. We apply all the derived optimization programs to real-life data, specifically to returns of the assets in the Dow Jones Industrial Average, and analyze the problems in detail using optimal solutions of the optimization programs with multiple setups. The portfolios calculated using...
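The (non-robust) second-order stochastic dominance constraint underlying this work can be checked empirically by comparing expected shortfalls at every threshold. A toy sketch with made-up returns, before any robustification of the distribution:

```python
# Empirical second-order stochastic dominance (SSD): X dominates the
# benchmark Y if E[(eta - X)+] <= E[(eta - Y)+] at every threshold eta.
# For empirical distributions it suffices to check eta at the observed
# values, since both shortfall functions are piecewise linear.

def expected_shortfall(returns, eta):
    return sum(max(eta - r, 0.0) for r in returns) / len(returns)

def ssd_dominates(x, y, tol=1e-12):
    thresholds = sorted(set(x) | set(y))
    return all(expected_shortfall(x, eta) <= expected_shortfall(y, eta) + tol
               for eta in thresholds)

portfolio = [0.02, 0.01, 0.03, 0.00]
benchmark = [0.02, -0.01, 0.03, 0.00]
dom = ssd_dominates(portfolio, benchmark)   # True: no worse shortfall anywhere
```

The robust versions studied in the thesis require this inequality to hold not just for the empirical distribution but for every distribution within a prescribed distance of it.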
497

Penalized methods and algorithms for high-dimensional regression in the presence of heterogeneity

Yi, Congrui 01 December 2016 (has links)
In fields such as statistics, economics, and biology, heterogeneity is an important topic concerning the validity of data inference and the discovery of hidden patterns. This thesis focuses on penalized methods for regression analysis in the presence of heterogeneity in a potentially high-dimensional setting. Two possible strategies for dealing with heterogeneity are robust regression methods, which provide heterogeneity-resistant coefficient estimation, and direct detection of heterogeneity while estimating coefficients accurately at the same time. We pursue the first strategy for two robust regression methods, Huber loss regression and quantile regression with Lasso or Elastic-Net penalties, which have been studied theoretically but lack efficient algorithms. We propose a new algorithm, Semismooth Newton Coordinate Descent, to solve them. The algorithm is a novel combination of the semismooth Newton algorithm and coordinate descent that applies to penalized optimization problems with both a nonsmooth loss and a nonsmooth penalty. We prove its convergence properties and show its computational efficiency through numerical studies. We also propose a nonconvex penalized regression method, Heterogeneity Discovery Regression (HDR), as a realization of the second idea. We establish theoretical results that guarantee statistical precision for any local optimum of the objective function with high probability. We also compare the numerical performance of HDR with competitors, including Huber loss regression, quantile regression, and least squares, through simulation studies and a real data example. In these experiments, HDR methods detect heterogeneity accurately and largely outperform the competitors in terms of coefficient estimation and variable selection.
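As a hedged, simplified illustration of the coordinate-descent half of the proposed algorithm: for the smooth squared loss (rather than the nonsmooth Huber or quantile losses the thesis handles with its semismooth Newton step), the per-coordinate Lasso update reduces to soft-thresholding:

```python
# Cyclic coordinate descent for the Lasso objective
# (1/(2n)) * ||y - X b||^2 + lam * ||b||_1 on a tiny toy design.
# Each coordinate update is closed-form: soft-thresholding.

def soft_threshold(z, g):
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, iters=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding coordinate j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(X[i][j] * r[i] for i in range(n)) / n
            sj = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(zj, lam) / sj
    return beta

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
y = [2.0, 0.1, 2.0, -0.1]
beta = lasso_cd(X, y, lam=0.2)   # weak second signal is shrunk to zero
```

The thesis's contribution is to keep this coordinate-wise structure while replacing the closed-form update with a semismooth Newton step, so that nonsmooth losses become tractable as well.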
498

Experimental validation of a high accuracy pointing system

Sanfedino, Francesco 25 April 2019 (has links)
On almost all high-accuracy-pointing Science and Earth observation missions, micro-vibrations are the major contributor to pointing performance degradation (RPE). The main sources of micro-disturbances are the wheels and, when present, the cryo-coolers. Other disturbance sources may originate from chemical thrusters, antenna stepper motors, Solar Array Drive Mechanisms (SADM), antenna trimming mechanisms, or payload mechanisms set either inside the sensitive payload or inside another payload of the same spacecraft. The objective of this NPI is to investigate and validate a high-accuracy active pointing system, based on piezoelectric actuators, able to reject micro-vibrations at instrument level with:

• a large control bandwidth: typically up to 100 Hz
• a low residual error: typically lower than 50-100 nrad (rough order of magnitude, to be further defined in the frame of this NPI)
• low mass and volume impacts
• scalability
• modularity

This subject is strongly multidisciplinary (mechanics, control theory, optics and instrumentation). The scientific challenges of the thesis are:

• the design of an active pointing system with high bandwidth, low mass and volume impact, and minimized power
• the robust control of the active pointing system, allowing rejection of micro-disturbances whose spectrum varies according to the phases of the mission
• obtaining high-accuracy performance
• the definition of a generic methodology of integrated design applicable to other pointing systems (e.g. with several degrees of freedom)
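A back-of-envelope reading of the bandwidth and rejection requirements: for any feedback loop, the sensitivity magnitude |S(jw)| = 1/|1 + L(jw)| gives the attenuation of a micro-vibration at a given frequency. The simple integrator loop shape below is a placeholder assumption, not the robust multivariable controller developed in the thesis:

```python
import cmath, math

# Sensitivity |S(jw)| = 1/|1 + L(jw)| for an integrator loop
# L(s) = wc/s with crossover wc. Disturbances well inside the
# bandwidth are strongly rejected; those far above are untouched.

def sensitivity_mag(f_hz, wc):
    s = 1j * 2 * math.pi * f_hz
    L = wc / s
    return abs(1 / (1 + L))

wc = 2 * math.pi * 100                  # ~100 Hz closed-loop bandwidth
att_1hz = sensitivity_mag(1.0, wc)      # ~0.01: 40 dB of rejection
att_1khz = sensitivity_mag(1000.0, wc)  # ~1.0: negligible effect
```

This is why the 100 Hz bandwidth target matters: wheel and cryo-cooler harmonics falling inside the band get attenuated roughly in proportion to their distance below the crossover.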
499

Two-level lognormal frailty model and competing risks model with missing cause of failure

Tang, Xiongwen 01 May 2012 (has links)
In clustered survival data, unobservable cluster effects may exert powerful influences on the outcomes and thus induce correlation among subjects within the same cluster. The ordinary partial likelihood approach does not account for this dependence. Frailty models, as an extension of Cox regression, incorporate multiplicative random effects, called frailties, into the hazard model and have become a very popular way to account for the dependence within clusters. We study the two-level nested lognormal frailty model in particular and propose an estimation approach based on the complete-data likelihood with the frailty terms integrated out. We adopt B-splines to model the baseline hazards and adaptive Gauss-Hermite quadrature to approximate the integrals efficiently. Furthermore, in finding the maximum likelihood estimators, Gauss-Seidel and BFGS methods are used instead of the Newton-Raphson iterative algorithm to improve the stability and efficiency of the estimation procedure.

We also study competing risks models with missing cause of failure in the context of Cox proportional hazards models. In competing risks data, there exists more than one cause of failure, and each observed failure is exclusively linked to one cause; conceptually, the causes are interpreted as competing risks before the failure is observed. Competing risks models are constructed from a proportional hazards model specified for each cause of failure, which can be estimated using the partial likelihood approach. However, the ordinary partial likelihood is not applicable when the cause of failure may be missing. We propose a weighted partial likelihood approach based on complete-case data, where each weight is the inverse of a selection probability estimated by a logistic regression model. The asymptotic properties of the regression coefficient estimators are investigated using counting process and martingale theory. We further develop a doubly robust approach based on the full data to improve efficiency as well as robustness.
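The inverse-probability weighting at the heart of the proposed weighted partial likelihood can be illustrated on the simplest possible estimand, a mean with data missing at random. Here the selection probabilities are taken as known, whereas the thesis estimates them with a logistic regression model:

```python
# Horvitz-Thompson style inverse-probability-weighted mean: each
# complete case is weighted by 1 / P(observed), so cases that are
# rarely observed stand in for their missing counterparts.

def ipw_mean(values, observed, prob_observed):
    total = sum(v / p for v, o, p in zip(values, observed, prob_observed) if o)
    return total / len(values)

values   = [1.0, 2.0, 3.0, 4.0]
observed = [True, True, False, True]
probs    = [1.0, 1.0, 0.5, 0.5]          # last value observed half the time
est = ipw_mean(values, observed, probs)  # (1 + 2 + 4/0.5) / 4 = 2.75
```

In the weighted partial likelihood, the same reweighting is applied to each subject's contribution, restoring unbiasedness of the complete-case analysis when the missingness model is correct; the doubly robust extension stays valid if either the missingness model or the outcome model is correct.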
500

Optimization under uncertainty: conic programming representations, relaxations, and approximations

Xu, Guanglin 01 August 2017 (has links)
In practice, the presence of uncertain parameters in optimization problems introduces new modeling and solvability challenges in operations research. Three main paradigms have been proposed for optimization under uncertainty: stochastic programming, robust optimization, and sensitivity analysis. In this thesis, we examine, improve, and combine the latter two paradigms in several relevant models and applications.

In the second chapter, we study a two-stage adjustable robust linear optimization problem in which the right-hand sides are uncertain and belong to a compact, convex, and tractable uncertainty set. Under standard and simple assumptions, we reformulate the two-stage problem as a copositive optimization program, which in turn leads to a class of tractable semidefinite-based approximations that are at least as strong as the affine policy, a well-studied tractable approximation in the literature. We examine our approach on several examples from the literature, and the results demonstrate that our tractable approximations significantly improve on the affine policy. In particular, our approach recovers the optimal values of a class of instances of increasing size for which the affine policy admits an arbitrarily large gap.

In the third chapter, we leverage the concept of robust optimization to conduct sensitivity analysis of the optimal value of linear programming (LP). In particular, we propose a framework for sensitivity analysis of LP problems that allows simultaneous perturbations in the objective coefficients and right-hand sides, where the perturbations are modeled by a compact, convex, and tractable uncertainty set. This framework unifies and extends multiple approaches to LP sensitivity analysis in the literature and has close ties to worst-case LP and two-stage adjustable linear programming. We define the best-case and worst-case LP optimal values over the uncertainty set; as the concept aligns well with the general spirit of robust optimization, we call our approach robust sensitivity analysis. While the best-case and worst-case optimal values are difficult to compute in general, we prove that they equal the optimal values of two separate but related copositive programs. We then develop tight, tractable conic relaxations to provide bounds on the best-case and worst-case optimal values, respectively, along with techniques to assess the quality of the bounds, and we validate our approach computationally on several examples from, and inspired by, the literature. We find that the bounds are very strong in practice and, in particular, are at least as strong as known results for specific cases from the literature.

In the fourth chapter, we study the expected optimal value of a mixed 0-1 programming problem with uncertain objective coefficients following a joint distribution. We assume that the true distribution is not known exactly, but that a set of independent samples can be observed. Using the Wasserstein metric, we construct an ambiguity set centered at the empirical distribution of the observed samples and containing, with high confidence, all distributions that could have generated them. The problem of interest is to bound the expected optimal value over the Wasserstein ambiguity set. Under standard assumptions, we reformulate the problem as a copositive program, which naturally leads to a tractable semidefinite-based approximation. We compare our approach with a moment-based approach from the literature on two applications; the numerical results illustrate the effectiveness of our approach.

Finally, we conclude the thesis with remarks on some interesting open questions in the field of optimization under uncertainty. In particular, we point out some interesting topics that can potentially be studied with copositive programming techniques.
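The best-case and worst-case optimal values of the third chapter are hard to compute in general; as a hedged special case, when only the objective coefficients vary in a box and the decision is held fixed, the range of the objective is obtained coefficient by coefficient:

```python
# Range of a linear objective c'x over box uncertainty c in [c_lo, c_hi]
# at a fixed feasible point x: each coefficient independently takes the
# end of its interval that is favourable (best) or adverse (worst) for
# a minimisation problem. A toy fragment of robust sensitivity analysis.

def objective_range(x, c_lo, c_hi):
    best = sum((lo if xi >= 0 else hi) * xi
               for xi, lo, hi in zip(x, c_lo, c_hi))
    worst = sum((hi if xi >= 0 else lo) * xi
                for xi, lo, hi in zip(x, c_lo, c_hi))
    return best, worst

x = [1.0, 2.0]
c_lo, c_hi = [1.0, -1.0], [2.0, 1.0]
best, worst = objective_range(x, c_lo, c_hi)   # (-1.0, 4.0)
```

The hardness addressed by the copositive reformulations comes from letting x re-optimise for each realization of the uncertainty, and from coupling objective and right-hand-side perturbations in one set, neither of which this fixed-x sketch captures.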
