About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Tuning evolutionary search for closed-loop optimization

Allmendinger, Richard January 2012 (has links)
Closed-loop optimization deals with problems in which candidate solutions are evaluated by conducting experiments, e.g. physical or biochemical experiments. Although this form of optimization is becoming more popular across the sciences, it may be subject to largely unexplored resourcing issues, as any experiment may require resources in order to be conducted. In this thesis we are concerned with understanding how evolutionary search is affected by three particular resourcing issues, namely ephemeral resource constraints (ERCs), changes of variables, and lethal environments, and with the development of search strategies to combat these issues. The thesis makes three broad contributions. First, we motivate and formally define the resourcing issues considered, giving concrete examples from a range of applications. Second, we theoretically and empirically investigate the effect of these resourcing issues on evolutionary search. This investigation reveals that resourcing issues affect optimization in general, and that clear patterns emerge relating specific properties of the different resourcing issues to performance effects. Third, we develop and analyze various search strategies, built on top of an evolutionary algorithm (EA), for coping with resourcing issues. To cope specifically with ERCs, we develop several static constraint-handling strategies, and investigate the application of reinforcement learning techniques to learn when to switch between these static strategies during an optimization process. We also develop several online resource-purchasing strategies to cope with ERCs that leave the arrangement of resources in the hands of the optimizer. For problems subject to changes of variables relating to the resources, we find that knowing which variables are changed provides an optimizer with valuable information, which we exploit using a novel dynamic strategy.
Finally, for lethal environments, where visiting parts of the search space can cause the permanent loss of resources, we observe that a standard EA's population may be reduced in size rapidly, complicating the search for innovative solutions. To cope with such scenarios, we consider some non-standard EA setups that are able to innovate genetically whilst simultaneously mitigating risks to the evolving population.
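The interaction between an EA and an ephemeral resource constraint can be caricatured in a few lines. The following is a minimal illustrative sketch, not code from the thesis: a steady-state EA on the OneMax toy problem, where the "experiment" needed to evaluate a candidate is periodically unavailable, and a simple static strategy penalizes offspring that cannot be evaluated so they are discarded. The availability schedule and the penalty rule are invented for illustration.

```python
import random

def onemax(bits):
    # Toy "experiment": fitness is simply the number of ones.
    return sum(bits)

def resource_available(gen, period=10, on=7):
    # Hypothetical ephemeral resource: usable only during the first
    # `on` generations of each `period`-generation cycle.
    return gen % period < on

def evolve(n_bits=20, pop_size=30, gens=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fit = [onemax(ind) for ind in pop]
    for gen in range(gens):
        # Binary tournament selection, then one-bit mutation.
        a, b = rng.sample(range(pop_size), 2)
        parent = pop[a] if fit[a] >= fit[b] else pop[b]
        child = parent[:]
        i = rng.randrange(n_bits)
        child[i] ^= 1
        if resource_available(gen):
            child_fit = onemax(child)    # experiment can be run
        else:
            child_fit = min(fit) - 1     # static "forgoing": penalize so it dies
        worst = fit.index(min(fit))
        if child_fit >= fit[worst]:
            pop[worst], fit[worst] = child, child_fit
    return max(fit)
```

Swapping the `else` branch for a repair or waiting rule gives the other static strategies a point of comparison within the same loop.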
32

Bias in mixtures of normal distributions and joint modeling of longitudinal and time-to-event data with monotonic change curves

Lourens, Spencer 01 May 2015 (has links)
Estimating parameters in a mixture of normal distributions dates back to the 19th century, when Pearson first considered data on crabs from the Bay of Naples. Since then, many real-world applications of mixtures have led to various proposed methods for studying similar problems. Among them, maximum likelihood estimation (MLE) and the continuous empirical characteristic function (CECF) methods have drawn the most attention. However, the performance of these competing estimation methods has not been thoroughly studied in the literature, and conclusions in published research have not been consistent. In this work, we revisit this classical problem with a focus on estimation bias. An extensive simulation study is conducted to compare the estimation bias of the MLE and CECF methods over a wide range of disparity values. We use the overlapping coefficient (OVL) to measure the amount of disparity, and provide a practical guideline for estimation quality in mixtures of normal distributions. An application to an ongoing multi-site Huntington disease study illustrates the ascertainment of cognitive biomarkers of disease progression. We also study joint modeling of longitudinal and time-to-event data; we discuss pattern-mixture and selection models, but focus on shared parameter models, which utilize unobserved random effects to "join" a marginal longitudinal data model and a marginal survival model in order to assess an internal time-dependent covariate's effect on time-to-event. The marginal models used in the analysis are the Cox proportional hazards model and the linear mixed model, and both are covered in some detail before joint models are defined and the estimation process is described. Joint modeling provides a framework which accounts for correlation between the longitudinal data and the time-to-event data, while also accounting for measurement error in the longitudinal process, which previous methods failed to do.
Since the resulting bias has been shown to be proportional to the amount of measurement error, a joint modeling approach is preferred. Our setting is further complicated by monotone degeneration of the internal covariate considered, so we propose a joint model which uses monotone B-splines to recover the longitudinal trajectory and a Cox proportional hazards (CPH) model for the time-to-event data. The monotonicity constraints are satisfied via the projected Newton-Raphson algorithm described by Cheng et al. (2012), with the baseline hazard profiled out of the $Q$ function in each M-step of the expectation-maximization (EM) algorithm used to optimize the observed likelihood. This method is applied to assess the ability of the Total Motor Score (TMS) to predict Huntington disease motor diagnosis in the Biological Predictors of Huntington's Disease (PREDICT-HD) study data.
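For the special case of two equal-variance normal components, the overlapping coefficient mentioned above has a closed form, because the two densities cross exactly once at the midpoint of the means. The following is my own illustrative sketch of that formula, not code from the thesis:

```python
import math

def ovl_equal_var(mu1, mu2, sigma):
    # OVL of N(mu1, sigma^2) and N(mu2, sigma^2). The densities cross at
    # (mu1 + mu2) / 2, so OVL = 2 * Phi(-|mu1 - mu2| / (2 * sigma)),
    # where Phi is the standard normal CDF.
    d = abs(mu1 - mu2) / (2.0 * sigma)
    return math.erfc(d / math.sqrt(2.0))  # 2 * Phi(-d) = erfc(d / sqrt(2))
```

OVL near 1 means the components are nearly indistinguishable (where estimation bias is expected to be worst), while OVL near 0 means they are well separated.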
33

A computational framework for elliptic inverse problems with uncertain boundary conditions

Seidl, Daniel Thomas 29 October 2015 (has links)
This project concerns the computational solution of inverse problems formulated as partial differential equation (PDE)-constrained optimization problems with interior data. The areas addressed are twofold. First, we present a novel software architecture designed to solve inverse problems constrained by an elliptic system of PDEs. These generally require the solution of forward and adjoint problems, evaluation of the objective function, and computation of its gradient, all of which are approximated numerically using finite elements. The creation of specialized "layered" elements to perform these tasks leads to a modular software structure that improves code maintainability and promotes functional interoperability between different software components. Second, we address issues related to forward model definition in the presence of boundary condition (BC) uncertainty. We propose two variational formulations to accommodate that uncertainty: (a) a Bayesian formulation that assumes Gaussian measurement noise and a minimum strain energy prior, and (b) a Lagrangian formulation that is completely free of displacement and traction BCs. This work is motivated by applications in the field of biomechanical imaging, where the mechanical properties within soft tissues are inferred from observations of tissue motion. In this context, the constraint PDE is well accepted, but considerable uncertainty exists in the BCs. The approaches developed here are demonstrated on a variety of applications, including simulated and experimental data. We present modulus reconstructions of individual cells, tissue-mimicking phantoms, and breast tumors.
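The forward/adjoint/gradient loop at the heart of such solvers can be illustrated with a deliberately tiny example. This is a hedged sketch under strong simplifications, not the thesis software: a 1D discrete elliptic system with a single scalar parameter m (a uniform modulus), where one adjoint solve yields the exact gradient of the data-misfit objective.

```python
import numpy as np

def stiffness(n):
    # 1D Laplacian stiffness matrix on a uniform interior grid (Dirichlet BCs).
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def forward(m, K, f):
    # Forward elliptic solve: A(m) u = f with A(m) = m * K.
    return np.linalg.solve(m * K, f)

def objective_and_gradient(m, K, f, u_obs):
    # J(m) = 0.5 * ||u(m) - u_obs||^2; its gradient costs one adjoint solve.
    u = forward(m, K, f)
    r = u - u_obs
    J = 0.5 * (r @ r)
    p = np.linalg.solve((m * K).T, r)   # adjoint solve: A(m)^T p = r
    dJdm = -p @ (K @ u)                 # dJ/dm = -p^T (dA/dm) u, dA/dm = K
    return J, dJdm
```

In the real setting m is a field discretized with finite elements, but the pattern (forward solve, adjoint solve, one inner product per parameter) is the same.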
34

Rank-Constrained Optimization: Algorithms and Applications

Sun, Chuangchuang 07 November 2018 (has links)
No description available.
35

PRECONDITIONERS FOR PDE-CONSTRAINED OPTIMIZATION PROBLEMS

Alqarni, Mohammed Zaidi A. 08 November 2019 (has links)
No description available.
36

Online optimization of froth flotation processes

Lindqvist, Johan January 2023 (has links)
No description available.
37

Analysis and design of an active-set method for box-constrained optimization

Gentil, Jan Marcel Paiva 23 June 2010 (has links)
Box-constrained optimization problems are of great importance, not only because they arise naturally in the formulation of several real-life problems, but also because they occur as subproblems in penalty and Augmented Lagrangian methods for solving nonlinear programming problems. This work studies a recently introduced active-set method for box-constrained optimization called ASA and compares it to the latest version of GENCAN, which is also an active-set method. For that purpose, we designed a robust and thorough testing methodology intended to remedy many of the widely criticized aspects of prior works. Thereby, we could draw conclusions leading to GENCAN's further development, as later became evident by means of the same methodology proposed herein.
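ASA and GENCAN combine active-set identification with far more sophisticated machinery (spectral steps, inner solvers on the faces of the box). As a much-simplified, illustrative sketch, a bare projected-gradient loop already shows the core idea these methods exploit: box constraints induce an active set of variables stuck at their bounds.

```python
def projected_gradient_box(grad, x0, lo, hi, step=0.1, iters=500):
    # Minimal sketch for min f(x) subject to lo <= x <= hi.
    # `grad` returns the gradient of f; the projection clips onto the box.
    # The active set is the set of coordinates fixed at a bound.
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [min(hi[i], max(lo[i], x[i] - step * g[i])) for i in range(len(x))]
    active = [i for i in range(len(x)) if x[i] in (lo[i], hi[i])]
    return x, active
```

For example, minimizing (x0 - 3)^2 + (x1 - 0.5)^2 over the unit box fixes x0 at its upper bound (active) and leaves x1 free at 0.5.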
38

Analysis of Mesh Strategies for Rapid Source Location in Chemical/Biological Attacks

Howard, Patricia Ann 30 April 2004 (has links)
Currently, researchers at Sandia National Laboratories are creating software that is designed to determine the source of a toxic release given sensor readings of the toxin concentration at fixed locations in the building. One of the most important concerns in solving such problems is computation time since even a crude approximation to the source, if found in a timely manner, will give emergency personnel the chance to take appropriate actions to contain the substance. The manner in which the toxin spreads depends on the air flow within the building. Due to the turbulence in the air flow, it is necessary to calculate the flow field on a fine mesh. Unfortunately, using a fine mesh for every calculation in this problem may result in prohibitively long computation times when other features are incorporated into the model. The goal of this thesis is to reduce the computation time required by the software mentioned above by applying two different mesh coarsening strategies after the flow field is computed. The first of these strategies is to use a uniformly coarse mesh and the second is to use our knowledge of the air flow in the building to construct an adaptive mesh. The objective of the latter strategy is to use a fine mesh only in areas where it is absolutely necessary, i.e., in areas where there is a great change in the flow field.
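The adaptive strategy can be caricatured in one dimension. This is a hypothetical sketch, not the Sandia software: cells are flagged for a fine mesh only where the computed flow field changes sharply between neighbors, and can be coarsened elsewhere.

```python
def flag_cells_for_refinement(field, threshold):
    # Gradient-based refinement flags for a 1D cell-averaged field:
    # an interface is flagged when the jump across it exceeds `threshold`,
    # i.e. where the flow field changes rapidly and a fine mesh is needed.
    flags = []
    for i in range(len(field) - 1):
        jump = abs(field[i + 1] - field[i])
        flags.append(jump > threshold)
    return flags
```

A flat region of the field produces no flags and can safely carry a coarse mesh, which is exactly the computation-time saving the thesis targets.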
39

Efficient solvers for constrained optimization in parameter identification problems

Nifa, Naoufal 24 November 2017 (has links)
This thesis aims at designing efficient numerical methods to solve linear systems arising in constrained optimization problems in structural dynamics and vibration applications (test-analysis correlation, model error localization, hybrid modeling, damage assessment, etc.). These applications rely on solving inverse problems by minimizing an energy-based functional that involves both data from a numerical finite element model and data from experimental tests. This leads to high-quality models, but the associated linear systems, whose coefficient matrices have a saddle-point structure, are costly to solve. We propose two different classes of methods to deal with these problems. The first is a direct factorization method that takes advantage of the special structure and properties of saddle-point matrices. After an initial reordering to group pivots into blocks of order 2, Gaussian elimination is carried out block-wise from these pivots using a fill-reducing topological ordering. We obtain significant gains in memory cost (up to 50%) due to enhanced factor sparsity in comparison with the literature. The second class is based on a double projection of the saddle-point system onto the nullspace of the constraints, distinguishing between the kinematic constraints and those related to the sensors on the structure. The first projection, onto the kinematic constraints, is explicit and relies on the computation of a sparse null basis. The second is implicit: a constraint preconditioner is applied within a Krylov subspace solver, projecting the system onto the nullspace of the sensor constraints. We further present and compare different approximations of the blocks of the constraint preconditioner. The approach is implemented in a parallel distributed environment using the PETSc library. Significant gains in computational cost and memory are illustrated on several industrial applications.
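The explicit nullspace projection described above can be shown on a dense toy system. This is an illustrative sketch only (dense NumPy linear algebra standing in for the sparse, parallel PETSc implementation):

```python
import numpy as np

def solve_saddle_point_nullspace(A, B, f, g):
    # Nullspace method for the saddle-point system
    #   [A  B^T] [x]   [f]
    #   [B   0 ] [y] = [g]
    # Split x = x_p + Z v with B x_p = g and B Z = 0, so the constraint
    # block is eliminated and only a reduced system Z^T A Z v = Z^T (f - A x_p)
    # remains (symmetric positive definite when A is SPD on the nullspace).
    x_p = np.linalg.lstsq(B, g, rcond=None)[0]          # particular solution
    _, s, Vt = np.linalg.svd(B)
    rank = int(np.sum(s > 1e-12))
    Z = Vt[rank:].T                                      # orthonormal null basis
    v = np.linalg.solve(Z.T @ A @ Z, Z.T @ (f - A @ x_p))
    x = x_p + Z @ v
    y = np.linalg.lstsq(B.T, f - A @ x, rcond=None)[0]   # recover multipliers
    return x, y
```

In the thesis setting the null basis is computed sparsely and the reduced operator is never formed; here the SVD is used only to make the projection explicit at a glance.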
40

3D geographic data for simulating the impact of urban regulations on building morphology

Brasebin, Mickaël 02 April 2014 (has links)
3D geographic data are increasingly common and model the territory in various ways. They are often used to better understand the city and its underlying phenomena, integrating a wide range of information (environmental, economic, etc.) to support urban planning. On a local scale, the French local urban plan (PLU) describes the constraints that regulate urban development, including tri-dimensional constraints (for example, a maximal building height or a limit on floor area) that new buildings must respect. These constraints are written in a textual format that is difficult for non-experts to understand, and their impact on a given territory is complex to assess. The aim of this thesis is to demonstrate how 3D geographic data enable the exploitation of local urban regulations through two uses: the verification of compliance with the rules and the generation of building configurations. Our method relies on a model of the urban environment, representing the relevant objects mentioned in the regulations; this model supports a formalization of the rules with the OCL language. The generation of building configurations is carried out by an optimization method based on trans-dimensional simulated annealing combined with a rule checker that verifies compliance.
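The coupling between the annealer and the rule checker can be sketched in a few lines. This is a hedged, fixed-dimensional simplification (the thesis uses trans-dimensional annealing, and the rule checker here merely stands in for the OCL-based regulation verification); all names and the cooling schedule are invented for illustration.

```python
import math
import random

def anneal(energy, respects_rules, propose, x0, steps=2000, t0=1.0, seed=1):
    # Rule-constrained simulated annealing: a proposed configuration is
    # considered only if the rule checker accepts it; otherwise it is
    # rejected outright, so the chain never leaves the feasible set.
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best, best_e = x, e
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9       # linear cooling to ~0
        cand = propose(x, rng)
        if not respects_rules(cand):
            continue                             # regulation violated: reject
        ce = energy(cand)
        if ce < e or rng.random() < math.exp(-(ce - e) / t):
            x, e = cand, ce                      # Metropolis acceptance
        if e < best_e:
            best, best_e = x, e
    return best, best_e
```

In the building-configuration setting, `energy` would score a candidate layout, `respects_rules` would encode the PLU constraints, and `propose` would perturb (or, trans-dimensionally, add and remove) buildings.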
