121

Maximização da soma das receitas de competidores por meio de análise conjunta baseada em escolhas : um estudo aplicado ao mercado de educação superior privado

Sibemberg, Fernando Igor January 2017 (has links)
O mercado de Educação Superior privado no Brasil apresenta altos índices de concentração, caracterizando-se como um oligopólio, podendo, portanto, ser estudado sob a ótica da Teoria dos Jogos. Uma das técnicas existentes para abordar este tipo de mercado é conhecida por Análise Conjunta Baseada em Escolhas (Choice Based Conjoint Analysis), que permite estimar as utilidades atribuídas para cada característica dos produtos, prevendo o desejo de cada produto gerado pela combinação dos seus atributos, possibilitando, assim, simular como as decisões de uma amostra de respondentes seriam distribuídas em um mercado simulado entre dois ou mais produtos competidores. Esses modelos, porém, limitam-se a maximizar a receita individual de cada produto, de forma isolada, não levando em conta a possibilidade das firmas terem interesses em maximizar a soma de dois ou mais produtos de forma conjunta. Isso se torna necessário, por exemplo, quando uma empresa comercializa dois produtos que competem no mesmo mercado. Com o objetivo de maximizar a receita conjunta de dois ou mais produtos, foi desenvolvido um método alternativo, baseado em Programação Não-Linear, que foi aplicado em uma cidade brasileira e em um país centro-americano. A comparação dos resultados do modelo desenvolvido com os do modelo tradicional evidencia que o modelo desenvolvido apresenta melhores resultados (soma das receitas das firmas de interesse), gerando uma taxa de crescimento na receita 3% maior, no caso brasileiro, e 75% maior no estudo centro-americano. O modelo desenvolvido pode ser adaptado e utilizado em outros mercados oligopolistas ou para otimizar diferentes funções-objetivo. / The Brazilian private Higher Education market shows high levels of concentration and can be considered an oligopoly. Therefore, one can study it as a Game Theory problem.
Choice-Based Conjoint Analysis, a technique that can be used to approach this kind of market, estimates the utilities of each product's features and predicts the desirability of each product generated by the combination of its attributes. The technique can simulate how the decisions of a sample of respondents would be distributed in a simulated market of two or more competing products. These models, however, only maximize the revenue of each product in isolation, not considering the possibility of firms wanting to maximize the summed revenue of two or more products jointly. This becomes necessary, for instance, when a company sells two or more products that compete in the same market. In order to maximize the joint revenue of two or more products, an alternative method based on nonlinear programming was developed and applied in a Brazilian city and in a Central American country. A comparison of the two models, the traditional versus the developed one, shows that the developed model yields better outcomes (the sum of the revenues of the firms of interest), producing a revenue growth rate 3% higher in the Brazilian case and 75% higher in the Central American study. The developed model can be adapted to other oligopolistic markets or to optimize other objective functions.
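The joint-revenue idea can be sketched with a small, hypothetical choice model. Assuming multinomial-logit shares derived from part-worth utilities (the numbers below are illustrative, not taken from the study), a general-purpose optimizer can maximize the summed revenue of two products owned by the same firm against a fixed competitor price:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical part-worths: products A and B (same firm) and a competitor C.
BASE = np.array([2.0, 1.5, 1.0])   # base attractiveness (assumed values)
BETA = 0.8                         # price sensitivity (assumed)

def shares(prices):
    """Multinomial-logit market shares for the three products."""
    u = BASE - BETA * np.asarray(prices)
    e = np.exp(u - u.max())        # subtract max for numerical stability
    return e / e.sum()

def joint_revenue(p_ab, p_c=2.0):
    """Summed revenue of A and B with the competitor's price held fixed."""
    s = shares([p_ab[0], p_ab[1], p_c])
    return s[0] * p_ab[0] + s[1] * p_ab[1]

# Traditional models tune each product alone; here both prices are chosen
# together to maximize the *joint* revenue (negated for the minimizer).
res = minimize(lambda p: -joint_revenue(p), x0=[1.0, 1.0],
               bounds=[(0.1, 5.0), (0.1, 5.0)])
best_prices, best_revenue = res.x, -res.fun
```

The same skeleton extends to more attributes by adding part-worth terms to the utility, and the objective can be swapped for any other function of the simulated shares.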
122

Application of Design-of-Experiment Methods and Surrogate Models in Electromagnetic Nondestructive Evaluation / Application des méthodes de plans d’expérience numérique et de modèles de substitution pour le contrôle nondestructif électromagnétique

Bilicz, Sandor 30 May 2011 (has links)
Le contrôle non destructif électromagnétique (CNDE) est appliqué dans des domaines variés pour l'exploration de défauts cachés affectant des structures. De façon générale, le principe peut se poser en ces termes : un objet inconnu perturbe un milieu hôte donné et illuminé par un signal électromagnétique connu, et la réponse est mesurée sur un ou plusieurs récepteurs de positions connues. Cette réponse contient des informations sur les paramètres électromagnétiques et géométriques des objets recherchés et toute la difficulté du problème traité ici consiste à extraire ces informations du signal obtenu. Plus connu sous le nom de « problèmes inverses », ces travaux s'appuient sur une résolution appropriée des équations de Maxwell. Au « problème inverse » est souvent associé le « problème direct » complémentaire, qui consiste à déterminer le champ électromagnétique perturbé connaissant l'ensemble des paramètres géométriques et électromagnétiques de la configuration, défaut inclus. En pratique, cela est effectué via une modélisation mathématique et des méthodes numériques permettant la résolution numérique de tels problèmes. Les simulateurs correspondants sont capables de fournir une grande précision sur les résultats mais à un coût numérique important. Sachant que la résolution d'un problème inverse exige souvent un grand nombre de résolution de problèmes directs successifs, cela rend l'inversion très exigeante en termes de temps de calcul et de ressources informatiques. Pour surmonter ces challenges, les « modèles de substitution » qui imitent le modèle exact peuvent être une solution alternative intéressante. Une manière de construire de tels modèles de substitution est d'effectuer un certain nombre de simulations exactes et puis d'approximer le modèle en se basant sur les données obtenues. Le choix des simulations (« prototypes ») est normalement contrôlé par une stratégie tirée des outils de méthodes de « plans d'expérience numérique ». 
Dans cette thèse, l'utilisation des techniques de modélisation de substitution et de plans d'expérience numérique dans le cadre d'applications en CNDE est examinée. Trois approches indépendantes sont présentées en détail : une méthode d'inversion basée sur l'optimisation d'une fonction objectif et deux approches plus générales pour construire des modèles de substitution en utilisant des échantillonnages adaptatifs. Les approches proposées dans le cadre de cette thèse sont appliquées sur des exemples en CNDE par courants de Foucault. / Electromagnetic Nondestructive Evaluation (ENDE) is applied in various industrial domains for the exploration of hidden in-material defects of structural components. The principal task of ENDE can generally be formalized as follows: an unknown defect affects a given host structure, interacting with a known electromagnetic field, and the response (derived from the electromagnetic field distorted by the defect) is measured using one or more receivers at known positions. This response contains some information on the electromagnetic constitutive parameters and the geometry of the defect to be retrieved. ENDE aims at extracting this information for the characterization of the defect, i.e., at the solution of the arising “inverse problem”. To this end, one has to be able to determine the electromagnetic field distorted by a defect with known parameters affecting a given host structure, i.e., to solve the “forward problem”. Practically, this is performed via the mathematical modeling (based on Maxwell's equations) and the numerical simulation of the studied ENDE configuration. Such simulators can provide fine precision, but at the price of a high computational cost. However, the solution of an inverse problem often requires several runs of these “expensive-to-evaluate” simulators, making the inversion procedure highly demanding in terms of runtime and computational resources.
To overcome this challenge, “surrogate modeling” offers an interesting alternative solution. A surrogate model imitates the true model, but as a rule, it is much less complex than the latter. A way to construct such surrogates is to perform a couple of simulations and then to approximate the model based on the obtained data. The choice of the “prototype” simulations is usually controlled by a sophisticated strategy, drawn from the tools of “design-of-experiments”. The goal of the research work presented in this Dissertation is the improvement of ENDE methods by using surrogate modeling and design-of-experiments techniques. Three self-sufficient approaches are discussed in detail: an inversion algorithm based on the optimization of an objective function and two methods for the generation of generic surrogate models, both involving a sequential sampling strategy. All approaches presented in this Dissertation are illustrated by examples drawn from eddy-current nondestructive testing.
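As a toy illustration of the surrogate idea (not the Dissertation's own algorithms), the sketch below replaces an "expensive" 1-D simulator with a Gaussian-RBF surrogate and adds samples adaptively where the surrogate disagrees most with a cheaper piecewise-linear model, a crude sequential-sampling criterion; the simulator function is an assumed stand-in:

```python
import numpy as np

def expensive_sim(x):
    """Stand-in for a costly forward solver (assumed smooth 1-D response)."""
    return np.sin(3 * x) + 0.5 * x

def rbf_fit(xs, ys, eps=2.0):
    """Solve for Gaussian-RBF interpolation weights."""
    K = np.exp(-(eps * (xs[:, None] - xs[None, :])) ** 2)
    return np.linalg.solve(K + 1e-10 * np.eye(len(xs)), ys)

def rbf_eval(x, xs, w, eps=2.0):
    return np.exp(-(eps * (x[:, None] - xs[None, :])) ** 2) @ w

grid = np.linspace(0.0, 2.0, 201)
xs = np.linspace(0.0, 2.0, 4)          # initial coarse design of experiments
ys = expensive_sim(xs)
for _ in range(6):                     # sequential (adaptive) sampling
    w = rbf_fit(xs, ys)
    rbf_pred = rbf_eval(grid, xs, w)
    lin_pred = np.interp(grid, xs, ys) # second, cheaper surrogate
    gap = np.abs(rbf_pred - lin_pred)  # disagreement = infill criterion
    gap[np.isin(grid, xs)] = -1.0      # never re-sample an existing point
    x_new = grid[np.argmax(gap)]
    xs = np.sort(np.append(xs, x_new))
    ys = expensive_sim(xs)             # cheap here; a real study would reuse old runs

final_err = np.max(np.abs(rbf_eval(grid, xs, rbf_fit(xs, ys)) - expensive_sim(grid)))
```

Real surrogate strategies use more principled infill criteria (e.g., kriging variance), but the loop structure — fit, locate worst predicted region, run the simulator there, refit — is the same.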
123

Modelo integrado para seleção de cargas e reposicionamento de contêineres vazios no transporte marítimo. / Integrated model of cargo selection and empty containers repositioning in maritime transport.

Teixeira, Rafael Buback 23 September 2011 (has links)
A popularização dos contêineres no transporte de cargas gerais por volta dos anos 60 provocou significativa mudança no tráfego de mercadorias ao redor do mundo. A utilização deste equipamento simplifica e agiliza o processo de transporte e manuseio de cargas, uma vez que permite a movimentação entre diferentes modais com rapidez e segurança nas operações de carga e descarga. Neste contexto, esta pesquisa trata do problema que integra decisões de escolha de cargas a serem transportadas pelo modal marítimo com decisões de reposicionamento de contêineres vazios de modo a maximizar a receita total. O modelo baseia-se em um problema de fluxo em rede multiproduto, a partir da qual é proposta uma modelagem matemática inédita, que permite levar em consideração as principais restrições encontradas na prática tais como: horizonte de planejamento de longo prazo; diferentes tipos e tamanhos de contêineres; múltiplos navios, rotas e suas respectivas programações; rotas que permitem que um porto seja visitado mais de uma vez; capacidades dos navios em termos de número máximo de contêineres cheios e vazios por tipo e peso máximo total; para cada rota e trecho entre dois portos consecutivos; etc. O modelo proposto foi implementado em C++ e utiliza o software de otimização GUROBI, lançado recentemente, assim como uma planilha eletrônica para os dados de entrada. O mesmo foi comparado a um modelo da literatura que utiliza método heurístico para resolução de problema semelhante. O modelo também foi aplicado a problemas de diversos portes evidenciando que é capaz de resolver problemas até à otimização de maneira eficiente e em tempos de processamento reduzidos. / The popularization of containers in transporting general cargo caused a significant change in freight traffic around the world. 
The use of containers simplifies and streamlines the transport and handling of cargo, allowing it to move between different transport modes quickly and safely during loading and unloading. In this context, this research deals with the problem of selecting the cargo to be transported by sea, integrated with decisions on repositioning empty containers, in order to maximize total revenue. The problem is modeled as a multiproduct network flow problem, and a novel mathematical model is proposed that takes into account the main constraints encountered in practice, such as: a long-term planning horizon; different types and sizes of containers; multiple ships, routes and their respective schedules; routes that allow a port to be visited more than once; and vessel capacities in terms of the maximum number of full and empty containers by type and the maximum total weight, for each route and each leg between two consecutive ports. The proposed model was implemented in C++ and uses the recently released GUROBI optimization software, as well as a spreadsheet for data entry. The model was compared with a model from the literature that uses a heuristic method to solve a similar problem. It was also applied to problems of various sizes, showing that it is able to solve them to optimality efficiently and with reduced processing times.
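The flavor of the integrated model can be shown on a deliberately tiny instance (two ports, one vessel round trip, all numbers invented): laden moves earn revenue, empty repositioning costs money, and a flow-balance constraint keeps the container fleet consistent. A plain LP solver is enough at this scale:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: x_ab, x_ba = laden containers on each leg;
#            e_ab, e_ba = empty containers repositioned on each leg.
# Illustrative data: freight rates 100 and 80 per box, empty-move cost 15,
# vessel capacity 25 boxes per leg, laden demand 30 (A->B) and 10 (B->A).
c = [-100.0, -80.0, 15.0, 15.0]          # maximize revenue => minimize -revenue

A_ub = [[1, 0, 1, 0],                    # capacity on leg A->B
        [0, 1, 0, 1]]                    # capacity on leg B->A
b_ub = [25, 25]

A_eq = [[1, -1, 1, -1]]                  # boxes leaving A == boxes returning to A
b_eq = [0]

bounds = [(0, 30), (0, 10), (0, None), (0, None)]   # demand caps the laden flows

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_ab, x_ba, e_ab, e_ba = res.x
total_revenue = -res.fun
```

The solver fills the A-to-B leg with laden boxes and covers the imbalance by repositioning empties back. The full model in the thesis adds container types, multiple vessels, schedules, and weight limits, which multiplies the variables but keeps this same network-flow shape.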
124

Técnicas de programação matemática para a análise e projeto de sistemas biotecnológicos. / Mathematical programming techniques for analysis and design of biotechnological systems.

Carlos Arturo Martínez Ríascos 02 September 2005 (has links)
A complexidade de alguns sistemas biotecnológicos impossibilita seu estudo sem o uso de técnicas de programação matemática avançadas. A quantificação de fluxos metabólicos e a síntese e projeto ótimos de plantas multiproduto são problemas com esta característica, abordados na presente tese. A quantificação de fluxos metabólicos empregando balanços de marcações é representada como um problema de otimização não-linear, o qual se resolve através da minimização da diferença entre as medidas experimentais e as predições do modelo da rede metabólica. Este problema surge da necessidade de se caracterizar o metabolismo mediante a estimação das velocidades das reações bioquímicas. O modelo matemático para problemas deste tipo é composto basicamente por balanços de metabólitos e de isótopos; os primeiros são lineares, enquanto os segundos introduzem não-linearidades ao problema e, neste trabalho, são modelados mediante uma modificação da técnica de matrizes de mapeamento de átomos. Para quantificar os fluxos metabólicos considerando a existência de ótimos locais, desenvolveu-se um algoritmo branch & bound espacial, no qual a busca global é feita mediante a divisão da região de busca (branching) e a geração de seqüências de limites (bounding) que convergem para a solução global. Como estudo de caso, estimaram-se os fluxos no metabolismo central de Saccharomyces cerevisiae. Os resultados confirmam a existência de soluções locais e a necessidade de desenvolver uma estratégia de busca global; a solução global obtida apresenta semelhanças, nos fluxos centrais, com a melhor solução obtida por um algoritmo evolucionário. 
Quanto aos problemas de síntese e projeto de sistemas biotecnológicos multiproduto, as abordagens mais empregadas para resolvê-los são a definição e dimensionamento seqüencial das operações unitárias, e a fixação dos parâmetros de dimensionamento e de estimação do tempo de operação (com valores obtidos em laboratório ou planta piloto); porém ambas abordagens fornecem soluções subótimas. Por outro lado, a solução simultânea da síntese e projeto de sistemas biotecnológicos multiproduto gera modelos misto-inteiros não-lineares (MINLP) de grande porte, devido à combinação das decisões, ligadas à existência de alternativas no processo, com as restrições não-lineares geradas dos modelos das operações. Como estudo de caso considera-se uma planta para produção de insulina, vacina para hepatite B, ativador de plasminogênio tecidual (tissue plasminogen activator) e superóxido dismutase, mediante três hospedeiros diferentes: levedura (S. cerevisiae) com expressão extra ou intracelular, Escherichia coli e células de mamíferos. O projeto deve satisfazer a meta de produção para cada produto, minimizando os custos de capital e selecionando os hospedeiros, as operações e o arranjo dos equipamentos em cada estágio. Os resultados obtidos mostram que a formulação das decisões por abordagem big-M permite resolver o modelo MINLP gerado e que a consideração de múltiplos produtos com seqüências e condições de processamento diferentes gera grande ociosidade nos equipamentos e aumenta o custo total do projeto. Para o estudo de caso observou-se que a alocação de tanques intermediários tem um efeito limitado na diminuição do custo do projeto, porém a implementação simultânea da flexibilização do scheduling, do projeto de equipamentos auxiliares e tanques intermediários permite obter projetos satisfatórios. / The complexity of some biotechnological systems does not allow their study without the use of advanced mathematical programming techniques.
Metabolic flux quantification and the optimal synthesis and design of multiproduct plants are problems with this characteristic, and both are addressed in this thesis. Metabolic flux quantification employing labeling balances is formulated as a nonlinear optimization problem that is solved by minimizing the difference between experimental measurements and the predictions of the metabolic network model. This problem arises from the need to estimate the rates of the biochemical reactions that characterize the metabolism. The mathematical model for this class of problems is composed of balances of metabolites and isotopes; the former are linear whereas the latter are nonlinear and, in this work, are modeled by a modification of the atom mapping matrix technique. A spatial branch & bound algorithm that accounts for the existence of local optima was developed to quantify the metabolic fluxes; in this algorithm, the global search proceeds by dividing the search region (branching) and generating sequences of bounds (bounding) that converge to the global solution. As a case study, fluxes in the central metabolism of Saccharomyces cerevisiae were estimated. The results confirm the existence of local solutions and the need for a global search strategy; the central fluxes in the obtained global solution are similar to those obtained by an evolutionary algorithm. To solve problems of synthesis and design of multiproduct biotechnological systems, the most commonly employed approaches are the sequential selection and sizing of the unit operations, and the fixing of sizing and operating-time parameters (employing values from laboratory or pilot plants); nevertheless, both approaches generate suboptimal solutions.
On the other hand, the simultaneous solution of the synthesis and design of multiproduct biotechnological systems generates large-scale mixed-integer nonlinear programming (MINLP) models, due to the combination of discrete processing alternatives with nonlinear constraints from the operation models. As a case study, a plant for the production of insulin, hepatitis B vaccine, tissue plasminogen activator and superoxide dismutase was considered, using three different hosts: yeast (S. cerevisiae) with extra- or intracellular expression, Escherichia coli and mammalian cells. The design must satisfy the production target for each product, minimizing the capital cost and selecting the hosts, the operations and the number of parallel units in each stage. The results show that formulating the decisions with the big-M approach allows the solution of the generated MINLP model, and that considering several products with different processing sequences and conditions generates substantial equipment idleness and increases the total cost of the design. In the case study it was observed that the allocation of storage tanks has a limited effect on cost reduction, but the simultaneous implementation of flexible scheduling, the design of auxiliary equipment and intermediate storage tanks allows the generation of satisfactory designs.
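The big-M device mentioned above can be illustrated on a miniature host-selection problem (all numbers invented, far simpler than the thesis's MINLP): a binary variable per host, a production variable linked to it by x_i <= M*y_i, and a total production target. SciPy's `milp` (available since SciPy 1.9) handles the mixed-integer linear version directly:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: [x1, x2, y1, y2] -- production on host 1/2 and binary selections.
# Illustrative costs: host 1 has fixed cost 10 and unit cost 2,
# host 2 has fixed cost 60 and unit cost 1.5; the production target is 40.
M = 100.0                                 # big-M: any valid upper bound on production
c = np.array([2.0, 1.5, 10.0, 60.0])

constraints = [
    LinearConstraint([1, 1, 0, 0], 40, np.inf),     # meet the production target
    LinearConstraint([1, 0, -M, 0], -np.inf, 0),    # x1 <= M*y1 (big-M link)
    LinearConstraint([0, 1, 0, -M], -np.inf, 0),    # x2 <= M*y2
    LinearConstraint([0, 0, 1, 1], 1, 1),           # select exactly one host
]
integrality = np.array([0, 0, 1, 1])                # y1, y2 are integer (binary)
bounds = Bounds(lb=[0, 0, 0, 0], ub=[np.inf, np.inf, 1, 1])

res = milp(c=c, constraints=constraints,
           integrality=integrality, bounds=bounds)
x1, x2, y1, y2 = res.x
```

When y_i = 0 the big-M row forces x_i = 0, switching that host's production off; the thesis applies the same device to nonlinear operation models, which is what makes the problem an MINLP rather than a MILP.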
125

Camera Motion Estimation for Multi-Camera Systems

Kim, Jae-Hak, Jae-Hak.Kim@anu.edu.au January 2008 (has links)
The estimation of the motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular in the case of non-overlapping views. We also present four types of generalized cameras, which can be solved using our proposed, modified SVD method. This is the first study to find linear relations for certain types of generalized cameras and to perform experiments using the proposed linear method.
Third, we present a linear 6-point method (5 points from the same camera and 1 point from another camera) that estimates the relative motion of multi-camera systems where the cameras have no overlapping views. In addition, we discuss theoretical and geometric analyses of multi-camera systems as well as certain critical configurations where the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast search method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an early stage, we reduce the computation time of solving the LP.

We tested the proposed methods in experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiment on estimating the translation of omnidirectional cameras and in estimating the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution using L∞ to estimate the relative motion of multi-camera systems can be achieved.
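The first ingredient above, a linear system solved by SVD when the rotation is known, can be sketched on synthetic data (a minimal toy setup, not the thesis code). Each ray correspondence x1, x2 with X2 = R·X1 + t satisfies x2 · (t × R·x1) = 0, which is linear in t, so the translation direction is the null vector of a stacked coefficient matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth motion; the rotation is assumed known, as in the linear method.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t_true = np.array([0.6, -0.3, 0.74])
t_true /= np.linalg.norm(t_true)

# Synthetic 3-D points and the corresponding viewing rays in both frames.
X1 = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 4.0])
X2 = X1 @ R.T + t_true
x1 = X1 / np.linalg.norm(X1, axis=1, keepdims=True)
x2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)

# Each correspondence gives one equation linear in t:
#   x2 . (t x (R x1)) = 0   <=>   ((R x1) x x2) . t = 0
A = np.cross(x1 @ R.T, x2)

# Translation direction = right-singular vector of the smallest singular value.
_, _, Vt = np.linalg.svd(A)
t_est = Vt[-1]
if t_est @ t_true < 0:          # resolve the sign ambiguity of the null vector
    t_est = -t_est
```

With noise-free data the recovery is exact up to scale; with noise, the smallest-singular-vector solution is the least-squares estimate, which is why SVD is the standard tool for such linear relations.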
126

Analytic and Numerical Methods for the Solution of Electromagnetic Inverse Source Problems

Popov, Mikhail January 2001 (has links)
No description available.
127

Geometry guided phase transition pathway and stable structure search for crystals

Crnkic, Edin 21 May 2012 (has links)
Recently, a periodic surface model was developed to assist geometric construction in computer-aided nano-design. This implicit surface model helps create super-porous nano structures parametrically and supports crystal packing. In this thesis, a new approach for pathway search in phase transition simulation of crystal structures is proposed. The approach relies on the interpolation of periodic loci surface models. Respective periodic plane models are reconstructed from the positions of individual atoms at the initial and final states, and surface correspondence is found using a Simulated Annealing-like algorithm. With geometric constraints imposed based on the physical and chemical properties of crystals, two surface interpolation methods are used to approximate the intermediate atom positions on the transition pathway in the full search of the minimum-energy path. This hybrid approach integrates geometric information in configuration space with physics information to allow for efficient transition pathway search. The methods are demonstrated with examples of FeTi, VO2, and FePt. Additionally, two new particle swarm optimization (PSO) algorithms are developed and applied to crystal structure relaxation of the initial and final states. The PSO algorithms are integrated into the Quantum ESPRESSO open-source software package and tested against the default Broyden-Fletcher-Goldfarb-Shanno relaxation method.
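A generic global-best PSO (a baseline sketch, not the two algorithms developed in the thesis) looks like the loop below; the "energy" function is an assumed multimodal stand-in, since the real objective would come from a first-principles calculation:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    """Stand-in for a structure's energy surface (assumed, multimodal)."""
    return np.sum(x**2 - 2.0 * np.cos(3.0 * x), axis=-1)

n_particles, dim, iters = 30, 4, 200
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

pos = rng.uniform(-3, 3, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), energy(pos)
gbest = pbest[np.argmin(pbest_val)].copy()
init_best = pbest_val.min()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # velocity update: inertia + pull toward personal and global bests
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = energy(pos)
    improved = val < pbest_val        # update personal bests
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()   # update global best
```

Because personal and global bests only ever improve, the best energy found is monotonically nonincreasing over the iterations, which is the property a relaxation driver needs.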
128

Applicability of deterministic global optimization to the short-term hydrothermal coordination problem

Ferrer Biosca, Alberto 30 March 2004 (has links)
Esta Tesis está motivada por el interés en aplicar procedimientos de optimización global a problemas del mundo real. Para ello, nos hemos centrado en el problema de Coordinación Hidrotérmica de la Generación Eléctrica a Corto Plazo (llamado Problema de Generación en esta Tesis) donde la función objetivo y las restricciones no lineales son polinomios de grado como máximo cuatro. En el Problema de Generación no tenemos disponible una representación en diferencia convexa de las funciones involucradas ni tampoco es posible utilizar la estructura del problema para simplificarlo. No obstante, cuando disponemos de una función continua f(x) definida en un conjunto cerrado y no vacío S, el problema puede transformarse en otro equivalente expresado mediante minimize l(z) subject to z ∈ D \ int C (programa d.c. canónico), donde l(z) es una función convexa (en general suele ser una función lineal) con D y C conjuntos convexos y cerrados. Una estructura matemática tal como D \ int C no resulta siempre aparente y aunque lo fuera, siempre queda por realizar una gran cantidad de cálculos para expresarla de manera que se pueda resolver el problema de una manera eficiente desde un punto de vista computacional. La característica más importante de esta estructura es que aparecen conjuntos convexos y complementarios de conjuntos convexos. Por este motivo en tales problemas se pueden usar herramientas analíticas tales como subdiferenciales e hiperplanos soporte. Por otro lado, como aparecen conjuntos complementarios de conjuntos convexos, estas herramientas analíticas se deben usar de una manera determinada y combinándolas con herramientas combinatorias tales como cortes por planos, branch and bound y aproximación exterior. En esta tesis se pone de manifiesto la estructura matemática subyacente en el Problema de Generación utilizando el hecho de que los polinomios son expresables como diferencia de funciones convexas. Utilizando esta propiedad describimos el problema como un programa d.c.
canónico equivalente. Pero aún más, partiendo de la estructura de las funciones del Problema de Generación es posible reescribirlo de una manera más conveniente y obtener de este modo ventajas numéricas desde el punto de vista de la implementación. Basándonos en la propiedad de que las potencias m-ésimas de los polinomios homogéneos de grado 1 son un conjunto de generadores del espacio vectorial de los polinomios homogéneos de grado m, hemos desarrollado los conceptos y propiedades necesarios que nos permiten expresar un polinomio cualquiera como diferencia de polinomios convexos. También se ha desarrollado y demostrado la convergencia de un nuevo algoritmo de optimización global (llamado Algoritmo Adaptado) que permite resolver el Problema de Generación. Como el programa equivalente no está acotado, se ha introducido una técnica de subdivisión mediante prismas en lugar de la habitual subdivisión mediante conos. Para obtener una descomposición óptima de un polinomio en diferencia de polinomios convexos, se ha enunciado el Problema de Norma Mínima mediante la introducción del concepto de Descomposición con Mínima Desviación, con lo cual obtenemos implementaciones más eficientes, al reducir el número de iteraciones del Algoritmo Adaptado. Para resolver el problema de Norma Mínima hemos implementado un algoritmo de programación cuadrática semi-infinita utilizando una estrategia de build-up and build-down, introducida por Den Hertog (1997) para resolver programas lineales semi-infinitos, la cual usa un procedimiento de barrera logarítmica. Finalmente, se describen los resultados obtenidos por la implementación de los algoritmos anteriormente mencionados y se dan las conclusiones. / This Thesis has been motivated by the interest in applying deterministic global optimization procedures to real-world problems with no special structure.
We have focused on the Short-Term Hydrothermal Coordination of Electricity Generation Problem (also named the Generation Problem in this Thesis), where the objective function and the nonlinear constraints are polynomials of degree up to four. In the Generation Problem no d.c. representation of the involved functions is available, and we cannot take advantage of any special structure of the problem either. Hence, a very general problem such as this does not seem to have any mathematical structure conducive to computational implementations. Nevertheless, when f(x) is a continuous function and S is a nonempty closed set, the problem can be transformed into an equivalent problem expressed as: minimize l(z) subject to z ∈ D \ int C (canonical d.c. program), where l(z) is a convex function (usually a linear function) and D and C are closed convex sets. A mathematical complementary convex structure such as D \ int C is not always apparent, and even when it is explicit, a lot of work still remains to be done to bring it into a form amenable to efficient computational implementation. The attractive feature of the mathematical complementary convex structure is that it involves convexity. Thus, we can use analytical tools from convex analysis such as subdifferentials and supporting hyperplanes. On the other hand, since convexity is involved in a reverse sense, these tools must be used in a specific way and combined with combinatorial tools such as cutting planes, branch and bound and outer approximation. We introduce the general mathematical complementary convex structure underlying global optimization problems and describe the Generation Problem, whose functions are d.c. functions because they are polynomials. Thus, by using the properties of d.c. functions, we describe the Generation Problem as an equivalent canonical d.c. programming problem.
From the structure of its functions, the Generation Problem can be rewritten as a more suitable equivalent reverse convex program in order to obtain an adaptation for advantageous numerical implementation. Concepts and properties are introduced which allow us to obtain an explicit representation of a polynomial as a difference of convex polynomials, based on the fact that the set of mth powers of homogeneous polynomials of degree 1 is a generating set for the vector space of homogeneous polynomials of degree m. We also describe a new global optimization algorithm (the adapted algorithm) to solve the Generation Problem. Since the equivalent reverse convex program is unbounded, we use prismatical subdivisions instead of conical ones. Moreover, we prove the convergence of the adapted algorithm by using a prismatical subdivision process together with an outer approximation procedure. We state the Minimal Norm Problem, using the concept of Least Deviation Decomposition, in order to obtain the optimal d.c. representation of a polynomial function, which allows a more efficient implementation by reducing the number of iterations of the adapted algorithm. A quadratic semi-infinite algorithm is described. We propose a build-up and build-down strategy, introduced by Den Hertog (1997) for standard linear programs, that uses a logarithmic barrier method. Finally, computational results are given and conclusions are drawn.
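A 1-D illustration of a d.c. (difference-of-convex) representation: f(x) = x^4 - 3x^2 is nonconvex, but adding and subtracting a convex quadratic gives f = g - h with both parts convex. The check below verifies the identity and the convexity of both parts numerically; the thesis's minimal-norm machinery is what would pick the "least deviation" among all such decompositions:

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 401)

f = xs**4 - 3.0 * xs**2          # nonconvex polynomial
g = xs**4 + 3.0 * xs**2          # convex:  g''(x) = 12x^2 + 6 > 0
h = 6.0 * xs**2                  # convex:  h''(x) = 12 > 0

# d.c. identity: f = g - h everywhere on the grid.
identity_gap = np.max(np.abs(f - (g - h)))

# Discrete convexity check: second differences must be nonnegative.
convex_g = np.all(np.diff(g, 2) >= -1e-9)
convex_h = np.all(np.diff(h, 2) >= -1e-9)
nonconvex_f = np.any(np.diff(f, 2) < 0)   # f itself fails the same test
```

Any c >= 0 in g = x^4 + c*x^2, h = (c + 3)*x^2 works; larger c makes both parts "more convex" but slows reverse-convex algorithms down, which is exactly why an optimal (minimal-norm) decomposition matters.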
129

Analysis of dense colloidal dispersions with multiwavelength frequency domain photon migration measurements

Dali, Sarabjyot Singh 02 June 2009 (has links)
Frequency domain photon migration (FDPM) measurements are used to study the properties of dense colloidal dispersions with hard-sphere and electrostatic interactions, which are otherwise difficult to analyze due to multiple scattering effects. Hard-sphere interactions were studied using a theoretical model based on the hard-sphere Percus-Yevick theory for a polydisperse mixture of particles. The particle size distribution and volume fraction were recovered by solving a nonlinear inverse problem using genetic algorithms. The mean sizes of the 144 and 223 nm diameter particles were recovered within an error range of 0-15.53% of the mean diameters determined from dynamic light scattering measurements. The volume fraction was recovered within an error range of 0-24% of the experimentally determined volume fractions. At ionic strengths between 0.5 and 4 mM, multiwavelength (660, 685, 785 and 828 nm) FDPM measurements of isotropic scattering coefficients were made on monodisperse dispersions of 144 and 223 nm diameter at volume fractions of 15%-22%, as well as on bidisperse mixtures of 144 and 223 nm diameter latex particles in 1:3, 1:1 and 3:1 ratios at volume fractions of 15%-24%. Structure factor models with the Yukawa potential were computed by Monte Carlo (MC) simulations and by numerical solution of the coupled Ornstein-Zernike equations. In monodisperse dispersions of 144 nm particle diameter, the isotropic scattering coefficient increased with increasing ionic strength, consistent with model predictions, whereas there was a reversal of trends and fluctuations for the 223 nm particle diameter. In bidisperse mixtures with the maximum number of smaller particles, the isotropic scattering coefficient increased with increasing ionic strength, and the trends conformed with MC simulations of binary Yukawa potential models.
As the number of larger-diameter particles in the dispersions increased, the isotropic scattering coefficients fluctuated, and no match was found between the models and measurements for a number ratio of 1:3. This research lays the foundation for determining particle size distributions, volume fractions, and an estimate of effective charge at high particle densities.
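The genetic-algorithm inversion described in this abstract can be sketched in miniature. The forward model below is a stand-in, not the Percus-Yevick scattering model from the thesis: it maps a hypothetical candidate (mean diameter, volume fraction) to synthetic "scattering" values at the four wavelengths, and the GA searches for the candidate that best reproduces the measurements.

```python
import random

# Four FDPM wavelengths from the abstract (nm).
WAVELENGTHS = (660.0, 685.0, 785.0, 828.0)

def forward(diameter, phi):
    # Hypothetical smooth forward model (placeholder for the PY theory):
    # maps (mean diameter in nm, volume fraction) to four synthetic values.
    return [phi * (diameter / wl)**3 * (1 + wl / diameter) for wl in WAVELENGTHS]

def fitness(candidate, measured):
    # Negative squared misfit: higher is better.
    model = forward(*candidate)
    return -sum((m - v)**2 for m, v in zip(measured, model))

def run_ga(measured, pop_size=60, generations=80, seed=1):
    rng = random.Random(seed)
    def random_candidate():
        return (rng.uniform(50, 400), rng.uniform(0.05, 0.4))  # nm, vol. fraction
    pop = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, measured), reverse=True)
        elite = pop[: pop_size // 4]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()                          # blend crossover
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            child[0] += rng.gauss(0, 5)               # Gaussian mutation (nm)
            child[1] += rng.gauss(0, 0.01)            # Gaussian mutation (phi)
            children.append(tuple(child))
        pop = elite + children
    return max(pop, key=lambda c: fitness(c, measured))

# Recover a hypothetical truth (223 nm, 0.20) from noiseless synthetic data.
truth = (223.0, 0.20)
best = run_ga(forward(*truth))
```

The thesis's actual inversion replaces `forward` with the multiple-scattering PY (or Ornstein-Zernike/Yukawa) model, but the selection-crossover-mutation loop is the same pattern.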
130

Problem decomposition by mutual information and force-based clustering

Otero, Richard Edward 28 March 2012 (has links)
The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight into the fundamental physics driving problem solution. This work advances the state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence that works on both continuous and discrete data. Mutual information can measure both linear and nonlinear dependence between variables, without the limitation of covariance, which captures only linear dependence. Mutual information can also handle data that lacks derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem.
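The covariance-versus-mutual-information claim above is easy to demonstrate. In this toy example (not code from the thesis), y = x² with x symmetric about zero has covariance near zero even though y is a deterministic function of x; a simple histogram (plug-in) estimate of mutual information is clearly positive.

```python
import math
import random

def mutual_information(xs, ys, bins=8):
    # Plug-in MI estimate (in bits) from a 2-D histogram of the samples.
    def bin_of(v, lo, hi):
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    n = len(xs)
    joint, px, py = {}, {}, {}
    lox, hix, loy, hiy = min(xs), max(xs), min(ys), max(ys)
    for x, y in zip(xs, ys):
        i, j = bin_of(x, lox, hix), bin_of(y, loy, hiy)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
    return sum(c / n * math.log2((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in joint.items())

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(20000)]
ys = [x**2 for x in xs]          # purely nonlinear dependence

mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
mi = mutual_information(xs, ys)
# cov is ~0 (covariance misses the dependence); mi is clearly positive.
```

Link-ranking by mutual information, as used in the force-based clustering above, amounts to computing such estimates for every variable pair and ranking the edges.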
Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to genetic algorithms. This work demonstrates the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternative for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from machine learning and a set of randomly generated test problems, decision trees for choosing which method to apply are also created, quantifying decomposition performance over a large region of the design space.
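The MIMIC idea can be sketched in a heavily simplified form: fit a chain of pairwise conditionals to the best samples (ordering variables greedily by conditional entropy), then draw the next population from that chain. The version below runs on the toy OneMax problem (maximize the number of 1-bits); it is an illustration of the dependency-chain mechanism, not the thesis's implementation or the continuous-variable extension.

```python
import math
import random

def entropy(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mimic(n_bits=20, pop_size=120, iters=25, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=sum, reverse=True)          # OneMax fitness = number of 1s
        elite = pop[: pop_size // 2]             # keep the top half
        m = len(elite)
        marg = [sum(s[i] for s in elite) / m for i in range(n_bits)]
        # Build the chain: start at the lowest-entropy bit, then greedily
        # append the bit with the lowest conditional entropy given the last.
        order = [min(range(n_bits), key=lambda i: entropy(marg[i]))]
        cond = {}                                # (i, j) -> {v: P(bit i = 1 | bit j = v)}
        while len(order) < n_bits:
            j = order[-1]
            best_i, best_h, best_probs = None, float('inf'), None
            for i in range(n_bits):
                if i in order:
                    continue
                h, probs = 0.0, {}
                for v in (0, 1):
                    match = [s for s in elite if s[j] == v]
                    p = (sum(s[i] for s in match) + 1) / (len(match) + 2)  # Laplace
                    probs[v] = p
                    h += len(match) / m * entropy(p)
                if h < best_h:
                    best_i, best_h, best_probs = i, h, probs
            cond[(best_i, j)] = best_probs
            order.append(best_i)
        # Sample a fresh population from the fitted chain.
        new_pop = []
        for _ in range(pop_size):
            s = [0] * n_bits
            s[order[0]] = 1 if rng.random() < marg[order[0]] else 0
            for prev, cur in zip(order, order[1:]):
                s[cur] = 1 if rng.random() < cond[(cur, prev)][s[prev]] else 0
            new_pop.append(s)
        pop = new_pop
    return max(pop, key=sum)

best = mimic()
```

On problems with real coupling, the fitted conditionals capture the discovered inter-dependencies, which is what distinguishes MIMIC from samplers that treat variables independently.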
