About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Random Matrix Theory with Applications in Statistics and Finance

Saad, Nadia Abdel Samie Basyouni Kotb January 2013
This thesis investigates a technique, which we call the Scaling technique, for estimating the risk of the mean-variance (MV) portfolio optimization problem. It provides a better estimator of the risk of the MV optimal portfolio. We obtain this result for a general estimator of the covariance matrix of the returns, which covers the correlated sampling case as well as the independent sampling case and the exponentially weighted moving average case; this gave rise to the paper [CMcS]. Our result concerning the Scaling technique relies on the moments of the inverse of compound Wishart matrices, which were an open problem in the theory of random matrices. We in fact tackle a much more general setup, considering any random matrix whose distribution has an appropriate invariance property (orthogonal or unitary) under an appropriate action (by conjugation, or by a left-right action). Our approach is based on Weingarten calculus. As an interesting byproduct of our study, and as a preliminary to computing the moments of the inverse of a compound Wishart random matrix, we obtain explicit moment formulas for the pseudo-inverse of Ginibre random matrices; these results are given in the paper [CMS]. Using the moments of the inverse of compound Wishart matrices, we obtain asymptotically unbiased estimators of the risk and of the weights of the MV portfolio. Finally, we present some numerical results, which are part of our future work.
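As context for the risk-underestimation issue that scaling-type corrections address, here is a minimal Monte Carlo sketch (not the thesis's estimator; the dimensions, covariance matrix and sample size are invented) showing that the plug-in risk of sample-covariance MV weights systematically understates the true risk:

```python
# Illustrative sketch only: plug-in mean-variance weights from a sample
# covariance matrix; the "in-sample" risk understates the true risk.
import numpy as np

rng = np.random.default_rng(0)
p, n = 10, 60                                  # assets, observations (assumed)
true_cov = 0.04 * (0.5 * np.eye(p) + 0.5)      # assumed true covariance
mu = np.full(p, 0.05)                          # assumed expected returns

in_sample, true_risk = [], []
for _ in range(500):
    R = rng.multivariate_normal(mu, true_cov, size=n)
    S = np.cov(R, rowvar=False)        # sample (Wishart-distributed) covariance
    w = np.linalg.solve(S, mu)         # unnormalized MV weights ~ S^{-1} mu
    w /= w.sum()                       # fully invested portfolio
    in_sample.append(w @ S @ w)        # estimated (plug-in) risk
    true_risk.append(w @ true_cov @ w) # actual risk of those weights

print(f"mean plug-in risk : {np.mean(in_sample):.5f}")
print(f"mean true risk    : {np.mean(true_risk):.5f}")  # noticeably larger
```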
32

Automatic generation of optimization problems for the design and management of the electrical networks of multi-source, multi-load smart buildings

Warkozek, Ghaith 07 September 2011
Buildings are increasingly complex systems in which energy flows must be managed according to usage: these are so-called intelligent (smart) buildings. This creates growing complexity for designers, who must consider the building itself (several electrical sources and a multiplicity of loads), its equipment and its energy management, as well as its interactions with the external environment (exogenous information flows from the energy market: purchase and resale prices, subsidies for self-consumption, etc.). It has become necessary to couple the design phase with the energy-management phase of the building. This thesis proposes a methodological approach for automatically formulating optimization problems that can be used both in the design and in the operation of the building system. The approach is based on concepts from Model-Driven Engineering (MDE).
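To make the kind of generated problem concrete, the following is a hand-written toy instance, not produced by the thesis's tool: a small linear program dispatching grid purchases and local PV production between self-consumption and resale; all tariffs and profiles are invented:

```python
# Toy source/load dispatch over 3 periods: grid purchase plus PV split
# between self-consumption and resale, minimizing net cost.
import numpy as np
from scipy.optimize import linprog

load = np.array([3.0, 5.0, 4.0])          # kWh demanded per period (assumed)
pv   = np.array([1.0, 4.0, 2.0])          # kWh of PV available per period
buy_price, sell_price = 0.20, 0.10        # EUR/kWh (assumed tariffs)

# Variables, period-major: [grid_buy x3, pv_used x3, pv_sold x3]
c = np.concatenate([np.full(3, buy_price),      # pay for grid energy
                    np.zeros(3),                # self-consumed PV is free
                    np.full(3, -sell_price)])   # revenue from resale

A_eq = np.zeros((6, 9))
b_eq = np.concatenate([load, pv])
for t in range(3):
    A_eq[t, t] = A_eq[t, 3 + t] = 1.0              # grid_buy + pv_used = load
    A_eq[3 + t, 3 + t] = A_eq[3 + t, 6 + t] = 1.0  # pv_used + pv_sold = pv

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 9)
print(res.x.reshape(3, 3))   # rows: grid_buy, pv_used, pv_sold per period
```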
33

Genetic algorithms for route planning of bank employees: master's thesis

Sadoon, A. M. January 2020
Evolutionary algorithms are machine-learning techniques that can be applied to optimization problems in many fields. Planning the routes of bank employees is a combinatorial optimization problem. This work proposes a genetic algorithm for planning such routes. Computational experiments were carried out and demonstrate the effectiveness of the proposed method.
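A minimal genetic algorithm for a routing problem of this kind might look as follows (a sketch only; the sites, operators and parameters are illustrative and not taken from the paper):

```python
# Tiny GA for a tour over client sites: order crossover + swap mutation.
import random
import math

random.seed(1)
sites = [(random.random(), random.random()) for _ in range(15)]

def tour_length(t):
    return sum(math.dist(sites[t[i]], sites[t[(i + 1) % len(t)]])
               for i in range(len(t)))

def order_crossover(a, b):
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]                       # copy a slice from parent a
    rest = [g for g in b if g not in child]   # fill the rest in b's order
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(t, rate=0.2):
    if random.random() < rate:                # swap two positions
        i, j = random.sample(range(len(t)), 2)
        t[i], t[j] = t[j], t[i]
    return t

pop = [random.sample(range(len(sites)), len(sites)) for _ in range(60)]
for _ in range(200):
    pop.sort(key=tour_length)
    elite = pop[:20]                          # truncation selection
    pop = elite + [mutate(order_crossover(*random.sample(elite, 2)))
                   for _ in range(40)]
print(round(tour_length(min(pop, key=tour_length)), 3))
```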
34

Some Population Set-Based Methods for Unconstrained Global Optimization

Kaelo, Professor 16 November 2006
Student Number: 0214677F - PhD thesis - School of Computational and Applied Mathematics - Faculty of Science / Many real-life problems are formulated as global optimization problems with continuous variables. These problems are in most cases nonsmooth, nonconvex and often simulation-based, which rules out gradient-based methods for solving them. Efficient, reliable and derivative-free global optimization methods for such problems are therefore needed. In this thesis, we focus on improving the efficiency and reliability of some global optimization methods. In particular, we concentrate on improving some population set-based methods for unconstrained global optimization, mainly through hybridization. Hybridization is widely recognized as one of the most attractive areas of unconstrained global optimization: experiments have shown that hybridization can produce new methods that inherit the strengths of their original elements but not their weaknesses. We suggest a number of new hybridized population set-based methods based on differential evolution (DE), controlled random search (CRS2) and the real-coded genetic algorithm (GA). We propose five new versions of DE. In the first version, we introduce a localization, called random localization, in the mutation phase of DE. In the second version, we propose a localization in the acceptance phase of DE. In the third version, we form a DE hybrid algorithm by probabilistically combining the point-generation scheme of CRS2 with that of DE in the DE algorithm. The fourth and fifth versions are also DE hybrids; they hybridize the mutation of DE with the point-generation rule of the electromagnetism-like (EM) algorithm. We also propose five new versions of CRS2. The first version modifies the point-generation scheme of CRS2 by introducing a local mutation technique. In the second and third modifications, we probabilistically combine the point-generation scheme of CRS2 with the linear interpolation scheme of a trust-region-based method. The fourth version is a CRS hybrid that probabilistically combines the quadratic interpolation scheme with the linear interpolation scheme in CRS2. In the fifth version, we form a CRS2 hybrid algorithm by probabilistically combining the point-generation scheme of CRS2 with that of DE in the CRS2 algorithm. Finally, we propose five new versions of the real-coded genetic algorithm (GA) with arithmetic crossover. In the first version of GA, we introduce a local technique. In the second version, we propose an integrated crossover rule that generates two children at a time using two different crossover rules; introducing a local technique into the second version yields the third version. The fourth and fifth versions are based on the probabilistic adaptation of crossover rules. The efficiency and reliability of the new methods are evaluated through numerical experiments on a large test suite of both simple and difficult problems from the literature. Results indicate that the new hybrids are much better than their original counterparts in both reliability and efficiency. The new hybrids proposed in this study therefore offer an alternative to many currently available stochastic algorithms for solving global optimization problems in which gradient information is not readily available.
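For reference, the base DE scheme (rand/1/bin) that these hybrids start from can be sketched as follows; the objective function, population size and control parameters F and CR are illustrative:

```python
# Bare-bones differential evolution, rand/1/bin variant.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2)            # toy objective (sphere function)
dim, NP, F, CR = 5, 30, 0.7, 0.9
pop = rng.uniform(-5, 5, (NP, dim))
fit = np.array([f(x) for x in pop])

for _ in range(300):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3,
                                 replace=False)]
        mutant = a + F * (b - c)                    # DE mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True             # at least one gene crosses
        trial = np.where(cross, mutant, pop[i])     # binomial crossover
        if (ft := f(trial)) < fit[i]:               # greedy acceptance
            pop[i], fit[i] = trial, ft
print(fit.min())
```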
35

Diagnosis of nonlinear systems using kernel principal component analysis

Anani, Kwami Dodzivi 21 March 2019
In this thesis, the diagnosis of a nonlinear system is performed through data analysis. Principal Component Analysis (PCA), originally designed for data linked by linear relations, is coupled with kernel methods to detect and isolate faults in nonlinear systems and to estimate their magnitudes. Kernel PCA projects the data, through a nonlinear mapping, into a high-dimensional space called the feature space, where linear PCA is applied. Because the projection is carried out via kernels, detection can be performed directly in the feature space. Estimating the magnitude of a fault, however, requires solving a nonlinear optimization problem. A contribution analysis isolates the faults and estimates their magnitudes: the variable with the largest contribution is the one most likely to be faulty. In this work, we propose new methods for the fault isolation and estimation phases, where existing work has limitations. The proposed method is based on contributions under constraints, which yield a sparse reconstruction of the variables. The effectiveness of the proposed methods is demonstrated on a simulated continuous stirred-tank reactor (CSTR).
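A hedged sketch of KPCA-based fault detection (detection only; not the thesis's constrained-contribution method for isolation and estimation) could look like this, with an invented nonlinear data set and an empirical control limit:

```python
# Fit KPCA on fault-free data, then flag test samples whose Hotelling T^2
# on the kernel principal scores exceeds an empirical 99% limit.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, 300)
X_train = np.c_[t, t**2] + 0.02 * rng.standard_normal((300, 2))  # fault-free

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=5.0).fit(X_train)
var = kpca.transform(X_train).var(axis=0)   # variance of each score

def t2(x):
    s = kpca.transform(np.atleast_2d(x))[0]
    return np.sum(s**2 / var)               # Hotelling T^2 statistic

limit = np.quantile([t2(x) for x in X_train], 0.99)
normal, faulty = [0.5, 0.25], [0.5, 0.8]    # on / off the training manifold
print(f"T2 normal: {t2(normal):.2f}, T2 faulty: {t2(faulty):.2f}, "
      f"limit: {limit:.2f}")
```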
36

Logical puzzles and brainteasers as optimization problems

Lukesová, Kristýna January 2011
This thesis applies classical optimization problems, such as the assignment and set-covering problems, to logical puzzles and brainteasers. The first part presents the mathematical model, a description and a typical example of each optimization problem used in the thesis. The second part applies these models to particular brainteasers, for example Sudoku and Einstein's Puzzle. The exercises are divided into simpler and more complex ones; for each, the specification, the source and the solution method are stated. The worked examples use Lingo, MS Excel or both. The aim is to show that logical puzzles and brainteasers can be addressed as optimization problems, confirming the wide applicability of these models; the examples can also clarify and diversify the curriculum.
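As an illustration of the assignment-style binary models involved, here is a Sudoku formulation sketched in PuLP rather than the Lingo/MS Excel used in the thesis; the single clue shown is invented:

```python
# Binary model: x[r][c][v] = 1 iff cell (r, c) holds value v+1;
# every constraint is an "exactly one" condition.
from pulp import LpProblem, LpVariable, lpSum, LpBinary

prob = LpProblem("sudoku")
prob += 0                                        # feasibility: null objective
x = LpVariable.dicts("x", (range(9), range(9), range(9)), cat=LpBinary)

rng9 = range(9)
for r in rng9:
    for c in rng9:
        prob += lpSum(x[r][c][v] for v in rng9) == 1     # one value per cell
for v in rng9:
    for r in rng9:
        prob += lpSum(x[r][c][v] for c in rng9) == 1     # once per row
    for c in rng9:
        prob += lpSum(x[r][c][v] for r in rng9) == 1     # once per column
    for br in range(0, 9, 3):                            # once per 3x3 box
        for bc in range(0, 9, 3):
            prob += lpSum(x[br + i][bc + j][v]
                          for i in range(3) for j in range(3)) == 1

prob += x[0][0][4] == 1          # example clue: a "5" in the top-left cell
prob.solve()
grid = [[next(v + 1 for v in rng9 if x[r][c][v].value() == 1)
         for c in rng9] for r in rng9]
print(grid[0])
```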
37

Experimental study of stability and thermal radiation emission in non-premixed natural gas flames diluted with carbon dioxide

Llanos, Luis Alberto Quezada January 2017
Algebraic models that predict the length of turbulent diffusion flames have been studied by several research groups because of their engineering applications. The experimental methods used to build such models range from simple visual observation to photographic techniques, with photographic parameters varying among authors, so possible discrepancies between measurement techniques need to be explored. Optical visualization of turbulent diffusion flames is used here to estimate the lift-off height of the flame base and the mean visible flame length (VFL). Three optical methods for measuring the VFL are compared: images taken with short exposure times, images taken with long exposure times, and a third method based on the luminous intensity and on the frequency with which flame images occupy a given pixel; the effects of the main optical parameters (focus, exposure time and ISO sensitivity) are analyzed. The best method was then used to characterize turbulent natural gas diffusion flames over a range of flow velocities. Algebraic models for the flame length, the lift-off height and the critical blow-out velocity of gaseous jet diffusion flames in still air were evaluated against the new experimental data, and the numerical coefficients of the best models were readjusted. Stability maps for lift-off and blow-out were then obtained for each burner diameter as a function of CO2 dilution and of the dimensionless Reynolds number. The third part of this work focuses on the distribution of thermal radiation. Radiant fluxes were measured along a vertical axis adjacent to the flames at three radial distances expressed in flame lengths (0.5 Lf, 1 Lf, 2 Lf). Finally, the experimental data were used as inputs to an inverse analysis to compute the weighting factors of the weighted multi-point source (WMPS) model. Radiant fractions and radiant heat-flux distributions are reported for natural gas flames with several CO2 dilutions and burner diameters.
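The inverse step described above can be sketched as a non-negative least-squares fit of the WMPS weights; the geometry, number of point sources and synthetic measurements below are purely illustrative:

```python
# Fit WMPS weights w_i from radiant fluxes q_j at detector locations,
# assuming an inverse-square kernel from each point source on the flame axis.
import numpy as np
from scipy.optimize import nnls

Lf, N = 1.0, 5                                  # flame length, point sources
z_src = np.linspace(0.1, 0.9, N) * Lf           # source heights on the axis
detectors = [(0.5 * Lf, z) for z in np.linspace(0.0, 2.0, 8) * Lf]

def view(d, z):                                  # inverse-square kernel
    r, zd = d
    return 1.0 / (4 * np.pi * (r**2 + (zd - z) ** 2))

A = np.array([[view(d, z) for z in z_src] for d in detectors])
true_w = np.array([0.1, 0.3, 0.3, 0.2, 0.1])     # assumed weight profile
q = A @ true_w * 50.0                            # synthetic fluxes, Qr = 50 kW

w, _ = nnls(A * 50.0, q)                         # recover the weights
print(w.round(3))                                # close to true_w, sum ~ 1
```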
38

Conceptual design of shapes by reusing existing heterogeneous shape data through a multi-layered shape description model, for VR applications

Li, Zongcheng 28 September 2015
Recent advances in acquisition devices and modeling tools have made a huge amount of digital data (e.g. images, videos, 3D models) available in many application domains. Virtual environments (VEs) in particular exploit such data to deliver more attractive and more effective communication and simulation of real or not-yet-existing environments and objects. Despite these advances, designing an application-oriented virtual environment still requires a long and tedious iterative modeling and modification process involving several actors (experts of the application domain, 3D modelers and VR programmers, designers, communication/marketing experts), whose number and profiles vary with the targeted application. Today's limitations and difficulties stem mainly from the absence of strong relationships between the domain experts who have the creative ideas, the digitally skilled actors, and the tools and shape models taking part in the VE development process. Existing tools focus on the detailed geometric definition of shapes and are not suited to effectively supporting creativity and innovation, which are key to successful products and applications. Moreover, the huge amount of available digital data is not fully exploited: innovative ideas often come from the (unforeseen) combination of existing elements, and these data could serve as a source of inspiration for new solutions. Software tools allowing the reuse and combination of such digital data would therefore be an effective support for the conceptual design phase of both individual shapes and VEs. To answer these needs, this thesis proposes a new approach and system for the conceptual design of VEs and associated digital assets that takes existing shape resources and integrates and combines them while preserving their semantic meaning. To support this, a Generic Shape Description Model (GSDM) is introduced. It allows the combination of multimodal data (e.g. images and 3D meshes) at three levels: conceptual, intermediate and data. The conceptual level expresses what the different parts of a shape are and how they are combined. Each part of a shape is defined by an Element, which is either a Component or a Group of Components sharing common characteristics (e.g. behavior, meaning); Elements are linked by Relations defined at the conceptual level, where the domain experts act and exchange. Each Component is further described at the data level by its Geometry, its Structure and potentially attached Semantics; in the proposed approach, a Component is a part of an image or a part of a 3D triangular mesh. Four types of Relation are proposed (merging, assembly, shaping and location), each decomposed into a set of Constraints that control the relative position, orientation and scaling of the Components within the 3D scene. Constraints are stored at the intermediate level and act on Key Entities (e.g. points, lines) lying on the Geometry or Structure of the Components. All these constraints are solved by minimizing a physically based energy function. Most of the concepts of GSDM have been implemented and integrated into a user-oriented conceptual design tool developed entirely by the author, and various examples created with this tool demonstrate the potential of the proposed approach in different application domains.
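A toy version of this constraint-solving idea (invented constraints and targets, far simpler than GSDM's energy function) might read:

```python
# Find the pose of a movable component (translation + scale) by minimizing
# an energy that sums squared violations of two constraints on key points.
import numpy as np
from scipy.optimize import minimize

anchor_a = np.array([0.0, 0.0, 0.0])      # key point on a fixed component
key_b = np.array([1.0, 0.0, 0.0])         # key point on the movable one
size_b = 2.0                              # characteristic size of B

def energy(p):
    t, s = p[:3], p[3]                    # translation vector and scale
    placed = s * key_b + t
    e_loc = np.sum((placed - anchor_a) ** 2)   # snap key points together
    e_scale = (s * size_b - 1.0) ** 2          # match a target size of 1
    return e_loc + e_scale

res = minimize(energy, x0=np.array([0.0, 0.0, 0.0, 1.0]))
print(res.x.round(3))   # translation ~ (-0.5, 0, 0), scale ~ 0.5
```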
39

Portfolio Insurance Strategies

Guleroglu, Cigdem 01 September 2012
The selection of investment strategies and the management of investment funds via portfolio insurance methods play an important role in asset-liability management. Insurance strategies are designed to limit the downside risk of a portfolio while allowing some participation in the potential gains of rising markets. In this thesis, we provide an extensive overview and investigation of the two most prominent portfolio insurance strategies: Constant Proportion Portfolio Insurance (CPPI) and Option-Based Portfolio Insurance (OBPI). The aim of the thesis is to examine, analyze and compare these strategies in terms of their performance at maturity, some of their statistical and dynamical properties, and their optimality under the expected-utility maximization criterion. The thesis presents a continuous-time financial market model containing no arbitrage opportunities; defines the CPPI and OBPI strategies and their properties; and analyzes the strategies by comparing their performance at maturity, their statistical properties, and their dynamical behaviour and sensitivity to the key parameters during the investment period as well as at the terminal date, using both analytical formulations and simulations. We then investigate and compare optimal portfolio strategies that maximize the expected-utility criterion. As a contribution to the optimality results existing in the literature, we provide an extended study by proving the existence and uniqueness of the appropriate number of shares invested in the unconstrained allocation over a wider interval.
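The core CPPI mechanics described above reduce to a few lines; the price path, multiplier and floor in this sketch are illustrative:

```python
# CPPI: exposure to the risky asset is a multiple m of the cushion
# (portfolio value minus the discounted floor); the rest earns the rate r.
import numpy as np

rng = np.random.default_rng(0)
T, n, r, m = 1.0, 252, 0.02, 4.0
V, floor_T = 100.0, 90.0                  # initial value, guaranteed floor
dt = T / n
S = 100 * np.exp(np.cumsum((0.05 - 0.5 * 0.2**2) * dt
                           + 0.2 * np.sqrt(dt) * rng.standard_normal(n)))
S = np.concatenate([[100.0], S])          # GBM risky-asset path (assumed)

for k in range(n):
    floor = floor_T * np.exp(-r * (T - k * dt))   # discounted floor
    exposure = max(m * (V - floor), 0.0)          # risky allocation
    exposure = min(exposure, V)                   # no leverage (assumption)
    risky_ret = S[k + 1] / S[k] - 1.0
    V += exposure * risky_ret + (V - exposure) * (r * dt)
print(f"terminal value {V:.2f} (floor {floor_T})")
```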
40

Models and algorithms for the combinatorial optimization of WLAN-based indoor positioning system

Zheng, You 20 April 2012
Indoor Positioning Systems (IPS) that reuse an existing WLAN have attracted growing interest in recent years: they can provide location information for users in indoor environments where other positioning techniques, such as GPS, are not very effective. This thesis proposes a new approach that formulates a WLAN-based indoor positioning system (WLAN-IPS) as a combinatorial optimization problem: guarantee the requested communication quality while minimizing the positioning error. This approach raises several difficult issues, which we tackled in three steps. First, we designed a WLAN-IPS and implemented it as a test framework. Using this framework, we studied the system's performance under various experimental constraints and analyzed, as far as possible, the relationships between the positioning error and external environmental factors; these relationships serve as evaluation indicators of the positioning error. Second, we proposed a model that defines the major parameters of a WLAN-IPS found in the literature. Since the original purpose of a WLAN infrastructure is to provide radio communication access, we introduced an additional objective: minimizing the location error in the IPS context. Two main indicators were defined to evaluate the network Quality of Service (QoS) and the positioning error for Location-Based Services (LBS). Third, after formulating the optimization problem and its key performance indicators mathematically, we proposed a mono-objective and a multi-objective algorithm, both based on the Tabu Search metaheuristic, to provide good solutions within a reasonable amount of time. Simulations show that these two algorithms are highly efficient for the indoor positioning optimization problem.
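A skeleton of a Tabu Search for an access-point placement problem of this kind might look as follows; the candidate sites, evaluation grid and error surrogate are invented, and the real objective also includes the QoS indicator:

```python
# Choose k access points among candidate sites, minimizing a positioning-
# error surrogate (mean distance to the 2nd-nearest chosen AP).
import math
import random

random.seed(0)
candidates = [(random.random() * 30, random.random() * 20) for _ in range(20)]
grid = [(x, y) for x in range(0, 31, 3) for y in range(0, 21, 3)]
k = 4

def cost(sol):
    total = 0.0
    for g in grid:
        d = sorted(math.dist(g, candidates[i]) for i in sol)
        total += d[1]                     # need >= 2 APs heard for a fix
    return total / len(grid)

current = best = tuple(range(k))
tabu, tenure = {}, 7
for it in range(200):
    moves = [current[:i] + (j,) + current[i + 1:]
             for i in range(k) for j in range(len(candidates))
             if j not in current and tabu.get(j, -1) < it]
    current = min(moves, key=cost)        # best admissible neighbor
    for j in current:
        tabu[j] = it + tenure             # forbid re-adding these sites soon
    if cost(current) < cost(best):
        best = current
print(best, round(cost(best), 3))
```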
