121

Systematic ensemble learning and extensions for regression / Méthodes d'ensemble systématiques et extensions en apprentissage automatique pour la régression

Aldave, Roberto January 2015 (has links)
Abstract: The objective is to provide methods that improve the performance, or prediction accuracy, of the standard stacking approach, an ensemble method composed of simple, heterogeneous base models, by integrating the diversity-generation, combination, and/or selection stages for regression problems. In Chapter 1, we propose to combine a set of level-1 learners into a level-2 learner, or ensemble. We also propose to inject a diversity-generation mechanism into the initial cross-validation partition, from which new cross-validation partitions are generated and subsequent ensembles are trained, together with an algorithm to select the best partition, or corresponding ensemble. In Chapter 2, we formulate partition selection as a Pareto-based multi-criteria optimization problem, and give an algorithm that applies the selection iteratively with the aim of further improving ensemble prediction accuracy. In Chapter 3, we propose to generate multiple populations, or partitions, by injecting a diversity mechanism into the original dataset; an algorithm is then proposed to select the best partition among all partitions generated by the multiple populations. All methods designed and implemented in this thesis achieve encouraging and favorable results across different datasets against both state-of-the-art models and ensembles for regression.
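The level-1/level-2 scheme described in this abstract can be sketched in plain Python. The following is a minimal illustration under simplifying assumptions of my own (exactly two base learners, and a single convex combination weight chosen by grid search on out-of-fold predictions), not the thesis's actual algorithm:

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def fit_mean(xs, ys):
    """Level-1 learner 1: constant predictor (the training mean)."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Level-1 learner 2: closed-form simple linear regression y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    b = my - a * mx
    return lambda x: a * x + b

def stack(xs, ys, fitters, k=5, seed=0):
    """Collect out-of-fold level-1 predictions from a cross-validation
    partition, then pick the convex combination weight w that minimises
    squared error on them (assumes exactly two fitters)."""
    n = len(xs)
    oof = [[0.0] * n for _ in fitters]
    for fold in kfold_indices(n, k, seed):
        held = set(fold)
        train = [i for i in range(n) if i not in held]
        for j, fit in enumerate(fitters):
            model = fit([xs[i] for i in train], [ys[i] for i in train])
            for i in fold:
                oof[j][i] = model(xs[i])

    def sse(w):
        return sum((w * oof[0][i] + (1 - w) * oof[1][i] - ys[i]) ** 2
                   for i in range(n))

    best_w = min((w / 20 for w in range(21)), key=sse)
    return best_w, sse(best_w), sse(1.0), sse(0.0)
```

Because the weight grid contains both endpoints, the stacked combination can never do worse than either base learner on the out-of-fold predictions; injecting diversity by reshuffling the partition (the `seed` argument) and selecting the best resulting ensemble is the step the thesis systematises.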
122

An empirical comparison of extreme value modelling procedures for the estimation of high quantiles

Engberg, Alexander January 2016 (has links)
The peaks over threshold (POT) method provides an attractive framework for estimating the risk of extreme events such as severe storms or large insurance claims. However, the conventional POT procedure, where the threshold excesses are modelled by a generalized Pareto distribution, suffers from small samples and subjective threshold selection. In recent years, two alternative approaches have been proposed in the form of mixture models that estimate the threshold and a folding procedure that generates larger tail samples. In this paper the empirical performances of the conventional POT procedure, the folding procedure and a mixture model are compared by modelling data sets on fire insurance claims and hurricane damage costs. The results show that the folding procedure gives smaller standard errors of the parameter estimates and in some cases more stable quantile estimates than the conventional POT procedure. The mixture model estimates are dependent on the starting values in the numerical maximum likelihood estimation, and are therefore difficult to compare with those from the other procedures. The conclusion is that none of the procedures is overall better than the others but that there are situations where one method may be preferred.
123

Application of Multiobjective Optimization in Chemical Engineering Design and Operation

Fettaka, Salim 24 August 2012 (has links)
The purpose of this research project is the design and optimization of complex chemical engineering problems, by employing evolutionary algorithms (EAs). EAs are optimization techniques which mimic the principles of genetics and natural selection. Given their population-based approach, EAs are well suited for solving multiobjective optimization problems (MOOPs) to determine Pareto-optimal solutions. The Pareto front refers to the set of non-dominated solutions which highlight trade-offs among the different objectives. A broad range of applications have been studied, all of which are drawn from the chemical engineering field. The design of an industrial packed bed styrene reactor is initially studied with the goal of maximizing the productivity, yield and selectivity of styrene. The dual population evolutionary algorithm (DPEA) was used to circumscribe the Pareto domain of two and three objective optimization case studies for three different configurations of the reactor: adiabatic, steam-injected and isothermal. The Pareto domains were then ranked using the net flow method (NFM), a ranking algorithm that incorporates the knowledge and preferences of an expert into the optimization routine. Next, a multiobjective optimization of the heat transfer area and pumping power of a shell-and-tube heat exchanger is considered to provide the designer with multiple Pareto-optimal solutions which capture the trade-off between the two objectives. The optimization was performed using the fast and elitist non-dominated sorting genetic algorithm (NSGA-II) on two case studies from the open literature. The algorithm was also used to determine the impact of using discrete standard values of the tube length, diameter and thickness rather than using continuous values to obtain the optimal heat transfer area and pumping power. 
In addition, a new hybrid algorithm, called FP-NSGA-II, is developed in this thesis by combining a front-prediction algorithm with the fast and elitist non-dominated sorting genetic algorithm II (NSGA-II). Because evaluating the objective functions of real-life engineering problems carries significant computational cost, the aim of this hybrid approach is to better approximate the Pareto front of difficult constrained and unconstrained problems while keeping the computational cost similar to that of NSGA-II. The new algorithm is tested on benchmark problems from the literature and on a heat exchanger network problem.
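The first ingredient of NSGA-II, non-dominated sorting, is easy to illustrate. The sketch below is a generic quadratic-time version for a two-objective minimisation problem (think heat-transfer area versus pumping power), not the thesis implementation:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimisation convention)."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def non_dominated_sort(points):
    """Partition points into successive Pareto fronts, as in the first
    phase of NSGA-II (simple O(n^2) version)."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

Front 0 is the Pareto-optimal set presented to the designer; the later fronts drive NSGA-II's elitist selection pressure.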
124

Reliability applied to maintenance

Sherwin, David J. January 1979 (has links)
The thesis covers studies conducted during 1976-79 under a Science Research Council contract to examine the uses of reliability information in maintenance decision-making in the process industries. After a discussion of the ideal data system, four practical studies of process plants are described, involving both Pareto and distribution analysis. In two of these studies the maintenance policy was changed and the effect on failure modes and frequency observed. Hyper-exponentially distributed failure intervals were found to be common and, after observation of maintenance work practices and development of theory, were explained as being due to poor workmanship and parts. The fallacy that a constant failure rate necessarily implies the optimality of maintenance only at failure is discussed. Two models for the optimisation of inspection intervals are developed; both assume items give detectable warning of impending failure. The first is based upon a constant risk of failure between successive inspections and a Weibull base failure distribution. Results show that an inspection/on-condition maintenance regime can be cost-effective even when the failure rate is falling, and may be better than periodic renewals in an increasing-failure-rate situation. The second model is first-order Markov. Transition rate matrices are developed and solved to compare continuous monitoring with inspection/on-condition maintenance on a cost basis. The models incorporate the planning delay in starting maintenance after impending failure is detected. The relationships between plant output and maintenance policy, as affected by the presence of redundancy and/or storage between stages, are examined, mainly through the literature but with some original theoretical proposals. It is concluded that reliability techniques have many applications in the improvement of plant maintenance policy.
Techniques abound, but few firms are willing to take the step of faith to set up, even temporarily, the data-collection facilities required to apply them. There are over 350 references, many of which are reviewed in the text, divided into chapter-related sections. Appendices include a review of reliability engineering theory based on the author's draft for BS 5760(2), a discussion of the 'bath-tub' curve's applicability to maintained systems, and the theory connecting hyper-exponentially distributed failures with poor maintenance practices.
125

Visualisation de l'ensemble de Pareto pour un problème linéaire bicritère : application à un problème de fonderie / Visualization of the Pareto set for a bicriteria linear problem: application to a foundry problem

Ntigura Habingabwa, Marie Emmanuel January 2017 (has links)
In this thesis we first identify and present two methods for computing the Pareto set; these methods require only elementary linear programming software. Second, we analyse a problem arising from the foundry industry: we present a classical base blending problem, then consider a bicriteria premix-correction problem and compute its Pareto set. Third, we observe that the two preceding problems admit a unified formulation as a second bicriteria problem; this unification involves a geometric transformation. Finally, we close the thesis with a brief study of this geometric transformation, highlighting the preference cones associated with the Pareto sets of these different problems.
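Computing the Pareto set of a bicriteria linear program with only elementary LP machinery typically exploits weighted-sum scalarisation: every weighted sum of the two objectives is again a linear objective, so each scalarised problem is solved at a vertex, and sweeping the weight traces out the Pareto-optimal vertices. A toy sketch of that idea (my own names, and vertices given explicitly rather than produced by an LP solver):

```python
def pareto_vertices(vertices, f1, f2, steps=100):
    """Scan weighted-sum objectives lam*f1 + (1-lam)*f2 over the given
    vertices of the feasible polygon (both objectives minimised) and
    collect the minimisers, i.e. Pareto-optimal vertices."""
    pts = [(f1(v), f2(v), v) for v in vertices]
    chosen = []
    for k in range(steps + 1):
        lam = k / steps
        best = min(pts, key=lambda p: lam * p[0] + (1 - lam) * p[1])
        if best[2] not in chosen:
            chosen.append(best[2])
    return chosen
```

In practice each scalarised problem would be handed to the elementary LP software the thesis mentions; the weight sweep is what produces the Pareto set to visualise.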
126

Pricing Financial Option as a Multi-Objective Optimization Problem Using Firefly Algorithms

Singh, Gobind Preet 01 September 2016 (has links)
An option, a type of financial derivative, is a contract that creates an opportunity for a market player to avoid the risks involved in investing, especially in equities. An investor wants to know the accurate value of an option before entering into a contract to buy/sell the underlying asset (stock). Various techniques try to simulate real market conditions in order to price, or evaluate, an option; however, most have achieved limited success due to the high uncertainty in the price behavior of the underlying asset. In this study, I propose two new Firefly variant algorithms to compute accurate values for European and American option contracts and compare them with popular option pricing models (such as Black-Scholes-Merton, binomial lattice, and Monte Carlo) and real market data. I have first modelled option pricing as a multi-objective optimization problem, introducing the pay-off and the probability of achieving that pay-off as the main optimization objectives. Then, I proposed to use a recent nature-inspired algorithm that mimics the bioluminescence of fireflies to simulate market conditions, a first attempt in the literature. For my thesis, I have proposed an adaptive weighted-sum based Firefly algorithm and a non-dominated sorting Firefly algorithm to find Pareto-optimal solutions for the option pricing problem. Using my algorithms, I have computed the complete Pareto front of option prices for a number of option contracts from the real market (Bloomberg data), and shown that one of the points on the Pareto front represents the option value within 1-2% error of the real data (Bloomberg). Moreover, my experiments show that an investor may use the Pareto fronts when deciding whether to enter an option contract, and can evaluate the worth of a contract tuned to their risk tolerance.
This implies that my proposed multi-objective model and Firefly algorithms could be used in real markets for pricing options at different levels of accuracy. To the best of my knowledge, modelling the option pricing problem as a multi-objective optimization problem and solving it with a newly developed Firefly algorithm is unique and novel. / October 2016
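The firefly mechanics the abstract relies on, dimmer fireflies moving toward brighter ones with an attractiveness that decays with distance, can be shown on a single objective. This 1-D toy minimiser is a sketch of the standard algorithm with parameter values of my own choosing, not the thesis's multi-objective pricing variant:

```python
import math
import random

def firefly_minimise(f, lo, hi, n=15, iters=60,
                     beta0=1.0, gamma=0.01, alpha=0.1, seed=0):
    """Minimal 1-D firefly algorithm: brightness is -f(x); each firefly
    moves toward every brighter one, with attractiveness
    beta0*exp(-gamma*r^2), plus a small random step alpha*(U-0.5)."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(xs[j]) < f(xs[i]):          # j is brighter than i
                    r2 = (xs[i] - xs[j]) ** 2
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (rng.random() - 0.5)
                    xs[i] = min(max(xs[i], lo), hi)  # keep inside bounds
    return min(xs, key=f)
```

A multi-objective variant replaces the single brightness comparison with either a weighted sum of objectives or a non-dominance test, which is exactly the split between the two algorithms the thesis proposes.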
127

Analyse statistique de la pauvreté et des inégalités / Statistical analysis of poverty and inequality

Diouf, Mame Astou January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
128

Modèles Pareto hybrides pour distributions asymétriques et à queues lourdes / Hybrid Pareto models for asymmetric and heavy-tailed distributions

Carreau, Julie January 2007 (has links)
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
129

Modelování velkých škod / Large claims modeling

Zuzáková, Barbora January 2013 (has links)
Title: Large claims modeling Author: Barbora Zuzáková Department: Department of Probability and Mathematical Statistics Supervisor: RNDr. Michal Pešta, Ph.D. Abstract: This thesis discusses a statistical modeling approach based on extreme value theory to describe the behaviour of large claims in an insurance portfolio. We focus on threshold models, which analyze exceedances of a high threshold. This approach has gained popularity in recent years compared with the much older methods based directly on the extreme value distributions. The method is illustrated using the group medical claims database recorded over 1997, 1998 and 1999 and maintained by the Society of Actuaries. We aim to demonstrate that the proposed model outperforms classical parametric distributions and thus enables high quantiles, or the probable maximum loss, to be estimated more precisely. Keywords: threshold models, generalized Pareto distribution, large claims.
130

Šikmost v teorii optimalizace a eficience portfolia / Skewness in optimization theory and portfolio efficiency

Mikulík, Petra January 2015 (has links)
In this thesis we study models that search for an optimal portfolio from a set of stocks. In contrast to the classical approach, which focuses only on expected return and variance, we examine models that include an additional criterion of skewness. Furthermore, we formulate a model for measuring the performance of a portfolio, defined as the distance from the Pareto efficient frontier. In numerical experiments we apply the models to historical prices and stock data from the electronic stock market NASDAQ, analyzing stock data from companies listed in the NASDAQ-100 index. We conclude by comparing the optimal portfolios created by the different models with each other, with trivial single-stock portfolios, and with the NASDAQ-100 index itself.
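The mean-variance-skewness trade-off this abstract describes can be made concrete with a toy two-asset example. The sketch below scores a portfolio by a scalarised objective (mean minus a variance penalty plus a skewness reward, with weights of my own choosing) and grid-searches the mixing weight; it is an illustration of the criterion, not the thesis's frontier-distance model:

```python
def moments(returns):
    """Sample mean, variance, and standardised skewness of a return series."""
    n = len(returns)
    m = sum(returns) / n
    var = sum((r - m) ** 2 for r in returns) / n
    if var == 0:
        return m, 0.0, 0.0
    skew = (sum((r - m) ** 3 for r in returns) / n) / var ** 1.5
    return m, var, skew

def best_weight(r1, r2, lam_v=1.0, lam_s=0.1, steps=100):
    """Grid search over two-asset weights w in [0, 1], scoring each
    portfolio by mean - lam_v * variance + lam_s * skewness."""
    def score(w):
        port = [w * a + (1 - w) * b for a, b in zip(r1, r2)]
        m, v, s = moments(port)
        return m - lam_v * v + lam_s * s
    w = max((i / steps for i in range(steps + 1)), key=score)
    return w, score(w)
```

With skewness in the objective, two portfolios of equal mean and variance are no longer equivalent: the one with the fatter right tail scores higher, which is the asymmetry the classical mean-variance approach ignores.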
