1

Clustering of observations from finite mixtures with structural information

Blåfield, Eero, January 1980 (has links)
Thesis (doctoral)--Jyväskylän yliopisto, 1980. / Summary in Finnish. Includes bibliographical references (p. 46-49).
2

Multivariate Markov networks for fitness modelling in an estimation of distribution algorithm

Brownlee, Alexander Edward Ian January 2009 (has links)
A well-known paradigm for optimisation is the evolutionary algorithm (EA). An EA maintains a population of possible solutions to a problem, which converges on a global optimum using biologically-inspired selection and reproduction operators. These algorithms have been shown to perform well on a variety of hard optimisation and search problems. A recent development in evolutionary computation is the Estimation of Distribution Algorithm (EDA), which replaces the traditional genetic reproduction operators (crossover and mutation) with the construction and sampling of a probabilistic model. While this can often represent a significant computational expense, the benefit is that the model contains explicit information about the fitness function. This thesis expands on recent work using a Markov network to model fitness in an EDA, resulting in what we call the Markov Fitness Model (MFM). The work has explored the theoretical foundations of the MFM approach, which are grounded in Walsh analysis of fitness functions. This has allowed us to demonstrate a clear relationship between the fitness model and the underlying dynamics of the problem. A key achievement is that we have been able to show how the model can be used to predict fitness and have devised a measure of fitness modelling capability called the fitness prediction correlation (FPC). We have performed a series of experiments which use the FPC to investigate the effect of population size and selection operator on the fitness modelling capability. The results and analysis of these experiments are an important addition to other work on diversity and fitness distribution within populations. With this improved understanding of fitness modelling we have been able to extend the Distribution Estimation Using Markov networks (DEUM) framework to use a multivariate probabilistic model. We have proposed, and demonstrated the performance of, a number of algorithms based on this framework which leverage the MFM for optimisation and which can now be added to the EA toolbox. As part of this we have investigated existing techniques for learning the structure of the MFM; a further contribution which results from this is the introduction of precision and recall as measures of structure quality. We have also proposed a number of possible directions that future work could take.
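The fitness prediction correlation mentioned in this abstract can be illustrated with a small, hedged sketch (Python is used purely for illustration; the least-squares surrogate stands in for the Markov Fitness Model, and the OneMax fitness function is an assumption, not anything from the thesis): fit the surrogate on one sample of evaluated bitstrings, then report the Pearson correlation between its predictions and the true fitness of fresh individuals.

```python
import numpy as np

rng = np.random.default_rng(0)

def onemax(pop):
    """Toy fitness function: number of ones in each bitstring."""
    return pop.sum(axis=1).astype(float)

n, length = 200, 20
pop = rng.integers(0, 2, size=(n, length))
fitness = onemax(pop)

# Fit a simple linear surrogate (a stand-in for the Markov Fitness Model).
X = np.hstack([pop, np.ones((n, 1))])            # bit features plus a bias term
coef, *_ = np.linalg.lstsq(X, fitness, rcond=None)

# Evaluate the surrogate on fresh individuals and compute the fitness
# prediction correlation: Pearson correlation of predicted vs. true fitness.
test = rng.integers(0, 2, size=(100, length))
pred = np.hstack([test, np.ones((100, 1))]) @ coef
fpc = np.corrcoef(pred, onemax(test))[0, 1]
print(f"fitness prediction correlation: {fpc:.3f}")
```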
3

Estimation of fiber size distribution in 3D X-ray µCT image datasets

Mozaffari, Alireza, Varaiya, Kunal January 2010 (has links)
This project is a Master's thesis in the Intelligent Systems programme carried out by Alireza Mozaffari and Kunal Varaiya under the supervision of Dr Kenneth Nilsson and Dr Cristofer Englund. In this project we estimate the depth distribution of fibers of different sizes in a press felt sample. Press felt is a product used in the paper industry; to evaluate the production process when press felts are made, it is necessary to be able to estimate the fiber sizes in the product. To this end, we developed a Matlab program that processes X-ray images of a press felt scanned by a micro-CT scanner, finds fibers of two different known sizes, and estimates the depth distribution of each. / The project is implemented in Matlab and estimates the distribution of different fiber sizes in press felt.
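The abstract does not spell out the image-processing pipeline, so the following is only a rough sketch of one plausible approach, written in Python/SciPy rather than the Matlab used in the project: threshold each µCT slice, isolate the thicker of the two known fiber sizes by morphological opening, treat the remainder as thin fiber, and count fiber pixels per slice to obtain a depth distribution. All function and parameter names here are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def depth_distribution(volume, small_radius, large_radius, threshold=0.5):
    """Rough per-slice (depth) counts of thin vs. thick fiber material.

    volume: 3D µCT intensity array, axis 0 assumed to run through the depth
    of the felt. small_radius / large_radius: the two known fiber radii in
    voxels (assumed to be given).
    """
    def disk(r):
        # Disk-shaped structuring element of radius r.
        return np.fromfunction(
            lambda i, j: (i - r) ** 2 + (j - r) ** 2 <= r ** 2,
            (2 * r + 1, 2 * r + 1),
        )

    r_small, r_large = int(small_radius), int(large_radius)
    thin_counts, thick_counts = [], []
    for z in range(volume.shape[0]):
        binary = volume[z] > threshold                               # segment fiber material
        fibers = ndimage.binary_opening(binary, structure=disk(r_small))  # drop sub-fiber noise
        thick = ndimage.binary_opening(fibers, structure=disk(r_large))   # only thick fibers survive
        thin = fibers & ~thick                                       # the rest is thin fiber
        thick_counts.append(int(thick.sum()))
        thin_counts.append(int(thin.sum()))
    return np.array(thin_counts), np.array(thick_counts)
```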
4

Nonparametric density estimation via regularization

Lin, Mu. January 2009 (has links)
Thesis (M. Sc.)--University of Alberta, 2009. / Title from pdf file main screen (viewed on Dec. 11, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science in Statistics, Department of Mathematical and Statistical Sciences, University of Alberta." Includes bibliographical references.
5

GMMEDA : A demonstration of probabilistic modeling in continuous metaheuristic optimization using mixture models

Naveen Kumar Unknown Date (has links)
Optimization problems are common throughout science, engineering and commerce. The desire to continually improve solutions and to solve larger, more complex problems has given prominence to this field of research for several decades and has led to the development of a range of optimization algorithms for different classes of problems. Estimation of Distribution Algorithms (EDAs) are a relatively recent class of metaheuristic optimization algorithms that use probabilistic modeling techniques to control the search process. Within the general EDA framework, a number of different probabilistic models have previously been proposed for both discrete and continuous optimization problems. This thesis focuses on GMMEDAs: continuous EDAs based on Gaussian Mixture Models (GMM) with parameter estimation performed using the Expectation Maximization (EM) algorithm. To date, this type of model has received only limited attention in the literature, there are few experimental studies of the algorithms, and a number of implementation details of continuous iterated density estimation algorithms based on Gaussian mixture models have not previously been documented. This thesis provides a clear description of the GMMEDAs, discusses the implementation decisions and details, and presents an experimental study evaluating the performance of the algorithms. The effectiveness of GMMEDAs with varying model complexity (structure of the covariance matrices and number of components) was tested on five benchmark functions (Sphere, Rastrigin, Griewank, Ackley and Rosenbrock) with varying dimensionality (2-, 10- and 30-D); the effect of the selection pressure parameter is also studied. The 2D experiments show that a variant of moderate complexity (Diagonal GMMEDA) was able to optimize both unimodal and multimodal functions. Analysis of the 10- and 30-D results indicates that the simplest variant (Spherical GMMEDA) was the most effective of the three, although greater consistency on these functions was achieved with the most complex variant (Full GMMEDA). Comparison of the results on four artificial test functions - Sphere, Griewank, Ackley and Rosenbrock - showed that the GMMEDA variants optimized most of the complex functions better than existing continuous EDAs, owing to the ability of the GMM components to model the functions effectively. The analysis also showed that the number of components and the selection pressure do affect the optimum value found on the artificial test functions. The convergence of the GMMEDA variants to each function's best local optimum was driven mainly by the complexity of the GMM: the complexity due to the number of components grows alongside the complexity due to the structure of the covariance matrices, but when optimizing complex functions the complexity due to the covariance structure overrides the complexity due to the number of components. Additionally, the effect of the number of components on convergence diminishes for most functions as the selection pressure increases; these effects appear in the results as greater stability of the values obtained for the functions.
Other factors that affect convergence to local optima are the initialization of the GMM parameters, the number of EM iterations, and the reset condition. Although not visible graphically in the 10D optimization, different initializations of the GMM parameters in 2D were shown to affect the optimum value found for the functions. Since initialization of the population in evolutionary algorithms is known to affect convergence to a function's global optimum, observing a similar effect of the GMM parameter initialization on the 2D functions suggests that convergence of the GMM in 10D, and hence the optimum values obtained, could be affected in the same way. The covariance and mean values estimated over the EM iterations in 2D indicated that some functions needed a larger number of EM iterations to reach their optimum value, which suggests that too few EM iterations could weaken the fit of the components to the selected population in 10D, and that this fit affects the effective modeling of functions of varying complexity. Finally, the reset condition was observed in 2D to reset the covariances and the best individual fitness value in each generation; this is certain to affect the convergence of the GMMEDA variants to each function's best local optimum, since frequent invocation of the reset condition returns the covariance values of the GMM components to their initial values and thus disturbs the fit of the model to the selected fraction of the population. Considering all of these effects, the results indicate that a smaller number of components and a smaller selected fraction of the population, together with the simpler S-GMMEDA, modeled most functions of varying complexity.
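For readers unfamiliar with the approach, here is a minimal sketch of one GMMEDA-style generation, assuming scikit-learn's GaussianMixture as the model and truncation selection; the benchmark, population size and selection fraction are illustrative choices, not the thesis's settings. The covariance_type argument ('spherical', 'diag' or 'full') mirrors the three variants compared above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def sphere(x):
    """Benchmark function to minimise."""
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(1)
dim, pop_size, n_gen = 10, 200, 50
selection_fraction = 0.3            # selection pressure (assumed value)
pop = rng.uniform(-5, 5, size=(pop_size, dim))

for gen in range(n_gen):
    fit = sphere(pop)
    # Truncation selection: keep the best fraction of the population.
    selected = pop[np.argsort(fit)[: int(selection_fraction * pop_size)]]
    # Estimate the distribution of the selected set with a Gaussian mixture.
    gmm = GaussianMixture(n_components=3, covariance_type="diag").fit(selected)
    # Sample the next population from the fitted model.
    pop, _ = gmm.sample(pop_size)

print("best found:", sphere(pop).min())
```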
6

Distribution estimation with an autoencoder

Germain, Mathieu January 2015 (has links)
This thesis introduces MADE, a new generative model developed specifically for probability distribution estimation over binary data. The model is based on the simple autoencoder, modified so that its outputs can be interpreted as conditional probabilities. It has been tested on a wide range of datasets and achieves performance comparable to the state of the art while being faster. To ease the description of this model, several basic machine learning concepts are reviewed, along with other distribution estimation models. As its name indicates, distribution estimation is simply the task of estimating a statistical distribution from examples drawn from it. Although some consider this problem the Holy Grail of machine learning, it was long neglected by the field because it was considered too difficult. One reason the task is held in such high regard is that, once the data distribution is known, it can be used to perform most other machine learning tasks, from classification through regression to generation. The material is divided into three main chapters: the first gives an overview of the background needed to understand the new model, the second presents the predecessors that have held the state of the art, and the third explains the proposed model in detail.
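As a rough illustration of the masking idea behind MADE (a sketch only, not the implementation described in the thesis): binary masks applied to an autoencoder's weight matrices remove connections so that output d depends only on inputs that precede it in some ordering, which makes each output a valid conditional probability and the product of the conditionals a proper distribution over binary vectors. The dimensions, degree assignment and initialisation below are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 6, 16                        # input dimension, hidden units

# Assign "degrees": inputs get 1..D, hidden units get random degrees in 1..D-1.
m_in = np.arange(1, D + 1)
m_hid = rng.integers(1, D, size=H)

# Masks enforce the autoregressive property: output d sees only inputs < d.
M1 = (m_hid[:, None] >= m_in[None, :]).astype(float)   # hidden x input
M2 = (m_in[:, None] > m_hid[None, :]).astype(float)    # output x hidden

W1 = rng.normal(scale=0.1, size=(H, D))
W2 = rng.normal(scale=0.1, size=(D, H))
b1, b2 = np.zeros(H), np.zeros(D)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def made_forward(x):
    """Return p(x_d = 1 | x_<d) for each dimension d."""
    h = np.tanh((W1 * M1) @ x + b1)
    return sigmoid((W2 * M2) @ h + b2)

x = rng.integers(0, 2, size=D).astype(float)
p = made_forward(x)
# Negative log-likelihood of x under the product of conditionals.
nll = -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
print("per-example NLL:", nll)
```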
7

TEDA : a Targeted Estimation of Distribution Algorithm

Neumann, Geoffrey K. January 2014 (has links)
This thesis discusses the development and performance of a novel evolutionary algorithm, the Targeted Estimation of Distribution Algorithm (TEDA). TEDA takes the concept of targeting, an idea that has previously been shown to be effective as part of a Genetic Algorithm (GA) called Fitness Directed Crossover (FDC), and introduces it into a novel hybrid algorithm that transitions from a GA to an Estimation of Distribution Algorithm (EDA). Targeting is a process for solving optimisation problems where there is a concept of control points, genes that can be said to be active, and where the total number of control points found within a solution is as important as where they are located. When generating a new solution, an algorithm that uses targeting must first choose the number of control points to set in the new solution before choosing which ones to set. The hybrid approach is designed to take advantage of the ability of EDAs to exploit patterns within the population to effectively locate the global optimum while avoiding the tendency of EDAs to converge prematurely. This is achieved by initially using a GA to effectively explore the search space before transitioning into an EDA as the population converges on the region of the global optimum. As targeting places an extra restriction on the solutions produced by specifying their size, combining it with the hybrid approach allows TEDA to produce solutions that are of an optimal size and of a higher quality than would be found using a GA alone, without risking a loss of diversity. TEDA is tested on three different problem domains: optimal control of cancer chemotherapy, network routing and Feature Subset Selection (FSS). Of these problems, TEDA showed a consistent advantage over standard EAs in the routing problem and demonstrated that it is able to find good solutions faster than untargeted EAs and non-evolutionary approaches on the FSS problem. It did not demonstrate any advantage over other approaches when applied to chemotherapy. The FSS domain demonstrated that in large and noisy problems TEDA's targeting-derived ability to reduce the size of the search space significantly increased the speed with which good solutions could be found. The routing domain demonstrated that, where the ideal number of control points is deceptive, both targeting and the exploitative capabilities of an EDA are needed, making TEDA a more effective approach than both untargeted approaches and FDC. Additionally, in none of the problems was TEDA seen to perform significantly worse than any alternative approaches.
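The targeting step described above might be sketched as follows; the rule for picking the number of control points (biased toward the fitter parent's count, in the spirit of FDC) and the use of per-gene marginals are assumptions made for illustration, not the exact TEDA procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def targeted_offspring(parent_a, fit_a, parent_b, fit_b, marginals):
    """Generate a child: first decide *how many* control points (active genes)
    to set, then decide *which* positions to set using per-gene marginals."""
    count_a, count_b = int(parent_a.sum()), int(parent_b.sum())
    lo, hi = sorted((count_a, count_b))
    # Bias the target count toward the fitter parent's count (assumed rule).
    target = count_a if fit_a >= fit_b else count_b
    k = int(np.clip(round(rng.normal(loc=target, scale=1.0)), lo, hi))
    # Choose which k positions to activate, weighted by the marginal model.
    probs = marginals / marginals.sum()
    active = rng.choice(len(marginals), size=k, replace=False, p=probs)
    child = np.zeros(len(marginals), dtype=int)
    child[active] = 1
    return child

marginals = np.full(20, 0.5)          # e.g. estimated per-gene frequencies
a, b = rng.integers(0, 2, 20), rng.integers(0, 2, 20)
print(targeted_offspring(a, 3.0, b, 2.0, marginals))
```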
8

Modern evolutionary algorithms for finding regions with high fitness

Káldy, Martin January 2011 (has links)
Evolutionary algorithms are optimization techniques inspired by the evolution of biological species. They use a conceptually simple process of two repeating phases, reproduction and fitness-based selection, that iteratively evolves better and better solutions. Evolutionary algorithms receive a lot of attention for being able to solve very hard optimization problems where other optimization techniques might fail due to the existence of many local optima. A wide range of variants of evolutionary algorithms has been proposed. In this thesis, we focus on the area of Estimation of Distribution Algorithms (EDA). When creating the next generation, EDAs transform the selected high-fitness population into a probability distribution, and the new generation is obtained by sampling the estimated distribution. We design and implement combinations of existing EDAs that operate in a business-specific environment, which can be characterized as a tree-like structure of both discrete and continuous variables. In addition, linear inequality constraints are imposed on admissible solutions. The implemented application communicates with the provided interfaces, retrieving the problem model specification and storing populations in a database. The database is used to assign externally computed fitness values from...
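A minimal sketch of the EDA step the abstract describes, with the linear inequality constraints handled by rejection sampling, is shown below; the univariate marginal model, the toy objective and the constraint-handling choice are illustrative assumptions, and the tree-like structure of discrete and continuous variables is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, pop_size = 5, 100
A = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])   # example constraint: sum(x) <= 3
b = np.array([3.0])

def feasible(x):
    return np.all(A @ x <= b)

def sample_population(probs, size):
    """Sample binary individuals from per-variable marginals, rejecting
    any that violate the linear inequality constraints."""
    out = []
    while len(out) < size:
        x = (rng.random(dim) < probs).astype(float)
        if feasible(x):
            out.append(x)
    return np.array(out)

def fitness(pop):
    return pop @ np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # toy objective

probs = np.full(dim, 0.5)
for gen in range(20):
    pop = sample_population(probs, pop_size)
    best = pop[np.argsort(fitness(pop))[-pop_size // 2:]]
    probs = best.mean(axis=0)                 # re-estimate marginals (UMDA-style)

print("best feasible solution:", pop[np.argmax(fitness(pop))])
```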
9

Bayesian optimization algorithm with community detection

Crocomo, Márcio Kassouf 02 October 2012 (has links)
Estimation of Distribution Algorithms (EDAs) form a research front in Evolutionary Computation that has shown promising results in dealing with complex, large-scale problems. In this context, the Bayesian Optimization Algorithm (BOA) stands out: it uses a multivariate probabilistic model (represented by a Bayesian network) to generate new solutions at each iteration. Based on BOA and on the study of community-structure detection algorithms (used to improve the multivariate models that are built), two new algorithms are proposed, named CD-BOA and StrOp, and both are shown to have significant advantages over BOA. CD-BOA proves more flexible than BOA, being more robust to variations in the values of its input parameters, which makes it easier to handle a greater diversity of real-world problems. Unlike CD-BOA and BOA, StrOp shows that detecting communities in a Bayesian network can more adequately model decomposable problems, restructuring them into simpler subproblems that can be solved by a greedy search; this yields a solution to the original problem that is optimal in the case of perfectly decomposable problems and a fair approximation otherwise. A new resampling technique for EDAs (named REDA) is also proposed; it produces more representative probabilistic models, significantly improving the performance of CD-BOA and StrOp. In general, it is shown that, for the cases tested, CD-BOA and StrOp require less running time than BOA, a result established both experimentally and through analysis of the computational complexity of the algorithms. The main features of these algorithms are evaluated on different problems, mapping their contributions to the field of Evolutionary Computation.
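The StrOp idea of decomposing a problem by detecting communities in the learned network structure could be sketched roughly as follows, using networkx's modularity-based community detection on an assumed variable-interaction graph; the toy objective and the exhaustive per-community solver stand in for the greedy search described above.

```python
import itertools
import networkx as nx

# Assumed variable-interaction graph (e.g. edges of a learned Bayesian network).
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)])

def subproblem_fitness(assignment, variables):
    """Toy decomposable objective: reward agreement inside each block."""
    values = [assignment[v] for v in variables]
    return max(values.count(0), values.count(1))

def solve_block(variables):
    """Exhaustively solve one (small) community; a greedy search would be
    used for larger blocks."""
    best, best_fit = None, -1
    for bits in itertools.product([0, 1], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        fit = subproblem_fitness(assignment, variables)
        if fit > best_fit:
            best, best_fit = assignment, fit
    return best

# Detect communities, then solve each resulting subproblem independently.
solution = {}
communities = nx.algorithms.community.greedy_modularity_communities(G)
for community in communities:
    solution.update(solve_block(sorted(community)))
print(solution)
```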
