11

Perfectionnement des algorithmes d'optimisation par essaim particulaire : applications en segmentation d'images et en électronique / Improvement of particle swarm optimization algorithms : applications in image segmentation and electronics

El Dor, Abbas 05 December 2012
The successful resolution of a difficult optimization problem, comprising a large number of suboptimal solutions, often justifies the use of a powerful metaheuristic. Most algorithms used to solve such optimization problems are population-based metaheuristics. Among them, Particle Swarm Optimization (PSO), which appeared in 1995, is inspired by the dynamics of animals moving in compact groups, such as bee swarms, bird flocks and fish schools. The particles of the same swarm communicate with each other throughout the search to build a solution to the given problem, relying on their collective experience. The algorithm, which is easy to understand, program and use, is particularly effective for optimization problems with continuous variables. However, like all metaheuristics, PSO has drawbacks that still deter some users. The premature convergence problem, where the algorithm stagnates in a local optimum and no longer progresses toward better solutions, is one of them. This thesis proposes mechanisms that can be incorporated into PSO to overcome this drawback and to improve the performance and efficiency of PSO. We propose two algorithms, called PSO-2S and DEPSO-2S, to cope with the premature convergence problem. Both algorithms use innovative ideas and are characterized by new initialization strategies in several zones, to ensure good coverage of the search space by the particles. To further improve PSO, we also developed a new neighborhood topology, called Dcluster, which organizes the communication network between the particles. Experimental results on a set of benchmark functions show the effectiveness of the strategies implemented in the proposed algorithms. Finally, PSO-2S is applied to real-world problems in image segmentation and electronics.
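As a rough illustration of the algorithmic family discussed here, the following Python sketch implements a standard global-best PSO. It is not the PSO-2S, DEPSO-2S or Dcluster variants proposed in the thesis (those build multi-zone initialization and a new communication topology on top of an update of this form), and all parameter values (inertia w, coefficients c1/c2, swarm size) are illustrative assumptions.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box using standard (global-best) PSO.

    Generic baseline, not the thesis's PSO-2S/DEPSO-2S variants;
    those add multi-zone initialization and the Dcluster topology
    on top of velocity/position updates like the ones below.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Pull each particle toward its own best and the swarm's best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: sphere function in 10 dimensions.
best_x, best_f = pso(lambda z: float(np.sum(z**2)),
                     (np.full(10, -5.0), np.full(10, 5.0)))
```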
12

Métaheuristiques adaptatives d'optimisation continue basées sur des méthodes d'apprentissage / Adaptive metaheuristics for continuous optimization based on learning methods

Ghoumari, Asmaa 10 December 2018
Continuous optimization problems are numerous, in economics, signal processing, neural networks, and so on. One of the best-known and most widely used solutions is the evolutionary algorithm, a metaheuristic based on the theory of evolution that relies on stochastic mechanisms and has shown good performance in solving continuous optimization problems. This family of algorithms is very popular, despite the many difficulties that can be encountered in its design. Indeed, these algorithms have several parameters to tune and several operators to set according to the problem to solve. The literature describes a plethora of operators, and it becomes difficult for the user to know which ones to select in order to obtain the best possible result. In this context, the main objective of this thesis is to propose methods that address these problems without deteriorating the performance of the algorithms. We propose two algorithms: a method based on the maximum a posteriori principle that uses diversity probabilities to select the operators to apply, and regularly reconsiders this choice; and a method based on a dynamic graph of operators representing the transition probabilities between operators, relying on a model of the objective function built by a neural network to regularly update these probabilities. Both methods are detailed and analyzed on a continuous optimization benchmark.
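The select/reward/update loop underlying adaptive operator selection can be sketched compactly. The Python below uses plain probability matching with a minimum selection probability, a standard baseline rather than the thesis's maximum a posteriori or graph-based methods; the operator names and parameter values are hypothetical.

```python
import random

class OperatorSelector:
    """Probability-matching adaptive operator selection (a common
    baseline; the thesis's MAP/diversity-based and graph-based schemes
    are more elaborate but share this select/reward/update loop)."""

    def __init__(self, operators, p_min=0.05, alpha=0.3):
        self.ops = list(operators)
        self.p_min, self.alpha = p_min, alpha
        self.quality = {op: 1.0 for op in self.ops}

    def probabilities(self):
        total = sum(self.quality.values())
        k = len(self.ops)
        # Floor each probability at p_min so no operator is starved.
        return {op: self.p_min + (1 - k * self.p_min) * q / total
                for op, q in self.quality.items()}

    def select(self):
        probs = self.probabilities()
        return random.choices(self.ops, weights=[probs[o] for o in self.ops])[0]

    def update(self, op, reward):
        # Exponential recency-weighted average of observed rewards.
        self.quality[op] += self.alpha * (reward - self.quality[op])

# Usage inside an evolutionary loop (operator names are placeholders):
sel = OperatorSelector(["gaussian_mutation", "uniform_crossover", "de_rand_1"])
op = sel.select()
fitness_gain = 0.8          # improvement produced by applying `op`
sel.update(op, fitness_gain)
```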
13

Aplicação do método do Gradiente Espectral Projetado ao problema de Compressive Sensing / Applications of the Spectral Projected Gradient for Compressive Sensing theory

Chullo Llave, Boris 19 September 2012
The theory of compressive sensing provides a new data acquisition and recovery strategy with good results in the image processing area. This theory guarantees the recovery of a signal, with high probability, from a reduced sampling rate below the Nyquist-Shannon limit. Recovering the original signal from the samples amounts to solving an optimization problem. The Spectral Projected Gradient (SPG) method minimizes smooth functions over convex sets and has often been applied to the problem of recovering the original signal from sampled data. This work is dedicated to the study and application of the Spectral Projected Gradient method to Compressive Sensing problems.
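A minimal sketch of the SPG iteration may help make the method concrete. The version below keeps only the projected-gradient step and the Barzilai-Borwein ("spectral") step length, omitting the nonmonotone line search used in practical implementations; the example objective and projection are illustrative assumptions, not the thesis's compressive-sensing setup (which typically projects onto an l1-ball).

```python
import numpy as np

def spg(grad, project, x0, n_iter=100, alpha0=1.0,
        alpha_min=1e-10, alpha_max=1e10):
    """Minimal Spectral Projected Gradient iteration.

    `project` maps a point onto the convex feasible set. A practical
    implementation adds the nonmonotone line search of Birgin,
    Martinez and Raydan; it is omitted here for brevity.
    """
    x = project(np.asarray(x0, float))
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = project(x - alpha * g)      # projected gradient step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = float(s @ y)
        # Barzilai-Borwein ("spectral") step length.
        alpha = float(s @ s) / sy if sy > 0 else alpha_max
        alpha = min(max(alpha, alpha_min), alpha_max)
        x, g = x_new, g_new
    return x

# Example: least squares ||Ax - b||^2 over the nonnegative orthant.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 50)), rng.normal(size=20)
x_hat = spg(lambda x: 2 * A.T @ (A @ x - b),
            lambda x: np.maximum(x, 0.0),
            np.zeros(50))
```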
14

Markov chain Analysis of Evolution Strategies / Analyse Markovienne des Stratégies d'Evolution

Chotard, Alexandre 24 September 2015
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. information on the function to be optimized is limited to the values it associates to points. In particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of the Markov chains underlying them. The proofs of log-linear convergence and of divergence obtained in this thesis, in the context of a linear function with or without constraint, are essential components of convergence proofs for ESs on wide classes of functions. The dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented. First, general mathematical tools applicable to a wider range of problems are developed. These tools make it easy to prove specific properties (irreducibility, aperiodicity, and the fact that compact sets are small sets) of the Markov chains studied. Obtaining these properties without such tools is an ad hoc, tedious and technical process that can be very difficult. Then, different ESs are analyzed on different problems. We study a (1,\lambda)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space. Then we study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle infeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on the parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate how the convergence or divergence rate depends on the parameters of the problem and of the ES. Finally, we study an ES with a possibly non-Gaussian sampling distribution and constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, which implies that adapting the covariance matrix may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint. Finally, these results are summed up, discussed, and perspectives for future work are explored.
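For readers unfamiliar with the algorithms analyzed, a minimal (1,\lambda)-ES with cumulative step-size adaptation looks roughly as follows. This is a generic sketch with common default parameter settings (all assumptions), and it omits the constraint handling by resampling studied in the thesis.

```python
import numpy as np

def one_comma_lambda_csa(f, x0, sigma0=1.0, lam=10, n_iter=500, seed=0):
    """A (1,lambda)-ES with cumulative step-size adaptation (CSA).

    Minimal sketch of the algorithm class analyzed in the thesis;
    parameter settings follow common defaults and are assumptions.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    n = x.size
    sigma = sigma0
    c = 1.0 / np.sqrt(n)                  # cumulation parameter
    d = 1.0                               # damping
    chi_n = np.sqrt(n) * (1 - 1/(4*n) + 1/(21*n**2))  # approx. E||N(0,I)||
    p = np.zeros(n)                       # evolution path
    for _ in range(n_iter):
        steps = rng.standard_normal((lam, n))
        cand = x + sigma * steps
        best = np.argmin([f(z) for z in cand])
        x = cand[best]
        # Cumulate the selected step, then adapt sigma from the path length.
        p = (1 - c) * p + np.sqrt(c * (2 - c)) * steps[best]
        sigma *= np.exp((c / d) * (np.linalg.norm(p) / chi_n - 1))
    return x, sigma

# On a linear function the step-size should grow without bound
# (log-linear divergence), consistent with the analysis above.
x_fin, sigma_fin = one_comma_lambda_csa(lambda z: z[0], np.zeros(10), lam=10)
```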
16

Cartoon-Residual Image Decompositions with Application in Fingerprint Recognition

Richter, Robin 06 November 2019
No description available.
17

A New Contribution to Nonlinear Robust Regression and Classification with MARS and Its Applications to Data Mining for Quality Control in Manufacturing

Yerlikaya, Fatma 01 September 2008
Multivariate adaptive regression spline (MARS) denotes a modern methodology from statistical learning which is very important in both classification and regression, with an increasing number of applications in many areas of science, economy and technology. MARS is very useful for high-dimensional problems and shows great promise for fitting nonlinear multivariate functions. The MARS technique does not impose any particular class of relationship between the predictor variables and the outcome variable of interest. In other words, a special advantage of MARS lies in its ability to estimate the contributions of the basis functions so that both the additive and interaction effects of the predictors are allowed to determine the response variable. The function fitted by MARS is continuous, whereas the one fitted by classical classification methods (CART) is not. Herewith, MARS becomes an alternative to CART. The MARS algorithm for estimating the model function consists of two complementary algorithms: the forward and backward stepwise algorithms. In the first step, the model is built by adding basis functions until a maximum level of complexity is reached. The backward stepwise algorithm then removes the least significant basis functions from the model. In this study, we propose not to use the backward stepwise algorithm. Instead, we construct a penalized residual sum of squares (PRSS) for MARS as a Tikhonov regularization problem, which is also known as ridge regression. We treat this problem using continuous optimization techniques, which we consider an important complementary technology and alternative to the concept of the backward stepwise algorithm. In particular, we apply the elegant framework of conic quadratic programming, an area of convex optimization that is very well structured, hereby resembling linear programming and, hence, permitting the use of interior point methods. The boundaries of this optimization problem are determined by a multiobjective optimization approach which provides many alternative solutions. Based on these theoretical and algorithmic studies, this MSc thesis also contains applications to data investigated in a TÜBİTAK project on quality control. Through these applications, MARS and our new method are compared.
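The PRSS idea, replacing the backward pass by Tikhonov regularization of the forward-pass basis coefficients, reduces in its simplest form to ridge regression. The sketch below shows that special case with a plain l2 penalty and hypothetical hinge basis functions; the thesis itself formulates and solves a conic quadratic program, which this closed form does not reproduce.

```python
import numpy as np

def ridge_fit(B, y, lam=1.0):
    """Tikhonov-regularized (ridge) fit of basis-function coefficients.

    B holds forward-pass basis functions evaluated on the data; the
    penalty plays the role of MARS's backward pruning. This is the
    simplest special case of the PRSS approach, not the thesis's
    conic quadratic programming formulation.
    """
    n_basis = B.shape[1]
    # Solve (B^T B + lam I) beta = B^T y.
    return np.linalg.solve(B.T @ B + lam * np.eye(n_basis), B.T @ y)

# Example with hinge (truncated linear) basis functions, MARS-style:
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = np.maximum(x - 3, 0) - 2 * np.maximum(x - 6, 0) + rng.normal(0, 0.3, 200)
B = np.column_stack([np.ones_like(x),
                     np.maximum(x - 3, 0),
                     np.maximum(x - 6, 0)])
beta = ridge_fit(B, y, lam=0.1)
```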
18

A Mathematical Contribution Of Statistical Learning And Continuous Optimization Using Infinite And Semi-infinite Programming To Computational Statistics

Ozogur-akyuz, Sureyya 01 February 2009
A subfield of artificial intelligence, machine learning (ML) is concerned with the development of algorithms that allow computers to "learn". ML is the process of training a system with a large number of examples, extracting rules and finding patterns in order to make predictions on new data points (examples). The most common machine learning schemes are supervised, semi-supervised, unsupervised and reinforcement learning. These schemes apply to natural language processing, search engines, medical diagnosis, bioinformatics, credit fraud detection, stock market analysis, classification of DNA sequences, and speech and handwriting recognition in computer vision, to name just a few. In this thesis, we focus on Support Vector Machines (SVMs), one of the most powerful methods currently in machine learning. As a first motivation, we develop a model selection tool induced into SVM in order to solve a particular problem of computational biology, the prediction of eukaryotic pro-peptide cleavage sites, applied to real data collected from the NCBI data bank. Based on our biological example, a generalized model selection method is employed as a generalization for all kinds of learning problems. In ML algorithms, one of the crucial issues is the representation of the data. Discrete geometric structures and, especially, linear separability of the data play an important role in ML. If the data are not linearly separable, a kernel function transforms the nonlinear data into a higher-dimensional space in which the nonlinear data are linearly separable. As data become heterogeneous and large-scale, single-kernel methods become insufficient to classify nonlinear data. Convex combinations of kernels were developed to classify this kind of data [8]. Nevertheless, the selection of finite combinations of kernels is limited to a finite choice. To overcome this limitation, we propose a novel method of "infinite" kernel combinations for learning problems, with the help of infinite and semi-infinite programming, regarding all elements in the kernel space. This makes it possible to study variations of combinations of kernels when considering heterogeneous data in real-world applications. Combination of kernels can be done, e.g., along a homotopy parameter or a more specific parameter. Looking at all infinitesimally fine convex combinations of the kernels from the infinite kernel set, the margin is maximized subject to an infinite number of constraints with a compact index set and an additional (Riemann-Stieltjes) integral constraint due to the combinations. After a parametrization in the space of probability measures, the problem becomes semi-infinite. We analyze the regularity conditions which satisfy the Reduction Ansatz and discuss the types of distribution functions within the structure of the constraints of our bilevel optimization problem. Finally, we adapted well-known numerical methods of semi-infinite programming to our new kernel machine. We improved the discretization method for our specific model and proposed two new algorithms. We proved the convergence of the numerical methods and analyzed the conditions and assumptions of these convergence theorems, such as optimality and convergence.
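The finite multiple-kernel combination that the thesis generalizes can be sketched directly. The Python below forms a convex combination of RBF kernels over a small parameter grid; the grid values and weights are illustrative assumptions, and the thesis's contribution is precisely to replace this finite sum by a (Riemann-Stieltjes) integral over a continuum of kernels.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix between row-sample matrices X and Y."""
    d2 = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
          - 2 * X @ Y.T)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, gammas, weights):
    """Finite convex combination of RBF kernels.

    The thesis generalizes this finite sum to an integral over a
    continuum of kernels (an "infinite" combination), with the weights
    becoming a probability measure; only the finite multiple-kernel
    case it extends is shown here.
    """
    weights = np.asarray(weights, float)
    assert np.all(weights >= 0) and abs(weights.sum() - 1.0) < 1e-9
    return sum(w * rbf_kernel(X, Y, g) for w, g in zip(weights, gammas))

# Usage with a parametrized kernel family, e.g. gammas along a grid:
X = np.random.default_rng(2).normal(size=(5, 3))
K = combined_kernel(X, X, gammas=[0.1, 1.0, 10.0], weights=[0.2, 0.5, 0.3])
```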
19

On continuous maximum flow image segmentation algorithm

Marak, Laszlo 28 March 2012
In recent years, with the advance of computing equipment and image acquisition techniques, the sizes, dimensions and content of acquired images have increased considerably. Unfortunately, as time passes there is a steadily increasing gap between the classical and parallel programming paradigms and their actual performance on modern computer hardware. In this thesis we consider in depth one particular algorithm, the continuous maximum flow computation. We review in detail why this algorithm is useful and interesting, and we propose efficient and portable implementations on various architectures. We also examine how it performs in terms of segmentation quality on some recent problems of materials science and nanoscale biology.
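As a didactic sketch of the algorithm family, the following Python implements Appleton-Talbot-style potential/flow updates for continuous maximum flow on a 2-D grid. Source/sink handling, stopping criteria, the exact vector-norm constraint and the architecture-specific optimizations discussed in the thesis are omitted or simplified; the time step and the toy boundary placement are assumptions.

```python
import numpy as np

def continuous_max_flow(g, n_iter=500, tau=0.12):
    """Appleton-Talbot style continuous maximum flow on a 2-D grid.

    Didactic sketch only: the metric constraint |v| <= g is enforced
    per component instead of on the vector magnitude, and the
    source/sink placement is a toy assumption.
    """
    h, w = g.shape
    p = np.zeros((h, w))              # scalar potential
    vx = np.zeros((h, w - 1))         # horizontal flow between pixels
    vy = np.zeros((h - 1, w))         # vertical flow between pixels
    p[:, 0], p[:, -1] = 1.0, 0.0      # toy boundary: source left, sink right
    for _ in range(n_iter):
        # Flow follows the negative gradient of the potential.
        vx -= tau * (p[:, 1:] - p[:, :-1])
        vy -= tau * (p[1:, :] - p[:-1, :])
        # Enforce the capacity constraint (here per component).
        gx = np.minimum(g[:, 1:], g[:, :-1])
        gy = np.minimum(g[1:, :], g[:-1, :])
        vx = np.clip(vx, -gx, gx)
        vy = np.clip(vy, -gy, gy)
        # Potential decreases where flow diverges.
        div = np.zeros((h, w))
        div[:, :-1] += vx
        div[:, 1:] -= vx
        div[:-1, :] += vy
        div[1:, :] -= vy
        p -= tau * div
        p[:, 0], p[:, -1] = 1.0, 0.0  # re-impose boundary conditions
    return p  # threshold p at 0.5 to obtain the segmentation
```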
20

Perfectionnement de métaheuristiques pour l'optimisation continue / Improvement of metaheuristics for continuous optimization

Boussaid, Ilhem 29 June 2013
Metaheuristics are general algorithmic frameworks, often nature-inspired, designed to solve complex optimization problems. Among representative metaheuristics, Biogeography-Based Optimization (BBO) has recently been proposed as a viable stochastic optimization algorithm. In this PhD thesis, both unconstrained and constrained global optimization problems in a continuous space are considered. New hybrid versions of BBO are proposed as promising solvers for the considered problems. The proposed methods aim to overcome the drawbacks of slow convergence and lack of diversity in the BBO algorithm. In the first part of this thesis, we present a method based on a hybridization of BBO with the differential evolution (DE) algorithm to solve unconstrained optimization problems. We show that the results of the proposed algorithm are more accurate, especially for multimodal problems, which are among the most difficult-to-handle classes of problems for many optimization algorithms. To solve constrained optimization problems, we propose three new variants of BBO. Extensive experiments successfully demonstrate the usefulness of all the proposed modifications of the BBO algorithm. In the second part, we focus on applying the proposed algorithms to real-world optimization problems. We first address the problem of optimal power scheduling for the decentralized detection of a deterministic signal in a wireless sensor network with power- and bandwidth-constrained distributed nodes. The objective is to minimize the total power spent by the whole sensor network while keeping the detection error probability below a required threshold. We then perform segmentation of gray-level images by multilevel thresholding, where the optimal thresholds are found by maximizing the fuzzy entropy. The optimization is conducted by a newly developed BBO variant (DBBO-Fuzzy). We show the efficiency of the proposed method through experimental results.
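A single migration-and-mutation step of basic BBO, which the proposed hybrids modify, can be sketched as follows. This is plain BBO for minimization with linear migration rates; all rates and the Gaussian mutation are illustrative defaults, not the BBO/DE or DBBO-Fuzzy updates developed in the thesis.

```python
import numpy as np

def bbo_step(pop, fitness, p_mutate=0.01, sigma=0.1, rng=None):
    """One migration-and-mutation step of basic Biogeography-Based
    Optimization (minimization). Plain-BBO sketch: the thesis hybrids
    (BBO/DE, DBBO-Fuzzy) modify exactly this migration step, and all
    rates here are illustrative defaults.
    """
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    order = np.argsort(fitness)            # best habitat first
    rank = np.empty(n)
    rank[order] = np.arange(n)
    mu = (n - rank) / (n + 1)              # emigration: high for good habitats
    lam = 1 - mu                           # immigration: high for poor habitats
    new_pop = pop.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:
                # Immigrate feature j from a habitat chosen by emigration rate.
                src = rng.choice(n, p=mu / mu.sum())
                new_pop[i, j] = pop[src, j]
            if rng.random() < p_mutate:
                new_pop[i, j] += rng.normal(0.0, sigma)
    return new_pop

# Usage inside an optimization loop on a toy sphere problem:
rng = np.random.default_rng(3)
pop = rng.uniform(-5, 5, (20, 4))
fitness = np.sum(pop**2, axis=1)
pop = bbo_step(pop, fitness, rng=rng)
```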
