
Constrained graph-based semi-supervised learning with higher order regularization / Aprendizado semissupervisionado restrito baseado em grafos com regularização de ordem elevada

Sousa, Celso Andre Rodrigues de 10 August 2017 (has links)
Graph-based semi-supervised learning (SSL) algorithms have been widely studied in the last few years. Most of these algorithms were designed from unconstrained optimization problems using a Laplacian regularizer term as a smoothness functional, in an attempt to reflect the intrinsic geometric structure of the data's marginal distribution. Although a number of recent research papers still focus on unconstrained methods for graph-based SSL, a recent statistical analysis showed that many of these algorithms may be unstable on transductive regression. We therefore focus on providing new constrained methods for graph-based SSL. We begin by analyzing the regularization framework of existing unconstrained methods. Then, we incorporate two normalization constraints into the optimization problems of three of these methods. We show that the proposed optimization problems have closed-form solutions. By generalizing one of these constraints to any distribution, we provide generalized methods for constrained graph-based SSL. The proposed methods have a more flexible regularization framework than the corresponding unconstrained methods. More precisely, our methods can deal with any graph Laplacian and use higher order regularization, which is effective on general SSL tasks. To show the effectiveness of the proposed methods, we provide comprehensive experimental analyses. Specifically, our experiments are subdivided into two parts. In the first part, we evaluate existing graph-based SSL algorithms on time series data to find their weaknesses. In the second part, we evaluate the proposed constrained methods against six state-of-the-art graph-based SSL algorithms on benchmark data sets. Since the widely used best-case analysis may hide useful information about an SSL algorithm's performance with respect to parameter selection, we used recently proposed empirical evaluation models to evaluate our results. Our results show that our methods outperform the competing methods on most parameter settings and graph construction methods. However, we found a few experimental settings in which our methods showed poor performance. To facilitate the reproduction of our results, the source code, data sets, and experimental results are freely available.
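As an illustrative sketch of the family of methods the abstract describes (not the thesis's exact algorithms), the following pure-Python code runs iterative label propagation over a graph — the Laplacian-smoothness step — and then applies a class-mass normalization constraint after each sweep, rescaling each class column toward a target distribution. All function and parameter names here are invented for the example.

```python
def propagate_labels(W, seed_labels, alpha=0.9, class_prior=None, iters=200):
    """Iterative label propagation on a weighted graph.

    W: symmetric weight matrix (list of lists); seed_labels: dict mapping a
    few node indices to class names; class_prior: optional target class-mass
    distribution enforced after each sweep (the normalization constraint).
    """
    n = len(W)
    classes = sorted(set(seed_labels.values()))
    k = len(classes)
    deg = [sum(row) for row in W]
    # Row-normalized transition matrix P = D^-1 W
    P = [[W[i][j] / deg[i] if deg[i] else 0.0 for j in range(n)] for i in range(n)]
    # Clamp term Y: one-hot rows for seed nodes, zero rows elsewhere
    Y = [[0.0] * k for _ in range(n)]
    for node, c in seed_labels.items():
        Y[node][classes.index(c)] = 1.0
    F = [row[:] for row in Y]
    for _ in range(iters):
        # Smoothness step: F <- alpha * P F + (1 - alpha) * Y
        F = [[alpha * sum(P[i][j] * F[j][c] for j in range(n)) + (1 - alpha) * Y[i][c]
              for c in range(k)] for i in range(n)]
        if class_prior is not None:
            # Normalization constraint: rescale each class column's total mass
            for c in range(k):
                mass = sum(F[i][c] for i in range(n))
                if mass > 0.0:
                    scale = class_prior[classes[c]] * n / mass
                    for i in range(n):
                        F[i][c] *= scale
    return {i: classes[max(range(k), key=lambda c: F[i][c])] for i in range(n)}
```

On a toy graph with two tight clusters joined by one weak edge, labeling a single node in each cluster is enough for the propagation to assign the remaining nodes to their cluster's class.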

Distributed Resource Allocation Algorithms in Wireless Networks / Algorithmes distribués d'allocation de ressources dans les réseaux sans fil

Akbarzadeh, Sara 20 September 2010 (has links) (PDF)
The full connectivity offered by wireless communication brings a great number of advantages and challenges for the designers of the next generation of wireless networks. One of the main challenges is interference at the receivers. It is well recognized that this challenge lies in designing resource allocation schemes that offer the best trade-off between efficiency and complexity. Exploring this trade-off requires judicious choices of performance metrics and mathematical models. In this regard, this thesis is devoted to certain technical and mathematical aspects of resource allocation in wireless networks. In particular, we show that efficient resource allocation in wireless networks must take into account the following parameters: (i) the rate of change of the environment, (ii) the traffic model, and (iii) the amount of information available at the transmitters. As mathematical tools in this study, we use optimization theory and game theory. We are particularly interested in distributed resource allocation in networks with slow-fading channels and partial channel information at the transmitters. Transmitters with partial information have exact knowledge of their own channel as well as statistical knowledge of the other channels. In such a setting, the system is fundamentally impaired by a non-zero outage probability. We propose low-complexity distributed algorithms for joint rate and power allocation aimed at maximizing the individual throughput.
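To illustrate the distributed flavor of such allocation schemes — though not the thesis's own algorithms — here is a sketch of the classic Foschini–Miljanic power-control iteration, in which each transmitter updates its power using only its own measured SINR and a target, with no global coordination. The function name and parameters are invented for the example.

```python
def distributed_power_control(G, noise, targets, p_max=10.0, iters=100):
    """Foschini-Miljanic-style iteration: each link i scales its power by
    targets[i] / SINR_i, using only locally measurable quantities.

    G[i][j]: channel gain from transmitter j to receiver i.
    """
    n = len(G)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            interference = noise[i] + sum(G[i][j] * p[j] for j in range(n) if j != i)
            sinr = G[i][i] * p[i] / interference
            # Local update; the cap models a hard transmit-power constraint
            new_p.append(min(p_max, p[i] * targets[i] / sinr))
        p = new_p
    return p
```

For a symmetric two-link network with direct gains 1.0, cross gains 0.1, noise 0.1, and SINR target 2.0, the iteration converges to the fixed point p = 0.25 on both links, at which each link meets its target exactly.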

A New Look Into Image Classification: Bootstrap Approach

Ochilov, Shuhratchon January 2012 (has links)
Scene classification is performed on countless remote sensing images in support of operational activities. Automating this process is preferable since manual pixel-level classification is not feasible for large scenes. However, developing such an algorithmic solution is a challenging task due to both scene complexities and sensor limitations. The objective is to develop efficient and accurate unsupervised methods for classification (i.e., assigning each pixel to an appropriate generic class) and for labeling (i.e., properly assigning true labels to each class). Unlike traditional approaches, the proposed bootstrap approach achieves classification and labeling without training data. Here, the full image is partitioned into subimages and the true classes found in each subimage are provided by the user. After these steps, the rest of the process is automatic. Each subimage is individually classified into regions, and then, using the joint information from all subimages and regions, the optimal configuration of labels is found by minimizing an objective function based on a Markov random field (MRF) model. The bootstrap approach has been successfully demonstrated with SAR sea-ice and lake-ice images, which represent challenging scenes used operationally for ship navigation, climate study, and ice fraction estimation. Accuracy assessment is based on evaluation conducted by third-party experts. The bootstrap method is also demonstrated using synthetic and natural images. The impact of this technique is a repeatable and accurate methodology that generates classified maps faster than the standard methodology.
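The final labeling step above optimizes an MRF-based objective. As a hedged, minimal illustration of that idea (not the thesis's energy function or optimizer), the following sketch runs iterated conditional modes (ICM) on a one-dimensional Potts model: each site pays a data cost for deviating from its observation and a smoothness penalty for disagreeing with its neighbors.

```python
def icm_labels(obs, labels, beta=1.0, iters=10):
    """Iterated conditional modes for a 1-D Potts MRF.

    Energy = sum_i (obs[i] - label_i)^2 + beta * [# neighbor disagreements].
    Greedily reassigns each site to the label minimizing its local energy.
    """
    # Initialize from the data term alone (maximum-likelihood labels)
    x = [min(labels, key=lambda c: (obs[i] - c) ** 2) for i in range(len(obs))]
    for _ in range(iters):
        for i in range(len(obs)):
            def local_energy(c):
                e = (obs[i] - c) ** 2          # data (unary) term
                if i > 0 and x[i - 1] != c:    # left smoothness term
                    e += beta
                if i < len(obs) - 1 and x[i + 1] != c:  # right smoothness term
                    e += beta
                return e
            x[i] = min(labels, key=local_energy)
    return x
```

With a strong enough smoothness weight, an isolated outlier observation is relabeled to agree with its neighbors — the same regularizing effect the MRF model provides when fusing region labels across subimages.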

Full-waveform inversion in three-dimensional PML-truncated elastic media : theory, computations, and field experiments

Fathi, Arash 03 September 2015 (has links)
We are concerned with high-fidelity subsurface imaging of the soil, which commonly arises in geotechnical site characterization and geophysical exploration. Specifically, we attempt to image the spatial distribution of the Lamé parameters in semi-infinite, three-dimensional, arbitrarily heterogeneous formations, using surficial measurements of the soil's response to probing elastic waves. We use the complete waveforms of the medium's response to drive the inverse problem. Specifically, we use a partial-differential-equation (PDE)-constrained optimization approach, directly in the time domain, to minimize the misfit between the observed response of the medium at select measurement locations and a computed response corresponding to a trial distribution of the Lamé parameters. We discuss strategies that lend algorithmic robustness to the proposed inversion schemes. To limit the computational domain to the size of interest, we employ perfectly matched layers (PMLs). The PML is a buffer zone that surrounds the domain of interest and enforces the decay of outgoing waves. To resolve the forward problem, we present a hybrid finite element approach, where a displacement-stress formulation for the PML is coupled to a standard displacement-only formulation for the interior domain, thus leading to a computationally cost-efficient scheme. We discuss several time-integration schemes, including an explicit Runge-Kutta scheme, which is well suited for large-scale problems on parallel computers. We report numerical results demonstrating the stability and efficacy of the forward wave solver, and also provide examples attesting to the successful reconstruction of the two Lamé parameters for both smooth and sharp profiles, using synthetic records. We also report the details of two field experiments, whose records we subsequently used to drive the developed inversion algorithms in order to characterize the sites where the field experiments took place. We contrast the full-waveform-based inverted site profile against a profile obtained using the Spectral-Analysis-of-Surface-Waves (SASW) method, in an attempt to compare our methodology against a widely used competing inversion approach. We also compare the inverted profiles, at select locations, with the results of independently performed, invasive Cone Penetrometer Tests (CPTs). Overall, whether exercised by synthetic or by physical data, the full-waveform inversion method we discuss herein appears quite promising for robust subsurface imaging of near-surface deposits in support of geotechnical site characterization investigations.
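The core loop of misfit-driven inversion can be sketched far below the fidelity of the thesis: gradient descent on a least-squares waveform misfit, with finite-difference gradients standing in for the adjoint-based gradients a real PDE-constrained solver would use. Everything here is a toy — the forward model in the usage example maps two layer slownesses to two vertical travel times, not an elastic wave simulation.

```python
def invert(forward, observed, m0, step=0.2, iters=200, h=1e-6):
    """Gradient descent on the least-squares misfit
    J(m) = 0.5 * sum((forward(m) - observed)^2).

    Finite-difference gradients stand in for the adjoint-based gradient
    of a real full-waveform inversion code; 'forward' is any callable
    mapping a parameter list to a list of predicted data.
    """
    m = list(m0)

    def misfit(params):
        u = forward(params)
        return 0.5 * sum((ui - di) ** 2 for ui, di in zip(u, observed))

    for _ in range(iters):
        base = misfit(m)
        grad = []
        for k in range(len(m)):
            pert = m[:]
            pert[k] += h  # one-sided finite-difference perturbation
            grad.append((misfit(pert) - base) / h)
        m = [mk - step * gk for mk, gk in zip(m, grad)]
    return m
```

Usage, with a hypothetical two-layer travel-time model (layer thicknesses 1 and 2, unknown slownesses): `invert(lambda s: [s[0], s[0] + 2 * s[1]], [0.5, 1.0], [1.0, 1.0])` recovers slownesses close to 0.5 and 0.25.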

Objective-driven discriminative training and adaptation based on an MCE criterion for speech recognition and detection

Shin, Sung-Hwan 13 January 2014 (has links)
Acoustic modeling in state-of-the-art speech recognition systems is commonly based on discriminative criteria. Unlike conventional distribution-estimation paradigms such as maximum a posteriori (MAP) and maximum likelihood (ML) estimation, the most popular discriminative criteria, such as MCE and MPE, aim at direct minimization of the empirical error rate. As recent ASR applications have become more diverse, it has been increasingly recognized that realistic applications often require a model that can be optimized for a task-specific goal or a particular scenario beyond the general purposes of the current discriminative criteria. These specific requirements cannot be directly handled by the current discriminative criteria, since their objective is to minimize the overall empirical error rate. In this thesis, we propose novel objective-driven discriminative training and adaptation frameworks, generalized from the minimum classification error (MCE) criterion, for various tasks and scenarios of speech recognition and detection. The proposed frameworks formulate new discriminative criteria that satisfy the varied requirements of recent ASR applications. Each objective required by an application or a developer is directly embedded into the learning criterion, and the resulting objective-driven discriminative criterion is used to optimize an acoustic model to achieve that objective. Three task-specific requirements that recent ASR applications often impose in practice are considered in developing the objective-driven discriminative criteria. First, the issue of individual error minimization in speech recognition is addressed, and we propose a direct minimization algorithm for each error type of speech recognition. Second, a rapid adaptation scenario is embedded into the formulation of discriminative linear transforms under the MCE criterion.
A regularized MCE criterion is proposed to efficiently improve the generalization capability of the MCE estimate in a rapid adaptation scenario. Finally, the particular operating scenario that requires a system model optimized at a given specific operating point is discussed over the conventional receiver operating characteristic (ROC) optimization. A constrained discriminative training algorithm which can directly optimize a system model for any particular operating need is proposed. For each of the developed algorithms, we provide an analytical solution and an appropriate optimization procedure.
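The MCE criterion at the heart of these frameworks replaces the non-differentiable 0-1 error with a smooth surrogate: a misclassification measure d(x) — the margin between the correct class's discriminant score and a soft maximum of the competitors' scores — passed through a sigmoid. The sketch below shows the standard textbook form of this loss for a single sample; the function name and default smoothing constants are illustrative, not taken from the thesis.

```python
import math

def mce_loss(scores, correct, eta=5.0, gamma=1.0, theta=0.0):
    """Smoothed minimum-classification-error loss for one training sample.

    scores: dict mapping class -> discriminant score g_j(x).
    d(x)  = -g_correct + (1/eta) * log-mean-exp(eta * competitor scores),
            a soft-max margin (eta -> infinity recovers the hard max);
    loss  = sigmoid(gamma * d + theta), a differentiable stand-in for
            the 0-1 classification error.
    """
    g_correct = scores[correct]
    competitors = [g for c, g in scores.items() if c != correct]
    soft_max = math.log(
        sum(math.exp(eta * g) for g in competitors) / len(competitors)) / eta
    d = -g_correct + soft_max
    return 1.0 / (1.0 + math.exp(-(gamma * d + theta)))
```

A confidently correct sample yields a loss near 0 and a confidently wrong one a loss near 1, so summing this quantity over a data set approximates the empirical error count while remaining differentiable in the model parameters.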

Design of nearly linear-phase recursive digital filters by constrained optimization

Guindon, David Leo 24 December 2007 (has links)
The design of nearly linear-phase recursive digital filters using constrained optimization is investigated. The design technique proposed is expected to be useful in applications where both magnitude and phase response specifications need to be satisfied. The overall constrained optimization method is formulated as a quadratic programming problem based on Newton's method. The objective function, its gradient vector and Hessian matrix, as well as a set of linear constraints, are derived. In this analysis, the independent variables are assumed to be the transfer function coefficients. The filter stability issue and convergence efficiency, as well as a 'real axis attraction' problem, are resolved by integrating the corresponding bounds into the linear constraints of the optimization method. Also, two initialization techniques for providing efficient starting points for the optimization are investigated, and the relation between the zero and pole positions and the group delay is examined. Based on these ideas, a new objective function is formulated in terms of the zeros and poles of the transfer function expressed in polar form and integrated into the optimization process. The coefficient-based and polar-based objective functions are tested and compared, and it is shown that designs using the polar-based objective function produce improved results. Finally, several other modern methods for the design of nearly linear-phase recursive filters are compared with the proposed method. These include an elliptic design combined with an optimal equalization technique that uses a prescribed group delay, an optimal design method with robust stability using conic-quadratic-programming updates, and an unconstrained optimization technique that uses parameterization to guarantee filter stability.
It was found that the proposed method generates similar or improved results in all comparative examples suggesting that the new method is an attractive alternative for linear-phase recursive filters of orders up to about 30.

An integrated evolutionary system for solving optimization problems

Barkat Ullah, Abu Saleh Shah Muhammad, Engineering & Information Technology, Australian Defence Force Academy, UNSW January 2009 (has links)
Many real-world decision processes require solving optimization problems that may involve different types of constraints, such as inequality and equality constraints. The hurdles in solving these Constrained Optimization Problems (COPs) arise from the challenge of searching a huge variable space in order to locate feasible points with acceptable solution quality. Over the last few decades, Evolutionary Algorithms (EAs) have brought tremendous advances to computer science and optimization with their ability to solve various problems. However, EAs have inherent difficulty in dealing with constraints when solving COPs. This thesis presents a new Agent-based Memetic Algorithm (AMA) for solving COPs, where the agents have the ability to independently select a suitable Life Span Learning Process (LSLP) from a set of LSLPs. Each agent represents a candidate solution of the optimization problem and tries to improve its solution through cooperation with other agents. Evolutionary operators consist of only crossover and one of the self-adaptively selected LSLPs. The performance of the proposed algorithm is tested on benchmark problems, and the experimental results show convincing performance. The quality of individuals in the initial population influences the performance of evolutionary algorithms, especially when the feasible region of a constrained optimization problem is very tiny in comparison to the entire search space. This thesis proposes a method that improves the quality of randomly generated initial solutions while sacrificing very little population diversity. The proposed Search Space Reduction Technique (SSRT) is tested using five different existing EAs, including AMA, on a number of state-of-the-art test problems and a real-world case problem. The experimental results show that SSRT improves the solution quality and speeds up the performance of the algorithms. The handling of equality constraints has long been a difficult issue for evolutionary optimization methods, although several methods are available in the literature for handling functional constraints. In any optimization problem with equality constraints, to satisfy the conditions of feasibility and optimality the solution points must lie on each and every equality constraint. This reduces the size of the feasible space and makes it difficult for EAs to locate feasible and optimal solutions. A new Equality Constraint Handling Technique (ECHT) is presented in this thesis to enhance the performance of AMA in solving constrained optimization problems with equality constraints. The basic concept is for the selected individual solution to reach a point on an equality constraint from its current position and then explore the constraint landscape. The technique is used as an agent learning process in AMA. The experimental results confirm the improved performance of the proposed algorithm. This thesis also proposes a Modified Genetic Algorithm (MGA) for solving COPs with equality constraints. After the technique achieved promising results within AMA, it was also incorporated into the design of MGA. The experimental results show that the proposed algorithm overcomes the limitations of GA in solving COPs with equality constraints and provides good-quality solutions.
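A minimal memetic loop conveys the structure the abstract describes — a population of agents, crossover, and a per-agent choice among local "learning" steps — though this sketch is not the thesis's AMA: constraints are handled here by a simple additive penalty, and the two learning steps are invented stand-ins for LSLPs.

```python
import random

def memetic_minimize(f, constraint, bounds, pop_size=20, gens=60, seed=1):
    """Minimal 1-D memetic algorithm: blend crossover plus one of two local
    learning steps per agent, with an additive penalty for violating
    constraint(x) <= 0. An illustrative sketch, not the thesis's AMA.
    """
    rng = random.Random(seed)
    lo, hi = bounds

    def fitness(x):
        return f(x) + 1000.0 * max(0.0, constraint(x))  # penalty method

    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)  # elitism: the best agents survive below
        children = []
        for _ in range(pop_size // 2):
            # Crossover: blend two agents drawn from the better half
            a, b = rng.sample(pop[:pop_size // 2], 2)
            w = rng.random()
            children.append(w * a + (1 - w) * b)
        learned = []
        for x in children:
            if rng.random() < 0.5:
                # Learning step 1: one random hill-climbing probe
                step = 0.1 * (hi - lo)
                cand = min((x, x + rng.uniform(-step, step)), key=fitness)
            else:
                # Learning step 2: greedy move halfway toward the best agent
                cand = min((x, 0.5 * (x + pop[0])), key=fitness)
            learned.append(max(lo, min(hi, cand)))
        pop = pop[:pop_size - len(learned)] + learned
    return min(pop, key=fitness)
```

On the toy problem "minimize (x − 3)² subject to x ≤ 2", the penalized search settles near the constrained optimum x = 2, illustrating how the learning steps pull agents onto the boundary of the feasible region.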

Hybridation d’algorithmes évolutionnaires et de méthodes d’intervalles pour l’optimisation de problèmes difficiles / Hybridization of evolutionary algorithms and interval-based methods for optimizing difficult problems

Vanaret, Charlie 27 January 2015 (has links)
Reliable global optimization is dedicated to finding a global minimum in the presence of rounding errors. The only approaches for achieving a numerical proof of optimality in global optimization are interval-based methods that interleave branching of the search space and pruning of the subdomains that cannot contain an optimal solution. Exhaustive interval branch and bound methods have been widely studied since the 1960s and have benefited from the development of refutation methods and filtering algorithms stemming from the interval analysis and interval constraint programming communities. It is of the utmost importance: i) to compute sharp enclosures of the objective function and the constraints on a given subdomain; ii) to find a good approximation (an upper bound) of the global minimum. State-of-the-art solvers are generally integrative methods, that is, they embed local optimization algorithms to compute a good upper bound of the global minimum over each subspace. In this document, we propose a cooperative framework in which interval methods cooperate with evolutionary algorithms. The latter are stochastic algorithms in which a population of individuals (candidate solutions) iteratively evolves in the search space to reach satisfactory solutions. Evolutionary algorithms, endowed with operators that help individuals escape from local minima, are particularly suited for difficult problems on which traditional methods struggle to converge. Within our cooperative solver Charibde, the evolutionary algorithm and the interval-based algorithm run in parallel and exchange bounds, solutions, and search space via message passing. A strategy combining a geometric exploration heuristic and a domain reduction operator prevents premature convergence toward local minima and keeps the evolutionary algorithm from exploring suboptimal or unfeasible subspaces. A comparison of Charibde with state-of-the-art interval-based solvers (GlobSol, IBBA, Ibex) on a benchmark of difficult problems shows that Charibde converges faster by an order of magnitude. New optimality results are provided for five multimodal problems for which few solutions, even approximate ones, were available in the literature. We present an aeronautical application in which conflict solving between aircraft is modeled as a universally quantified constrained optimization problem and solved by specific interval contractors. Finally, we certify the optimality of the putative solution to the Lennard-Jones cluster problem for five atoms, an open problem in molecular dynamics.
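The branch-and-prune core of such interval methods fits in a short sketch. The code below runs a minimal one-dimensional interval branch and bound: a user-supplied natural interval extension gives a rigorous lower bound on each box, point evaluations supply upper bounds on the global minimum (the role the evolutionary algorithm plays in Charibde, reduced here to midpoint sampling), and boxes whose lower bound exceeds the incumbent are pruned. Names and the termination rule are illustrative, not Charibde's.

```python
def interval_bb(f_box, f_point, lo, hi, tol=1e-4):
    """Minimal 1-D interval branch and bound.

    f_box(a, b) -> (lower, upper) must enclose the range of f over [a, b]
    (a natural interval extension); f_point(x) evaluates f exactly and
    supplies upper bounds on the global minimum.
    Returns (lower_bound, upper_bound) enclosing the global minimum value.
    """
    best_ub = f_point((lo + hi) / 2.0)
    best_lb = f_box(lo, hi)[0]
    boxes = [(lo, hi)]
    while boxes:
        next_boxes = []
        for a, b in boxes:
            m = (a + b) / 2.0
            best_ub = min(best_ub, f_point(m))  # sharpen the incumbent
            for c, d in ((a, m), (m, b)):
                lb = f_box(c, d)[0]
                # Prune boxes that provably cannot contain the minimum;
                # stop splitting boxes narrower than the tolerance
                if lb <= best_ub and d - c > tol:
                    next_boxes.append((c, d))
        if next_boxes:
            best_lb = min(f_box(c, d)[0] for c, d in next_boxes)
        boxes = next_boxes
    return best_lb, best_ub

def f_box(a, b):
    """Natural interval extension of f(x) = x^2 - x on [a, b]."""
    sq_lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    sq_hi = max(a * a, b * b)
    return (sq_lo - b, sq_hi - a)
```

Running `interval_bb(f_box, lambda x: x * x - x, -1.0, 1.0)` brackets the true minimum value of −0.25 (attained at x = 0.5): the upper bound is found by sampling and the lower bound is certified by interval arithmetic, which is exactly the division of labor the cooperative solver exploits.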
