  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Stéréophotométrie non-calibrée de surfaces non-Lambertiennes. Application à la reconstruction de surface de colonies microbiennes / Uncalibrated non-Lambertian photometric stereo. Application to microbial colonies surface reconstruction.

Kyrgyzova, Khrystyna 22 July 2014 (has links)
La thèse est dédiée au problème de la stéréophotométrie non-Lambertienne sans connaissance a priori sur les conditions d'illumination, et à son application aux images de boîte de Pétri. Pour obtenir une bonne reconstruction de surfaces non-Lambertiennes, il est proposé de traiter la séquence d'entrée en deux étapes : premièrement, supprimer les effets spéculaires et obtenir ainsi des images de surface « pseudo-Lambertienne » ; ensuite, effectuer à partir de ces images une reconstruction stéréophotométrique Lambertienne sans aucune information préalable sur les directions d'illumination. Dans ce travail, nous proposons deux méthodes originales, respectivement pour la suppression des spécularités et pour la reconstruction de surface sans information a priori. Les méthodes proposées sont appliquées à la caractérisation des colonies microbiennes. La spécularité est un effet optique lié à la nature physique complexe des objets. Elle est utile pour la perception humaine des objets 3D mais elle gêne le traitement automatique d'images. Pour pouvoir appliquer le modèle Lambertien à la stéréophotométrie, les spécularités doivent être supprimées des images d'entrée. Nous proposons donc une méthode originale de correction des zones spéculaires, adaptée à la reconstruction ultérieure. L'algorithme proposé est capable de détecter les spécularités comme des valeurs d'intensité anormalement élevées dans une image de la séquence d'entrée, et de les corriger en utilisant les informations des autres images de la séquence et une fonction de correction continue. Cette méthode permet de supprimer les spécularités tout en préservant toutes les autres particularités de la distribution de lumière qui sont importantes pour la reconstruction de surface. Nous proposons ensuite une technique de reconstruction stéréophotométrique de surface Lambertienne sans connaissance a priori sur l'illumination.
Le modèle mis en œuvre consiste en quatre composantes : deux composantes (albédo et normales) décrivent les propriétés de surface et les deux autres (intensités des sources de lumière et leurs directions) décrivent l'illumination. L'algorithme de reconstruction proposé utilise le principe de l'optimisation alternée. Chaque composante du modèle est trouvée itérativement en fixant toutes les variables sauf une et en appliquant des contraintes de structure, de valeurs et de qualité à la fonction d'optimisation. Un schéma original de résolution permet de séparer les différents types d'information contenus dans les images d'entrée. Grâce à cette factorisation de matrices, la reconstruction de surface est faite sans connaissance préalable sur les directions de lumière et les propriétés de l'objet reconstruit. L'applicabilité de l'algorithme est prouvée sur des données artificielles et sur des images de bases publiques pour lesquelles la vérité terrain sur les surfaces des objets est disponible. La dernière partie de la thèse est dédiée à l'application de la chaîne complète proposée au traitement d'images de boîte de Pétri. Ces images sont obtenues avec des sources de lumière complexes, supposées inconnues du processus de reconstruction. L'évaluation des surfaces des colonies microbiennes s'est révélée être une étape importante pour l'analyse visuelle et automatique des colonies. La chaîne proposée est efficace pour ce type de données et permet de compléter les informations des images par la surface 3D. / The PhD thesis is dedicated to the problem of uncalibrated non-Lambertian photometric stereo surface reconstruction. The proposed approach consists of two phases: first, we correct the images of the input sequence for specularities in order to obtain images of pseudo-Lambertian surfaces, and then perform Lambertian photometric stereo reconstruction.
In this work we propose two original methods, respectively, for specularity correction and for surface reconstruction with no prior information on either the light sources or the surface properties. We apply the novel processing to Petri dish images for the surface reconstruction of microbial colonies. Specularity is an optical effect of a complex physical nature. This effect is useful for human perception of 3D objects, but it hinders automated image processing. In order to be able to apply the Lambertian photometric stereo model, specularities should be removed from the input images. We propose an original method for the correction of specular zones, adapted to the estimation of pseudo-Lambertian surface images and the subsequent reconstruction. This algorithm detects specularities as abnormally elevated pixel intensity values in an image of the input sequence and corrects the found zones using information from all other images of the sequence and a specific continuous correcting function. This method removes specularities while preserving all other particularities of shading that are important for the subsequent surface reconstruction. We then propose an original photometric stereo method for Lambertian surface reconstruction with no prior on the illumination. The implemented photometric stereo model consists of four components: two of them (albedo and normals) describe surface properties, and the others (light source intensities and directions) describe the illumination. The proposed photometric stereo reconstruction algorithm uses the alternating optimization principle. Each model component is found iteratively by fixing all variables but one and applying value and quality constraints in the optimization function. The original resolution scheme allows the separation of the different types of information contained in the input images. Thanks to this matrix factorization, the surface reconstruction is performed with no prior information on the lighting directions or the properties of the reconstructed objects.
The applicability of the algorithm is demonstrated on artificially created and open datasets for which ground truth information is available. The last part of the thesis is dedicated to the application of the proposed uncalibrated non-Lambertian photometric stereo approach to Petri dish images. The images are obtained using illuminating sources which are assumed to be unknown to the reconstruction process. Moreover, the reconstructed microbial colonies are very diverse, generally small, Lambertian or not, and their surface properties are not defined in advance. The results of reconstruction for such complex real-world data add value and importance to the developed approach.
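The Lambertian model underlying the second stage can be made concrete with the classical calibrated case: each pixel intensity is the albedo times the dot product of the light direction with the surface normal. The sketch below is illustrative only (the thesis precisely removes the known-lights assumption and replaces this direct solve with alternating matrix factorization); it recovers albedo and a unit normal for one pixel from three intensities under three known, non-coplanar light directions.

```python
# Calibrated Lambertian photometric stereo for a single pixel:
# i_k = rho * dot(l_k, n).  Writing b = rho * n, the three equations
# form a 3x3 linear system L b = i; rho is |b| and n is b / |b|.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[r][3] / M[r][r] for r in range(3)]

def photometric_stereo_pixel(lights, intensities):
    """Recover (albedo, unit normal) from 3 intensities under known lights."""
    b = solve3(lights, intensities)
    rho = sum(x * x for x in b) ** 0.5
    n = [x / rho for x in b]
    return rho, n
```

With more than three images the same system is solved in the least-squares sense, which is exactly why the problem can be phrased as a matrix factorization of the full intensity matrix, the view the thesis exploits in the uncalibrated setting.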
62

Multi-Robot Task Allocation and Scheduling with Spatio-Temporal and Energy Constraints

Dutia, Dharini 24 April 2019 (has links)
Autonomy in multi-robot systems is bounded by coordination among their agents. Coordination implies simultaneous task decomposition, task allocation, team formation, task scheduling, and routing, collectively termed task planning. In many real-world applications of multi-robot systems, such as commercial cleaning, delivery systems, warehousing, and inventory management, spatial and temporal constraints, variable execution times, and energy limitations need to be integrated into the planning module. Spatial constraints comprise the locations of the tasks, their reachability, and the structure of the environment; temporal constraints express task completion deadlines. There has been significant research on multi-robot task allocation involving spatio-temporal constraints; however, limited attention has been paid to combining it with team formation and non-instantaneous task execution times. We achieve team formation by including quota constraints, which ensure that the number of robots required to perform a task is scheduled. We introduce and integrate task activation (time) windows with the team effort of multiple robots performing tasks for a given duration. Additionally, while visiting tasks in space, the energy budget limits the robots' operation time. We model energy depletion as a function of time to ensure long-term operation through periodic visits to recharging stations. Research on task planning approaches that combine all these conditions is still lacking. In this thesis, we propose two variants of the Team Orienteering Problem with task activation windows and a limited energy budget to formulate simultaneous task allocation and scheduling as an optimization problem. A complete mixed integer linear programming (MILP) formulation for both variants is presented in this work, implemented using the Gurobi Optimizer, and analyzed for scalability.
This work compares the different objectives of the formulation, such as maximizing the number of tasks visited, minimizing the total distance travelled, and maximizing the reward, to suit various applications. Finally, analysis of the optimal solutions reveals trends in task selection based on travel cost, task completion rewards, the robot's energy level, and the time left before task inactivation.
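The ingredients of the formulation (activation windows, travel cost, reward, energy budget) can be seen in a toy single-robot instance. The brute-force search below is only an illustration of the problem structure, not the thesis's MILP, which handles robot teams and quota constraints through Gurobi; all the task data here is invented.

```python
# Toy single-robot scheduling: tasks live on a line, have an activation
# window [open, close], a service duration, and a reward; travel costs
# 1 time unit and 1 energy unit per unit distance.  The robot may wait
# for a window to open but must finish service before it closes.
# Brute force over visit orders finds the reward-maximizing schedule.
from itertools import permutations

def best_schedule(tasks, energy_budget):
    """tasks: dict name -> (pos, open, close, duration, reward)."""
    best = (0, ())
    names = list(tasks)
    for r in range(len(names) + 1):
        for order in permutations(names, r):
            t, pos, energy, reward = 0.0, 0.0, energy_budget, 0
            ok = True
            for name in order:
                p, t_open, t_close, dur, rew = tasks[name]
                dist = abs(p - pos)
                if dist > energy:               # not enough energy to travel
                    ok = False
                    break
                t = max(t + dist, t_open)       # travel, then wait for window
                if t + dur > t_close:           # must finish inside the window
                    ok = False
                    break
                t += dur
                energy -= dist
                pos = p
                reward += rew
            if ok and reward > best[0]:
                best = (reward, order)
    return best
```

Already at this scale the trade-offs the thesis analyzes appear: a tight window can force an early detour, and a distant high-reward task may be unreachable on the remaining energy, which is exactly why an exact MILP formulation is needed for realistic instances.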
63

Constrained graph-based semi-supervised learning with higher order regularization / Aprendizado semissupervisionado restrito baseado em grafos com regularização de ordem elevada

Sousa, Celso Andre Rodrigues de 10 August 2017 (has links)
Graph-based semi-supervised learning (SSL) algorithms have been widely studied in the last few years. Most of these algorithms were designed from unconstrained optimization problems using a Laplacian regularizer term as smoothness functional in an attempt to reflect the intrinsic geometric structure of the data's marginal distribution. Although a number of recent research papers are still focusing on unconstrained methods for graph-based SSL, a recent statistical analysis showed that many of these algorithms may be unstable on transductive regression. Therefore, we focus on providing new constrained methods for graph-based SSL. We begin by analyzing the regularization framework of existing unconstrained methods. Then, we incorporate two normalization constraints into the optimization problems of three of these methods. We show that the proposed optimization problems have closed-form solutions. By generalizing one of these constraints to any distribution, we provide generalized methods for constrained graph-based SSL. The proposed methods have a more flexible regularization framework than the corresponding unconstrained methods. More precisely, our methods can deal with any graph Laplacian and use higher order regularization, which is effective on general SSL tasks. In order to show the effectiveness of the proposed methods, we provide comprehensive experimental analyses. Specifically, our experiments are subdivided into two parts. In the first part, we evaluate existing graph-based SSL algorithms on time series data to find their weaknesses. In the second part, we evaluate the proposed constrained methods against six state-of-the-art graph-based SSL algorithms on benchmark data sets. Since the widely used best case analysis may hide useful information concerning the SSL algorithms' performance with respect to parameter selection, we used recently proposed empirical evaluation models to evaluate our results.
Our results show that our methods outperform the competing methods on most parameter settings and graph construction methods. However, we found a few experimental settings in which our methods showed poor performance. In order to facilitate the reproduction of our results, the source codes, data sets, and experimental results are freely available. / Algoritmos de aprendizado semissupervisionado baseado em grafos foram amplamente estudados nos últimos anos. A maioria desses algoritmos foi projetada a partir de problemas de otimização sem restrições usando um termo regularizador Laplaciano como funcional de suavidade numa tentativa de refletir a estrutura geométrica intrínseca da distribuição marginal dos dados. Apesar de vários artigos científicos recentes continuarem focando em métodos sem restrição para aprendizado semissupervisionado em grafos, uma análise estatística recente mostrou que muitos desses algoritmos podem ser instáveis em regressão transdutiva. Logo, nós focamos em propor novos métodos com restrições para aprendizado semissupervisionado em grafos. Nós começamos analisando o framework de regularização de métodos sem restrições existentes. Então, nós incorporamos duas restrições de normalização no problema de otimização de três desses métodos. Mostramos que os problemas de otimização propostos possuem solução de forma fechada. Ao generalizar uma dessas restrições para qualquer distribuição, provemos métodos generalizados para aprendizado semissupervisionado restrito baseado em grafos. Os métodos propostos possuem um framework de regularização mais flexível que os métodos sem restrições correspondentes. Mais precisamente, nossos métodos podem lidar com qualquer Laplaciano em grafos e usar regularização de ordem elevada, a qual é efetiva em tarefas de aprendizado semissupervisionado em geral. Para mostrar a efetividade dos métodos propostos, nós provemos análises experimentais robustas.
Especificamente, nossos experimentos são subdivididos em duas partes. Na primeira parte, avaliamos algoritmos de aprendizado semissupervisionado em grafos existentes em dados de séries temporais para encontrar possíveis fraquezas desses métodos. Na segunda parte, avaliamos os métodos restritos propostos contra seis algoritmos de aprendizado semissupervisionado baseado em grafos do estado da arte em conjuntos de dados benchmark. Como a amplamente usada análise de melhor caso pode esconder informações relevantes sobre o desempenho dos algoritmos de aprendizado semissupervisionado com respeito à seleção de parâmetros, nós usamos modelos de avaliação empírica recentemente propostos para avaliar os nossos resultados. Nossos resultados mostram que os nossos métodos superam os demais métodos na maioria das configurações de parâmetro e métodos de construção de grafos. Entretanto, encontramos algumas configurações experimentais nas quais nossos métodos mostraram baixo desempenho. Para facilitar a reprodução dos nossos resultados, os códigos fonte, conjuntos de dados e resultados experimentais estão disponíveis gratuitamente.
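For readers unfamiliar with the unconstrained baseline this work builds on, a minimal graph-based SSL method can be sketched as harmonic-function label propagation: labeled nodes are clamped and each unlabeled node is repeatedly replaced by the mean of its neighbours, which is the fixed point of a Laplacian-regularized objective. None of the thesis's normalization constraints or higher-order regularizers appear in this sketch.

```python
# Harmonic-function label propagation on an undirected graph.
# Labeled nodes keep their score (+1 / -1); each unlabeled node is
# updated to the average of its neighbours until convergence, i.e.
# the solution of the Laplacian-smoothness problem with clamping.

def propagate(adj, labels, iters=200):
    """adj: {node: [neighbours]}, labels: {node: +1.0 or -1.0}."""
    f = {v: labels.get(v, 0.0) for v in adj}
    for _ in range(iters):
        for v in adj:
            if v not in labels:
                f[v] = sum(f[u] for u in adj[v]) / len(adj[v])
    return f
```

On a path graph with opposite labels at the two ends, the scores interpolate linearly between +1 and -1, the smoothest function consistent with the labels, which is the behaviour the Laplacian regularizer encodes.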
64

Algorithmes distribués d'allocation de ressources dans les réseaux sans fil / Distributed resource allocation algorithms in wireless networks

Akbarzadeh, Sara 20 September 2010 (has links) (PDF)
The full connectivity offered by wireless communication presents a great number of advantages and challenges for the designers of the next generation of wireless networks. One of the main challenges is the interference at the receivers. It is well recognized that this challenge lies in designing resource allocation schemes that offer the best trade-off between efficiency and complexity. Exploring this trade-off requires judicious choices of performance metrics and mathematical models. In this respect, this thesis is devoted to certain technical and mathematical aspects of resource allocation in wireless networks. In particular, we show that efficient resource allocation in wireless networks must take the following parameters into account: (i) the rate of change of the environment, (ii) the traffic model, and (iii) the amount of information available at the transmitters. As mathematical tools, we use optimization theory and game theory. We are particularly interested in distributed resource allocation in networks with slow-fading channels and partial channel information at the transmitters. Transmitters with partial information have exact knowledge of their own channel together with statistical knowledge of the other channels. In such a setting, the system is fundamentally impaired by a non-zero outage probability. We propose low-complexity distributed algorithms for joint rate and power allocation that aim to maximize the individual throughput.
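The role the outage probability plays in rate selection can be illustrated on a single link under slow Rayleigh fading (an assumption of this sketch; the thesis treats multi-user networks, interference, and game-theoretic dynamics, none of which appear here).

```python
# Under slow Rayleigh fading the channel power gain g is exponential
# with unit mean, so a transmitter sending at rate r (bits/s/Hz) with
# power p is in outage when log2(1 + p*g) < r, i.e. with probability
# 1 - exp(-(2**r - 1)/p).  A transmitter knowing only the fading
# statistics can pick the rate that maximizes the expected goodput
# r * (1 - P_out): aiming too high fails too often, too low wastes
# the channel.
import math

def outage_prob(rate, power):
    return 1.0 - math.exp(-(2.0 ** rate - 1.0) / power)

def best_rate(power, rates):
    return max(rates, key=lambda r: r * (1.0 - outage_prob(r, power)))
```

This captures the core tension the thesis's joint rate-and-power algorithms resolve in a distributed, multi-link setting: with only statistical channel knowledge, the optimum deliberately tolerates a non-zero outage probability.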
65

A New Look Into Image Classification: Bootstrap Approach

Ochilov, Shuhratchon January 2012 (has links)
Scene classification is performed on countless remote sensing images in support of operational activities. Automating this process is preferable since manual pixel-level classification is not feasible for large scenes. However, developing such an algorithmic solution is a challenging task due to both scene complexities and sensor limitations. The objective is to develop efficient and accurate unsupervised methods for classification (i.e., assigning each pixel to an appropriate generic class) and for labeling (i.e., properly assigning true labels to each class). Unlike traditional approaches, the proposed bootstrap approach achieves classification and labeling without training data. Here, the full image is partitioned into subimages and the true classes found in each subimage are provided by the user. After these steps, the rest of the process is automatic. Each subimage is individually classified into regions, and then, using the joint information from all subimages and regions, the optimal configuration of labels is found by optimizing an objective function based on a Markov random field (MRF) model. The bootstrap approach has been successfully demonstrated with SAR sea-ice and lake-ice images, which represent challenging scenes used operationally for ship navigation, climate study, and ice fraction estimation. Accuracy assessment is based on evaluation conducted by third-party experts. The bootstrap method is also demonstrated using synthetic and natural images. The impact of this technique is a repeatable and accurate methodology that generates classified maps faster than the standard methodology.
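The MRF-based labeling step can be illustrated with the standard iterated conditional modes (ICM) procedure on a 1-D toy signal; this is a generic sketch of MRF optimization, not the paper's exact energy, neighbourhood, or optimizer.

```python
# ICM for a Potts-style MRF on a 1-D signal: sweep the sites, and set
# each label to whichever value minimizes a data term (disagreement
# with the observation) plus beta times the number of disagreeing
# neighbours.  Isolated noisy labels get smoothed away.

def icm_denoise(obs, n_labels, beta=1.0, sweeps=10):
    """obs: list of noisy integer labels; returns a smoothed labeling."""
    x = obs[:]
    for _ in range(sweeps):
        for i in range(len(x)):
            def energy(lab):
                data = 0 if lab == obs[i] else 1
                smooth = sum(1 for j in (i - 1, i + 1)
                             if 0 <= j < len(x) and x[j] != lab)
                return data + beta * smooth
            x[i] = min(range(n_labels), key=energy)
    return x
```

The same energy trade-off (fidelity to per-region evidence versus spatial consistency) is what the bootstrap approach optimizes jointly across all subimages and regions, on a 2-D pixel lattice rather than a 1-D chain.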
66

Full-waveform inversion in three-dimensional PML-truncated elastic media : theory, computations, and field experiments

Fathi, Arash 03 September 2015 (has links)
We are concerned with the high-fidelity subsurface imaging of the soil, which commonly arises in geotechnical site characterization and geophysical explorations. Specifically, we attempt to image the spatial distribution of the Lamé parameters in semi-infinite, three-dimensional, arbitrarily heterogeneous formations, using surficial measurements of the soil's response to probing elastic waves. We use the complete waveforms of the medium's response to drive the inverse problem. Specifically, we use a partial-differential-equation (PDE)-constrained optimization approach, directly in the time-domain, to minimize the misfit between the observed response of the medium at select measurement locations, and a computed response corresponding to a trial distribution of the Lamé parameters. We discuss strategies that lend algorithmic robustness to the proposed inversion schemes. To limit the computational domain to the size of interest, we employ perfectly matched layers (PMLs). The PML is a buffer zone that surrounds the domain of interest and enforces the decay of outgoing waves. In order to resolve the forward problem, we present a hybrid finite element approach, where a displacement-stress formulation for the PML is coupled to a standard displacement-only formulation for the interior domain, thus leading to a computationally cost-efficient scheme. We discuss several time-integration schemes, including an explicit Runge-Kutta scheme, which is well-suited for large-scale problems on parallel computers. We report numerical results demonstrating stability and efficacy of the forward wave solver, and also provide examples attesting to the successful reconstruction of the two Lamé parameters for both smooth and sharp profiles, using synthetic records. We also report the details of two field experiments, whose records we subsequently used to drive the developed inversion algorithms in order to characterize the sites where the field experiments took place.
We contrast the full-waveform-based inverted site profile against a profile obtained using the Spectral-Analysis-of-Surface-Waves (SASW) method, in an attempt to compare our methodology against a widely used concurrent inversion approach. We also compare the inverted profiles, at select locations, with the results of independently performed, invasive, Cone Penetrometer Tests (CPTs). Overall, whether exercised by synthetic or by physical data, the full-waveform inversion method we discuss herein appears quite promising for the robust subsurface imaging of near-surface deposits in support of geotechnical site characterization investigations.
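The idea of truncating an unbounded domain with an absorbing buffer can be illustrated with a toy 1-D finite-difference wave solver that adds a simple damping ramp near the boundaries. A genuine PML, as used in the thesis, is considerably more elaborate (and reflectionless by construction); the damping profile and all constants below are arbitrary choices for illustration.

```python
# 1-D scalar wave equation u_tt = c^2 u_xx - sigma(x) u_t, where
# sigma is zero in the interior and ramps up quadratically inside
# buffer zones at both ends, so outgoing pulses are attenuated
# instead of reflecting back into the domain of interest.
import math

def simulate(n=200, buf=40, steps=600, c=1.0, dx=1.0, dt=0.5):
    u = [math.exp(-(((i - n // 2) / 5.0) ** 2)) for i in range(n)]
    u_prev = u[:]                        # zero initial velocity: pulse splits in two
    sigma = [0.0] * n
    for i in range(buf):                 # quadratic damping ramp in the buffers
        s = 0.3 * ((buf - i) / buf) ** 2
        sigma[i] = s
        sigma[n - 1 - i] = s
    r2 = (c * dt / dx) ** 2              # squared CFL number (0.25: stable)
    for _ in range(steps):
        u_next = [0.0] * n
        for i in range(1, n - 1):
            lap = u[i - 1] - 2.0 * u[i] + u[i + 1]
            damp = sigma[i] * (u[i] - u_prev[i]) / dt
            u_next[i] = 2.0 * u[i] - u_prev[i] + r2 * lap - dt * dt * damp
        u_prev, u = u, u_next
    return u
```

After the two outgoing pulses traverse the buffers, only a small residue remains in the interior; with hard (undamped) boundaries the pulses would instead bounce back at full amplitude, contaminating any misfit computed at the sensors, which is precisely what PML truncation avoids in the 3-D elastic setting.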
67

Objective-driven discriminative training and adaptation based on an MCE criterion for speech recognition and detection

Shin, Sung-Hwan 13 January 2014 (has links)
Acoustic modeling in state-of-the-art speech recognition systems is commonly based on discriminative criteria. Unlike conventional distribution-estimation paradigms such as maximum a posteriori (MAP) and maximum likelihood (ML) estimation, the most popular discriminative criteria, such as MCE and MPE, aim at direct minimization of the empirical error rate. As recent ASR applications become more diverse, it has been increasingly recognized that realistic applications often require a model that can be optimized for a task-specific goal or a particular scenario beyond the general purposes of the current discriminative criteria. These specific requirements cannot be directly handled by the current discriminative criteria, since their objective is to minimize the overall empirical error rate. In this thesis, we propose novel objective-driven discriminative training and adaptation frameworks, generalized from the minimum classification error (MCE) criterion, for various tasks and scenarios of speech recognition and detection. The proposed frameworks are constructed to formulate new discriminative criteria that satisfy the various requirements of recent ASR applications. In this thesis, each objective required by an application or a developer is directly embedded into the learning criterion. Then, the objective-driven discriminative criterion is used to optimize an acoustic model in order to achieve the required objective. Three task-specific requirements that recent ASR applications often have in practice are taken into account in developing the objective-driven discriminative criteria. First, the issue of individual error minimization in speech recognition is addressed, and we propose a direct minimization algorithm for each error type of speech recognition. Second, a rapid adaptation scenario is embedded into the formulation of discriminative linear transforms under the MCE criterion.
A regularized MCE criterion is proposed to efficiently improve the generalization capability of the MCE estimate in a rapid adaptation scenario. Finally, the particular operating scenario that requires a system model optimized at a given specific operating point is discussed in the context of conventional receiver operating characteristic (ROC) optimization. A constrained discriminative training algorithm that can directly optimize a system model for any particular operating need is proposed. For each of the developed algorithms, we provide an analytical solution and an appropriate optimization procedure.
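The core MCE mechanics, a misclassification measure pushed through a sigmoid so the loss tracks the error count smoothly, can be sketched on a toy prototype classifier (1-D, two classes). The thesis of course works with HMM acoustic models; everything below, including the discriminant and learning rate, is an illustrative stand-in.

```python
# Prototype-based MCE training: class score g_k = -(x - m_k)^2,
# misclassification measure d = g_competitor - g_correct (d > 0 means
# an error), smoothed loss l = sigmoid(d).  Gradient descent on l
# pulls the correct prototype toward the sample and pushes the
# competitor away, weighted by the sigmoid slope near the boundary.
import math

def mce_epoch(samples, m, lr=0.2):
    """One pass over samples [(x, label in {0,1})]; m = [m0, m1]."""
    for x, y in samples:
        g = [-(x - mk) ** 2 for mk in m]            # per-class discriminant
        d = g[1 - y] - g[y]                         # misclassification measure
        s = 1.0 / (1.0 + math.exp(-d))              # smoothed 0/1 loss
        coef = s * (1.0 - s)                        # sigmoid slope
        m[y] += lr * coef * 2 * (x - m[y])          # pull correct prototype in
        m[1 - y] -= lr * coef * 2 * (x - m[1 - y])  # push competitor away
    return m
```

Because the sigmoid slope vanishes for confidently classified samples, training effort concentrates on samples near the decision boundary, the property that distinguishes MCE from plain distribution estimation.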
68

Design of nearly linear-phase recursive digital filters by constrained optimization

Guindon, David Leo 24 December 2007 (has links)
The design of nearly linear-phase recursive digital filters using constrained optimization is investigated. The proposed design technique is expected to be useful in applications where both magnitude and phase response specifications need to be satisfied. The overall constrained optimization method is formulated as a quadratic programming problem based on Newton's method. The objective function, its gradient vector and Hessian matrix, as well as a set of linear constraints, are derived. In this analysis, the independent variables are assumed to be the transfer function coefficients. The filter stability issue and convergence efficiency, as well as a 'real axis attraction' problem, are addressed by integrating the corresponding bounds into the linear constraints of the optimization method. Also, two initialization techniques for providing efficient starting points for the optimization are investigated, and the relation between the zero and pole positions and the group delay is examined. Based on these ideas, a new objective function is formulated in terms of the zeros and poles of the transfer function expressed in polar form and integrated into the optimization process. The coefficient-based and polar-based objective functions are tested and compared, and it is shown that designs using the polar-based objective function produce improved results. Finally, several other modern methods for the design of nearly linear-phase recursive filters are compared with the proposed method. These include an elliptic design combined with an optimal equalization technique that uses a prescribed group delay, an optimal design method with robust stability using conic-quadratic-programming updates, and an unconstrained optimization technique that uses parameterization to guarantee filter stability.
It was found that the proposed method generates similar or improved results in all comparative examples, suggesting that the new method is an attractive alternative for the design of nearly linear-phase recursive filters of orders up to about 30.
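Group delay, the quantity whose deviation from a constant such designs penalize, can be estimated numerically for any recursive filter directly from its coefficients, as the negative derivative of the unwrapped phase of H(e^{jω}). This is only a measurement sketch; the design optimization itself (objective, gradient, Hessian, constraints) is not reproduced here.

```python
# Numerical group delay of H(z) = B(z)/A(z) evaluated on the unit
# circle: tau(w) = -d(arg H)/dw, estimated by central differencing
# the phase, with a wrap correction across the principal-value cut.
import cmath
import math

def freq_response(b, a, w):
    z = cmath.exp(-1j * w)                      # z^-1 on the unit circle
    num = sum(bk * z ** k for k, bk in enumerate(b))
    den = sum(ak * z ** k for k, ak in enumerate(a))
    return num / den

def group_delay(b, a, w, dw=1e-5):
    p1 = cmath.phase(freq_response(b, a, w - dw))
    p2 = cmath.phase(freq_response(b, a, w + dw))
    dphi = p2 - p1
    while dphi > math.pi:                       # unwrap across the branch cut
        dphi -= 2 * math.pi
    while dphi < -math.pi:
        dphi += 2 * math.pi
    return -dphi / (2 * dw)
```

A linear-phase design is one for which this function is (nearly) constant over the passband, so a natural objective term is the squared deviation of τ(ω) from a prescribed constant delay on a grid of passband frequencies.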
70

An integrated evolutionary system for solving optimization problems

Barkat Ullah, Abu Saleh Shah Muhammad, Engineering & Information Technology, Australian Defence Force Academy, UNSW January 2009 (has links)
Many real-world decision processes require solving optimization problems which may involve different types of constraints, such as inequality and equality constraints. The hurdles in solving these Constrained Optimization Problems (COPs) arise from the challenge of searching a huge variable space in order to locate feasible points with acceptable solution quality. Over the last few decades, Evolutionary Algorithms (EAs) have brought tremendous advancement to the area of computer science and optimization with their ability to solve various problems. However, EAs have inherent difficulty in dealing with constraints when solving COPs. This thesis presents a new Agent-based Memetic Algorithm (AMA) for solving COPs, where the agents have the ability to independently select a suitable Life Span Learning Process (LSLP) from a set of LSLPs. Each agent represents a candidate solution of the optimization problem and tries to improve its solution through cooperation with other agents. The evolutionary operators consist only of crossover and one of the self-adaptively selected LSLPs. The performance of the proposed algorithm is tested on benchmark problems, and the experimental results show convincing performance. The quality of individuals in the initial population influences the performance of evolutionary algorithms, especially when the feasible region of a constrained optimization problem is very tiny in comparison to the entire search space. This thesis proposes a method that improves the quality of randomly generated initial solutions while sacrificing very little population diversity. The proposed Search Space Reduction Technique (SSRT) is tested using five different existing EAs, including AMA, by solving a number of state-of-the-art test problems and a real-world case problem. The experimental results show that SSRT improves the solution quality and speeds up the performance of the algorithms.
The handling of equality constraints has long been a difficult issue for evolutionary optimization methods, although several methods are available in the literature for handling functional constraints. In any optimization problem with equality constraints, to satisfy the conditions of feasibility and optimality the solution points must lie on each and every equality constraint. This reduces the size of the feasible space and makes it difficult for EAs to locate feasible and optimal solutions. A new Equality Constraint Handling Technique (ECHT) is presented in this thesis to enhance the performance of AMA in solving constrained optimization problems with equality constraints. The basic concept is for a selected individual solution to move from its current position to a point on the equality constraint and then explore the constraint landscape. The technique is used as an agent learning process in AMA. The experimental results confirm the improved performance of the proposed algorithm. This thesis also proposes a Modified Genetic Algorithm (MGA) for solving COPs with equality constraints. Following the inspiring performance of this technique within AMA in dealing with equality constraints, it is also used in the design of MGA. The experimental results show that the proposed algorithm overcomes the limitations of GA in solving COPs with equality constraints and provides good-quality solutions.
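The idea of reaching a point on the equality constraint and then exploring along it can be sketched with a miniature memetic loop. This is an illustration under simplified assumptions (a single hand-rolled local search standing in for the agents' selectable LSLPs, and an invented toy problem: minimize x² + y² subject to x + y = 1, whose optimum is 0.5 at x = y = 0.5).

```python
# Miniature memetic loop in the spirit of the ECHT idea: offspring are
# produced by arithmetic crossover, projected onto the equality
# constraint x + y = 1 (the "reach the constraint" step), then refined
# by a local random search that moves only along the constraint (the
# "explore the constraint landscape" step).
import random

def project(x, y):
    """Nearest point on the constraint x + y = 1."""
    d = (x + y - 1.0) / 2.0
    return x - d, y - d

def fitness(x, y):
    return x * x + y * y

def memetic(pop_size=20, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [project(rng.uniform(-5, 5), rng.uniform(-5, 5))
           for _ in range(pop_size)]
    for _ in range(gens):
        a, b = rng.sample(pop, 2)
        w = rng.random()                               # arithmetic crossover
        child = project(w * a[0] + (1 - w) * b[0],
                        w * a[1] + (1 - w) * b[1])
        for _ in range(10):                            # local search on the constraint
            step = rng.gauss(0.0, 0.1)
            trial = (child[0] + step, child[1] - step)  # stays on x + y = 1
            if fitness(*trial) < fitness(*child):
                child = trial
        worst = max(range(pop_size), key=lambda i: fitness(*pop[i]))
        if fitness(*child) < fitness(*pop[worst]):
            pop[worst] = child
    return min(pop, key=lambda p: fitness(*p))
```

Because every individual is kept exactly on the constraint, the search never wastes effort on the (measure-zero) infeasibility problem that plain penalty-based EAs struggle with on equality constraints, which is the motivation behind ECHT.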
