
Parallel paradigms in optimal structural design

Van Huyssteen, Salomon Stephanus, 2011
Thesis (MScEng)--Stellenbosch University, 2011. / Modern-day processors are not getting any faster: the power consumption limits of frequency scaling mean that parallel processing is increasingly used to decrease computation time. In this thesis, several parallel paradigms are used to improve the performance of commonly serial SAO (sequential approximate optimization) programs. Four novelties are discussed. First, double precision solvers are replaced with single precision solvers, in an attempt to exploit the anticipated factor-2 speed advantage of single precision computations over double precision ones. However, single precision routines exhibit unpredictable performance and struggle to converge to the required accuracies, which is unfavourable for optimization solvers. Second, QP and dual statements are pitted against one another in a parallel environment, because it is not always easy to see a priori which will perform best: both are started in parallel and the competing threads are cancelled as soon as one returns a valid point. Parallel QP vs. dual statements prove very attractive, converging within the minimum number of outer iterations, and the most appropriate solver is selected as the problem properties change during the iteration steps. Thread cancellation poses problems, however: threads must wait to arrive at appropriate checkpoints, so the winning routine can suffer unnecessarily long wait times because of a struggling competitor. Third, multiple global searches are started in parallel on a shared-memory system, giving a speedup of nearly 4x for all problems; dynamically scheduled threads remove the need for fixed thread counts, as required in message passing implementations. Lastly, replacing existing matrix-vector multiplication routines with optimized BLAS routines, especially BLAS routines targeted at GPGPU technologies (graphics processing units), proves superior when solving large matrix-vector products in an iterative environment. These problems scale well within the hardware capabilities, and speedups of up to 36x are recorded.
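
To make the "QP vs. dual" race concrete, here is a minimal sketch (not the thesis code) of starting both subproblem solvers in parallel and cancelling the loser; the solver bodies and the validity check are placeholders, and only the racing/cancellation pattern is illustrated.

```python
# Race two subproblem solvers; first valid result wins, competitor is cancelled.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def solve_qp(data):
    # placeholder for a QP subproblem solve
    return {"x": data, "valid": True}

def solve_dual(data):
    # placeholder for a dual subproblem solve
    return {"x": data, "valid": True}

def race_solvers(data):
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = {pool.submit(s, data) for s in (solve_qp, solve_dual)}
        while pending:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                result = fut.result()
                if result["valid"]:          # e.g. a KKT/feasibility check
                    for other in pending:
                        other.cancel()       # a running thread can only stop at
                    return result            # its next checkpoint, hence the
        return None                          # waiting problem noted above

print(race_solvers([1.0, 2.0]))
```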

Trajectory generation for autonomous unmanned aircraft using inverse dynamics

Drury, R. G. January 2010
The problem addressed in this research is the in-flight generation of trajectories for autonomous unmanned aircraft, which requires a method of generating pseudo-optimal trajectories in near-real-time, on board the aircraft, and without external intervention. The focus of this research is the enhancement of a particular inverse dynamics direct method that is a candidate solution to the problem. This research introduces the following contributions to the method. A quaternion-based inverse dynamics model is introduced that represents all orientations without singularities, permits smooth interpolation of orientations, and generates more accurate controls than the previous Euler-angle model. Algorithmic modifications are introduced that: overcome singularities arising from parameterization and discretization; combine analytic and finite difference expressions to improve the accuracy of controls and constraints; remove roll ill-conditioning when the normal load factor is near zero; and extend the method to handle negative-g orientations. It is also shown that quadratic interpolation improves the accuracy and speed of constraint evaluation. The method is known to lead to a multimodal constrained nonlinear optimization problem. Its performance with four nonlinear programming algorithms was investigated: a differential evolution algorithm was found to be capable of over 99% successful convergence and to generate solutions with better optimality than the quasi-Newton and derivative-free algorithms against which it was tested, but to be up to an order of magnitude slower than those algorithms. The effects of the degree and form of the polynomial airspeed parameterization on optimization performance were investigated, and results were obtained that quantify the achievable optimality as a function of the parameterization degree. Overall, the method was found to be a potentially viable means of on-board, near-real-time trajectory generation for unmanned aircraft, but for this potential to be realized in practice, further improvements in computational speed are desirable. Candidate optimization strategies are identified for future research.
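
The quaternion advantage the abstract mentions is easiest to see in interpolation. Below is a generic textbook quaternion slerp, an illustration rather than the thesis' model: it interpolates between two orientations smoothly and without the singularities of Euler angles.

```python
# Spherical linear interpolation (slerp) between two unit quaternions.
import numpy as np

def slerp(q0, q1, t):
    """Interpolate unit quaternions q0 -> q1 at parameter t in [0, 1]."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                # take the shorter great-circle arc
        q1, dot = -q1, -dot
    if dot > 0.9995:             # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

q_level = np.array([1.0, 0.0, 0.0, 0.0])                            # level flight
q_banked = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8), 0.0, 0.0])  # 45-degree roll
print(slerp(q_level, q_banked, 0.5))                                # halfway orientation
```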

Statistical and numerical optimization for speckle blind structured illumination microscopy

Liu, Penghuan 25 May 2018
Conventional structured illumination microscopy (SIM) can surpass the diffraction-limited resolution of optical microscopy by illuminating the object with a set of perfectly known harmonic patterns. However, controlling the illumination patterns is a difficult task. Worse, strong distortions of the light grid can be induced by the sample within the investigated volume, which may give rise to strong artifacts in SIM reconstructed images. Recently, blind-SIM strategies were proposed, where images are acquired through unknown, non-harmonic, speckle illumination patterns, which are much easier to generate in practice. The super-resolution capacity of such approaches has been observed, although it was not well understood theoretically. This thesis presents two new reconstruction methods in SIM using unknown speckle patterns (blind-speckle-SIM): a joint reconstruction approach and a marginal reconstruction approach. In the joint reconstruction approach, the object and the speckle patterns are estimated together using a basis pursuit denoising (BPDN) model with lp,q-norm regularization, where p >= 1 and 0 < q <= 1; the lp,q-norm is introduced based on a sparsity assumption on the object. In the marginal approach, only the object is reconstructed, while the unknown speckle patterns are treated as nuisance parameters. Our contribution is twofold. First, a theoretical analysis demonstrates that using the second-order statistics of the data, blind-speckle-SIM yields a super-resolution factor of two, provided that the support of the speckle spectral density equals the frequency support of the microscope point spread function. Second, the numerical implementation is addressed: to reduce the computational burden and memory requirements of the marginal approach, a patch-based marginal estimator is proposed, whose key idea is to neglect the correlation between pixels belonging to different patches. Simulation results and experiments with real data demonstrate the super-resolution capacity of our methods. Moreover, the proposed methods apply not only to 2D super-resolution problems with thin samples, but also to 3D imaging problems with thick samples.
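
For readers unfamiliar with the lp,q-regularized BPDN model mentioned above, a generic form is sketched below; the symbols (y_m for the m-th raw image, H for the microscope's observation operator, rho for the object, I_m for the m-th speckle pattern, G_j for the coefficient groups of the mixed norm) are our notation and a sketch of the general technique, not necessarily the thesis' exact formulation.

```latex
\min_{\rho \ge 0,\;\{I_m\}} \;
  \frac{1}{2} \sum_{m=1}^{M} \bigl\| y_m - H(\rho \cdot I_m) \bigr\|_2^2
  \;+\; \lambda \, \|\rho\|_{p,q},
\qquad p \ge 1,\quad 0 < q \le 1,
\quad \text{where } \;
\|\rho\|_{p,q} = \Bigl( \sum_j \Bigl( \sum_{i \in G_j} |\rho_i|^p \Bigr)^{q/p} \Bigr)^{1/q}.
```

With q <= 1 the mixed norm promotes sparsity across groups, which is what encodes the sparsity assumption on the object.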

Memetic and evolutionary algorithms in numerical optimization and nonlinear dynamics

Πεταλάς, Ιωάννης 18 September 2008
The main objective of this thesis is the study of Evolutionary Algorithms. In the first part, Memetic Algorithms are introduced: hybrid schemes that combine Evolutionary Algorithms with local search methods. Memetic Algorithms were compared to Evolutionary Algorithms on a variety of global optimization problems and showed better performance. In the second part, problems from nonlinear dynamics are studied: the estimation of the stability region of conservative maps, the detection of resonances, and the computation of periodic orbits. The results were satisfactory.
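
A memetic algorithm is, in essence, an evolutionary loop whose offspring are refined by local search. The sketch below illustrates only that structure; the recombination, mutation, and hill-climbing choices are illustrative stand-ins, not the specific schemes studied in the thesis.

```python
# Minimal memetic algorithm: evolutionary loop + local refinement of children.
import random

def local_search(x, f, step=0.1, iters=20):
    for _ in range(iters):                     # simple coordinate hill-climb
        for i in range(len(x)):
            for d in (-step, step):
                y = x[:]; y[i] += d
                if f(y) < f(x):
                    x = y
    return x

def memetic(f, dim=5, pop=20, gens=50, bounds=(-5, 5)):
    P = [[random.uniform(*bounds) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        children = []
        for _ in range(pop):
            a, b = random.sample(P, 2)         # crossover + Gaussian mutation
            child = [(ai + bi) / 2 + random.gauss(0, 0.1) for ai, bi in zip(a, b)]
            children.append(local_search(child, f))   # the "memetic" refinement
        P = sorted(P + children, key=f)[:pop]         # (mu + lambda) survival
    return P[0]

sphere = lambda x: sum(v * v for v in x)
print(memetic(sphere))   # approaches the global minimum at the origin
```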

Two Optimization Problems in Genetics: Multi-dimensional QTL Analysis and Haplotype Inference

Nettelblad, Carl January 2012
The existence of new technologies, implemented in efficient platforms and workflows, has made massive genotyping available to all fields of biology and medicine. Genetic analyses are no longer dominated by experimental work in laboratories, but rather by the interpretation of the resulting data. When billions of data points representing thousands of individuals are available, efficient computational tools are required. The focus of this thesis is on developing models, methods and implementations for such tools. The first theme of the thesis is multi-dimensional scans for quantitative trait loci (QTL) in experimental crosses. By mating individuals from different lines, it is possible to gather data that can be used to pinpoint the genetic variation that influences specific traits to specific genome loci. However, it is natural to expect multiple genes influencing a single trait to interact. The thesis discusses model structure and model selection, giving new insight into the conditions under which orthogonal models can be devised. The thesis also presents a new optimization method for efficiently and accurately locating QTL, and for performing the permuted data searches needed for significance testing. This method has been implemented in a software package that can seamlessly perform the searches on grid computing infrastructures. The other theme of the thesis is the development of adapted optimization schemes for using hidden Markov models in tracing allele inheritance pathways, and specifically in inferring haplotypes. The advances presented form the basis for more accurate and unbiased line-origin probabilities in experimental crosses, especially multi-generational ones. We show that the new tools are able to reconstruct haplotypes, and even genotypes, in founder individuals and offspring alike, based only on unordered offspring genotypes. The tools can also handle larger populations than competing methods, resolving inheritance pathways and phase in much larger and more complex populations. Finally, the methods presented are also applicable to datasets where individual relationships are not known, as is frequently the case in human genetics studies. One immediate application would be improved accuracy in the imputation of SNP markers within genome-wide association studies (GWAS).
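
The hidden Markov machinery behind tracing allele inheritance can be illustrated with a standard forward pass, where the hidden state is the (unobserved) line origin of an allele at each marker locus and the emissions are observed genotype data. The transition and emission numbers below are invented for illustration; the thesis' models are considerably richer.

```python
# Forward algorithm: likelihood of an observed marker sequence under an HMM.
import numpy as np

def forward(init, trans, emit, obs):
    """init: (S,), trans: (S,S), emit: (S,O), obs: list of observation ids."""
    alpha = init * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]   # propagate, then weight by emission
    return alpha.sum()                         # total likelihood of the sequence

init  = np.array([0.5, 0.5])                    # founder line A or B, equally likely
trans = np.array([[0.98, 0.02], [0.02, 0.98]])  # small recombination chance per interval
emit  = np.array([[0.9, 0.1], [0.2, 0.8]])      # P(observed allele | line origin)
print(forward(init, trans, emit, obs=[0, 0, 1, 0]))
```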

Development of unimodal and multimodal optimization algorithms based on multi-gene genetic programming

Povoa, Rogerio Cortez Brito Leite 29 August 2018
Genetic programming techniques allow flexibility in the optimization process, making them applicable to different areas of knowledge and providing new ways for specialists to advance in their areas more quickly and more accurately. The parameter mapping approach is a numerical optimization method that uses genetic programming to find an appropriate mapping scheme from initial guesses to optimal parameters for a system. Although this approach yields good results for problems with trivial solutions, large equations/trees may be required to make the mapping appropriate for more complex systems. In order to increase the flexibility and applicability of the method to systems of different levels of complexity, this thesis introduces a generalization that uses multi-gene genetic programming to perform a multivariate mapping, avoiding large complex structures. Three sets of benchmark functions, varying in complexity and dimensionality, were considered. Statistical analyses suggest that the new method is more flexible and performs better on average on challenging benchmark functions of increasing dimensionality. The thesis also presents an extension of the new method to multimodal numerical optimization. This second algorithm uses niching techniques based on the clearing procedure to maintain population diversity, and it was evaluated on a multimodal benchmark set with different characteristics and difficulty levels. Statistical analysis suggests that this new multimodal method using multi-gene genetic programming can be applied to problems that require more than a single solution. As a test of these methods on a real-world problem, an application in nanotechnology is proposed: the structural optimization of quantum well infrared photodetectors for a desired energy. The results yield new structures better than those known in the literature, with an improvement of 59.09 percent.
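
The clearing procedure referred to above can be sketched in a few lines: within each niche radius, only the best individuals (up to a capacity) keep their fitness, and the rest are cleared so several optima can be maintained at once. The radius and capacity below are illustrative choices, not the thesis' settings.

```python
# Clearing: keep at most `capacity` winners per niche of radius `radius`.
import math

def clearing(pop, fitness, radius=0.5, capacity=1):
    """pop: list of vectors; fitness: parallel list (higher is better).
    Returns cleared fitness values (cleared individuals get -inf)."""
    order = sorted(range(len(pop)), key=lambda i: fitness[i], reverse=True)
    cleared = list(fitness)
    winners = []
    for i in order:                              # visit in fitness order
        near = [w for w in winners if math.dist(pop[i], pop[w]) < radius]
        if len(near) >= capacity:
            cleared[i] = float("-inf")           # dominated inside an occupied niche
        else:
            winners.append(i)                    # niche master (or within capacity)
    return cleared

pop = [[0.0], [0.1], [2.0], [2.05]]
fit = [1.0, 0.9, 0.8, 0.95]
print(clearing(pop, fit))                        # one survivor per niche
```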

Computational methods in single molecule localization microscopy

Ovesný, Martin January 2016
Fluorescence microscopy is one of the chief tools in biomedical research, as it is a non-invasive, non-destructive, and highly specific imaging method. Unfortunately, an optical microscope is a diffraction-limited system: the maximum achievable spatial resolution is approximately 250 nm laterally and 500 nm axially. Since most of the cellular structures researchers are interested in are smaller than that, increasing the resolution is of prime importance. In recent years, several methods for imaging beyond the diffraction barrier have been developed. One of them is single molecule localization microscopy, a powerful method reported to resolve details as small as 5 nm. This approach to fluorescence microscopy is very computationally intensive. Developing methods to analyze single molecule data and to obtain super-resolution images are the topics of this thesis. In localization microscopy, a super-resolution image is reconstructed from a long sequence of conventional images of sparsely distributed single photoswitchable molecules that need to be systematically localized with sub-diffraction precision. We designed, implemented, and experimentally verified a set of methods for automated processing, analysis and visualization of data acquired...
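
The core computational step in localization microscopy is fitting each diffraction-limited spot to recover its centre with sub-pixel precision. The following is a generic 2-D Gaussian least-squares fit on a simulated spot, an illustration of the principle rather than the pipeline developed in the thesis.

```python
# Localize one simulated fluorophore by fitting a 2-D Gaussian to its image.
import numpy as np
from scipy.optimize import least_squares

def gauss2d(p, X, Y):
    A, x0, y0, s, b = p   # amplitude, centre, width, background
    return A * np.exp(-((X - x0)**2 + (Y - y0)**2) / (2 * s**2)) + b

Y, X = np.mgrid[0:11, 0:11]
truth = (100.0, 5.3, 4.7, 1.5, 10.0)
rng = np.random.default_rng(0)
spot = rng.poisson(gauss2d(truth, X, Y))         # photon-noise image of one molecule

res = least_squares(lambda p: (gauss2d(p, X, Y) - spot).ravel(),
                    x0=[80, 5, 5, 2, 5])
print(res.x[1:3])   # recovered centre (x0, y0), well below one pixel in error
```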

Contributions to image restoration: from numerical optimization strategies to blind deconvolution and shift-variant deblurring

Mourya, Rahul Kumar 01 February 2016
Degradation during the acquisition process is inevitable: images suffer from blur and noise. With advances in technology and computational tools, these degradations can be avoided or corrected to a significant extent; however, the quality of acquired images is still not adequate for many applications. This calls for the development of more sophisticated digital image restoration tools. This thesis is a contribution to image restoration. It is divided into five chapters, each including a detailed discussion of a different aspect of image restoration. It starts with a generic overview of imaging systems and points out the possible degradations occurring in images, together with their fundamental causes. In some cases the blur can be considered stationary throughout the field of view, and it can then simply be modeled as a convolution. In many practical cases, however, the blur varies throughout the field of view, and modeling it is not simple given the trade-off between accuracy and computational effort. The first part of this thesis presents a detailed discussion of the modeling of shift-variant blur and its fast approximations, and then describes a generic image formation model. Subsequently, the thesis shows how an image restoration problem can be seen as a Bayesian inference problem, and how it then turns into a large-scale numerical optimization problem. The second part of the thesis thus considers a generic optimization problem, applicable to many domains, and proposes a class of new optimization algorithms for solving inverse problems in imaging. The proposed algorithms are as fast as the state-of-the-art algorithms (as verified by several numerical experiments), but without the hassle of parameter tuning, which is a great relief for users. The third part of the thesis presents an in-depth discussion of the shift-invariant blind image deblurring problem, suggesting different ways to reduce its ill-posedness, and then proposes a blind image deblurring method that uses an image decomposition for the restoration of astronomical images. The proposed method is based on an alternating estimation approach; restoration results on synthetic astronomical scenes are promising, suggesting that the method is a good candidate for astronomical applications after certain modifications and improvements. The last part of the thesis extends the ideas of the shift-variant blur model presented in the first part, giving a detailed description of a flexible approximation of shift-variant blur, with its implementation aspects and computational cost. It presents a shift-variant image deblurring method with illustrations on synthetically blurred images, and then shows how the characteristics of shift-variant blur due to optical aberrations can be exploited for PSF estimation. It describes a PSF calibration method for a simple experimental camera suffering from optical aberration, and then shows results of shift-variant deblurring of images captured by the same camera. The results are promising and suggest that the two steps can be combined to achieve shift-variant blind image deblurring, the long-term goal of this thesis. The thesis ends with conclusions and suggestions for future work.
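
A common fast approximation of shift-variant blur, of the kind discussed in the first part of the thesis, writes the blurred image as a weighted sum of a few shift-invariant convolutions with local PSFs. The sketch below uses toy Gaussian PSFs and a linear interpolation mask; the actual PSFs, weights, and decomposition are problem-dependent.

```python
# Shift-variant blur approximated as a weighted sum of shift-invariant blurs.
import numpy as np
from scipy.ndimage import gaussian_filter

def shift_variant_blur(img, sigmas=(1.0, 3.0)):
    h, w = img.shape
    ramp = np.linspace(0.0, 1.0, w)[None, :] * np.ones((h, 1))
    weights = [1.0 - ramp, ramp]                 # smooth interpolation masks
    return sum(wk * gaussian_filter(img, sk)     # one convolution per local PSF
               for wk, sk in zip(weights, sigmas))

img = np.zeros((64, 64)); img[::8, ::8] = 1.0    # grid of point sources
blurred = shift_variant_blur(img)                # blur grows from left to right
print(blurred.shape)
```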

Regularized primal-dual methods for nonlinearly constrained optimization

Omheni, Riadh 14 November 2014
This thesis focuses on the design, analysis, and implementation of efficient and reliable algorithms for solving nonlinearly constrained optimization problems. We present three new strongly primal-dual algorithms to solve such problems. The first feature of these algorithms is that the iterates are controlled in both the primal and dual spaces throughout the minimization process, hence the name "strongly primal-dual". In particular, globalization is performed by a backtracking line search based on a primal-dual merit function. The second feature is the introduction of a natural regularization of the linear system solved at each iteration to compute a descent direction; this allows our algorithms to perform well on degenerate problems for which the Jacobian of the constraints is rank deficient. The third feature is that the penalty parameter is allowed to increase along the inner iterations, whereas it is usually kept constant; this reduces the number of inner iterations. A detailed theoretical study, including a global convergence analysis of both the inner and outer iterations as well as an asymptotic convergence analysis, is presented for each algorithm. In particular, we prove that these methods converge at a fast rate: superlinear or quadratic. The algorithms are implemented in a new nonlinear optimization solver called SPDOPT. The good practical performance of this solver is demonstrated by comparing it to the reference codes IPOPT, ALGENCAN and LANCELOT on a large collection of test problems.
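
The "natural regularization of the linear system" mentioned above can be sketched in the following generic form for min f(x) subject to c(x) = 0, with multipliers y and Lagrangian L(x, y) = f(x) + c(x)^T y; the parameters theta, delta > 0 are generic regularizers illustrating the technique, not necessarily the thesis' exact formulation.

```latex
\begin{pmatrix}
  \nabla_{xx}^{2} L(x,y) + \theta I & \nabla c(x) \\[2pt]
  \nabla c(x)^{\top} & -\delta I
\end{pmatrix}
\begin{pmatrix} d_x \\ d_y \end{pmatrix}
= -
\begin{pmatrix} \nabla f(x) + \nabla c(x)\, y \\ c(x) \end{pmatrix}
```

The -delta I block is what keeps the system nonsingular when the constraint Jacobian loses rank, which is how this style of regularization copes with the degenerate problems mentioned in the abstract.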
