41.
Memetic and evolutionary algorithms in numerical optimization and nonlinear dynamics
Πεταλάς, Ιωάννης, 18 September 2008 (has links)
The main objective of the thesis was the study of Evolutionary Algorithms. In the first part, Memetic Algorithms are introduced: hybrid schemes that combine Evolutionary Algorithms with local search methods. Memetic Algorithms were compared to Evolutionary Algorithms on a variety of global optimization problems and achieved better performance. In the second part, problems from nonlinear dynamics were studied: the estimation of the stability region of conservative maps, the detection of resonances, and the computation of periodic orbits. The results were satisfactory.
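To make the memetic idea concrete, here is a minimal sketch of an evolutionary loop with a local search step applied to offspring. The sphere objective, population size, and hill-climbing refinement are illustrative assumptions, not the algorithms actually studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Illustrative test objective (an assumption, not from the thesis).
    return float(np.sum(x**2))

def local_search(x, f, step=0.1, iters=20):
    # Simple stochastic hill climbing: keep perturbations that improve f.
    best, fbest = x.copy(), f(x)
    for _ in range(iters):
        cand = best + rng.normal(0.0, step, size=best.shape)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best

def memetic_minimize(f, dim=5, pop=20, gens=50):
    # Evolutionary loop with a local refinement ("meme") applied to children.
    P = rng.uniform(-5.0, 5.0, size=(pop, dim))
    for _ in range(gens):
        fit = np.array([f(x) for x in P])
        parents = P[np.argsort(fit)[: pop // 2]]                  # truncation selection
        children = parents + rng.normal(0.0, 0.5, parents.shape)  # Gaussian mutation
        children = np.array([local_search(c, f) for c in children])
        P = np.vstack([parents, children])
    fit = np.array([f(x) for x in P])
    return P[np.argmin(fit)], float(fit.min())

best_x, best_f = memetic_minimize(sphere)
print(best_f)
```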
42.
Two Optimization Problems in Genetics: Multi-dimensional QTL Analysis and Haplotype Inference
Nettelblad, Carl, January 2012 (has links)
New technologies, implemented in efficient platforms and workflows, have made massive genotyping available to all fields of biology and medicine. Genetic analyses are no longer dominated by experimental work in laboratories, but rather by the interpretation of the resulting data. When billions of data points representing thousands of individuals are available, efficient computational tools are required. The focus of this thesis is on developing models, methods and implementations for such tools. The first theme of the thesis is multi-dimensional scans for quantitative trait loci (QTL) in experimental crosses. By mating individuals from different lines, it is possible to gather data that can be used to pinpoint the genetic variation that influences specific traits to specific genome loci. However, it is natural to expect multiple genes influencing a single trait to interact. The thesis discusses model structure and model selection, giving new insight into the conditions under which orthogonal models can be devised. The thesis also presents a new optimization method for efficiently and accurately locating QTL, and for performing the permuted-data searches needed for significance testing. This method has been implemented in a software package that can seamlessly perform the searches on grid computing infrastructures. The other theme of the thesis is the development of adapted optimization schemes for using hidden Markov models to trace allele inheritance pathways, and specifically to infer haplotypes. The advances presented form the basis for more accurate and unbiased line-origin probabilities in experimental crosses, especially multi-generational ones. We show that the new tools are able to reconstruct haplotypes and even genotypes in founder individuals and offspring alike, based only on unordered offspring genotypes. The tools can also handle larger populations than competing methods, resolving inheritance pathways and phase in much larger and more complex populations. Finally, the methods presented are also applicable to datasets where individual relationships are not known, which is frequently the case in human genetics studies. One immediate application would be improved accuracy when imputing SNP markers within genome-wide association studies (GWAS). / eSSENCE
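The permuted-data searches mentioned above can be illustrated with a small sketch of a permutation test for a genome scan. The marker-trait scan statistic below (a squared correlation standing in for a LOD score) and the toy data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_scan_statistic(phenotype, genotypes):
    # Placeholder single-QTL scan: squared correlation between the trait
    # and each marker, maximized over the genome (stand-in for a LOD score).
    y = phenotype - phenotype.mean()
    stats = [np.corrcoef(y, g)[0, 1] ** 2 for g in genotypes.T]
    return max(stats)

def permutation_threshold(phenotype, genotypes, n_perm=1000, alpha=0.05):
    # Permute phenotypes to break the trait-genotype association while
    # preserving the marker correlation structure; the (1 - alpha) quantile
    # of the null maxima gives a genome-wide significance threshold.
    null = [max_scan_statistic(rng.permutation(phenotype), genotypes)
            for _ in range(n_perm)]
    return np.quantile(null, 1.0 - alpha)

# Toy data: 200 individuals, 50 markers coded 0/1/2, one true QTL at marker 10.
G = rng.integers(0, 3, size=(200, 50)).astype(float)
y = 0.8 * G[:, 10] + rng.normal(size=200)
print(max_scan_statistic(y, G), permutation_threshold(y, G, n_perm=200))
```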
43.
Development of unimodal and multimodal optimization algorithms based on multi-gene genetic programming
ROGERIO CORTEZ BRITO LEITE POVOA, 29 August 2018 (has links)
Genetic programming techniques allow flexibility in the optimization process, making it possible to apply them in different areas of knowledge and providing new ways for specialists to advance in their fields more quickly and more accurately. The parameter mapping approach is a numerical optimization method that uses genetic programming to find an appropriate mapping scheme from initial guesses to optimal parameters for a system. Although this approach yields good results for problems with trivial solutions, large equations/trees may be required to make the mapping appropriate for more complex systems. In order to increase the flexibility and applicability of the method to systems of different levels of complexity, this thesis introduces a generalization that uses multi-gene genetic programming to perform a multivariate mapping, avoiding large complex structures. Three sets of benchmark functions, varying in complexity and dimensionality, were considered. Statistical analyses suggest that the new method is more flexible and performs better on average, particularly on challenging benchmark functions of increasing dimensionality. The thesis also presents an extension of the new method to multimodal numerical optimization. This second algorithm uses niching techniques based on the clearing procedure to maintain population diversity. A multimodal benchmark set with different characteristics and difficulty levels was used to evaluate the algorithm. Statistical analysis suggested that this multimodal method, which also uses multi-gene genetic programming, can be applied to problems that require more than a single solution. As a way of testing these methods on real-world problems, an application in nanotechnology is proposed: the structural optimization of quantum well infrared photodetectors for a desired energy. The results yield new structures better than those known in the literature, with an improvement of 59.09 percent.
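As a rough illustration of the clearing procedure used by the multimodal algorithm, the following sketch resets the fitness of surplus individuals inside each niche so that selection spreads across niches. The niche radius and capacity values are assumptions; the thesis's actual configuration may differ.

```python
import numpy as np

def clearing(pop, fit, radius=0.5, capacity=1):
    # Clearing niching: the best `capacity` individuals of each niche keep
    # their fitness; the rest within the niche radius are "cleared"
    # (fitness set to -inf), preserving population diversity (maximization).
    order = np.argsort(-fit)                 # best individuals first
    out = fit.astype(float).copy()
    centers = []                             # (center index, remaining capacity)
    for i in order:
        assigned = False
        for k, (c, cap) in enumerate(centers):
            if np.linalg.norm(pop[i] - pop[c]) < radius:
                assigned = True
                if cap > 0:
                    centers[k] = (c, cap - 1)
                else:
                    out[i] = -np.inf         # surplus individual is cleared
                break
        if not assigned:
            centers.append((i, capacity - 1))
    return out

pop = np.array([[0.0], [0.1], [2.0], [2.05], [5.0]])
fit = np.array([3.0, 2.5, 4.0, 3.9, 1.0])
print(clearing(pop, fit))   # -> [3.0, -inf, 4.0, -inf, 1.0]: one winner per niche
```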
44.
Computational methods in single molecule localization microscopy
Ovesný, Martin, January 2016 (has links)
Fluorescence microscopy is one of the chief tools used in biomedical research, as it is a non-invasive, non-destructive, and highly specific imaging method. Unfortunately, an optical microscope is a diffraction-limited system: the maximum achievable spatial resolution is approximately 250 nm laterally and 500 nm axially. Since most of the structures in cells that researchers are interested in are smaller than that, increasing resolution is of prime importance. In recent years, several methods for imaging beyond the diffraction barrier have been developed. One of them is single molecule localization microscopy, a powerful method reported to resolve details as small as 5 nm. This approach to fluorescence microscopy is very computationally intensive. Developing methods to analyze single molecule data and to obtain super-resolution images are the topics of this thesis. In localization microscopy, a super-resolution image is reconstructed from a long sequence of conventional images of sparsely distributed single photoswitchable molecules that need to be systematically localized with sub-diffraction precision. We designed, implemented, and experimentally verified a set of methods for automated processing, analysis and visualization of data acquired...
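The core localization step can be sketched as a least-squares fit of a symmetric 2-D Gaussian approximation of the PSF to a small camera region; real pipelines add detection, filtering, drift correction, and more sophisticated estimators. All parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def gauss2d(params, X, Y):
    x0, y0, sigma, A, b = params
    return A * np.exp(-((X - x0)**2 + (Y - y0)**2) / (2 * sigma**2)) + b

def localize(roi):
    # Fit a symmetric 2-D Gaussian PSF model to a small region of interest;
    # the refined (x0, y0) is the molecule position with sub-pixel
    # (hence sub-diffraction) precision.
    n = roi.shape[0]
    Y, X = np.mgrid[0:n, 0:n]
    p0 = [n / 2, n / 2, 1.5, roi.max() - roi.min(), roi.min()]
    res = least_squares(lambda p: (gauss2d(p, X, Y) - roi).ravel(), p0)
    return res.x[0], res.x[1]

# Toy molecule at (6.3, 4.7) on an 11x11 pixel ROI with Poisson noise.
rng = np.random.default_rng(2)
Y, X = np.mgrid[0:11, 0:11]
truth = gauss2d([6.3, 4.7, 1.5, 100.0, 10.0], X, Y)
print(localize(rng.poisson(truth).astype(float)))
```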
45.
Contributions to image restoration: from numerical optimization strategies to blind deconvolution and shift-variant deblurring
Mourya, Rahul Kumar, 01 February 2016 (has links)
Degradation of images during the acquisition process is inevitable: images suffer from blur and noise. With advances in technology and computational tools, these degradations can be corrected up to a significant level; however, the quality of acquired images is still not adequate for many applications. This calls for the development of more sophisticated digital image restoration tools. This thesis is a contribution to image restoration. It is divided into five chapters, each including a detailed discussion of different aspects of image restoration. It starts with a generic overview of imaging systems and points out the degradations that can occur in images, together with their fundamental causes. In some cases the blur can be considered stationary throughout the field of view, and it can then simply be modeled as a convolution. In many practical cases, however, the blur varies throughout the field of view, and modeling it requires a compromise between accuracy and computational effort. The first part of the thesis presents a detailed discussion of the modeling of shift-variant blur and its fast approximations, and then describes a generic image formation model. Subsequently, the thesis shows how an image restoration problem can be seen as a Bayesian inference problem, and how it then turns into a large-scale numerical optimization problem. The second part of the thesis therefore considers a generic optimization problem, applicable to many domains, and proposes a class of new optimization algorithms for solving inverse problems in imaging. The proposed algorithms are as fast as state-of-the-art algorithms (verified by several numerical experiments) but remove the burden of tuning algorithm-specific parameters, which is a great relief for users. The third part of the thesis presents an in-depth discussion of the shift-invariant blind image deblurring problem, suggesting different ways to reduce the ill-posedness of the problem, and then proposes a blind image deblurring method based on an image decomposition, for the restoration of astronomical images. The proposed method uses an alternating estimation approach. The restoration results on synthetic astronomical scenes are promising, suggesting that the method is a good candidate for astronomical applications after certain modifications and improvements. The last part of the thesis extends the ideas of the shift-variant blur model presented in the first part. It gives a detailed description of a flexible approximation of shift-variant blur along with its implementation aspects and computational cost, presents a shift-variant image deblurring method with illustrations on synthetically blurred images, and then shows how the characteristics of shift-variant blur due to optical aberrations can be exploited by PSF estimation methods. It describes a PSF calibration method for a simple experimental camera suffering from optical aberration, and shows results on shift-variant deblurring of images captured by the same camera. The results are promising, and suggest that the two steps can be combined to achieve shift-variant blind image deblurring, the long-term goal of this thesis. The thesis ends with conclusions and suggestions for future work.
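As a minimal illustration of image restoration posed as large-scale optimization, the sketch below deblurs an image by gradient descent on a Tikhonov-regularized least-squares objective with a shift-invariant blur. This is a simple stand-in, not the algorithms proposed in the thesis; the step size and regularization weight are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def deblur_gd(y, psf, lam=1e-2, step=0.5, iters=200):
    # Gradient descent on 0.5*||h*x - y||^2 + 0.5*lam*||grad x||^2,
    # a minimal stand-in for the large-scale solvers discussed above.
    # Assumes psf is normalized (sums to 1) so step=0.5 is stable.
    psf_adj = psf[::-1, ::-1]                      # adjoint (correlation) kernel
    x = y.copy()
    for _ in range(iters):
        resid = fftconvolve(x, psf, mode="same") - y
        data_grad = fftconvolve(resid, psf_adj, mode="same")
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)  # discrete Laplacian
        x -= step * (data_grad - lam * lap)
    return x

# Toy demo: blur a random "image" with a Gaussian kernel, then restore it.
rng = np.random.default_rng(4)
img = rng.random((64, 64))
k = np.outer(*2 * [np.exp(-np.linspace(-2, 2, 9) ** 2)])
k /= k.sum()
restored = deblur_gd(fftconvolve(img, k, mode="same"), k)
```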
46.
Regularized primal-dual methods for nonlinearly constrained optimization
Omheni, Riadh, 14 November 2014 (has links)
This thesis focuses on the design, analysis, and implementation of efficient and reliable algorithms for solving nonlinearly constrained optimization problems. We present three new strongly primal-dual algorithms to solve such problems. The first feature of these algorithms is that the control of the iterates is done in both the primal and dual spaces during the whole minimization process, hence the name “strongly primal-dual”. In particular, globalization is performed by applying a backtracking line search based on a primal-dual merit function. The second feature is the introduction of a natural regularization of the linear system solved at each iteration to compute a descent direction. This allows our algorithms to perform well when solving degenerate problems for which the Jacobian of the constraints is rank deficient. The third feature is that the penalty parameter is allowed to increase along the inner iterations, while it is usually kept constant. This reduces the number of inner iterations. A detailed theoretical study, including a global convergence analysis of both the inner and outer iterations as well as an asymptotic convergence analysis, is presented for each algorithm. In particular, we prove that these methods have a fast rate of convergence: superlinear or quadratic. The algorithms have been implemented in a new solver for nonlinear optimization called SPDOPT. The good practical performance of this solver has been demonstrated by comparing it to the reference codes IPOPT, ALGENCAN and LANCELOT on a large collection of test problems.
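A schematic of the kind of regularized primal-dual Newton step described above, for an equality-constrained problem min f(x) s.t. c(x) = 0, might look as follows. The regularization parameters are placeholders, and the actual SPDOPT algorithm adds a line search on a primal-dual merit function and penalty-parameter updates.

```python
import numpy as np

def regularized_pd_step(grad_f, hess_L, jac_c, c_val, lam, theta=1e-8, delta=1e-8):
    # One Newton step on the regularized KKT system for min f(x) s.t. c(x) = 0.
    # The -delta*I block keeps the system nonsingular even when the
    # constraint Jacobian is rank deficient (illustrative parameter values).
    n, m = hess_L.shape[0], jac_c.shape[0]
    K = np.block([[hess_L + theta * np.eye(n), jac_c.T],
                  [jac_c, -delta * np.eye(m)]])
    rhs = -np.concatenate([grad_f + jac_c.T @ lam, c_val])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:]

# Toy problem: min x1^2 + x2^2 subject to x1 + x2 = 1.
x, lam = np.zeros(2), np.zeros(1)
dx, dlam = regularized_pd_step(2 * x, 2 * np.eye(2), np.ones((1, 2)),
                               np.array([x.sum() - 1.0]), lam)
print(x + dx, lam + dlam)   # one step solves this quadratic: x = (0.5, 0.5)
```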
47.
An approach to optimize the design of hydraulic reservoirs
Wohlers, Alexander; Backes, Alexander; Schönfeld, Dirk, January 2016 (has links)
Increasing demands on the performance, safety and environmental compatibility of mobile hydraulic machines, combined with rising cost pressure, create a growing need for specialized optimization of hydraulic systems, particularly with regard to hydraulic reservoirs. In addition to the secondary function of cooling the oil, the two main functions of a hydraulic reservoir are oil storage and de-aeration of the hydraulic oil. While designing a hydraulic reservoir for oil storage is quite simple, designing it for de-aeration can be quite difficult. The authors present an approach to the system optimization of hydraulic reservoirs that combines experimental and numerical techniques to resolve some of the challenges facing hydraulic tank design. Specialized numerical tools are used to characterize the de-aeration performance of hydraulic tanks. Furthermore, heat transfer simulation is used to study the cooling function of hydraulic tank systems, with particular attention to plastic tank solutions. To accompany the numerical tools, experimental test rigs have been built to validate the simulation results and to provide additional insight into the design and optimization of hydraulic tanks; these are presented as well.
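The cooling function can be approximated, to first order, by a lumped-parameter heat balance on the oil volume; a sketch under assumed (purely illustrative) power loss, wall area, and heat transfer coefficient is given below. De-aeration, by contrast, requires the specialized numerical tools mentioned in the text.

```python
import numpy as np

def tank_temperature(t, P_in=500.0, h=15.0, A=1.2, T_amb=25.0,
                     m_oil=40.0, c_oil=1900.0):
    # Lumped heat balance for a reservoir: hydraulic losses P_in [W] heat
    # the oil; convection h*A*(T - T_amb) rejects heat through the tank
    # walls. Analytic solution of the linear ODE, starting at ambient.
    # All parameter values are illustrative assumptions.
    tau = m_oil * c_oil / (h * A)              # thermal time constant [s]
    T_ss = T_amb + P_in / (h * A)              # steady-state oil temperature [C]
    return T_ss + (T_amb - T_ss) * np.exp(-t / tau)

print(tank_temperature(np.array([0.0, 3600.0, 7200.0])))
```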
49.
Activation of the carbonaceous material from the pyrolysis of waste tires for wastewater treatment
Malise, Lucky, 07 1900 (has links)
M.Tech. (Department of Chemical Engineering, Faculty of Engineering and Technology), Vaal University of Technology. / The generation of waste tires is one of the most serious environmental problems in the modern world, owing to the increased use of automobiles. Disposal of the waste tires generated is currently problematic, since strict regulations govern their disposal in landfill sites. There is therefore a need to find ways of disposing of these waste tires, which pose serious health and environmental problems. Pyrolysis of waste tires has been recognised as the most promising disposal method because it can reduce the weight of the waste tires to 10% of their original weight and produces pyrolysis oil, pyrolysis gas, and pyrolysis char. These products can be further processed into value-added products. The char produced from the pyrolysis of waste tires can be further activated to produce activated carbon.
This study is based on the chemical activation of waste tire pyrolysis char to produce activated carbon for the removal of lead ions from aqueous solution. The waste tire pyrolysis char was impregnated with potassium hydroxide and activated inside a tube furnace under inert conditions to produce waste tire activated carbon. Characterisation techniques (SEM, FTIR, TGA, XRF, XRD, BET, and proximate analysis) were applied to both the pyrolysis char and the activated carbon to compare the two samples. The results showed that the activated carbon has better physical and chemical properties than the raw pyrolysis char.
Adsorption results revealed that the waste tire activated carbon achieves higher removal percentages of lead ions from aqueous solution than the waste tire pyrolysis char. The results also showed the effect of various process variables on the adsorption process. Adsorption isotherms, kinetics, and thermodynamics were also studied. The adsorption of lead ions followed the Freundlich isotherm model for both the pyrolysis char and the activated carbon. In terms of kinetics, the experimental data were best fitted by the pseudo-first-order model for both adsorbents. The thermodynamic study revealed that the adsorption process is exothermic and spontaneous in nature.
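Fitting the Freundlich isotherm typically uses its linearized form, ln qe = ln KF + (1/n) ln Ce; a small sketch with made-up equilibrium data (not the study's measurements) is shown below.

```python
import numpy as np

# Freundlich isotherm: qe = KF * Ce**(1/n), where Ce is the equilibrium
# concentration (mg/L) and qe the equilibrium uptake (mg/g).
# The data points are illustrative assumptions.
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([8.1, 12.9, 20.4, 32.5, 51.6])

# Linear regression on the log-log form gives the parameters.
slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
KF, n = np.exp(intercept), 1.0 / slope
print(f"KF = {KF:.2f}, n = {n:.2f}")
```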
Response surface methodology was used to determine the combined effect of process variables on the adsorption of lead ions onto the waste tire activated carbon and to optimise the process using numerical optimisation. The optimum conditions were found to be an adsorbent dosage of 1 g/100 ml, pH 7, a contact time of 115.2 min, an initial metal concentration of 100 mg/l, and a temperature of 25°C, achieving a maximum adsorption capacity of 93.176 mg/l.
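The numerical optimisation step in response surface methodology can be sketched by fitting a quadratic response surface to design points and maximizing it under bounds; the two factors, data, and bounds below are illustrative assumptions, not the study's actual design.

```python
import numpy as np
from scipy.optimize import minimize

# Toy response-surface optimization: fit a quadratic model q(pH, t) to
# design-of-experiments data, then maximize it numerically.
X = np.array([[3, 30], [3, 120], [7, 30], [7, 120], [5, 75],
              [5, 30], [5, 120], [3, 75], [7, 75]], dtype=float)
y = np.array([41.0, 52.0, 78.0, 90.0, 85.0, 70.0, 88.0, 50.0, 86.0])

def design(x):
    # Quadratic model terms: 1, pH, t, pH*t, pH^2, t^2.
    pH, t = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(pH), pH, t, pH * t, pH**2, t**2], axis=-1)

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)
res = minimize(lambda x: -design(np.asarray(x)) @ beta,
               x0=[5.0, 75.0], bounds=[(3, 7), (30, 120)])
print(res.x, -res.fun)   # optimal (pH, contact time) and predicted response
```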
50.
Few applications of numerical optimization in inference and learning
Kannan, Hariprasad, 28 September 2018 (has links)
Numerical optimization and machine learning have had a fruitful relationship, from the perspective of both theory and application. In this thesis, we present an application-oriented take on some inference and learning problems. Linear programming relaxations are central to maximum a posteriori (MAP) inference in discrete Markov random fields (MRFs). In particular, inference in higher-order MRFs presents challenges in terms of efficiency, scalability and solution quality. In this thesis, we study the benefit of using Newton methods to efficiently optimize the Lagrangian dual of a smooth version of the problem. We investigate their ability to achieve superior convergence behavior and to better handle the ill-conditioned nature of the formulation, compared to first-order methods. We show that it is indeed possible to obtain an efficient trust-region Newton method, which uses the true Hessian, for a broad range of MAP inference problems. Given the specific opportunities and challenges in the MAP inference formulation, we present details concerning (i) efficient computation of the Hessian and Hessian-vector products, (ii) a strategy to damp the Newton step that aids efficient and correct optimization, and (iii) steps to improve the efficiency of the conjugate gradient method through a truncation rule and a preconditioner. We also demonstrate through numerical experiments how a quasi-Newton method can be a good choice for MAP inference in large graphs. MAP inference based on a smooth formulation greatly benefits from efficient sum-product computation, which is required for computing the gradient and the Hessian. We show a way to perform sum-product computation for trees with sparse clique potentials; this result could readily be used by other algorithms as well. We show results demonstrating the usefulness of our approach on higher-order MRFs. We then discuss potential research topics regarding tightening the LP relaxation and parallel algorithms for MAP inference. Unsupervised learning is an important topic in machine learning, and it could potentially help with high-dimensional problems like inference in graphical models. We present a general framework for unsupervised learning based on optimal transport and sparse regularization. Optimal transport presents interesting challenges from an optimization point of view, with its simplex constraints on the rows and columns of the transport plan. We show one way to formulate efficient optimization problems inspired by optimal transport: imposing only one set of the simplex constraints and imposing structure on the transport plan through sparse regularization. We show how unsupervised learning algorithms like exemplar clustering, center-based clustering and kernel PCA fit into this framework under different forms of regularization. In particular, we demonstrate a promising approach to the pre-image problem in kernel PCA. Several methods have been proposed over the years, which generally assume certain types of kernels, have too many hyper-parameters, or make restrictive approximations of the underlying geometry. We present a more general method, with only one hyper-parameter to tune and with some interesting geometric properties. From an optimization point of view, we show how to compute the gradient of a smooth version of the Schatten p-norm and how it can be used within a majorization-minimization scheme. Finally, we present results from our various experiments.
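As an illustration of the last point, assume the smoothing f(X) = sum_i (s_i^2 + eps)^(p/2) of the Schatten p-norm over the singular values s_i (an assumption; the thesis's exact smoothing may differ). Its gradient follows from the chain rule on the singular values, as the sketch below verifies against a finite difference.

```python
import numpy as np

def smoothed_schatten(X, p=1.0, eps=1e-6):
    # Smoothed Schatten p-norm: f(X) = sum_i (s_i^2 + eps)^(p/2).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    f = np.sum((s**2 + eps) ** (p / 2))
    # Gradient via the chain rule on singular values: U diag(g) V^T,
    # with g_i = p * s_i * (s_i^2 + eps)^(p/2 - 1).
    g = p * s * (s**2 + eps) ** (p / 2 - 1)
    return f, (U * g) @ Vt

# Finite-difference check of the gradient on a random matrix.
rng = np.random.default_rng(3)
X = rng.normal(size=(5, 4))
f0, G = smoothed_schatten(X)
E = np.zeros_like(X)
E[2, 1] = 1e-6
print(G[2, 1], (smoothed_schatten(X + E)[0] - f0) / 1e-6)
```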