  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Fast, exact and stable reconstruction of multivariate algebraic polynomials in Chebyshev form

Potts, Daniel, Volkmer, Toni 16 February 2015 (has links) (PDF)
We describe a fast method for evaluating an arbitrary high-dimensional multivariate algebraic polynomial in Chebyshev form at the nodes of an arbitrary rank-1 Chebyshev lattice. Our main focus is on conditions on rank-1 Chebyshev lattices that allow the exact reconstruction of such polynomials from samples along these lattices, and we present a component-by-component algorithm for constructing suitable rank-1 Chebyshev lattices. Moreover, we give a method for fast, exact and stable reconstruction.
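As a concrete (and deliberately naive, not the authors' fast algorithm) illustration of evaluating a multivariate polynomial in Chebyshev form at rank-1 Chebyshev lattice nodes, a minimal numpy sketch — the node form `x_j = cos(pi * j * z / M)` and all function names here are assumptions for illustration:

```python
import numpy as np

def cheb_lattice_nodes(z, M):
    # Nodes of an (assumed) rank-1 Chebyshev lattice with generating
    # vector z: x_j = cos(pi * j * z / M) componentwise, j = 0..M.
    j = np.arange(M + 1)
    return np.cos(np.pi * np.outer(j, z) / M)          # shape (M+1, d)

def eval_cheb_poly(coeffs, X):
    # Direct evaluation of p(x) = sum_k a_k * prod_d T_{k_d}(x_d),
    # using T_k(cos t) = cos(k t); coeffs maps index tuples to values.
    theta = np.arccos(np.clip(X, -1.0, 1.0))           # shape (n_pts, d)
    vals = np.zeros(X.shape[0])
    for k, a in coeffs.items():
        vals += a * np.prod(np.cos(np.asarray(k) * theta), axis=1)
    return vals

coeffs = {(0, 0): 1.0, (2, 1): 0.5, (1, 3): -0.25}     # toy 2-D polynomial
X = cheb_lattice_nodes(z=np.array([1, 3]), M=16)
p = eval_cheb_poly(coeffs, X)
```

This direct evaluation costs O(points x terms); the point of the paper's method is to replace it with a fast transform and to characterize when the samples determine the coefficients exactly.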
2

Design of optimization heuristics for high-dimensional problems: application to DNA microarray data analysis

Gardeux, Vincent 30 November 2011 (has links)
This PhD thesis addresses the recent issue of solving high-dimensional problems. We present methods designed to solve them, along with their applications to feature-selection problems in the data-mining field. In the first part of the thesis, we introduce the challenges of solving high-dimensional problems. We mainly investigate line-search methods, which we consider particularly well suited to such problems. We then present the methods we developed on this principle: CUS, EUS and EM323. We emphasize, in particular, the very high convergence speed of CUS and EUS and their simplicity of implementation. EM323 is a hybridization of EUS with a one-dimensional optimization algorithm developed by F. Glover: the 3-2-3 algorithm. We show that EM323 yields more accurate results, especially on non-separable problems, which are the weak point of line-search-based methods. In the second part, we focus on data-mining problems, and in particular on microarray data analysis. The objectives are to classify the data and to predict the behavior of new samples. First, a collaboration with the Tenon Hospital in Paris allowed us to analyze its private breast-cancer data. To this end, we developed an exact method, called the delta-test, later enhanced with a method that automatically selects the optimal number of variables. Second, we developed a feature-selection heuristic, named ABEUS, based on optimizing the performance of the DLDA classifier. The results obtained on publicly available data show that our methods select very small subsets of variables, an important criterion for avoiding overfitting.
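The ABEUS heuristic itself is not spelled out in the abstract, but the DLDA classifier whose performance it optimizes is standard. A minimal sketch of diagonal linear discriminant analysis — the class structure and the small ridge term `1e-8` are illustrative assumptions, not the thesis's code:

```python
import numpy as np

class DLDA:
    # Diagonal linear discriminant analysis: Gaussian classes sharing a
    # diagonal covariance, estimated from pooled within-class residuals.
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        resid = X - self.means_[np.searchsorted(self.classes_, y)]
        self.var_ = resid.var(axis=0) + 1e-8   # pooled per-feature variance
        return self

    def predict(self, X):
        # Assign each sample to the class minimizing the diagonal
        # Mahalanobis distance to that class mean.
        d2 = ((X[:, None, :] - self.means_[None]) ** 2 / self.var_).sum(axis=2)
        return self.classes_[np.argmin(d2, axis=1)]
```

Ignoring feature correlations keeps the number of estimated parameters linear in the feature count, which is why DLDA is a common choice when features (microarray probes) vastly outnumber samples; a wrapper heuristic like ABEUS would then search over small feature subsets scored by such a classifier.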
3

Fast, exact and stable reconstruction of multivariate algebraic polynomials in Chebyshev form

Potts, Daniel, Volkmer, Toni 16 February 2015 (has links)
We describe a fast method for evaluating an arbitrary high-dimensional multivariate algebraic polynomial in Chebyshev form at the nodes of an arbitrary rank-1 Chebyshev lattice. Our main focus is on conditions on rank-1 Chebyshev lattices that allow the exact reconstruction of such polynomials from samples along these lattices, and we present a component-by-component algorithm for constructing suitable rank-1 Chebyshev lattices. Moreover, we give a method for fast, exact and stable reconstruction.
4

Tensor product methods in numerical simulation of high-dimensional dynamical problems

Dolgov, Sergey 08 September 2014 (has links) (PDF)
Quantifying a stochastic or quantum system by a joint probability density or wave function is a notoriously difficult computational problem, since the solution depends on all possible states (or realizations) of the system. Owing to this combinatorial flavor, even a system containing as few as ten particles may yield as many as $10^{10}$ discretized states. No modern supercomputer can cope with this curse of dimensionality directly once the number of quantum particles, for example, grows to an interesting order of hundreds. The traditional approach was long to avoid models formulated in terms of probabilistic functions and instead to simulate particular realizations of the system in a randomized process. At different times and in different communities, data-sparse methods came into play. In general, they aim to define all data points indirectly, by a map from a small number of representatives, and to recast all operations (e.g., linear system solution) from the initial data to the effective parameters. The most advanced techniques can be applied (or at least tried) on any given array and do not rely explicitly on its origin. The present work contributes further progress in one particular direction: tensor product methods for the separation of variables. The separation of variables has a long history and is based on an elementary concept: a function of many variables may be expanded as a product of univariate functions. On the discrete level, a function is encoded by the array of its values, i.e., a tensor. Instead of a huge initial array, the separation of variables therefore allows one to work with univariate factors at much lower cost. The dissertation contains a short overview of existing tensor representations: the canonical PARAFAC, Hierarchical Tucker and Tensor Train (TT) formats, as well as artificial tensorization, which leads to the Quantized Tensor Train (QTT) approximation method.
The contribution of the dissertation consists of both theoretical constructions and practical numerical algorithms for high-dimensional models, illustrated on the examples of the Fokker-Planck and chemical master equations. Both arise from stochastic dynamical processes in multiconfigurational systems and govern the evolution of the probability function in time. A special focus is put on time-propagation schemes and their properties in relation to tensor product methods. We show that these applications yield large-scale systems of linear equations, and we prove analytical separable representations of the functions and operators involved. We propose a new combined tensor format (QTT-Tucker), which descends from the TT format (so TT algorithms generalize to it smoothly) but provides a complexity reduction by an order of magnitude. We develop a robust iterative solution algorithm that combines the most advantageous properties of classical iterative methods from numerical analysis and of alternating density matrix renormalization group (DMRG) techniques from quantum physics. Numerical experiments confirm that the new method is preferable to DMRG algorithms: it is as fast as the simplest alternating schemes, yet as reliable and accurate as the Krylov methods of linear algebra.
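The separation-of-variables idea behind the TT format can be made concrete with the standard TT-SVD construction: successive truncated SVDs peel off one dimension at a time. A minimal numpy sketch (a generic textbook construction, assumed here as background, not the dissertation's QTT-Tucker algorithms):

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    # TT-SVD sketch: successive truncated SVDs factor a d-way array into
    # 3-way cores G_k of shape (r_{k-1}, n_k, r_k) such that
    # A[i1,...,id] = G_1[:,i1,:] @ G_2[:,i2,:] @ ... @ G_d[:,id,:].
    dims = A.shape
    cores, r = [], 1
    M = np.asarray(A, dtype=float)
    for n in dims[:-1]:
        M = M.reshape(r * n, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int((s > eps * s[0]).sum()))   # relative truncation
        cores.append(U[:, :rank].reshape(r, n, rank))
        M = s[:rank, None] * Vt[:rank]               # carry the remainder
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    # Contract the train back into a full array (feasible for small cases).
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])
```

When the TT ranks r_k stay small, storage drops from the product of all mode sizes to a sum of r_{k-1} * n_k * r_k terms — the complexity reduction that the dissertation pushes further with the QTT and QTT-Tucker formats.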
5

Tensor product methods in numerical simulation of high-dimensional dynamical problems

Dolgov, Sergey 20 August 2014 (has links)
Quantifying a stochastic or quantum system by a joint probability density or wave function is a notoriously difficult computational problem, since the solution depends on all possible states (or realizations) of the system. Owing to this combinatorial flavor, even a system containing as few as ten particles may yield as many as $10^{10}$ discretized states. No modern supercomputer can cope with this curse of dimensionality directly once the number of quantum particles, for example, grows to an interesting order of hundreds. The traditional approach was long to avoid models formulated in terms of probabilistic functions and instead to simulate particular realizations of the system in a randomized process. At different times and in different communities, data-sparse methods came into play. In general, they aim to define all data points indirectly, by a map from a small number of representatives, and to recast all operations (e.g., linear system solution) from the initial data to the effective parameters. The most advanced techniques can be applied (or at least tried) on any given array and do not rely explicitly on its origin. The present work contributes further progress in one particular direction: tensor product methods for the separation of variables. The separation of variables has a long history and is based on an elementary concept: a function of many variables may be expanded as a product of univariate functions. On the discrete level, a function is encoded by the array of its values, i.e., a tensor. Instead of a huge initial array, the separation of variables therefore allows one to work with univariate factors at much lower cost. The dissertation contains a short overview of existing tensor representations: the canonical PARAFAC, Hierarchical Tucker and Tensor Train (TT) formats, as well as artificial tensorization, which leads to the Quantized Tensor Train (QTT) approximation method.
The contribution of the dissertation consists of both theoretical constructions and practical numerical algorithms for high-dimensional models, illustrated on the examples of the Fokker-Planck and chemical master equations. Both arise from stochastic dynamical processes in multiconfigurational systems and govern the evolution of the probability function in time. A special focus is put on time-propagation schemes and their properties in relation to tensor product methods. We show that these applications yield large-scale systems of linear equations, and we prove analytical separable representations of the functions and operators involved. We propose a new combined tensor format (QTT-Tucker), which descends from the TT format (so TT algorithms generalize to it smoothly) but provides a complexity reduction by an order of magnitude. We develop a robust iterative solution algorithm that combines the most advantageous properties of classical iterative methods from numerical analysis and of alternating density matrix renormalization group (DMRG) techniques from quantum physics. Numerical experiments confirm that the new method is preferable to DMRG algorithms: it is as fast as the simplest alternating schemes, yet as reliable and accurate as the Krylov methods of linear algebra.
6

Numerical approximations with tensor-based techniques for high-dimensional problems

Mora Jiménez, María 29 January 2024 (has links)
The idea of following a sequence of steps to achieve a desired result is inherent in human nature: from the moment we start walking, when following a cooking recipe, or when learning a new card game. Since ancient times this scheme has been used to organize laws, correct writings, and even assign diagnoses. In mathematics, this way of thinking is called an algorithm. Formally, an algorithm is a finite, ordered set of well-defined, unambiguous instructions that solves a problem. From childhood we face algorithms when we learn to multiply or divide, and as we grow, these structures enable us to solve increasingly complex problems: linear systems, differential equations, optimization problems, and so on. A multitude of algorithms let us tackle this type of problem: iterative methods, among which we find the famous Newton method for finding roots; search algorithms for locating an element with specific properties in a larger set; and matrix decompositions, such as the LU decomposition for solving linear systems. However, these classical approaches run into limitations when faced with large-dimensional problems, an obstacle known as the 'curse of dimensionality'. The advance of technology, the use of social networks and, in general, the new problems that have appeared with the development of Artificial Intelligence have revealed the need to handle large amounts of data, which requires the design of new mechanisms for their manipulation. This fact has aroused the scientific community's interest in tensor structures, since they make it possible to work efficiently with large-dimensional problems. Most classical methods, however, are not designed to be used with these operations, so specific tools are required for their treatment, which motivates a project such as this one.
This work is organized as follows: after a review of the necessary definitions, Chapter 3 develops the theory of a new tensor decomposition for square matrices. Chapter 4 then shows an application of this decomposition to regular graphs and small-world networks. Chapter 5 presents an efficient implementation of the algorithm provided by the new matrix decomposition, studying some second-order PDEs as an application. Finally, Chapters 6 and 7 give brief conclusions and list the references consulted. / María Mora Jiménez acknowledges funding from grant (ACIF/2020/269) funded by the Generalitat Valenciana and the European Social Fund / Mora Jiménez, M. (2023). Numerical approximations with tensor-based techniques for high-dimensional problems [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202604 / Thesis by compendium
