  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Alunos que completaram um curso de extensão em álgebra linear e suas concepções sobre base de um espaço vetorial / Students who completed an extension course in linear algebra and their conceptions of the basis of a vector space

Prado, Eneias de Almeida 07 May 2010 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / The purpose of this study was to identify the conception of basis of a finitely generated vector space over IR held by students who had concluded an extension course in Linear Algebra. The relevance of the research lies in the importance attributed to this discipline in the education of professionals in the Exact Sciences and related fields, and in the need to investigate its teaching and learning, as argued by several researchers, such as Dubinsky (1991; 2001), Dorier et al. (1997) and Machado and Bianchini (2009). The theoretical framework used was APOS theory, developed by Dubinsky and collaborators, which allowed the refinement of a genetic decomposition for the notion of basis covering three points of view of this notion: a maximal set of linearly independent vectors, a minimal set of generating vectors, and the juxtaposition of the two. Data were collected through semi-structured interviews with 10 subjects who had completed the same extension course, characterizing the investigation as a qualitative case study. The analysis indicates that five students built an object conception and incorporated the notion of dimension into their scheme, applying dimension interchangeably to any of the three notions of basis. One student built a process conception and another an action conception. After two Linear Algebra courses, the students conceived of a basis mainly as a linearly independent generating set, and only two of the interviewees perceived the equivalence among the points of view considered. / Este estudo teve o objetivo de identificar a concepção de base de um IR-espaço vetorial finitamente gerado de alunos que concluíram um curso de extensão em Álgebra Linear. A relevância da pesquisa reside na importância atribuída a essa disciplina na formação de profissionais das Ciências Exatas e afins, e na necessidade de investigar seu ensino e sua aprendizagem, conforme opinião de vários pesquisadores, como Dubinsky (1991; 2001); Dorier et al. (1997); Machado e Bianchini (2009). Para tanto, utilizou-se o aporte da teoria APOS, desenvolvida por Dubinsky e colaboradores que permitiu o refinamento de uma decomposição genética para a noção de base que abordou os três pontos de vista dessa noção: conjunto maximal de vetores linearmente independentes; conjunto minimal de vetores gerador e a justaposição entre as duas anteriores. A coleta de dados foi realizada por meio de entrevistas semiestruturadas a 10 sujeitos concluintes de um mesmo curso de extensão, caracterizando-se como um estudo qualitativo de caso. A análise realizada indica que cinco estudantes construíram uma concepção objeto e incorporaram a noção de dimensão a seu esquema, utilizando indistintamente a dimensão a uma das três noções de base. Um estudante mostrou ter construído concepção processo e outro, concepção ação. Após dois cursos de Álgebra Linear, os estudantes concebem base, sobretudo, como sendo um conjunto gerador linearmente independente, e só dois dos entrevistados perceberam a equivalência entre os pontos de vista abordados.
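The three points of view on a basis that the study works with (maximal linearly independent set, minimal generating set, and the role of the dimension in tying them together) can be checked numerically. The short numpy sketch below is an illustration added here, not material from the thesis; the vectors and the is_basis helper are made up for the example.

import numpy as np

def is_basis(vectors, n):
    """Check whether the given vectors form a basis of R^n, seen from the
    three equivalent points of view discussed in the abstract."""
    A = np.column_stack(vectors)                   # one vector per column
    r = np.linalg.matrix_rank(A)
    independent = r == len(vectors)                # maximal linearly independent set
    generating = r == n                            # (minimal) generating set
    right_size = len(vectors) == n                 # dimension ties the two views together
    return independent and generating and right_size

v = [np.array([1., 0., 0.]), np.array([1., 1., 0.]), np.array([1., 1., 1.])]
print(is_basis(v, 3))        # True: independent, generating, and exactly dim(R^3) vectors
print(is_basis(v[:2], 3))    # False: independent but not generating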
152

Atividade de estudo do conceito de transformação linear na perspectiva da teoria do ensino desenvolvimental de V. V. Davydov / Study activity on the concept of linear transformation from the perspective of V. V. Davydov's theory of developmental teaching

Assis, Aline Mota de Mesquita 30 August 2018 (has links)
This work falls within the research line Theories of Education and Pedagogical Processes and has as its main investigative focus the teaching-learning process of the algebraic concept of linear transformation, grounded in V. V. Davydov's theory of developmental teaching. The question it seeks to clarify is: what repercussions does teaching the concept of linear transformation on the basis of historical-cultural theory, and specifically of Davydov's developmental teaching theory, have on the process of concept formation by students? Specifically, the study aims to analyze the history of the logical development of the concept of linear transformation in order to grasp the relations present in it and the kind of mental movement it contains, so as to identify the mental actions to be contemplated in the planning and conduct of the study activity; to carry out the study activity through the development of a formative didactic experiment; and to capture, in the course of the teaching-learning process of the concept of linear transformation, elements that indicate qualitative and quantitative changes in the development of students' thinking. To this end, a teaching experiment based on Davydov's assumptions was carried out in a Linear Algebra class at the Federal Institute of Education, Science and Technology of Goiás - Câmpus Goiânia, with fourteen students of the Bachelor's degree course in Electrical Engineering, following the structure of the study activity proposed by Davydov. The data-collection procedures were: a semi-structured interview script applied to the teacher, a socio-cultural questionnaire completed by the students, a diagnostic assessment instrument, an experimental teaching plan, and a protocol for non-participant direct observation. The data analysis focuses on the process of concept formation and the elements involved in it, organized into the following categories: transformation of the task data towards the identification of the general principle of the concept of linear transformation; from modelling to the transformation of a model of the concept of linear transformation; and the use of the concept of linear transformation as a mental tool. The results revealed: the students' motivation during the experimental teaching; an understanding of the algebraic concepts, after the logical-historical analysis, by the majority of the research subjects; signs of progress in the students' zone of proximal development with respect to the concepts of matrix, function and vector space, considered here as prerequisites for the formation of the concept of linear transformation, developing the ability to think mathematically according to the logic of this science; and evidence of qualitative changes in the development of the research subjects' theoretical thinking regarding the concept of linear transformation. The main contribution of this research was to show an alternative way of organizing the teaching of the concept of linear transformation and, consequently, of Linear Algebra. 
It is believed that even with the contradictions present in the curricular structure of the courses in the areas of the exact and world sciences and in engineering, as well as in the students' school formation, it is possible to carry out teaching based on the theory of developmental teaching and contribute to the theoretical thought formation in the majority of students. / Este trabalho, inscrito na linha de pesquisa Teorias da Educação e Processos Pedagógicos, tem como principal foco investigativo o processo de ensino-aprendizagem do conceito algébrico de transformação linear, fundamentando-se na teoria do ensino desenvolvimental de V. V. Davydov. A questão que se buscou esclarecer foi: que repercussões teriam, no processo de formação de conceitos pelos alunos, o ensino do conceito de transformação linear fundamentado na teoria histórico-cultural, em específico, na teoria do ensino desenvolvimental de Davydov? Especificamente, objetiva-se: analisar a história do desenvolvimento lógico do conceito de transformação linear a fim de apreender as relações nele presentes e o tipo de movimento mental que ele contém para identificar as ações mentais a serem contempladas no planejamento e na condução da atividade de estudo; proceder à realização da atividade de estudo mediante o desenvolvimento de um experimento didático formativo; apreender, no decorrer processo de ensino-aprendizagem do conceito de transformação linear, elementos que indicam mudanças qualitativas e quantitativas no desenvolvimento do pensamento do aluno. Para tanto, realizou-se uma pesquisa que consistiu em um experimento de ensino, baseado nos pressupostos de Davydov, em uma turma de Álgebra Linear do Instituto Federal de Educação, Ciência e Tecnologia de Goiás – Câmpus Goiânia, desenvolvido com quatorze alunos do curso de Bacharelado em Engenharia Elétrica e seguindo a estrutura da atividade de estudo proposta por Davydov. Os procedimentos para a coleta dos dados foram: roteiro de entrevista semiestruturada com o professor, questionário sociocultural dos alunos, instrumento de avaliação diagnóstica, plano de ensino experimental e roteiro de observação direta não participante. A análise dos dados enfoca o processo de formação de conceitos e os elementos intervenientes nesse processo a partir das seguintes categorias: transformação dos dados da tarefa na condução da identificação do princípio geral do conceito de transformação linear; da modelação à transformação de um modelo para o conceito de transformação linear e o uso do conceito de transformação linear como ferramenta mental. Os resultados obtidos revelaram: motivação dos alunos durante o ensino experimental; compreensão dos conceitos algébricos, após a análise lógico-histórica, pela maioria dos sujeitos da pesquisa; indícios de progresso da zona de desenvolvimento proximal dos alunos no que tange aos conceitos de matriz, função e espaço vetorial, considerados aqui como os pré-requisitos para a formação do conceito de transformação linear, desenvolvendo a capacidade de pensar a Matemática de acordo com a forma de pensar desta ciência; indícios de mudanças qualitativas no desenvolvimento do pensamento teórico dos sujeitos da pesquisa quanto ao conceito de transformação linear. A principal contribuição desta pesquisa consistiu em mostrar um caminho alternativo de organização do ensino do conceito de transformação linear, consequentemente, da Álgebra Linear. 
Acredita-se que, mesmo com as contradições presentes na estrutura curricular dos cursos das áreas de Ciências Exatas e da Terra e Engenharias, bem como na formação escolar dos alunos, é possível realizar um ensino embasado na teoria do ensino desenvolvimental e contribuir para a formação do pensamento teórico da maioria dos alunos
153

Algorithmes de mise à l'échelle et méthodes tropicales en analyse numérique matricielle / Scaling algorithms and tropical methods in numerical matrix analysis

Sharify, Meisam 01 September 2011 (has links) (PDF)
Tropical algebra can be considered a relatively new field of mathematics. It appears in several domains such as optimization, synchronization of production and transportation, discrete event systems, optimal control, operations research, and so on. The first part of this manuscript is devoted to applications of tropical algebra to numerical matrix analysis. We first consider the classical problem of estimating the roots of a univariate polynomial. We prove several new bounds on the absolute values of the roots of a polynomial by exploiting tropical methods. These results are particularly useful for polynomials whose coefficients have different orders of magnitude. We then examine the problem of computing the eigenvalues of a matrix polynomial. Here, we introduce a general scaling technique, based on tropical algebra, which applies in particular to the companion form. This scaling relies on the construction of an auxiliary tropical polynomial function that depends only on the norms of the matrices. The roots (the points of non-differentiability) of this tropical polynomial provide a preliminary estimate of the absolute values of the eigenvalues. This is justified in particular by a new result showing that, under certain assumptions on the conditioning, there is a group of eigenvalues whose norms are bounded, with the order of magnitude of the bounds given by the largest root of the auxiliary tropical polynomial. A similar result holds for a group of small eigenvalues. We show experimentally that this scaling improves numerical stability, in particular when the data have different orders of magnitude. We also study the problem of computing the tropical eigenvalues (the points of non-differentiability of the characteristic polynomial) of a tropical matrix polynomial. From the combinatorial point of view, this problem amounts to computing a matching function: the value of a maximum-weight matching in a bipartite graph whose arcs are weighted by convex piecewise-linear functions. We develop an algorithm that computes these tropical eigenvalues in polynomial time. In the second part of this thesis, we turn to the solution of very large optimal assignment problems, for which classical sequential algorithms are not efficient. We propose a new approach that exploits the connection between the optimal assignment problem and the entropy maximization problem. This approach leads to a preprocessing algorithm for the optimal assignment problem, based on an iterative method that eliminates the entries that cannot belong to an optimal assignment. We consider two iterative variants of the preprocessing algorithm, one using the Sinkhorn method and the other the Newton method. This preprocessing reduces the initial problem to a much smaller one in terms of memory requirements. We also introduce a new iterative method based on a modification of the Sinkhorn algorithm, in which a deformation parameter is slowly increased. We prove that this iterative method (the deformed Sinkhorn iteration) converges to a matrix whose nonzero entries are exactly those that belong to the optimal permutations. An estimate of the convergence rate is also given.
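The tropical root bounds mentioned in the abstract rest on a classical construction: for p(x) = a_0 + a_1 x + ... + a_d x^d, the tropical roots are read off the upper convex hull (Newton polygon) of the points (k, log|a_k|), and they estimate the moduli of the roots of p. The Python sketch below implements only this classical construction as an illustration; it is not the thesis's algorithm, and the example polynomial is made up.

import numpy as np

def tropical_roots(coeffs):
    """Tropical roots of p(x) = sum_k coeffs[k] * x**k: each segment of the
    upper convex hull of the points (k, log|a_k|), say from (i, log|a_i|) to
    (j, log|a_j|), yields the root (|a_i|/|a_j|)**(1/(j-i)) with multiplicity j - i."""
    pts = [(k, np.log(abs(c))) for k, c in enumerate(coeffs) if c != 0]
    hull = []
    for p in pts:                                  # left-to-right scan, keep the upper hull only
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            if (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0:
                hull.pop()                         # b lies on or below the segment a -> p
            else:
                break
        hull.append(p)
    roots = []
    for (i, li), (j, lj) in zip(hull, hull[1:]):
        roots += [np.exp((li - lj) / (j - i))] * (j - i)
    return roots

# p(x) = x^2 + 1e8*x + 1 has root moduli ~1e-8 and ~1e8; the tropical roots match them.
coeffs = [1.0, 1e8, 1.0]                           # a0, a1, a2
print(tropical_roots(coeffs))
print(sorted(abs(np.roots(coeffs[::-1]))))         # numerical roots, for comparison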
154

Matrix Algebra for Quantum Chemistry

Rubensson, Emanuel H. January 2008 (has links)
This thesis concerns methods of reduced complexity for electronic structure calculations. When quantum chemistry methods are applied to large systems, it is important to optimally use computer resources and only store data and perform operations that contribute to the overall accuracy. At the same time, precarious approximations could jeopardize the reliability of the whole calculation. In this thesis, the self-consistent field method is seen as a sequence of rotations of the occupied subspace. Errors coming from computational approximations are characterized as erroneous rotations of this subspace. This viewpoint is optimal in the sense that the occupied subspace uniquely defines the electron density. Errors should be measured by their impact on the overall accuracy instead of by their constituent parts. With this point of view, a mathematical framework for control of errors in Hartree-Fock/Kohn-Sham calculations is proposed. A unifying framework is of particular importance when computational approximations are introduced to efficiently handle large systems. An important operation in Hartree-Fock/Kohn-Sham calculations is the calculation of the density matrix for a given Fock/Kohn-Sham matrix. In this thesis, density matrix purification is used to compute the density matrix with time and memory usage increasing only linearly with system size. The forward error of purification is analyzed and schemes to control the forward error are proposed. The presented purification methods are coupled with effective methods to compute interior eigenvalues of the Fock/Kohn-Sham matrix, also proposed in this thesis. New methods for inverse factorizations of Hermitian positive definite matrices that can be used for congruence transformations of the Fock/Kohn-Sham and density matrices are suggested as well. Most of the methods above have been implemented in the Ergo quantum chemistry program. This program uses a hierarchic sparse matrix library, also presented in this thesis, which is parallelized for shared memory computer architectures. It is demonstrated that the Ergo program is able to perform linear scaling Hartree-Fock calculations.
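Density matrix purification, used in the thesis to obtain the density matrix from the Fock/Kohn-Sham matrix with linear scaling, can be illustrated in its simplest dense form: map the spectrum of the matrix into [0, 1] and iterate the McWeeny polynomial x -> 3x^2 - 2x^3 until the eigenvalues reach 0 or 1. The sketch below assumes a known chemical potential and dense matrices; it only illustrates the purification idea, not the sparse, error-controlled schemes proposed in the thesis, and the test matrix is random.

import numpy as np

def mcweeny_density(H, mu, tol=1e-12, max_iter=200):
    """Return the spectral projector onto the eigenvectors of the symmetric
    matrix H with eigenvalues below mu, by McWeeny purification."""
    n = H.shape[0]
    # Initial guess: spectrum mapped into [0, 1], occupied eigenvalues above 1/2.
    lam = np.linalg.norm(H - mu * np.eye(n), 2)    # any bound on the spectral radius works
    D = 0.5 * np.eye(n) - (H - mu * np.eye(n)) / (2 * lam)
    for _ in range(max_iter):
        D2 = D @ D
        if np.linalg.norm(D2 - D) < tol:           # idempotent: eigenvalues are ~0 or ~1
            break
        D = 3 * D2 - 2 * D @ D2                    # McWeeny step x -> 3x^2 - 2x^3
    return D

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2
w, V = np.linalg.eigh(H)
P_exact = V[:, w < 0.0] @ V[:, w < 0.0].T          # exact projector from diagonalization
P = mcweeny_density(H, mu=0.0)
print(np.allclose(P, P_exact, atol=1e-8), int(round(np.trace(P))))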
155

Multiplication matricielle efficace et conception logicielle pour la bibliothèque de calcul exact LinBox / Efficient matrix multiplication and design for the exact linear algebra library LinBox

Boyer, Brice 21 June 2012 (has links)
Dans ce mémoire de thèse, nous développons d'abord des multiplications matricielles efficaces. Nous créons de nouveaux ordonnancements qui permettent de réduire la taille de la mémoire supplémentaire nécessaire lors d'une multiplication du type Winograd tout en gardant une bonne complexité, grâce au développement d'outils externes ad hoc (jeu de galets), à des calculs fins de complexité et à de nouveaux algorithmes hybrides. Nous utilisons ensuite des technologies parallèles (multicœurs et GPU) pour accélérer efficacement la multiplication entre matrice creuse et vecteur dense (SpMV), essentielle aux algorithmes dits boîte noire, et créons de nouveaux formats hybrides adéquats. Enfin, nous établissons des méthodes de design générique orientées vers l'efficacité, notamment par conception par briques de base, et via des auto-optimisations. Nous proposons aussi des méthodes pour améliorer et standardiser la qualité du code de manière à pérenniser et rendre plus robuste le code produit. Ces méthodes sont appliquées en particulier à la bibliothèque de calcul exact LinBox. / In this thesis we first develop efficient matrix multiplications. We design new schedules that reduce the amount of extra memory required by a Winograd-style matrix multiplication while keeping the complexity competitive, thanks to ad hoc external tools (a pebble game), fine-grained complexity computations and new hybrid algorithms. We then use parallel technologies (multicore CPUs and GPUs) to efficiently accelerate the sparse matrix-dense vector multiplication (SpMV), which is essential to blackbox algorithms, and we design suitable new hybrid storage formats. Finally, we establish generic design methods geared towards efficiency, notably through building-block design and self-optimization. We also propose methods to improve and standardize code quality, so as to make the produced code more sustainable and more robust. These methods are applied in particular to the LinBox exact linear algebra library.
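For context, the sketch below shows the classical Strassen recursion with seven half-size products, the baseline from which Winograd-style multiplications and the memory-reducing schedules of the thesis start. It is a plain textbook version (power-of-two sizes, a simple cutoff to the BLAS product), given only as an illustration; the cutoff and test size are arbitrary.

import numpy as np

def strassen(A, B, cutoff=64):
    """Classical Strassen recursion (7 half-size products) for square matrices
    whose size is a power of two; below the cutoff, fall back to the plain product."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((256, 256))
B = rng.standard_normal((256, 256))
print(np.allclose(strassen(A, B), A @ B))   # True, up to floating-point error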
156

Interprétation sémantique d'images hyperspectrales basée sur la réduction adaptative de dimensionnalité / Semantic interpretation of hyperspectral images based on the adaptative reduction of dimensionality

Sellami, Akrem 11 December 2017 (has links)
L'imagerie hyperspectrale permet d'acquérir des informations spectrales riches d'une scène dans plusieurs centaines, voire milliers de bandes spectrales étroites et contiguës. Cependant, avec le nombre élevé de bandes spectrales, la forte corrélation inter-bandes spectrales et la redondance de l'information spectro-spatiale, l'interprétation de ces données hyperspectrales massives est l'un des défis majeurs pour la communauté scientifique de la télédétection. Dans ce contexte, le grand défi posé est la réduction du nombre de bandes spectrales inutiles, c'est-à-dire de réduire la redondance et la forte corrélation de bandes spectrales tout en préservant l'information pertinente. Par conséquent, des approches de projection visent à transformer les données hyperspectrales dans un sous-espace réduit en combinant toutes les bandes spectrales originales. En outre, des approches de sélection de bandes tentent à chercher un sous-ensemble de bandes spectrales pertinentes. Dans cette thèse, nous nous intéressons d'abord à la classification d'imagerie hyperspectrale en essayant d'intégrer l'information spectro-spatiale dans la réduction de dimensions pour améliorer la performance de la classification et s'affranchir de la perte de l'information spatiale dans les approches de projection. De ce fait, nous proposons un modèle hybride permettant de préserver l'information spectro-spatiale en exploitant les tenseurs dans l'approche de projection préservant la localité (TLPP) et d'utiliser l'approche de sélection non supervisée de bandes spectrales discriminantes à base de contraintes (CBS). Pour modéliser l'incertitude et l'imperfection entachant ces approches de réduction et les classifieurs, nous proposons une approche évidentielle basée sur la théorie de Dempster-Shafer (DST). Dans un second temps, nous essayons d'étendre le modèle hybride en exploitant des connaissances sémantiques extraites à travers les caractéristiques obtenues par l'approche proposée auparavant TLPP pour enrichir la sélection non supervisée CBS. En effet, l'approche proposée permet de sélectionner des bandes spectrales pertinentes qui sont à la fois informatives, discriminantes, distinctives et peu redondantes. En outre, cette approche sélectionne les bandes discriminantes et distinctives en utilisant la technique de CBS en injectant la sémantique extraite par les techniques d'extraction de connaissances afin de sélectionner d'une manière automatique et adaptative le sous-ensemble optimal de bandes spectrales pertinentes. La performance de notre approche est évaluée en utilisant plusieurs jeux des données hyperspectrales réelles. / Hyperspectral imagery allows to acquire a rich spectral information of a scene in several hundred or even thousands of narrow and contiguous spectral bands. However, with the high number of spectral bands, the strong inter-bands spectral correlation and the redundancy of spectro-spatial information, the interpretation of these massive hyperspectral data is one of the major challenges for the remote sensing scientific community. In this context, the major challenge is to reduce the number of unnecessary spectral bands, that is, to reduce the redundancy and high correlation of spectral bands while preserving the relevant information. Therefore, projection approaches aim to transform the hyperspectral data into a reduced subspace by combining all original spectral bands. In addition, band selection approaches attempt to find a subset of relevant spectral bands. 
In this thesis, we first focus on hyperspectral image classification, attempting to integrate the spectro-spatial information into the dimensionality reduction in order to improve classification performance and to overcome the loss of spatial information inherent in projection approaches. We therefore propose a hybrid model that preserves the spectro-spatial information by exploiting a tensor model in the locality preserving projection approach (TLPP), and uses constraint-based band selection (CBS) as an unsupervised approach to select discriminant spectral bands. To model the uncertainty and imperfection of these reduction approaches and of the classifiers, we propose an evidential approach based on Dempster-Shafer theory (DST). In a second step, we extend the hybrid model by exploiting the semantic knowledge extracted from the features obtained with the previously proposed TLPP approach in order to enrich the CBS technique. The proposed approach selects relevant spectral bands that are at once informative, discriminant, distinctive and weakly redundant: the discriminant and distinctive bands are chosen by the CBS technique, into which the rules obtained with knowledge-extraction techniques are injected so as to select the optimal subset of relevant spectral bands automatically and adaptively. The performance of our approach is evaluated on several real hyperspectral data sets.
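As a rough, purely illustrative analogue of unsupervised band selection (much simpler than the CBS approach used in the thesis), the sketch below discards any band whose correlation with an already kept band exceeds a threshold. The synthetic cube, the threshold and the function name are assumptions made only for this example.

import numpy as np

def select_bands(cube, max_corr=0.95):
    """Greedy band selection on a (rows, cols, bands) hyperspectral cube:
    keep a band only if its correlation with every band kept so far stays
    below max_corr, so highly redundant bands are dropped."""
    X = cube.reshape(-1, cube.shape[-1])           # pixels x bands
    C = np.abs(np.corrcoef(X, rowvar=False))       # band-to-band correlation matrix
    kept = []
    for b in range(X.shape[1]):
        if all(C[b, k] < max_corr for k in kept):
            kept.append(b)
    return kept

# Synthetic 20x20 scene with 30 bands: 10 noisy copies of each of 3 "materials".
rng = np.random.default_rng(0)
latent = rng.standard_normal((400, 3))
bands = np.repeat(latent, 10, axis=1)
cube = (bands + 0.05 * rng.standard_normal((400, 30))).reshape(20, 20, 30)
print(select_bands(cube))    # roughly one representative per group, e.g. [0, 10, 20]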
157

A reta de Euler e a circunferência dos nove pontos: um olhar algébrico / The Euler line and the nine-point circle: an algebraic view

Souto, Antonio Marcos da Silva 12 August 2013 (has links)
This work is the result of research on the Euler line and the nine-point circle. The GeoGebra software was used to illustrate the geometric constructions and to present some practical activities on the notable points of the triangle, the Euler line and the nine-point circle to high-school students. The core of the work, however, lies in the proofs, using Modern Algebra and Linear Algebra, of the existence and properties of the objects of this research, especially the universal property of points in the plane, which is fundamental in these proofs. / Este trabalho é o resultado de uma pesquisa sobre a reta de Euler e a circunferência dos nove pontos. Foi utilizado o software geogebra para ilustrar as construções geométricas e apresentar algumas atividades práticas para o estudo dos pontos notáveis do triângulo, da reta de Euler e da circunferência dos nove pontos aos estudantes do Ensino Médio. Todavia, o trabalho se baseou nas demonstrações, com o uso da Álgebra Moderna e da Álgebra Linear, da existência e das propriedades do objeto desta pesquisa, sobretudo da propriedade universal dos pontos no plano, fundamental nestas demonstrações.
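The two facts the work proves algebraically can also be checked numerically in a few lines: the circumcenter O, the centroid G and the orthocenter H are collinear with OG = OH/3 (the Euler line), and the nine-point circle has center (O + H)/2 and radius R/2, where R is the circumradius. The sketch below is an added numerical check on an arbitrary triangle, not the GeoGebra activities or the algebraic proofs of the dissertation.

import numpy as np

def circumcenter(A, B, C):
    """Point equidistant from the three vertices, found by solving a 2x2 linear system."""
    M = 2 * np.array([B - A, C - A])
    rhs = np.array([B @ B - A @ A, C @ C - A @ A])
    return np.linalg.solve(M, rhs)

A, B, C = np.array([0., 0.]), np.array([4., 0.]), np.array([1., 3.])
G = (A + B + C) / 3                                # centroid
O = circumcenter(A, B, C)
H = A + B + C - 2 * O                              # orthocenter, from the identity OH = OA + OB + OC
R = np.linalg.norm(A - O)                          # circumradius

# Euler line: O, G, H are collinear and G divides OH in the ratio 1:2.
u, v = G - O, H - O
print(np.isclose(u[0] * v[1] - u[1] * v[0], 0.0))  # zero cross product: collinear
print(np.allclose(u, v / 3))                       # OG = OH / 3

# Nine-point circle: center (O + H)/2 and radius R/2 pass through the side midpoints.
N = (O + H) / 2
mids = [(A + B) / 2, (B + C) / 2, (C + A) / 2]
print(all(np.isclose(np.linalg.norm(m - N), R / 2) for m in mids))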
158

Efficient algorithms for verified scientific computing : Numerical linear algebra using interval arithmetic / Algorithmes efficaces pour le calcul scientifique vérifié : algèbre linéaire numérique et arithmétique par intervalles

Nguyen, Hong Diep 18 January 2011 (has links)
L'arithmétique par intervalles permet de calculer et simultanément vérifier des résultats. Cependant, une application naïve de cette arithmétique conduit à un encadrement grossier des résultats. De plus, de tels calculs peuvent être lents. Nous proposons des algorithmes précis et des implémentations efficaces, utilisant l'arithmétique par intervalles, dans le domaine de l'algèbre linéaire. Deux problèmes sont abordés : la multiplication de matrices à coefficients intervalles et la résolution vérifiée de systèmes linéaires. Pour le premier problème, nous proposons deux algorithmes qui offrent de bons compromis entre vitesse et précision. Pour le second problème, nos principales contributions sont d'une part une technique de relaxation, qui réduit substantiellement le temps d'exécution de l'algorithme, et d'autre part l'utilisation d'une précision étendue en quelques portions bien choisies de l'algorithme, afin d'obtenir rapidement une grande précision. / Interval arithmetic is a means to compute verified results. However, a naive use of interval arithmetic does not provide accurate enclosures of the exact results. Moreover, interval arithmetic computations can be time-consuming. We propose several accurate algorithms and efficient implementations in verified linear algebra using interval arithmetic. Two fundamental problems are addressed, namely the multiplication of interval matrices and the verification of a floating-point solution of a linear system. For the first problem, we propose two algorithms which offer new tradeoffs between speed and accuracy. For the second problem, which is the verification of the solution of a linear system, our main contributions are twofold. First, we introduce a relaxation technique, which reduces drastically the execution time of the algorithm. Second, we propose to use extended precision for a few well-chosen parts of the computations, to gain accuracy without losing much in terms of execution time.
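The enclosure idea behind interval matrix products can be sketched with the standard midpoint-radius formula: for interval matrices [mA +/- rA] and [mB +/- rB], every contained product lies in [mA mB +/- (|mA| rB + rA (|mB| + rB))]. The code below is only this textbook formula, without the directed rounding that a truly verified implementation (as studied in the thesis) must add; the test data are random.

import numpy as np

def interval_matmul(mA, rA, mB, rB):
    """Midpoint-radius product of the interval matrices [mA +/- rA] and [mB +/- rB].
    Returns (mC, rC) such that A @ B lies in [mC - rC, mC + rC] for all point
    matrices A, B taken inside the two input intervals (ignoring rounding errors)."""
    mC = mA @ mB
    rC = np.abs(mA) @ rB + rA @ (np.abs(mB) + rB)
    return mC, rC

# Quick containment check on random data.
rng = np.random.default_rng(2)
mA, mB = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
rA, rB = 1e-3 * rng.random((4, 4)), 1e-3 * rng.random((4, 4))
mC, rC = interval_matmul(mA, rA, mB, rB)
A = mA + rA * rng.uniform(-1, 1, (4, 4))     # arbitrary point matrices inside the intervals
B = mB + rB * rng.uniform(-1, 1, (4, 4))
print(np.all(np.abs(A @ B - mC) <= rC + 1e-12))   # True: the enclosure contains A @ B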
159

Memory-aware algorithms : from multicores to large scale platforms / Algorithmes orientés mémoire : des processeurs multi-cœurs aux plates-formes à grande échelle

Jacquelin, Mathias 20 July 2011 (has links)
Cette thèse s’intéresse aux algorithmes adaptés aux architectures mémoire hiérarchiques, rencontrées notamment dans le contexte des processeurs multi-cœurs. Nous étudions d’abord le produit de matrices sur les processeurs multi-cœurs. Nous modélisons le processeur, bornons le volume de communication, présentons trois algorithmes réduisant ce volume de communication et validons leurs performances. Nous étudions ensuite la factorisation QR, dans le contexte des matrices ayant plus de lignes que de colonnes. Nous revisitons les algorithmes existants afin d’exploiter les processeurs multi-cœurs, analysons leurs chemins critiques, montrons que certains sont asymptotiquement optimaux, et analysons leurs performances. Nous étudions ensuite les applications pipelinées sur une plate-forme hétérogène, le QS 22. Nous modélisons celle-ci et appliquons les techniques d’ordonnancement en régime permanent. Nous introduisons un programme linéaire mixte permettant d’obtenir une solution optimale. Nous introduisons en outre un ensemble d’heuristiques. Puis, nous minimisons la mémoire nécessaire à une application modélisée par un arbre, sur une plate-forme à deux niveaux de mémoire. Nous présentons un algorithme optimal et montrons qu’il existe des arbres tels que les parcours postfixes sont arbitrairement mauvais. Nous étudions alors la minimisation du volume d’E/S à mémoire donnée, montrons que ce problème est NP-complet, et présentons des heuristiques. Enfin, nous comparons plusieurs politiques d’archivage pour BLUE WATERS. Nous introduisons deux politiques d’archivage améliorant les performances de la politique RAIT, modélisons la plate-forme de stockage et simulons son fonctionnement. / This thesis focuses on memory-aware algorithms tailored for hierarchical memory architectures, found for instance within multicore processors. We first study the matrix product on multicore architectures. We model such a processor, and derive lower bounds on the communication volume. We introduce three ad hoc algorithms, and experimentally assess their performance. We then target a more complex operation: the QR factorization of tall matrices. We revisit existing algorithms to better exploit the parallelism of multicore processors. We thus study the critical paths of many algorithms, prove some of them to be asymptotically optimal, and assess their performance. In the next study, we focus on scheduling streaming applications onto a heterogeneous multicore platform, the QS 22. We introduce a model of the platform and use steady-state scheduling techniques so as to maximize the throughput. We present a mixed integer programming approach that computes an optimal solution, and propose simpler heuristics. We then focus on minimizing the amount of required memory for tree-shaped workflows, and target a classical two-level memory system, where I/O operations represent transfers from one level of memory to the other. We propose a new exact algorithm, and show that there exist trees where postorder traversals are arbitrarily bad. We then study the problem of minimizing the I/O volume for a given memory, show that it is NP-hard, and provide a set of heuristics. Finally, we compare archival policies for BLUE WATERS. We introduce two archival policies and adapt the well-known RAIT strategy. We provide a model of the tape storage platform, and use it to assess the performance of the three policies through simulation.
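The tree-shaped workflow part of the abstract can be illustrated with a small sketch that computes the peak memory of a postorder traversal under a simplified model: a node's output stays in memory until its parent executes, and executing a node requires all of its children's outputs plus its own output simultaneously. Children are visited in the classical order of Liu, which is optimal among postorder traversals for this model. The model, the node sizes and the helper names are illustrative assumptions and may differ from the cost model actually used in the thesis.

from dataclasses import dataclass, field

@dataclass
class Node:
    out: float                       # size of the output file this task produces
    children: list = field(default_factory=list)

def postorder_peak(v):
    """Peak memory of the best postorder traversal of the subtree rooted at v.
    Children are visited in non-increasing (subtree peak - output) order,
    which is optimal among postorder traversals (Liu's classical result)."""
    peaks = [postorder_peak(c) for c in v.children]
    outs = [c.out for c in v.children]
    order = sorted(range(len(peaks)), key=lambda i: peaks[i] - outs[i], reverse=True)
    held, peak = 0.0, 0.0
    for i in order:
        peak = max(peak, held + peaks[i])   # traverse child i while earlier outputs are held
        held += outs[i]
    return max(peak, held + v.out)          # finally execute v itself

# A small example tree: the peak is reached while the heaviest subtree is processed.
tree = Node(2, [Node(4, [Node(5), Node(3)]), Node(8)])
print(postorder_peak(tree))                 # 14.0 under this model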
160

Efficient computation with structured matrices and arithmetic expressions / Calcul efficace avec des matrices structurées et des expressions arithmétiques

Mouilleron, Christophe 04 November 2011 (has links)
Le développement de code efficace en pratique pour effectuer un calcul donné est un problème difficile. Cette thèse présente deux situations où nous avons été confronté à ce problème. La première partie de la thèse propose des améliorations au niveau algorithmique dans le cadre de l'algèbre linéaire structurée. Nous montrons d'abord comment étendre un algorithme de Cardinal pour l'inversion de matrices de type Cauchy afin de traiter les autres structures classiques. Cette approche, qui repose essentiellement sur des produits de type « matrice structurée × matrice », conduit à une accélération d'un facteur allant jusqu'à 7 en théorie et constaté en pratique. Ensuite, nous généralisons des travaux sur les matrices de type Toeplitz afin de montrer comment, pour les structures classiques, calculer le produit d'une matrice structurée n×n et de rang de déplacement α par une matrice n×α en Õ(α^(ω-1)n). Cela conduit à des algorithmes en Õ(α^(ω-1)n) pour l'inversion de matrices structurées, sans avoir à passer par des matrices de type Toeplitz. La deuxième partie de la thèse traite de l'implantation d'expressions arithmétiques. Ce sujet soulève de nombreuses questions comme le nombre d'opérations minimum, la vitesse, ou encore la précision des calculs en arithmétique approchée. En exploitant la nature inductive des expressions arithmétiques, il est possible de développer des algorithmes aidant à répondre à ces questions. Nous présentons ainsi plusieurs algorithmes de génération de schémas d'évaluation, de comptage et d'optimisation selon un ou plusieurs critères. Ces algorithmes ont été implanté dans une librairie qui a en autre été utilisée pour accélérer un logiciel de génération de code pour une librairie mathématique, et pour étudier des questions d'optimalité pour le problème de l'évaluation d'un polynôme à coefficients scalaires de petit degré en une matrice. / Designing efficient code in practice for a given computation is a hard task. In this thesis, we tackle this issue in two different situations. The first part of the thesis introduces some algorithmic improvements in structured linear algebra. We first show how to extend an algorithm by Cardinal for inverting Cauchy-like matrices to the other common structures. This approach, which mainly relies on products of the type "structured matrix × matrix", leads to a theoretical speed-up of a factor up to 7 that we also observe in practice. Then, we extend some works on Toeplitz-like matrices and prove that, for any of the common structures, the product of an n×n structured matrix of displacement rank α by an n×α matrix can be computed in Õ(α^(ω-1)n). This leads to direct inversion algorithms in Õ(α^(ω-1)n) , that do not rely on a reduction to the Toeplitz-like case. The second part of the thesis deals with the implementation of arithmetic expressions. This topic raises several issues like finding the minimum number of operations, and maximizing the speed or the accuracy when using some finite-precision arithmetic. Making use of the inductive nature of arithmetic expressions enables the design of algorithms that help to answer such questions. We thus present a set of algorithms for generating evaluation schemes, counting them, and optimizing them according to one or several criteria. 
These algorithms are part of a library that we have developed and used, among other things, in order to decrease the running time of a code generator for a mathematical library, and to study optimality issues about the evaluation of a small degree polynomial with scalar coefficients at a matrix point.
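As an example of the evaluation schemes discussed above, the sketch below compares two classical ways of evaluating a scalar-coefficient polynomial at a matrix point: Horner's rule, which uses about deg(p) matrix products, and Paterson-Stockmeyer, which uses about 2*sqrt(deg(p)) products at the cost of storing a few powers of the matrix. These are standard schemes shown for illustration, not the generation and optimization tools developed in the thesis; the polynomial and matrix are random test data.

import numpy as np

def matpoly_horner(coeffs, A):
    """Evaluate p(A) = sum_k coeffs[k] * A**k by Horner's rule (one matrix product per step)."""
    n = A.shape[0]
    P = coeffs[-1] * np.eye(n)
    for c in reversed(coeffs[:-1]):
        P = P @ A + c * np.eye(n)
    return P

def matpoly_paterson_stockmeyer(coeffs, A):
    """Evaluate p(A) with the Paterson-Stockmeyer scheme: split p into blocks of
    length s ~ sqrt(deg), precompute A^0..A^s, then apply Horner's rule in A^s."""
    n = A.shape[0]
    d = len(coeffs) - 1
    s = max(1, int(round(np.sqrt(d + 1))))
    pows = [np.eye(n), A]
    for _ in range(s - 1):
        pows.append(pows[-1] @ A)              # A^2 .. A^s
    r = d // s
    def block(j):                              # B_j(A) = sum_i coeffs[j*s + i] * A^i
        return sum(coeffs[j * s + i] * pows[i] for i in range(s) if j * s + i <= d)
    P = block(r)
    for j in range(r - 1, -1, -1):
        P = P @ pows[s] + block(j)             # Horner's rule in A^s
    return P

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)) / 5
coeffs = rng.standard_normal(10)               # degree-9 polynomial, c_0 .. c_9
ref = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(coeffs))
print(np.allclose(matpoly_horner(coeffs, A), ref),
      np.allclose(matpoly_paterson_stockmeyer(coeffs, A), ref))   # True True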
