331

Resolução de um problema térmico inverso utilizando processamento paralelo em arquiteturas de memória compartilhada / Resolution of an inverse thermal problem using parallel processing on shared memory architectures

Jonas Laerte Ansoni 03 September 2010 (has links)
Parallel programming has frequently been adopted for developing applications that demand high computational performance. With the advent of multi-core architectures and the existence of several levels of parallelism, it is important to define parallel programming strategies that exploit the processing power of these architectures. In this context, this work evaluates the performance of multi-core architectures, mainly graphics processing units (GPUs) and multi-core CPUs, in the resolution of an inverse thermal problem. Parallel algorithms for the GPU and the CPU were developed using, respectively, the shared-memory programming tools NVIDIA CUDA (Compute Unified Device Architecture) and the POSIX Threads API. The preconditioned conjugate gradient algorithm for solving sparse linear systems was implemented entirely in the GPU's global memory in CUDA. The algorithm was evaluated on two GPU models, which proved more efficient, achieving a speedup of four over the serial version. The POSIX Threads application was evaluated on multi-core CPUs with different microarchitectures. Compiler optimization flags were used to extract more performance from the parallelized code and proved very effective: with them, the parallel code ran about twelve times faster than the unoptimized serial version on the same processor. Thus both the GPU-as-generic-coprocessor approach and the parallel application on multi-core CPUs proved to be efficient tools for solving the inverse thermal problem.
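The preconditioned conjugate gradient kernel at the heart of the implementation above can be sketched in a few lines. The following NumPy version uses a Jacobi (diagonal) preconditioner on a toy tridiagonal system; it is a serial reference sketch, not the thesis's CUDA code (which keeps all vectors in GPU global memory), and the matrix, sizes, and tolerance are illustrative.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for an SPD matrix A.
    M_inv_diag holds 1/diag(A): the Jacobi preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply preconditioner M^{-1} r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        beta = rz_new / rz
        p = z + beta * p
        rz = rz_new
    return x

# Small SPD test system (1-D Laplacian-like tridiagonal matrix).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.allclose(A @ x, b, atol=1e-8))  # True
```

The same loop structure maps onto CUDA: each matrix-vector product, dot product, and axpy becomes a GPU kernel, which is why keeping all vectors resident in global memory matters.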
332

Uma adaptação do MEF para análise em multicomputadores: aplicações em alguns modelos estruturais / Multicomputer finite element method analysis of usual structures models

Valério da Silva Almeida 24 March 1999 (has links)
This work presents an adaptation of conventional sequential finite element method (FEM) computing procedures to multicomputers. A routine is developed to assemble the linear system partitioned among the processors, and the resulting system of equations is solved with the PIM (Parallel Iterative Method) package. The package was adapted to exploit the usual features of FEM systems of equations: sparsity and symmetry. The parallel solution technique is optimized with two preconditioners: a generalized incomplete Cholesky factorization (IC) and POLY(0), i.e. Jacobi. The fully parallelized base algorithm is applied to the analysis of a building floor, and the solution of the system of equations of a three-dimensional truss is also evaluated. Speed-up, efficiency, and timing results are reported for both structural models. In addition, a sequential study compares the performance of the generic IC preconditioner and a preconditioner derived from a truncated, also generalized, Neumann series, using plate-bending and plane-stress structural models.
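The speed-up and efficiency figures reported for the two structural models follow the standard definitions S_p = T_1 / T_p and E_p = S_p / p. A minimal sketch with illustrative timings (not the thesis's measurements):

```python
def speedup(t_serial, t_parallel):
    """Classical speed-up S_p = T_1 / T_p."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Parallel efficiency E_p = S_p / p (1.0 means ideal scaling)."""
    return speedup(t_serial, t_parallel) / p

# Illustrative timings (seconds) on 4 processors.
t1, t4, p = 120.0, 37.5, 4
print(speedup(t1, t4))        # 3.2
print(efficiency(t1, t4, p))  # 0.8
```

Efficiency below 1.0 reflects the communication and synchronization overhead of the partitioned assembly and the parallel iterative solve.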
333

Análise e determinação de custos específicos e consequências econômico-sociais na incorporação da vacina contra meningite e doença meningocócica C conjugada na rotina do Programa Nacional de Imunização/PNI / Analysis and determination of specific costs and socioeconomic consequences in the incorporation of the vaccine against meningitis and Meningococcal Disease C conjugate in the routine national immunization program / NIP

Alexander Itria 14 October 2011 (has links)
Economic evaluations in health, which study the most efficient allocation of resources, have expanded over the last 20 years. For vaccines specifically, economic evaluations of vaccination programs have multiplied as the prices of new vaccines have risen. In this scenario, meningococcal disease remains a condition of extreme importance in the world population, with peculiar characteristics in its manifestations, morbidity, mortality, and incidence across regions. The causes that trigger an epidemic at a given time and place are not sufficiently understood, but the concomitant presence of multiple factors is known to be necessary, involving characteristics of the etiologic agent, the host, and the environment: population susceptibility, favorable climatic conditions, and precarious socioeconomic conditions. This makes primary prevention of the disease difficult and calls for a specific intervention such as vaccination. Meningococcal disease has several complications, chiefly sequelae; the most common are hearing loss, amputations, skin necrosis, and seizures. Through the National Immunization Program (PNI), via the Health Surveillance Secretariat of the Ministry of Health, Brazil included in its Health Technology Assessment agenda local economic evaluations for the introduction of new vaccines into the national vaccination schedule, one of them being the meningococcal C conjugate vaccine. The objective of this thesis is thus to develop a complementary cost-effectiveness study for the meningococcal C conjugate vaccine, including supplementary estimates of additional costs, in order to analyze their impact on the incremental ratios found in the original study, to deepen the measurement of sequela proportions and indirect costs, and to incorporate new cost items. The hypothesis is that measuring and valuing the costs associated with disease sequelae improves the results of the cost-effectiveness study and adds elements to managers' decisions. In the city of Sorocaba, interviews were conducted with patients and their families using routine-expenditure and quality-of-life questionnaires (EuroQol EQ-5D), and the various expenditures incurred by the families, here called "family expenditures", were included in the cost-effectiveness analysis. The thesis found that detailing and including family expenditures in the treatment of people who acquired disabilities as a consequence of sequelae changed the cost-effectiveness ratio of the meningococcal vaccination program. The sensitivity analysis showed that, when extrapolated, these data yield an incremental value even closer to the ideal cost-effectiveness threshold.
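The incremental ratios referred to above are incremental cost-effectiveness ratios (ICERs), the extra cost per extra unit of health effect gained by the vaccination strategy over its comparator. A minimal sketch with invented numbers (not the thesis data):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio:
    extra cost per extra unit of health effect (e.g. per QALY)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Illustrative figures only: vaccination programme vs. no programme.
print(icer(cost_new=1_500_000, cost_old=900_000,
           effect_new=220.0, effect_old=180.0))  # 15000.0
```

Adding previously unmeasured family expenditures changes the numerator of this ratio, which is exactly how the thesis's supplementary cost estimates shift the incremental results.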
334

A parallel version of the preconditioned conjugate gradient method for boundary element equations

Pester, M., Rjasanow, S. 30 October 1998 (has links) (PDF)
A parallel version of preconditioning techniques is developed for matrices arising from the Galerkin boundary element method on two-dimensional domains with Dirichlet boundary conditions. Results obtained from implementations on a transputer network as well as on an nCUBE-2 parallel computer show that iterative solution methods are very well suited to MIMD computers. A comparison of numerical results for iterative and direct solution methods underlines the superiority of iterative methods for large systems.
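The iterative-versus-direct comparison can be sketched in plain NumPy; here a toy dense SPD system stands in for the Galerkin BEM matrices, and a direct LU solve is checked against a plain conjugate gradient (the transputer and nCUBE-2 parallelization is not reproduced).

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=2000):
    """Plain conjugate gradient for a symmetric positive definite system."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Toy well-conditioned SPD system standing in for a Galerkin BEM matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((80, 80))
A = B @ B.T + 80 * np.eye(80)
b = rng.standard_normal(80)

x_direct = np.linalg.solve(A, b)  # direct (LU-based) solve
x_iter = cg(A, b)                 # iterative solve
print(np.allclose(x_direct, x_iter, atol=1e-6))  # True
```

For large systems the iterative solver only ever touches A through matrix-vector products, which is what makes it attractive on distributed-memory MIMD machines where the direct factorization would require far more data movement.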
335

Site-specific glycoconjugate synthesis / Synthèse site-spécifique de glycoconjugués

Bayart, Caroline 08 December 2017 (has links)
Conjugate vaccines were developed because polysaccharide vaccines are not effective in infants and the elderly. They consist of a polysaccharide extracted from the bacterial capsule linked to a carrier protein, which boosts the immune response enough for the vaccine to induce proper protection in everyone. As chemical knowledge and analytical techniques have evolved, vaccines can now be better characterized and their production better controlled. Nevertheless, the conjugation chemistries used to bind the polysaccharide and the carrier protein are not always well defined, which often leads to heterogeneous products. The objectives of this PhD were to study the polysaccharide, the carrier proteins, and new conjugation chemistries to bind the two biomolecules site-specifically; a further challenge was to verify reaction specificity and to characterize the reaction products. Different analytical tools were used to gain a better knowledge of both conjugation partners and to establish an efficient analytical strategy for glycoconjugate characterization. The specificity of the conjugation reactions was induced by using bi-functional linkers that react selectively with one type of amino acid. The linkers' reactivity was first tested on a model peptide, which simplified characterization and allowed both the specificity and the success of the reactions to be checked. Efficient reactions were then tested on different models, from carrier proteins to glycoconjugate vaccines. Of the four reactions tested, one was efficient on every model, from the peptide through to the vaccine. This conjugation chemistry is therefore promising for the development of new conjugate vaccines.
336

Optimization of Cooling Protocols for Hearts Destined for Transplantation

Abdoli, Abas 10 October 2014 (has links)
The design and analysis of conceptually different cooling systems for human heart preservation are numerically investigated. A heart cooling container with the required connections was designed for a normal-size human heart, and a three-dimensional, high-resolution geometric model of the heart obtained from CT-angio data was used for the simulations. Nine cooling designs are introduced in this research. The first (Case 1) used a cooling gelatin only outside the heart. In the second (Case 2), the internal parts of the heart were cooled by pumping a cooling liquid through both the pulmonary and systemic circulation systems. An unsteady conjugate heat transfer analysis was performed to simulate the temperature field variations within the heart during the cooling process. Case 3 simulated the currently used cooling method, in which the coolant is stagnant. Case 4 combined Cases 1 and 2. A linear thermoelasticity analysis was performed to assess the stresses applied to the heart during cooling. In Cases 5 through 9, the coolant solution was used for both internal and external cooling. For external circulation in Cases 5 and 6, two inlets and two outlets were designed on the walls of the cooling container. Case 5 used laminar coolant flow inside and outside the heart; Case 6 studied the effects of turbulent flow. In Case 7, an additional inlet was designed on the container wall to create a jet impinging on the hot region of the heart wall. Unsteady periodic inlet velocities were applied in Cases 8 and 9. The average temperature of the heart in Case 5 was +5.0 °C after 1500 s of cooling. A multi-objective constrained optimization was then performed for Case 5, with the inlet velocities of the two internal circulations and the one external circulation as the three design variables. The three objectives were to minimize the average temperature of the heart, the wall shear stress, and the total volumetric flow rate; the only constraint was to keep the von Mises stress below the ultimate tensile stress of the heart tissue.
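The optimization pattern described above, several competing objectives scalarized into one score, with the stress constraint enforced by rejection, can be sketched abstractly. Everything below is invented for illustration: the toy response surface, the weights, the grid, and the constraint bound merely stand in for the thesis's conjugate heat transfer results.

```python
import itertools

def toy_cooling_model(v_int1, v_int2, v_ext):
    """Stand-in response surface (NOT the thesis CFD model): maps the three
    inlet velocities to (avg_temperature, wall_shear, total_flow, von_mises).
    In the thesis these quantities come from conjugate heat transfer runs."""
    temp = 20.0 / (1.0 + v_int1 + v_int2 + 0.5 * v_ext)  # faster flow cools more
    shear = 2.0 * (v_int1 + v_int2) + v_ext              # faster flow shears more
    flow = v_int1 + v_int2 + v_ext
    stress = 0.4 * shear                                 # proxy for von Mises stress
    return temp, shear, flow, stress

ULTIMATE_TENSILE = 3.0  # illustrative constraint bound

# Weighted-sum scalarization of the three objectives over a coarse
# velocity grid, rejecting any design that violates the stress constraint.
best = None
for v in itertools.product([0.1, 0.5, 1.0], repeat=3):
    temp, shear, flow, stress = toy_cooling_model(*v)
    if stress > ULTIMATE_TENSILE:
        continue
    score = temp + 2.0 * shear + 0.5 * flow
    if best is None or score < best[0]:
        best = (score, v)

print(best)
```

A real study would replace the toy model with simulation outputs (or a surrogate fitted to them) and use a proper multi-objective optimizer rather than a grid sweep, but the constraint-then-scalarize structure is the same.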
337

Adaptive techniques in signal processing and connectionist models

Lynch, Michael Richard January 1990 (has links)
This thesis covers the development of a series of new methods and the application of adaptive filter theory, combined to produce a generalised adaptive filter system which may be used to perform tasks such as pattern recognition. The relevant background adaptive filter theory is discussed in Chapter 1, where methods and results important to the rest of the thesis are derived or referenced. Chapter 2 covers the development of a new adaptive algorithm designed to give faster convergence than the LMS algorithm but which, unlike the Recursive Least Squares family of algorithms, does not require storage of a matrix with n² elements, where n is the number of filter taps. In Chapter 3 a new extension of the LMS adaptive notch filter is derived and applied, giving the notch filter the ability to lock onto and track signals of varying pitch without sacrificing notch depth. This application of the LMS filter is of interest as it demonstrates a time-varying filter solution to a stationary problem. The LMS filter is next extended to the multidimensional case, which allows LMS filters to be applied to image processing. The multidimensional filter is then applied to the problem of image registration, and this new application of the LMS filter is shown to have significant advantages over current image registration methods; its use as a template matcher and pattern recogniser is also considered. Chapter 5 gives a brief review of statistical pattern recognition, and Chapter 6 a review of relevant connectionist models. In Chapter 7 the generalised adaptive filter is derived: an adaptive filter with the ability to model non-linear input-output relationships. The Volterra functional analysis of non-linear systems is presented and combined with adaptive filter methods to give a generalised non-linear adaptive digital filter, which is then considered as a linear adaptive filter operating in a non-linearly extended vector space. This new filter is shown to have desirable properties as a pattern recognition system. Its performance and properties are compared with current connectionist models, with results demonstrated in Chapter 8. In Chapter 9 further mathematical analysis of the networks leads to methods for greatly reducing network complexity for a given problem, by choosing suitable pattern classification indices and allowing the network to define its own internal structure. Chapter 10 considers the robustness of the network to imperfections in its implementation. Chapter 11 finishes the thesis with some conclusions and suggestions for future work.
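The LMS update underlying the filters discussed above is the classic stochastic-gradient step w ← w + μ·e·u. A minimal NumPy sketch of LMS system identification follows; the 3-tap system, signal length, and step size are illustrative, not taken from the thesis.

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """LMS system identification: adapt filter weights w so that
    the filter applied to input x tracks the desired signal d."""
    w = np.zeros(n_taps)
    for i in range(n_taps - 1, len(x)):
        u = x[i - n_taps + 1:i + 1][::-1]  # most recent samples first
        e = d[i] - w @ u                   # a-priori error
        w += mu * e * u                    # stochastic gradient step
    return w

# Identify an unknown 3-tap FIR system from its input/output signals.
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2])        # "unknown" system
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)]        # noiseless desired signal
w = lms_identify(x, d, n_taps=3, mu=0.01)
print(np.round(w, 2))                 # approx. [ 0.5 -0.3  0.2]
```

The thesis's generalised filter replaces the linear regressor u with a non-linearly extended (Volterra) vector, while keeping exactly this update rule, which is why it can still be analysed as a linear adaptive filter in the extended space.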
338

Desenvolvimento de um método de conjugação entre o polissacarídeo capsular sorotipo 1 de Streptococcus pneumoniae e a proteína de superfície pneumocócica A. / Development of a conjugation method between the capsular polysaccharide serotype 1 of Streptococcus pneumoniae and pneumococcal surface protein A.

Luciene Oliveira Machado 23 June 2015 (has links)
Streptococcus pneumoniae is an encapsulated bacterium that causes infectious diseases such as pneumonia, bacteremia, and meningitis, which are among the leading causes of death in children, the elderly, and the immunocompromised, the individuals who constitute the risk group for these infections. Vaccination has been the most effective way to contain them. The advantage of conjugate vaccines over polysaccharide vaccines is their ability to induce a T-dependent immune response, which provides protection even for the risk groups for S. pneumoniae infection. The aim of this project was to establish a protocol for obtaining a conjugate consisting of the S. pneumoniae serotype 1 capsular polysaccharide (PS1) and pneumococcal surface protein A (PspA). The synthesis of the conjugate employed a conjugation methodology new for serotype 1. Evaluation of the humoral immune response induced by the conjugate showed the induction of anti-PS1 IgG by immunization with PS1-PspA.
339

Entwicklung eines mehrstufigen Screening-Verfahrens zur Identifizierung maßgeschneiderter Wirkstofftransporter / Development of a multi-stage screening method for the identification of tailor-made drug carriers

Remmler, Dario 18 July 2019 (has links)
The low water solubility of promising small organic drug candidates, so-called lead structures, is one of the main reasons promising projects fail during early drug development. Various strategies have been developed to bring such candidates to market nonetheless. Besides cost-intensive structural optimization, formulation additives that can solubilize, transport, and selectively release active compounds have come into focus. This work presents a high-throughput screening method that enables fast, low-effort identification of tailor-made binders for water-insoluble, low-molecular-weight drugs, from which solubilizers in the form of peptide-polymer conjugates with defined solubilization and release properties can be realized. Peptide libraries are screened in a two-step process, first for drug binding and then for drug release. Thanks to an innovative on-chip immobilization of the peptide library and the intrinsic fluorescence of the low-molecular-weight drugs, the screening can be performed semi-automatically. Promising peptide sequences can then be sequenced directly on-chip by MALDI-ToF-MS/MS or fragmentation-free methods, and solubilizing peptide-PEG conjugates can be synthesized. In a test system, tailor-made peptide-PEG conjugates with different release properties were realized for a potential Alzheimer's disease drug; their solubilization and release properties were confirmed in a simplified blood plasma model using fluorescence anisotropy and fluorescence correlation spectroscopy. In cell tests with a Neuro-2a cell line, addition of the drug-transporter complexes reduced the formation of the tau protein aggregates that occur in Alzheimer's disease by up to 55 %.
340

Enlarged Krylov Subspace Methods and Preconditioners for Avoiding Communication / Méthodes de sous-espace de krylov élargis et préconditionneurs pour réduire les communications

Moufawad, Sophie 19 December 2014 (has links)
The performance of an algorithm on any architecture depends both on the processing unit's speed at performing floating-point operations (flops) and on the speed of accessing memory and disk. Since the cost of communication is much higher than that of arithmetic operations, and since this gap is expected to keep growing, communication is often the bottleneck in numerical algorithms. To address this problem, recent research has focused on communication-avoiding Krylov subspace methods based on so-called s-step methods. However, very few communication-avoiding preconditioners exist, and this represents a serious limitation of these methods. In this thesis, we present a communication-avoiding ILU0 preconditioner for solving large systems of linear equations (Ax=b) with iterative Krylov subspace methods. Our preconditioner makes it possible to perform s iterations of the iterative method without communication, by applying a heuristic "alternating min-max layers" reordering to the input matrix A, ghosting some of the input data, and performing redundant computation. We also introduce a new approach for reducing communication in Krylov subspace methods that consists of enlarging the Krylov subspace by up to t vectors per iteration, based on a domain decomposition of the graph of A. The enlarged Krylov projection subspace methods converge in fewer iterations and lead to parallelizable algorithms with less communication than standard Krylov methods. We discuss two new versions of Conjugate Gradient: multiple search direction with orthogonalization CG (MSDO-CG) and long recurrence enlarged CG (LRE-CG).
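The s-step idea underlying communication-avoiding Krylov methods can be illustrated in plain NumPy, leaving out the distributed-memory machinery that is the thesis's actual contribution: compute a block of s Krylov basis vectors with one kernel, then orthogonalize the whole block at once (the role TSQR plays in CA-Krylov codes). The matrix and sizes below are illustrative.

```python
import numpy as np

def krylov_basis(A, r, s):
    """Matrix powers kernel: return [r, Ar, ..., A^{s-1} r] as columns.
    In a CA solver this whole block is computed with a single round of
    communication instead of one round per matrix-vector product."""
    V = np.empty((len(r), s))
    V[:, 0] = r
    for j in range(1, s):
        V[:, j] = A @ V[:, j - 1]
    return V

# Toy SPD matrix (1-D Laplacian) and starting residual.
n, s = 30, 4
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r = np.ones(n)

V = krylov_basis(A, r, s)
Q, _ = np.linalg.qr(V)          # orthonormalize the block in one step
print(np.allclose(Q.T @ Q, np.eye(s)))  # True
```

Enlarged Krylov methods like MSDO-CG and LRE-CG push the same idea further: instead of powers of a single residual, the subspace is grown by several vectors per iteration obtained from a domain-decomposition split of the residual.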
