11

Automatic program generation for scientific computing

Fu, Zhe. January 1900 (has links)
Thesis (Ph. D.)--Oregon State University, 2007. / Printout. Includes bibliographical references (leaves 132-137). Also available on the World Wide Web.
12

Squelettes algorithmiques asynchrones : application aux langages orientés domaine / Asynchronous algorithmic skeletons : application to domain specific languages

Tran tan, Antoine 08 October 2015 (has links)
In this thesis, we present developments of the approach used by the ParSys team at LRI to automatically translate scientific codes written in a Matlab-inspired domain-specific language into high-performance production codes. To reach this level of performance, we combine C++ template metaprogramming, which analyses each expression to detect parallelism opportunities, with asynchronous parallel programming, which makes near-optimal use of the available resources of multi-core machines. To link these two stages of the code generation process, we implement multi-level algorithmic skeletons. Our tools have been integrated into the NT2 library and evaluated on real-world scientific applications.
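As a rough illustration of the expression-template technique this abstract refers to (a minimal sketch with invented names, not NT2's actual API), the following C++ fragment shows how an arithmetic expression can be captured as a type at compile time and evaluated in a single deferred loop — the natural place to plug in an asynchronous skeleton:

```cpp
// Minimal expression-template sketch (hypothetical; not NT2's API):
// operator+ builds a lightweight expression node instead of computing
// immediately, so the whole expression can be inspected before the
// library decides how (and on how many cores) to evaluate it.
#include <cstddef>
#include <vector>
#include <iostream>

template <class L, class R>
struct AddExpr {
    const L& lhs;
    const R& rhs;
    double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
    std::size_t size() const { return lhs.size(); }
};

struct Vec {
    std::vector<double> data;
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Evaluation is deferred until assignment; this single loop is where a
    // real library would dispatch to a parallel or asynchronous skeleton.
    template <class E>
    Vec& operator=(const E& e) {
        data.resize(e.size());
        for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
        return *this;
    }
};

// A real library would constrain this overload; kept unconstrained for brevity.
template <class L, class R>
AddExpr<L, R> operator+(const L& a, const R& b) { return {a, b}; }

int main() {
    Vec a{{1, 2, 3}}, b{{4, 5, 6}}, c;
    c = a + b;                  // builds an AddExpr, evaluated in one pass
    std::cout << c[2] << '\n';  // prints 9
}
```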
13

Changes in Fish Diversity Due To Hydrologic and Suspended Sediment Variability in the Sandusky River, Ohio: A Genetic Programming Approach

Sanderson, Louis M. 29 July 2009 (has links)
No description available.
14

Understanding and Maintaining C++ Generic Libraries

Sutton, Andrew 09 July 2010 (has links)
No description available.
15

Software Engineering Best Practices for Parallel Computing Development

Patney, Vikas January 2010 (has links)
In today’s computer age, numerical simulations are replacing traditional laboratory experiments. Researchers around the world use advanced software and multiprocessor computer technology to perform experiments and analyse the simulation results to advance their respective fields. With a wide variety of tools and technologies available, choosing appropriate methodologies for developing simulation software can be a tedious and time-consuming task for a researcher without a computer-science background. The research in this thesis addresses the use of the Message Passing Interface (MPI) with object-oriented programming techniques, discusses methodologies suitable for scientific computing, and proposes a customized software engineering development model.
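As a purely illustrative sketch (not code from the thesis), the following C++ program shows one way MPI can be combined with object-oriented techniques: a small RAII wrapper owns initialization, finalization, and rank bookkeeping, so a parallel reduction reads like ordinary object-based code.

```cpp
// Illustrative sketch: wrapping the MPI C API in a small RAII class so that
// initialization/finalization are tied to object lifetime rather than
// scattered calls. Build with an MPI compiler wrapper (e.g. mpic++).
#include <mpi.h>
#include <iostream>

class MpiEnvironment {
public:
    MpiEnvironment(int& argc, char**& argv) {
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank_);
        MPI_Comm_size(MPI_COMM_WORLD, &size_);
    }
    ~MpiEnvironment() { MPI_Finalize(); }
    int rank() const { return rank_; }
    int size() const { return size_; }
private:
    int rank_ = 0;
    int size_ = 1;
};

int main(int argc, char** argv) {
    MpiEnvironment env(argc, argv);
    double local = static_cast<double>(env.rank());  // each process contributes its rank
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (env.rank() == 0)
        std::cout << "sum of ranks over " << env.size()
                  << " processes: " << total << '\n';
}
```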
16

Multi-Architectural Support : A Generic and Generative Approach / Support multi-architectural : une approche générique et générative

Estérie, Pierre 20 June 2014 (has links)
The constantly increasing need for computing power has pushed the development of parallel architectures. Scientific computing relies on the performance of such architectures to produce scientific results, yet programming applications that take advantage of these computing systems efficiently remains a non-trivial task. In this thesis, we present a new methodology for designing architecture-aware software: the AA-DEMRAL (Architecture Aware DEMRAL) methodology. It aims at simplifying the development of parallel programming tools with multi-architectural support through a generic and generative approach. We then present three high-level programming tools that rely on this approach. First, we introduce the Boost.Dispatch library, which provides a way to develop software based on the AA-DEMRAL methodology; Boost.Dispatch is a generic C++ framework for architecture-aware function dispatching. Then, we present two C++ template libraries implemented as architecture-aware DSELs which assess the AA-DEMRAL methodology through the use of Boost.Dispatch: Boost.SIMD, which provides a high-level API for SIMD extensions, and NT2, which offers a Matlab-like interface with support for multi-core and SIMD-based systems. Finally, we assess the performance of these libraries and the validity of our new methodology through benchmarks on reference applications.
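For readers unfamiliar with architecture-aware dispatching, a minimal sketch of the general idea follows; it is hypothetical and does not reproduce the actual Boost.Dispatch interface. The architecture is encoded as a tag type, and ordinary overload resolution selects an implementation at compile time.

```cpp
// Minimal tag-dispatch sketch (hypothetical; not the Boost.Dispatch API):
// the target architecture is a type, so overload resolution picks the
// matching kernel with zero run-time cost.
#include <cstddef>
#include <iostream>

struct scalar_tag {};   // plain sequential code path
struct simd_tag   {};   // path that would use vector units

// Implementation selected when no SIMD support is assumed.
double dot(const double* a, const double* b, std::size_t n, scalar_tag) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i) s += a[i] * b[i];
    return s;
}

// Placeholder for a vectorized kernel; a real library would use intrinsics
// or a SIMD abstraction here instead of this scalar loop.
double dot(const double* a, const double* b, std::size_t n, simd_tag) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i) s += a[i] * b[i];
    return s;
}

// The "current architecture" tag would normally be derived from compiler
// macros (e.g. __AVX__); it is fixed here for the example.
using current_arch = scalar_tag;

double dot(const double* a, const double* b, std::size_t n) {
    return dot(a, b, n, current_arch{});
}

int main() {
    double x[] = {1, 2, 3}, y[] = {4, 5, 6};
    std::cout << dot(x, y, 3) << '\n';   // prints 32
}
```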
17

Simplifying the Analysis of C++ Programs

Solodkyy, Yuriy 16 December 2013 (has links)
Based on our experience of working with different C++ front ends, this thesis identifies numerous problems that complicate the analysis of C++ programs along the entire spectrum of analysis applications. We utilize library, language, and tool extensions to address these problems and offer solutions to many of them. In particular, we present efficient, expressive and non-intrusive means of dealing with abstract syntax trees of a program, which together render the visitor design pattern obsolete. We further extend C++ with open multi-methods to deal with the broader expression problem. Finally, we offer two techniques, one based on refining the type system of a language and the other on abstract interpretation, both of which allow developers to statically ensure or verify various run-time properties of their programs without having to deal with the full language semantics or even the abstract syntax tree of a program. Together, the solutions presented in this thesis put the verification of properties of interest about C++ programs within reach of average language users.
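The open multi-methods mentioned above are a language extension and cannot be shown in standard C++; as a hedged approximation of the same intent, the following C++17 sketch dispatches on the dynamic kind of an AST node with std::variant and std::visit rather than a hand-written visitor hierarchy.

```cpp
// Illustrative sketch (standard C++17, not the thesis's open multi-methods
// extension): a sum type of AST nodes plus std::visit replaces accept()
// methods and a Visitor base class.
#include <iostream>
#include <memory>
#include <type_traits>
#include <variant>

struct Literal;
struct Add;
using Expr = std::variant<Literal, Add>;   // recursive variant via pointer indirection

struct Literal { int value; };
struct Add { std::shared_ptr<Expr> lhs, rhs; };

int eval(const Expr& e) {
    return std::visit([](const auto& node) -> int {
        using T = std::decay_t<decltype(node)>;
        if constexpr (std::is_same_v<T, Literal>)
            return node.value;
        else
            return eval(*node.lhs) + eval(*node.rhs);   // Add node
    }, e);
}

int main() {
    auto lit = [](int v) { return std::make_shared<Expr>(Literal{v}); };
    Expr e = Add{lit(2), lit(3)};
    std::cout << eval(e) << '\n';   // prints 5
}
```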
18

以AspectFun探討模組化型態擴充與泛型程式設計 / A study on modular type extension and generic programming using AspectFun

陳政宏, Chen, Cheng Hung Unknown Date (has links)
AspectFun is an aspect-oriented functional language with a Haskell-like syntax. This thesis presents a study of modular type extension and generic programming using AspectFun. First, we compare the feasibility of using aspects and Haskell's type classes to address the type-extension requirements stated in the well-known expression problem, which calls for language mechanisms that support type-safe program extension in both the dimension of data types and that of their associated operations, without code duplication or rewriting of existing code. Second, we investigate how to use aspects to support generic programming in a modular manner. Generic programming here refers to a form of programming in which a function takes a type as an argument and its behaviour depends on the structure of this type; the type argument, which may be explicit or implicit, represents the type of the values to which the function is applied or which it returns. We show that aspects can do better than type classes in supporting generic programming. In particular, we extend AspectFun with existential types and polymorphic mutual recursion to achieve this result.
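The thesis works in AspectFun, but the tension behind the expression problem can be sketched in any language; the following C++ fragment (an illustration only, unrelated to the thesis's code) shows that adding a new operation over a fixed set of node types is easy with free functions, whereas adding a new node type would force every such operation to be revisited.

```cpp
// Expression-problem illustration: the new operation `show` is added later
// without editing Lit, Add, or eval; adding a new node type (say Neg) would
// instead require a new overload of every existing operation.
#include <iostream>
#include <string>

struct Lit { int value; };
struct Add { Lit lhs, rhs; };          // kept non-recursive for brevity

// Existing operation: evaluation.
int eval(const Lit& e) { return e.value; }
int eval(const Add& e) { return eval(e.lhs) + eval(e.rhs); }

// New operation added afterwards, non-intrusively.
std::string show(const Lit& e) { return std::to_string(e.value); }
std::string show(const Add& e) { return show(e.lhs) + " + " + show(e.rhs); }

int main() {
    Add e{Lit{1}, Lit{2}};
    std::cout << show(e) << " = " << eval(e) << '\n';   // prints "1 + 2 = 3"
}
```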
19

The problem of tuning metaheuristics as seen from a machine learning perspective

Birattari, Mauro 20 December 2004 (has links)
A metaheuristic is a generic algorithmic template that, once properly instantiated, can be used for finding high-quality solutions of combinatorial optimization problems. For obtaining a fully functioning algorithm, a metaheuristic needs to be configured: typically some modules need to be instantiated and some parameters need to be tuned. For the sake of precision, we use the expression parametric tuning to refer to the tuning of numerical parameters, either continuous or discrete but in any case ordinal. On the other hand, we use the expression structural tuning to refer to the problem of defining which modules should be included and, in general, to the problem of tuning parameters that are either boolean or categorical. Finally, with tuning we refer to the composite structural and parametric tuning.

Tuning metaheuristics is a very sensitive issue both in practical applications and in academic studies. Nevertheless, a precise definition of the tuning problem is missing in the literature. In this thesis, we argue that the problem of tuning a metaheuristic can be profitably described and solved as a machine learning problem.

Indeed, looking at the problem of tuning metaheuristics from a machine learning perspective, we are in the position of giving a formal statement of the tuning problem and of proposing an algorithm, called F-Race, for tackling the problem itself. Moreover, from this standpoint, we are able to highlight and discuss some catches and faults in the current research methodology in the metaheuristics field, and to propose some guidelines.

The thesis contains experimental results on the use of F-Race and some examples of practical applications. Among others, we present a feasibility study carried out by the German-based software company SAP, concerning the possible use of F-Race for tuning a commercial computer program for vehicle routing and scheduling problems. Moreover, we discuss the successful use of F-Race for tuning the best-performing algorithm submitted to the International Timetabling Competition organized in 2003 by the Metaheuristics Network and sponsored by PATAT, the international series of conferences on the Practice and Theory of Automated Timetabling. / Doctorat en sciences appliquées
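To make the racing idea concrete, here is a deliberately simplified sketch of a racing loop in C++. Note that the actual F-Race eliminates candidates with the Friedman rank-based statistical test, whereas this toy version merely drops configurations whose running mean cost trails the best by a fixed margin; all names and numbers below are invented for illustration.

```cpp
// Racing-style tuning sketch (simplified stand-in for F-Race): candidate
// configurations are evaluated instance by instance and clearly dominated
// ones are eliminated before more computation is spent on them.
#include <algorithm>
#include <iostream>
#include <limits>
#include <random>
#include <vector>

struct Candidate {
    double parameter;       // e.g. a cooling rate or pheromone weight
    double total_cost = 0;  // accumulated over the instances seen so far
    bool   alive = true;
};

// Stand-in for "run the metaheuristic with this configuration on instance i".
double run_on_instance(double parameter, int instance, std::mt19937& rng) {
    (void)instance;  // unused in this toy cost model
    std::normal_distribution<double> noise(0.0, 0.1);
    const double optimum = 0.7;  // pretend best setting
    return (parameter - optimum) * (parameter - optimum) + noise(rng);
}

int main() {
    std::mt19937 rng(42);
    std::vector<Candidate> cands;
    for (double p = 0.1; p <= 1.0; p += 0.1) cands.push_back({p});

    const int max_instances = 30;
    const double margin = 0.05;

    for (int i = 0; i < max_instances; ++i) {
        double best_mean = std::numeric_limits<double>::infinity();
        for (auto& c : cands) {
            if (!c.alive) continue;
            c.total_cost += run_on_instance(c.parameter, i, rng);
            best_mean = std::min(best_mean, c.total_cost / (i + 1));
        }
        // Eliminate dominated configurations before spending more runs on them.
        for (auto& c : cands)
            if (c.alive && c.total_cost / (i + 1) > best_mean + margin)
                c.alive = false;
    }
    for (const auto& c : cands)
        if (c.alive) std::cout << "surviving parameter: " << c.parameter << '\n';
}
```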
20

Méthodes de génération automatique de code appliquées à l’algèbre linéaire numérique dans le calcul haute performance / Automatic code generation methods applied to numerical linear algebra in high performance computing

Masliah, Ian 26 September 2016 (has links)
Parallelism in today's computer architectures is ubiquitous, from supercomputers and workstations to portable devices such as smartphones. Exploiting these systems efficiently for a specific application requires a multidisciplinary effort involving domain-specific languages (DSLs), code generation and optimization techniques, and application-specific numerical algorithms. In this thesis, we present a high-level programming method that takes into account the features of heterogeneous architectures and the properties of matrices to build a generic dense linear algebra solver. Our programming model supports both implicit and explicit data transfers to and from general-purpose graphics processing units (GPGPUs) and integrated graphics processors (IGPs). As GPUs have become an asset in high-performance computing, incorporating their use in general solvers is an important issue, and recent architectures such as IGPs require further knowledge to be programmed efficiently. Our methodology aims at simplifying development on parallel architectures through the use of high-level programming techniques. As an example, we developed a mixed-precision least-squares solver based on semi-normal equations that cannot be found in current libraries; it achieves performance similar to that of other mixed-precision algorithms. We then extend our approach to a multi-stage programming model that alleviates the interoperability problems between the CPU and GPU programming models. Our multi-stage approach is used to automatically generate GPU code from CPU-based element-wise expressions and parallel skeletons while guaranteeing type-safe program generation. We show that this work can be applied to other recent architectures and algorithms, and the resulting routines have been integrated into the NT2 C++ library. Finally, we investigate how to apply high-level programming techniques to batched computations and tensor contractions. We start by explaining how to design a simple data container using modern C++14 programming techniques. Then, we study the issues around batched computation, memory locality, and code vectorization in order to implement a highly optimized matrix-matrix product for small sizes using SIMD instructions. By combining a high-level programming approach with advanced parallel programming techniques, we show that we can outperform state-of-the-art numerical libraries.
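As a small, hedged illustration of the statically sized matrix idea (not the thesis's implementation, which relies on explicit SIMD instructions and batched execution), encoding the dimensions in the type lets the compiler fully unroll and auto-vectorize the fixed-size loops:

```cpp
// Fixed-size matrix product sketch: dimensions are template parameters, so
// all loop bounds are compile-time constants and the compiler can unroll
// and auto-vectorize them. A real batched/SIMD kernel would go further.
#include <array>
#include <cstddef>
#include <iostream>

template <std::size_t Rows, std::size_t Cols>
struct Matrix {
    std::array<double, Rows * Cols> data{};
    double&       operator()(std::size_t i, std::size_t j)       { return data[i * Cols + j]; }
    const double& operator()(std::size_t i, std::size_t j) const { return data[i * Cols + j]; }
};

// C = A * B for sizes known at compile time.
template <std::size_t M, std::size_t K, std::size_t N>
Matrix<M, N> matmul(const Matrix<M, K>& a, const Matrix<K, N>& b) {
    Matrix<M, N> c;
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t k = 0; k < K; ++k)   // k-outer order keeps the j loop vectorizable
            for (std::size_t j = 0; j < N; ++j)
                c(i, j) += a(i, k) * b(k, j);
    return c;
}

int main() {
    Matrix<2, 3> a; Matrix<3, 2> b;
    for (std::size_t i = 0; i < 2; ++i)
        for (std::size_t j = 0; j < 3; ++j) { a(i, j) = i + 1; b(j, i) = j + 1; }
    auto c = matmul(a, b);
    std::cout << c(0, 0) << ' ' << c(1, 1) << '\n';   // prints "6 12"
}
```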
