131

Directed homotopy and homology theories for geometric models of true concurrency / Théories homotopiques et homologiques dirigées pour des modèles géométriques de la vraie concurrence

Dubut, Jérémy 11 September 2017 (has links)
The main purpose of directed algebraic topology is to study systems that evolve with time through their geometry. The topic emerged in computer science, more particularly in true concurrency, where Pratt introduced higher-dimensional automata (HDA) in 1991 (the idea of a geometry of concurrency can actually be traced back to Dijkstra in 1965). Those automata are geometric by nature: every set of n processes executing independent actions in parallel can be modeled by an n-cube, and such an automaton then gives rise to a topological space, obtained by gluing those cubes together. This space naturally carries a direction of time coming from the execution flow. It then seems natural to use tools from algebraic topology to study those spaces: paths model executions, and homotopies of paths, that is, continuous deformations of paths, model equivalence of executions modulo the scheduling of independent actions; but all those notions must preserve the direction of time in one way or another. This directedness brings many complications, and the theory must essentially be rebuilt from the ground up. In this thesis, I develop homotopy and homology theories for these directed spaces.
First, my directed homotopy theory is based on deformation retracts, that is, continuous deformations of a big space onto a smaller one, following directed paths that are inessential, meaning that they do not change the homotopy type of the spaces of executions. This theory is related to categories of components and to higher categories. Secondly, my directed homology theory follows the idea that we must look at the spaces of executions and at how they evolve with time. This temporal evolution is handled by defining the homology as a diagram of spaces of executions and by comparing such diagrams using a notion of bisimulation. This homology theory has many nice properties: it is computable on simple spaces, it is an invariant of our homotopy theory, it is invariant under simple action refinements, and it has a theory of exact sequences.
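As an illustration of the cube construction described above, here is a minimal, purely illustrative Python sketch (not taken from the thesis): independent actions span a cube whose monotone edge paths are the interleavings, and filling the square between two actions is what identifies their two schedules as equivalent executions.

```python
from itertools import permutations

def interleavings(actions):
    """Executions of pairwise-independent actions: the monotone edge paths
    through the n-cube spanned by those actions."""
    return {tuple(p) for p in permutations(actions)}

def square(a, b, filled=True):
    """The HDA for two actions a, b: a square whose 2-cell, when present,
    witnesses their independence and makes its two boundary paths equivalent."""
    return {"paths": {(a, b), (b, a)}, "dihomotopic": filled}

print(interleavings(["a", "b", "c"]))   # 6 schedules = edge paths of the 3-cube
print(square("a", "b", filled=True))    # a.b and b.a identified
print(square("p", "v", filled=False))   # e.g. a shared lock: schedules stay distinct
```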
132

Runtime Verification and Debugging of Concurrent Software

Zhang, Lu 29 July 2016 (has links)
Our reliance on software has grown rapidly over the past decades as the pervasive use of computers and software has penetrated not only our daily lives but also many critical applications. As the computational power of multi-core processors and other parallel hardware keeps increasing, concurrent software that exploits this parallel hardware becomes crucial for achieving high performance. However, developing correct and efficient concurrent software is a difficult task for programmers due to the inherent nondeterminism in its execution. As a result, concurrency-related software bugs are among the most troublesome in practice and have caused severe problems in recent years. In this dissertation, I propose a series of new and fully automated methods for verifying and debugging concurrent software. They cover the detection, prevention, classification, and repair of some important types of bugs in the implementation of concurrent data structures and client-side web applications. These methods can be adopted at various stages of the software development life cycle to help programmers write concurrent software correctly as well as efficiently. / Ph. D.
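To make the nondeterminism concrete, here is a small hypothetical Python sketch (not from the dissertation) of an atomicity violation: the final counter value depends on the thread schedule, which is exactly what makes such bugs hard to reproduce and debug.

```python
import threading

counter = 0

def worker(n):
    global counter
    for _ in range(n):
        tmp = counter      # read
        counter = tmp + 1  # write: another thread may have updated counter in between

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

# Expected 400000, but the unsynchronized update often loses increments and
# prints a different value on each run, depending on how threads interleave.
print(counter)
```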
133

Safe Concurrent Programming and Execution

Pyla, Hari Krishna 05 March 2013 (has links)
The increasing prevalence of multi- and many-core processors has brought the issues of concurrency and parallelism to the forefront of everyday computing. Even for applications amenable to traditional parallelization techniques, the subtleties of concurrent programming are known to introduce concurrency bugs. Because of the potential for concurrency bugs, programmers find it hard to write correct concurrent code. To take full advantage of parallel shared-memory platforms, application programmers need safe and efficient mechanisms that can support a wide range of parallel applications. In addition, a large body of applications is inherently hard to parallelize; their data and control dependencies impose execution-order constraints that preclude the use of traditional parallelization techniques. Sensitive to their input data, a substantial number of applications fail to scale well, leaving cores idle. To improve the performance of such applications, application programmers need effective mechanisms that can fully leverage multi- and many-core architectures. These challenges stand in the way of realizing the true potential of emerging many-core platforms. The techniques described in this dissertation address these challenges. Specifically, this dissertation contributes techniques to transparently detect and eliminate several kinds of concurrency bugs, including deadlocks, asymmetric write-write data races, priority inversion, livelocks, order violations, and bugs that stem from the presence of asynchronous signaling and locks. A second major contribution of this dissertation is a programming framework that exploits coarse-grain speculative parallelism to improve the performance of otherwise hard-to-parallelize applications. / Ph. D.
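As a concrete instance of one bug class targeted here, the following hypothetical Python sketch (not from the dissertation) shows a lock-order deadlock; runtime systems like the ones described can detect the cyclic wait and recover, for example by rolling one thread back.

```python
import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer_ab():
    with lock_a:
        time.sleep(0.1)      # widen the window for the problematic interleaving
        with lock_b:         # blocks: the other thread holds lock_b
            pass

def transfer_ba():
    with lock_b:
        time.sleep(0.1)
        with lock_a:         # blocks: the other thread holds lock_a -> cycle
            pass

t1 = threading.Thread(target=transfer_ab)
t2 = threading.Thread(target=transfer_ba)
# Starting both threads typically hangs forever: each holds one lock and waits
# for the other, the circular wait that deadlock detectors look for.
```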
134

Modeling and Runtime Systems for Coordinated Power-Performance Management

Li, Bo 28 January 2019 (has links)
Emerging systems in high-performance computing (HPC) must maximize efficiency to meet the Department of Energy's goal of a 20-40 megawatt power budget for one exaflop. To optimize efficiency, these systems provide multiple power-performance control techniques to throttle different system components and to scale concurrency. In this dissertation, we focus on three throttling techniques: CPU dynamic voltage and frequency scaling (DVFS), dynamic memory throttling (DMT), and dynamic concurrency throttling (DCT). We first conduct an empirical analysis of the performance and energy trade-offs of different architectures under these throttling techniques. We show their impact on performance and energy consumption on Intel x86 systems with Intel Xeon Phi accelerators and an Nvidia general-purpose graphics processing unit (GPGPU), and we identify the trade-offs and the potential for improving efficiency. Furthermore, we propose a parallel performance model for coordinating DVFS, DMT, and DCT simultaneously. We present a multivariate linear regression-based approach that approximates the impact of DVFS, DMT, and DCT on performance for performance prediction. Validation using 19 HPC applications/kernels on two architectures (Intel x86 and IBM BG/Q) shows up to 7% and 17% prediction error, respectively. Thereafter, we develop metrics for capturing the performance impact of DVFS, DMT, and DCT. We apply an artificial neural network model to approximate the nonlinear effects on performance and present a corresponding runtime control strategy for power capping. Our validation using 37 HPC applications/kernels shows up to a 20% performance improvement under a given power budget compared with the Intel RAPL-based method. / Ph. D. / System efficiency on high-performance computing (HPC) systems is the key to achieving the power budget goal for exascale supercomputers. Techniques for adjusting the performance of different system components can help accomplish this goal by dynamically controlling system performance according to application behavior. In this dissertation, we focus on three techniques: adjusting CPU performance, adjusting memory performance, and adjusting the number of threads used to run parallel applications. First, we profile the performance and energy consumption of different HPC applications on both Intel systems with accelerators and IBM BG/Q systems. We explore the performance and energy trade-offs under these techniques and provide optimization insights. Furthermore, we propose a parallel performance model that accurately captures the impact of these techniques on performance in terms of job completion time. We present an approximation approach for performance prediction; the approximation has up to 7% and 17% prediction error on Intel x86 and IBM BG/Q systems, respectively, across 19 HPC applications. Thereafter, we apply the performance model in a runtime system designed to improve performance under a given power budget. Our runtime strategy achieves up to a 20% performance improvement over the baseline method.
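The regression idea can be sketched in a few lines of Python; the configuration names and numbers below are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

# Observed configurations: (cpu_ghz, mem_throttle_level, threads) -> runtime (s)
X = np.array([[2.4, 0, 16], [1.8, 2, 16], [2.4, 0, 8], [1.8, 4, 8]], dtype=float)
t = np.array([10.2, 12.9, 14.1, 19.5])

# Least-squares fit of t ~ b0 + b1*freq + b2*mem_throttle + b3*threads
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)

def predict(freq, mem, threads):
    """Predicted runtime for an untested (DVFS, DMT, DCT) configuration."""
    return coef @ np.array([1.0, freq, mem, threads])

# A runtime system would evaluate candidates under the power cap and pick the fastest.
print(predict(2.1, 1, 12))
```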
135

Automatic Discovery and Exposition of Parallelism in Serial Applications for Compiler-Inserted Runtime Adaptation

Greenland, David A. 25 May 2012 (has links) (PDF)
Compiler-Inserted Runtime Adaptation (CIRA) is a compilation and runtime adaptation strategy with great potential for increasing performance on multicore systems. In this strategy, the compiler inserts directives into the application that adapt it at runtime. Its ability to overcome the obstacles of architectural and environmental diversity, coupled with its flexibility to work with many programming languages and styles of applications, makes it a very powerful tool. However, it is not complete; many pieces are still needed to accomplish these lofty goals. This work describes the automatic discovery of the parallelism inherent in an application and the generation of an intermediate representation that exposes that parallelism. On six benchmark applications, this work shows that a significant amount of parallelism that was not initially apparent can be discovered automatically, and that this parallelism can then be exposed in a representation that is also generated automatically. This is accomplished by a series of analysis and transformation passes with only minimal programmer-inserted directives. This series of passes forms a necessary part of the CIRA toolchain called the concurrency compiler, which demonstrates that a representation with exposed parallelism and locality can be generated by a compiler and lays the groundwork for future, more powerful concurrency compilers. This work also describes the extension of the intermediate representation to support hierarchy, a prerequisite for the creation of the concurrency compiler. The extension makes the representation capable of describing many more applications much more effectively and allows much more of the parallelism discovered by the concurrency compiler to be stored in the representation.
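The notion of a hierarchical representation of exposed parallelism can be illustrated with a small hypothetical Python sketch (this is not the CIRA intermediate representation itself): tasks carry data dependences, a task can nest a sub-graph, and any tasks whose dependences are satisfied may run concurrently.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    deps: list = field(default_factory=list)       # tasks that must finish first
    children: list = field(default_factory=list)   # nested sub-graph (hierarchy)

load  = Task("load_tile")
fft   = Task("fft_tile",   deps=[load])
scale = Task("scale_tile", deps=[load])            # independent of fft -> parallel
loop  = Task("per_tile_loop", children=[load, fft, scale])

def ready(tasks, done):
    """Names of tasks whose dependences are all finished; these may run concurrently."""
    return [t.name for t in tasks
            if t.name not in done and all(d.name in done for d in t.deps)]

print(ready(loop.children, set()))             # ['load_tile']
print(ready(loop.children, {"load_tile"}))     # ['fft_tile', 'scale_tile']
```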
136

Efficient, Practical Dynamic Program Analyses for Concurrency Correctness

Cao, Man 15 August 2017 (has links)
No description available.
137

Improving the Performance of Smartphone Apps with Soft Hang Bug Detection and Dynamic Resource Management

Brocanelli, Marco 30 October 2018 (has links)
No description available.
138

Efficient Runtime Support for Reliable and Scalable Parallelism

Zhang, Minjia January 2016 (has links)
No description available.
139

SYMBOLIC ANALYSIS OF WEAK CONCURRENCY SEMANTICS IN MODERN DATABASE PROGRAMS

Kiarash Rahmani (13171128) 28 July 2022 (has links)
The goal of this dissertation is to design a collection of techniques and tools that enable the ease of programming under the traditional strong concurrency guarantees, without sacrificing the performance offered by modern distributed database systems. Our main thesis is that language-centric reasoning can help developers efficiently identify and eliminate concurrency anomalies in modern database programs, and we have demonstrated that it results in faster and safer database programs.
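The kind of anomaly such language-centric reasoning targets can be illustrated with a small hypothetical Python sketch (not from the dissertation): under weak isolation, two concurrent withdrawals can each read the same snapshot, and the later commit silently overwrites the earlier one, a classic lost update.

```python
balance = {"acct": 100}

def withdraw(snapshot, amount):
    """Each transaction computes its write from its own snapshot,
    not from the latest committed state."""
    return snapshot - amount if snapshot >= amount else snapshot

snap = balance["acct"]          # both transactions read balance = 100
t1_write = withdraw(snap, 80)   # transaction 1 commits 20
t2_write = withdraw(snap, 80)   # transaction 2 also sees 100 and also commits 20
balance["acct"] = t2_write      # the later commit overwrites the earlier one

print(balance)   # {'acct': 20}: 160 was handed out, yet the balance fell by only 80
```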
140

An Experimental Implementation of Action-Based Concurrency

Cui, Xiao-Lei 01 1900 (has links)
This thesis reports on an implementation of an action-based model for concurrent programming. Concurrency is expressed by allowing objects to have actions, each with a guard and a body. Each action has its own execution context, and concurrent execution is realized when program execution is happening in more than one context at a time. Two actions of different objects can run concurrently, and they are synchronized whenever a shared object is accessed by both actions at the same time. The appeal of this model is that it provides a conceptually simple framework for designing and analyzing concurrent programs. To experiment with action-based concurrency, we present a small language, ABC Pascal, which is an experimental proof of feasibility of the model and is also meant to help identify the issues involved in implementing it with reasonable efficiency. It extends a subset of Pascal that supports basic sequential programming constructs and provides action-based concurrency as the action-based model prescribes. This work deals with the specification and implementation of ABC Pascal. The one-pass compiler directly generates assembly code, without devoting effort to optimization. Even though the code is not optimized, the performance-testing results ABC Pascal has achieved so far are comparable to those of mainstream concurrent programming languages. / Thesis / Master of Science (MSc)
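The guard-plus-body structure of actions can be illustrated with a hypothetical Python sketch (the syntax of ABC Pascal itself is not shown here): each action runs its body only when its guard holds, and actions that share an object are synchronized on that object.

```python
import threading

class BoundedBuffer:
    def __init__(self, capacity):
        self.items, self.capacity = [], capacity
        self.lock = threading.Lock()   # synchronizes actions sharing this object

    # action "put": guard = buffer not full, body = append
    def put(self, x):
        with self.lock:
            if len(self.items) < self.capacity:   # guard
                self.items.append(x)              # body
                return True
            return False

    # action "get": guard = buffer not empty, body = remove first item
    def get(self):
        with self.lock:
            if self.items:                        # guard
                return self.items.pop(0)          # body
            return None

buf = BoundedBuffer(2)
print(buf.put("job"), buf.get())   # True job
```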
