11 |
Using wait-free synchronization to increase system reliability and performance. Berrios, Joseph Stephen. January 2002.
Thesis (Ph. D.)--University of Florida, 2002. / Title from title page of source document. Includes vita. Includes bibliographical references.
|
12 |
APOP : an automatic pattern- and object-based code parallelization framework for clusters. Liu, Xuli. 2007.
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2007. / Title from title screen (site viewed July 10, 2007). PDF text: 140 p. : ill. UMI publication number: AAT 3252445. Includes bibliographical references. Also available in microfilm and microfiche formats.
|
13 |
Indexical parallel programming. Du, Weichang. 26 June 2018.
Indexical programming means programming languages and/or computational models based on indexical logic and possible world semantics. Indexical languages can be considered as the result of enriching conventional languages by allowing constructs to vary according to an implicit context or index. Programs written in an indexical language define the way in which objects vary from context to context, using context switching or indexical operators to combine meanings of objects from different contexts.
Based on indexical semantics, in indexical programs, context parallelism means that computations of objects at different contexts can be performed in parallel, and indexical communication means that parallel computation tasks at different contexts communicate with each other through indexical operators provided by the indexical language.
The dissertation defines the indexical functional language mLucid--a multidimensional extension of the programming language Lucid proposed by Ashcroft and Wadge. The language enriches the functional language ISWIM by incorporating functional semantics with indexical semantics. The indexical semantics of mLucid is based on the context space consisting of points in an arbitrary n-dimensional integer space. The meanings of objects, called intensions, in mLucid are functions from these contexts to data values. The language provides five primitive indexical operators, origin, next, prev, fby and before to switch context along a designated dimension.
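As a hedged illustration of the indexical style described above (not the actual mLucid implementation), the sketch below models an intension as a Python function from an integer context to a value, and defines four of the five primitive operators named in the abstract; `const`, `nat`, and `nats` are illustrative names, not mLucid syntax.

```python
# An "intension" is a function from an integer context t to a value.
# The operators below switch or combine contexts, Lucid-style.

def origin(x):
    """Value of x at context 0, regardless of the current context."""
    return lambda t: x(0)

def next_(x):
    """Shift the context forward: next(x) at t is x at t + 1."""
    return lambda t: x(t + 1)

def prev(x):
    """Shift the context backward: prev(x) at t is x at t - 1."""
    return lambda t: x(t - 1)

def fby(x, y):
    """x followed-by y: x at the origin, then y shifted back one step."""
    return lambda t: x(0) if t == 0 else y(t - 1)

def const(c):
    """A constant intension: the same value at every context."""
    return lambda t: c

# The classic Lucid definition nat = 0 fby (nat + 1):
def nat(t):
    return 0 if t == 0 else nat(t - 1) + 1

nats = fby(const(0), lambda t: nat(t) + 1)
print([nats(t) for t in range(5)])  # [0, 1, 2, 3, 4]
```

Context parallelism then corresponds to the observation that `nats(0)`, `nats(1)`, ... can be evaluated at different contexts independently, communicating only through the indexical operators.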
The dimensionality of an intension in the indexical semantics of mLucid is defined as the set of dimensions that determines the range of the context space in which the intension varies. An abstract interpretation is defined that maps mLucid expressions to approximations of dimensionalities. Context parallelism and indexical communication in mLucid programs are defined by a semantics-based dependency relation between the values of variables at different contexts.
In parallel programming, the context space of mLucid is divided into a time dimension and space dimensions. The time dimension can be used to specify time steps in synchronous computations, or to specify indices of data streams in asynchronous computations. The space dimensions can be used to specify process-to-processor mappings. The dissertation shows that mLucid supports several parallel programming models, including systolic programming, multidimensional dataflow programming, and data parallel programming. / Graduate
|
14 |
Knowledge support for parallel performance data mining. Huck, Kevin A. January 2009.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 218-231). Also available online in Scholars' Bank; and in ProQuest, free to University of Oregon users.
|
15 |
Parallel process placement. Handler, Caroline. January 1989.
This thesis investigates methods of automatic allocation of processes to available processors in a given network configuration. The research described covers the investigation of various algorithms for optimal process allocation. Among those investigated were an algorithm using a branch-and-bound technique, an algorithm based on graph theory, and a heuristic algorithm involving cluster analysis. These have been implemented and tested in conjunction with the gathering of performance statistics during program execution, for use in improving subsequent allocations. The system has been implemented on a network of loosely-coupled microcomputers using multi-port serial communication links to simulate a transputer network. The concurrent programming language occam has been implemented, replacing the explicit process allocation constructs with an automatic placement algorithm. This enables the source code to be completely separated from hardware considerations.
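A minimal sketch of the cluster-analysis flavour of placement described above, assuming illustrative data: processes connected by weighted communication links are greedily co-located, heaviest link first. The function name, data, and capacity model are hypothetical, not the thesis's occam implementation.

```python
# Greedy heuristic placement: co-locate heavily communicating process
# pairs on the same processor, subject to a per-processor capacity.

def place(processes, comm, n_procs, capacity):
    """comm maps (a, b) process pairs to communication cost.
    Returns a dict mapping each process to a processor index."""
    placement = {}
    loads = [0] * n_procs
    # Consider the heaviest communication links first.
    for (a, b), _w in sorted(comm.items(), key=lambda kv: -kv[1]):
        for p in (a, b):
            if p not in placement:
                partner = b if p == a else a
                # Prefer the partner's processor if it still has room.
                if partner in placement and loads[placement[partner]] < capacity:
                    target = placement[partner]
                else:
                    target = min(range(n_procs), key=loads.__getitem__)
                placement[p] = target
                loads[target] += 1
    # Processes with no communication go to the least-loaded processor.
    for p in processes:
        if p not in placement:
            target = min(range(n_procs), key=loads.__getitem__)
            placement[p] = target
            loads[target] += 1
    return placement

pairs = {("a", "b"): 10, ("b", "c"): 8, ("d", "e"): 3}
print(place(["a", "b", "c", "d", "e"], pairs, n_procs=2, capacity=3))
```

Runtime performance statistics, as gathered in the thesis, could feed back into the `comm` weights to improve subsequent allocations.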
|
16 |
Facilitating program parallelisation : a profiling-based approach. Mak, Jonathan Chee Heng. January 2011.
No description available.
|
17 |
Algorithmic skeletons as a method of parallel programming. Watkins, Rees Collyer. January 1993.
A new style of abstraction for program development, based on the concept of algorithmic skeletons, has been proposed in the literature. The programmer is offered a variety of independent algorithmic skeletons, each of which describes the structure of a particular style of algorithm. The appropriate skeleton is used by the system to mould the solution. Parallel programs are particularly appropriate for this technique because of their complexity. This thesis investigates algorithmic skeletons as a method of hiding the complexities of parallel programming from the user, and for guiding them towards efficient solutions. To explore this approach, this thesis describes the implementation and benchmarking of the divide and conquer and task queue paradigms as skeletons. All but one category of problem, as implemented in this thesis, scale well over eight processors. The rate of speed-up tails off when there are significant communication requirements. The results show that, with some user knowledge, efficient parallel programs can be developed using this method. The evaluation explores methods for fine-tuning some skeleton programs to achieve increased efficiency.
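To make the skeleton idea concrete, here is a minimal divide-and-conquer skeleton of the kind the abstract describes: a generic structure the programmer instantiates with problem-specific functions. This sequential sketch is illustrative only; a real skeleton system would map the independent subproblems onto separate processors.

```python
# A generic divide-and-conquer skeleton. The programmer supplies four
# problem-specific functions; the skeleton supplies the control structure.

def divide_and_conquer(trivial, solve, divide, combine, problem):
    if trivial(problem):
        return solve(problem)
    subproblems = divide(problem)
    # The recursive calls are independent, so a parallel implementation
    # could evaluate them concurrently, one per worker.
    subresults = [divide_and_conquer(trivial, solve, divide, combine, s)
                  for s in subproblems]
    return combine(subresults)

# Instantiating the skeleton as mergesort:
def merge(parts):
    left, right = parts
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

def mergesort(xs):
    return divide_and_conquer(
        trivial=lambda p: len(p) <= 1,
        solve=list,
        divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
        combine=merge,
        problem=xs,
    )

print(mergesort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

The communication cost noted in the abstract shows up here as the `divide` and `combine` steps: when they dominate the per-subproblem work, speed-up tails off.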
|
18 |
Improved algorithms for some classical graph problems. Chong, Ka-wong, 莊家旺. January 1996.
Thesis (Ph.D.)--Computer Science. Published or final version.
|
19 |
START : a parallel signal track analytical research tool for flexible and efficient analysis of genomic data. Zhu, Xinjie, 朱信杰. January 2015.
Signal Track Analytical Research Tool (START) is a parallel system for analyzing large-scale genomic data. Currently, genomic data analyses are usually performed by using custom scripts developed by individual research groups, and/or by the integrated use of multiple existing tools (such as BEDTools and Galaxy). The goals of START are 1) to provide a single tool that supports a wide spectrum of genomic data analyses that are commonly done by analysts; and 2) to greatly simplify these analysis tasks by means of a simple declarative language (STQL) with which users only need to specify what they want to do, rather than the detailed computational steps as to how the analysis task should be performed.
START consists of four major components: 1) A declarative language called Signal Track Query Language (STQL), which is a SQL-like language we specifically designed to suit the needs for analyzing genomic signal tracks. 2) An STQL processing system built on top of a large-scale distributed architecture. The system is based on the Hadoop distributed storage and the MapReduce Big Data processing framework. It processes each user query using multiple machines in parallel. 3) A simple and user-friendly web site that helps users construct and execute queries, upload/download compressed data files in various formats, manage stored data, queries and analysis results, and share queries with other users.
It also provides a complete help system, detailed specification of STQL, and a large number of sample queries for users to learn STQL and try START easily. Private files and queries are not accessible by other users. 4) A repository of public data popularly used for large-scale genomic data analysis, including data from ENCODE and Roadmap Epigenomics, that users can use in their analyses. Thesis (Ph.D.)--Computer Science. Published or final version.
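As a hedged illustration of the kind of signal-track operation STQL expresses declaratively (this is not START's implementation or STQL's syntax), the sketch below intersects two tracks represented as lists of `(chromosome, start, end)` intervals; the data are made up.

```python
# Overlap join of two signal tracks. Each track is a list of
# (chrom, start, end) intervals with half-open coordinates.

def overlap_join(track_a, track_b):
    """Return the intersections of overlapping same-chromosome intervals."""
    out = []
    for chrom_a, start_a, end_a in track_a:
        for chrom_b, start_b, end_b in track_b:
            if chrom_a == chrom_b and start_a < end_b and start_b < end_a:
                out.append((chrom_a, max(start_a, start_b), min(end_a, end_b)))
    return out

peaks = [("chr1", 100, 200), ("chr1", 500, 600)]
genes = [("chr1", 150, 550), ("chr2", 0, 100)]
print(overlap_join(peaks, genes))  # [('chr1', 150, 200), ('chr1', 500, 550)]
```

In a MapReduce setting of the kind the abstract describes, such a join parallelizes naturally by partitioning both tracks on the chromosome key, so each machine joins one chromosome's intervals.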
|
20 |
Supporting fault-tolerant parallel programming in Linda. Bakken, David Edward. January 1994.
As people are becoming increasingly dependent on computerized systems, the need for these systems to be dependable is also increasing. However, programming dependable systems is difficult, especially when parallelism is involved. This is due in part to the fact that very few high-level programming languages support both fault-tolerance and parallel programming. This dissertation addresses this problem by presenting FT-Linda, a high-level language for programming fault-tolerant parallel programs. FT-Linda is based on Linda, a language for programming parallel applications whose most notable feature is a distributed shared memory called tuple space. FT-Linda extends Linda by providing support to allow a program to tolerate failures in the underlying computing platform. The distinguishing features of FT-Linda are stable tuple spaces and atomic execution of multiple tuple space operations. The former is a type of stable storage in which tuple values are guaranteed to persist across failures, while the latter allows collections of tuple operations to be executed in an all-or-nothing fashion despite failures and concurrency. Example FT-Linda programs are given for both dependable systems and parallel applications. The design and implementation of FT-Linda are presented in detail. The key technique used is the replicated state machine approach to constructing fault-tolerant distributed programs. Here, tuple space is replicated to provide failure resilience, and the replicas are sent a message describing the atomic sequence of tuple space operations to perform. This strategy allows an efficient implementation in which only a single multicast message is needed for each atomic sequence of tuple space operations. An implementation of FT-Linda for a network of workstations is also described. FT-Linda is being implemented using Consul, a communication substrate that supports fault-tolerant distributed programming. 
Consul is built in turn with the x-kernel, an operating system kernel that provides support for composing network protocols. Each of the components of the implementation has been built and tested.
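A hedged, single-process sketch of the two FT-Linda features named above, a tuple space and atomic execution of a sequence of tuple operations. Replication, multicast, and stable storage are elided here; a lock stands in for the all-or-nothing guarantee, and the class and method names are illustrative, not FT-Linda's actual API.

```python
import threading

class TupleSpace:
    """A toy Linda tuple space with atomic multi-operation sequences."""

    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def out(self, tup):
        """Deposit a tuple into the space."""
        with self._lock:
            self._tuples.append(tup)

    def inp(self, pattern):
        """Remove and return a tuple matching pattern (None = wildcard),
        or None if no tuple matches."""
        with self._lock:
            return self._take(pattern)

    def _take(self, pattern):
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, t)):
                self._tuples.remove(t)
                return t
        return None

    def atomic(self, ops):
        """Execute a list of ('out', tuple) / ('in', pattern) operations
        as one all-or-nothing step, mimicking FT-Linda's atomic sequences.
        The sequence is specified up front, matching the strategy of
        shipping one message describing the whole sequence to the replicas."""
        with self._lock:
            results = []
            for op, arg in ops:
                if op == "out":
                    self._tuples.append(arg)
                    results.append(arg)
                else:
                    results.append(self._take(arg))
            return results

ts = TupleSpace()
ts.out(("lock", "free"))
# Atomically take the lock tuple and replace it, despite concurrency:
res = ts.atomic([("in", ("lock", "free")), ("out", ("lock", "held"))])
print(res)  # [('lock', 'free'), ('lock', 'held')]
```

In FT-Linda proper, the space would additionally be replicated across machines and the tuple values kept in stable storage, so the state transition survives failures.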
|