  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

APOP: an automatic pattern- and object-based code parallelization framework for clusters

Liu, Xuli. January 1900 (has links)
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2007. Title from title screen (site viewed July 10, 2007). PDF text: 140 p. : ill. UMI publication number: AAT 3252445. Includes bibliographical references. Also available in microfilm and microfiche formats.
32

Indexical parallel programming

Du, Weichang 26 June 2018 (has links)
Indexical programming means programming languages and/or computational models based on indexical logic and possible-world semantics. Indexical languages can be considered the result of enriching conventional languages by allowing constructs to vary according to an implicit context or index. Programs written in an indexical language define the way in which objects vary from context to context, using context-switching or indexical operators to combine meanings of objects from different contexts. Based on indexical semantics, in indexical programs, context parallelism means that computations of objects at different contexts can be performed in parallel, and indexical communication means that parallel computation tasks at different contexts communicate with each other through the indexical operators provided by the indexical language. The dissertation defines the indexical functional language mLucid, a multidimensional extension of the programming language Lucid proposed by Ashcroft and Wadge. The language enriches the functional language ISWIM by combining functional semantics with indexical semantics. The indexical semantics of mLucid is based on a context space consisting of points in an arbitrary n-dimensional integer space. The meanings of objects in mLucid, called intensions, are functions from these contexts to data values. The language provides five primitive indexical operators, origin, next, prev, fby and before, to switch context along a designated dimension. The dimensionality of an intension in the indexical semantics of mLucid is defined as the set of dimensions that determines the range of the context space in which the intension varies. An abstract interpretation is defined that maps mLucid expressions to approximations of dimensionalities. Context parallelism and indexical communication in mLucid programs are defined by a semantics-based dependency relation between the values of variables at different contexts.
In parallel programming, the context space of mLucid is divided into a time dimension and space dimensions. The time dimension can be used to specify time steps in synchronous computations, or to specify indices of data streams in asynchronous computations. The space dimensions can be used to specify process-to-processor mappings. The dissertation shows that mLucid supports several parallel programming models, including systolic programming, multidimensional dataflow programming, and data parallel programming.
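The indexical semantics the abstract describes can be sketched in ordinary Python over a single dimension: an intension is modelled as a function from a context (an integer index) to a value, and the operators switch that context. The operator names follow the abstract, but the encoding itself is an illustration, not mLucid's actual definition.

```python
# One-dimensional sketch of indexical (Lucid-style) semantics.
# An "intension" is a function from a context (an int) to a value.

def const(v):
    """Constant intension: the same value at every context."""
    return lambda t: v

def nxt(x):
    """next: the value of x at the following context."""
    return lambda t: x(t + 1)

def prev(x):
    """prev: the value of x at the preceding context."""
    return lambda t: x(t - 1)

def origin(x):
    """origin: the value of x at context 0, whatever the current context."""
    return lambda t: x(0)

def fby(x, y):
    """x fby y ("followed by"): x's value at the origin, then y shifted one context later."""
    return lambda t: x(0) if t == 0 else y(t - 1)

# nat = 0 fby (nat + 1): the stream 0, 1, 2, ...
def nat(t):
    return fby(const(0), lambda s: nat(s) + 1)(t)

# fib = 0 fby (1 fby (fib + next fib)): the Fibonacci stream
def fib(t):
    return fby(const(0), fby(const(1), lambda s: fib(s) + nxt(fib)(s)))(t)
```

In this encoding, context parallelism corresponds to evaluating `fib(0)`, `fib(1)`, … at different contexts independently, and indexical communication is exactly the context shifting done by `fby` and `nxt`.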
33

Parallelization of algorithms by explicit partitioning

Bahoshy, Nimatallah M. January 1992 (has links)
In order to utilize parallel computers, four approaches, broadly speaking, to the provision of parallel software have been followed: (1) automatic production of parallel code by parallelizing compilers, which act on sequential programs written in existing languages; (2) "add on" features to existing languages that enable the programmer to make use of the parallel computer—these are specific to each machine; (3) full-blown parallel languages—these could be completely new languages, but usually they are derived from existing languages; (4) the provision of tools to aid the programmer in the detection of inherent parallelism in a given algorithm and in the design and implementation of parallel programs.
34

Knowledge support for parallel performance data mining

Huck, Kevin A., January 2009 (has links)
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 218-231). Also available online in Scholars' Bank; and in ProQuest, free to University of Oregon users.
35

Parallel process placement

Handler, Caroline January 1989 (has links)
This thesis investigates methods of automatic allocation of processes to available processors in a given network configuration. The research described covers the investigation of various algorithms for optimal process allocation. Among those researched were an algorithm which used a branch and bound technique, an algorithm based on graph theory, and a heuristic algorithm involving cluster analysis. These have been implemented and tested in conjunction with the gathering of performance statistics during program execution, for use in improving subsequent allocations. The system has been implemented on a network of loosely-coupled microcomputers using multi-port serial communication links to simulate a transputer network. The concurrent programming language occam has been implemented, replacing the explicit process allocation constructs with an automatic placement algorithm. This enables the source code to be completely separated from hardware considerations.
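The branch-and-bound approach mentioned above can be illustrated with a small sketch: assign each process to a processor so that the communication volume crossing processor boundaries is minimised, subject to a per-processor capacity. The cost model, capacity constraint, and names here are illustrative assumptions, not the thesis's actual formulation.

```python
# Branch-and-bound sketch of process-to-processor placement.
# comm[i][j] is the (symmetric) communication volume between processes i and j.

def comm_cost(assign, comm):
    """Traffic between process pairs placed on different processors."""
    return sum(comm[i][j]
               for i in range(len(assign))
               for j in range(i + 1, len(assign))
               if assign[i] != assign[j])

def best_placement(n_procs, n_cpus, capacity, comm):
    """Search all assignments, pruning branches that cannot beat the best so far."""
    best, best_cost = None, float("inf")

    def branch(assign):
        nonlocal best, best_cost
        # Bound: the cost of a partial assignment never decreases as processes
        # are added, so any partial assignment already at or above the best
        # complete cost can be pruned.
        cost = comm_cost(assign, comm)
        if cost >= best_cost:
            return
        if len(assign) == n_procs:
            best, best_cost = list(assign), cost
            return
        for cpu in range(n_cpus):
            if assign.count(cpu) < capacity:
                branch(assign + [cpu])

    branch([])
    return best, best_cost
```

For two tightly communicating pairs of processes and two processors of capacity two, the search places each pair on its own processor, leaving only the light cross-pair traffic on the network.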
36

Assessing the Suitability of Python as a Language for Parallel Programming

Kohli, Manav S 01 January 2016 (has links)
With diminishing gains in processing power from successive generations of hardware development, there is a new focus on using advances in computer science and parallel programming to build faster, more efficient software. As computers trend toward including multiple and multicore processors, parallel computing serves as a promising option for optimizing the next generation of software applications. However, models for implementing parallel programs remain highly opaque due to their reliance on languages such as Fortran, C, and C++. In this paper I investigate Python as an option for implementing parallel programming techniques in application development. I analyze the efficiency and accessibility of the MPI for Python and IPython Parallel packages by calculating π in parallel using a Monte Carlo simulation and comparing their speeds to the sequential calculation. While MPI for Python offers the core functionality of MPI and C-like syntax in Python, IPython Parallel's architecture provides a truly unique model.
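The benchmark described is the classic embarrassingly parallel one: sample random points in the unit square and count the fraction landing inside the quarter circle, so that π ≈ 4 × hits / samples. The sketch below uses the standard-library multiprocessing pool as a stand-in for the MPI for Python and IPython Parallel backends the thesis actually compares; the decomposition (independent chunks, one reduction at the end) is the same.

```python
# Monte Carlo estimation of pi, split across worker processes.
import random
from multiprocessing import Pool

def count_hits(args):
    """Count samples falling inside the unit quarter circle."""
    n, seed = args
    rng = random.Random(seed)  # per-worker seed for independent streams
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def estimate_pi(n_samples, n_workers=4):
    """Farm out equal chunks of sampling, then reduce the hit counts."""
    chunk = n_samples // n_workers
    tasks = [(chunk, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        hits = sum(pool.map(count_hits, tasks))
    return 4.0 * hits / (chunk * n_workers)
```

With mpi4py, the same structure maps to each rank running `count_hits` locally followed by a single reduce of the counts; with IPython Parallel, to a view mapping `count_hits` over the engines.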
37

Virtual memory on data diffusion architectures

Buenabad-Chavez, Jorge January 1998 (has links)
No description available.
38

Parallel discrete event simulation

Kalantery, Nasser January 1994 (has links)
No description available.
39

Modelling of saturated traffic flow using highly parallel systems

Lu, Kang Hsin January 1996 (has links)
No description available.
40

Distributed simulation of high-level algebraic Petri nets

Djemame, Karim January 1999 (has links)
No description available.
