  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

IPPM: Interactive parallel program monitor

Brandis, Robert Craig, January 1986
Thesis (M.S.)--Oregon Graduate Center, 1986.
32

A distributed Linda server on a network of heterogeneous processors

Smith, Graham Leslie, January 1993
Linda is an approach to parallelism that relies on a virtual associative shared memory called tuple space. Tuple space is accessed through a small set of primitive operations and is conceptually easy to understand and manipulate. The physical implementation of a Linda tuple space may, of course, be completely different from the conceptual model. Rhodes has implemented versions of Linda on a ring of RS-232-linked PCs and on a cluster of T800 transputers with a single copy of tuple space held on one transputer. Current research targets the implementation of a distributed Linda server on a network of heterogeneous processors. This work describes the design and implementation of such a distributed server, with emphasis on the aspects of the design that enhance portability and efficiency.
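Since the abstract leans on Linda's small primitive set, a minimal in-process sketch may help. This is an illustrative toy in Python — not Rhodes's distributed server — with `out` depositing a tuple, `rd` reading a matching tuple without removing it, and `in` (spelled `in_`, since `in` is a Python keyword) reading and removing one; `None` plays the role of a formal (wildcard) parameter:

```python
import threading

class TupleSpace:
    """A tiny, thread-safe tuple-space sketch of Linda's out/rd/in."""
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, *t):                 # deposit a tuple into the space
        with self._cv:
            self._tuples.append(t)
            self._cv.notify_all()

    def _match(self, template, t):     # None in a template matches any field
        return len(template) == len(t) and all(
            p is None or p == f for p, f in zip(template, t))

    def rd(self, *template):           # blocking read; the tuple stays
        with self._cv:
            while True:
                for t in self._tuples:
                    if self._match(template, t):
                        return t
                self._cv.wait()

    def in_(self, *template):          # blocking read-and-remove
        with self._cv:
            while True:
                for t in self._tuples:
                    if self._match(template, t):
                        self._tuples.remove(t)
                        return t
                self._cv.wait()

ts = TupleSpace()
ts.out("point", 3, 4)
print(ts.rd("point", None, None))   # ('point', 3, 4) — still in the space
print(ts.in_("point", None, None))  # ('point', 3, 4) — now removed
```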
33

Model-based automatic performance diagnosis of parallel computations

Li, Li, January 2007
Thesis (Ph.D.)--University of Oregon, 2007. Typescript. Includes vita and abstract. Includes bibliographical references (leaves 119-123). Also available for download via the World Wide Web; free to University of Oregon users.
34

Data-parallel concurrent constraint programming.

January 1994
by Bo-ming Tong. Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 104-[110]).
Contents:
1. Introduction (concurrent constraint programming; finite domain constraints)
2. The Firebird Language (finite domain constraints; the Firebird computation model; miscellaneous features; clause-based nondeterminism; programming examples: magic series, weak queens)
3. Operational Semantics (the Firebird computation model; the Firebird commit law; derivation; correctness of the computation model)
4. Exploitation of Data-Parallelism in Firebird (an illustrative example; mapping partitions to processor elements; masks; control strategy, including one suitable for linear equations)
5. Data-Parallel Abstract Machine (basic DPAM: hardware requirements, procedure calling convention and process creation, memory model, registers, process management, unification, variable table; DPAM with backtracking: choice points, trailing, recovering the process queues)
6. Implementation (the DECmpp massively parallel computer; implementation overview; constraints: breaking down equality constraints, processing the constraint 'as is'; the wide-tag architecture; register window; dereferencing; output: collecting and decoding the solutions)
7. Performance (uniprocessor performance; solitary mode; bit vectors of domain variables; heap consumption of the heap frame scheme; eager vs lazy nondeterministic derivation; priority scheduling; execution profile; effect of the number of processor elements; change of the degree of parallelism during execution)
8. Related Work (vectorization of Prolog; parallel clause matching; parallel interpreters; bounded quantifications; SIMD MultiLog)
9. Conclusion (limitations: data-parallel Firebird is specialized, limitations of the implementation scheme; future work: extending Firebird, DECmpp-specific improvements, labeling, parallel domain consistency, branch and bound, other directions)
Bibliography
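The partition-to-processor mapping and masks of Chapter 4 are the heart of the data-parallel scheme. As a rough, hypothetical illustration only (not Firebird's DECmpp implementation, and with made-up constraints), mask-controlled constraint evaluation can be mimicked with NumPy, where each vector lane stands for one partition of a domain variable and a boolean mask switches lanes off as their partitions fail constraints:

```python
import numpy as np

# Each element of domain_x is one candidate value of X, conceptually held
# by one processor element; 'active' is the mask of live partitions.
domain_x = np.arange(10)                     # X ranges over 0..9
active = np.ones_like(domain_x, dtype=bool)

active &= (2 * domain_x + 1 <= 15)           # constraint: 2X + 1 <= 15
active &= (domain_x % 3 == 0)                # constraint: X mod 3 == 0

print(domain_x[active])                      # surviving partitions: [0 3 6]
```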
35

Java message passing interface.

January 1998
by Wan Lai Man. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 76-80). Abstract also in Chinese.
Contents:
1. Introduction (background; objectives; contributions; overview)
2. Literature Review (Message Passing Interface: point-to-point communication, persistent communication requests, collective communication, derived datatypes; communications in Java: object serialization, remote method invocation; performance issues in Java: byte-code interpreter, just-in-time compiler, HotSpot; parallel computing in Java: JavaMPI, Bayanihan, JPVM)
3. Infrastructure (layered model; Java Parallel Environment: job coordinator, HostApplet, formation of the environment, spawning processes, message-passing mechanism; application programming interface: message routing, language binding for MPI in Java)
4. Programming in JMPI (the JMPI package; application startup procedure under MPI and JMPI; example)
5. Process Management (background; scheduler model; load estimation and cost ratios; task distribution)
6. Performance Evaluation (testing environment; latency from Java: benchmarking, computation and communication costs; latency from JMPI: benchmarking, experimental results; application granularity; scheduling enabled)
7. Conclusion (summary; future work)
Appendix A. Performance Metrics and Benchmarks (measurement model; performance metrics; communication parameters; Ping, PingPong, and Collective benchmarks)
Bibliography
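The MPI operations surveyed in Chapter 2 — point-to-point sends and receives plus collectives — look much the same in any binding. The sketch below uses Python's mpi4py rather than JMPI (whose exact API the record does not give), with rank 0 farming out tasks point-to-point and gathering squared results:

```python
# Run with: mpiexec -n 4 python demo.py   (requires mpi4py)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    for dest in range(1, size):                      # point-to-point sends
        comm.send({"task": dest}, dest=dest, tag=0)
    results = [comm.recv(source=MPI.ANY_SOURCE, tag=1)
               for _ in range(size - 1)]
    print("root gathered:", sorted(results))
else:
    task = comm.recv(source=0, tag=0)                # matching receive
    comm.send(task["task"] ** 2, dest=0, tag=1)

done = comm.bcast(True, root=0)                      # collective: broadcast
```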
36

Finding, Measuring, and Reducing Inefficiencies in Contemporary Computer Systems

Kambadur, Melanie Rae, January 2016
Computer systems have become increasingly diverse and specialized in recent years. This complexity supports a wide range of new computing uses and users, but not without cost: it has become difficult to maintain the efficiency of contemporary general-purpose computing systems. Computing inefficiencies, including non-optimal runtimes, excessive energy use, and limits to scalability, are a serious problem that can leave computing unable to address the world's most important problems. Beyond the complexity and vast diversity of modern platforms and applications, several factors make improving general-purpose efficiency challenging: multiple levels of the computer system stack must be examined, legacy hardware and software may stand in the way, and efficiency must be balanced against reusability, programmability, security, and other goals. This dissertation presents five case studies, each demonstrating a different way in which the measurement of emerging systems can provide actionable advice to help keep general-purpose computing efficient. The first is Parallel Block Vectors, a new profiling method whose fine-grained, code-centric view of parallel programs aids both future hardware design and the optimization of software to better map onto existing hardware. The second defines a new way of measuring application interference on a datacenter's worth of chip multiprocessors, leading to improved scheduling in which applications utilize available hardware resources more effectively. The third uses the GT-Pin tool to define a method for accelerating the simulation of GPGPUs, ultimately allowing future hardware to be developed with fewer inefficiencies. The fourth is an experimental energy survey that compares and combines the latest energy-efficiency solutions at different levels of the stack to evaluate the state of the art and find paths forward for future energy-efficiency research. The final project is NRG-Loops, a language extension that lets programs measure and intelligently adapt their own power and energy use.
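To make the NRG-Loops idea concrete — and this is a loose, hypothetical sketch, not the dissertation's actual syntax or API — a loop can sample an energy counter and throttle itself when it exceeds a power budget. The Linux RAPL file used here is an assumption about the platform (Intel CPU, and it may require elevated permissions):

```python
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # assumed Linux RAPL counter

def energy_uj():
    with open(RAPL) as f:
        return int(f.read())

budget_watts = 20.0
e0, t0 = energy_uj(), time.perf_counter()
for i in range(10_000_000):
    _ = i * i                            # stand-in for real loop work
    if i % 100_000 == 0 and i:
        elapsed = time.perf_counter() - t0
        watts = (energy_uj() - e0) / 1e6 / elapsed   # microjoules -> watts
        if watts > budget_watts:         # over budget: trade speed for power
            time.sleep(0.01)
```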
37

Diagonalizing quantum spin models on a parallel machine

January 2005
by Chan Yuk-Lin (陳玉蓮). Thesis submitted in September 2004. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 121-123). Text in English; abstracts in English and Chinese.
Contents:
1. Introduction (motivation; development of the theory of magnetism; the Heisenberg model; thesis organization)
2. Introduction to Parallel Computing (architecture of parallel computers; symmetric multiprocessors and clusters, and how they compare; hybrid architectures, i.e. clusters of SMPs; hardware platforms: SGI Origin 2000 "Origin" and IBM RS/6000 SP "Orbit")
3. Parallelization (models of parallel programming; programming paradigms: MPI for distributed memory, OpenMP for shared memory, and hybrid MPI + OpenMP)
4. Performance (writing a parallel program; performance analysis; synchronization and communication, including communication modes)
5. Exact Diagonalization (symmetry invariance; the Lanczos method: basic and modified algorithms; dynamic memory allocation)
6. Parallelization of Exact Diagonalization (parallelizing the Lanczos method; Hamiltonian matrix decomposition: row-wise and column-wise block decomposition)
7. Results and Discussion (lattice structure; definition of timing; row-wise vs column-wise decomposition; timing and performance on Origin; MPI vs hybrid, timing, and performance on Orbit; Origin vs Orbit)
8. Conclusion
Appendix A. Basic MPI Concepts (the Message Passing Interface; MPI routine format; writing a first MPI program; two sample programs)
Appendix B. Compiling and Running Parallel Jobs on the IBM SP (compiler options; LoadLeveler; job scripts for serial, MPI, OpenMP, and hybrid MPI/OpenMP programs; LoadLeveler commands)
Bibliography
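The basic Lanczos algorithm of Chapter 5 is compact enough to sketch. The following serial NumPy version is an illustration only — without the thesis's modified algorithm, symmetry reductions, or MPI decomposition — and approximates the lowest eigenvalue of a symmetric matrix H, which in the thesis would be a Heisenberg Hamiltonian:

```python
import numpy as np

def lanczos_lowest(H, m=60, seed=1):
    """Plain Lanczos: project H onto a Krylov subspace, giving a small
    tridiagonal matrix T whose extreme eigenvalues approximate those of H."""
    n = H.shape[0]
    v = np.random.default_rng(seed).standard_normal(n)
    v /= np.linalg.norm(v)
    v_prev = np.zeros(n)
    beta = 0.0
    alphas, betas = [], []
    for j in range(m):
        w = H @ v - beta * v_prev        # three-term recurrence
        alpha = float(v @ w)
        alphas.append(alpha)
        w -= alpha * v
        beta = float(np.linalg.norm(w))
        if beta < 1e-10 or j == m - 1:   # invariant subspace found, or done
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return np.linalg.eigvalsh(T)[0]

# Quick check against dense diagonalization on a random symmetric matrix.
A = np.random.default_rng(0).standard_normal((200, 200))
H = (A + A.T) / 2
print(lanczos_lowest(H), np.linalg.eigvalsh(H)[0])
```

Only matrix-vector products H @ v touch the full matrix, which is why the thesis's row-wise and column-wise block decompositions of the Hamiltonian are the natural place to parallelize.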
38

Genetic parallel programming

January 2005
by Sin Man Cheang. "March 2005." Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (p. 219-233). Electronic reproductions (Hong Kong: Chinese University of Hong Kong, [2012]; Ann Arbor, MI: ProQuest Information and Learning Company, [200-]) available via the World Wide Web; Adobe Acrobat Reader required. Abstracts in English and Chinese.
39

Data-parallel programming with multiple inheritance on the Connection Machine

Girimaji, Sanjay, 01 April 1990
The demand for ever faster computers has led to machines built with more than one CPU, and such computers require sophisticated software to program them. One approach to programming multiple-CPU machines is through object-oriented programming techniques; an example of this approach is the use of C* on the Connection Machine. Though C* supports many object-oriented concepts, it does not support software reuse through inheritance. This thesis introduces a new language called C*++, an extension of C* that supports inheritance. We also discuss the issues involved in implementing multiple inheritance in programming languages. The thesis describes the differences between C*++ and C*, discusses the design and implementation of the translator from C*++ to C*, and illustrates the advantages of programming in C*++ through an example. Since C*++ is designed to support software reuse, allowing users to create quality software in less time, it is anticipated that C*++ will see widespread use in programming the Connection Machine.
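C*++ itself is not shown in the record, but the software-reuse argument is easy to see in any language with multiple inheritance. Here is a small illustrative Python example (the class names are hypothetical) in which a derived class combines two independent base classes, along with the linearization a translator such as the C*++-to-C* one must compute:

```python
class Monitored:
    """Reusable reporting behaviour."""
    def report(self):
        return f"{type(self).__name__} on {self.nodes} nodes"

class Distributed:
    """Reusable placement behaviour."""
    def __init__(self, nodes):
        self.nodes = nodes

class ParallelArray(Distributed, Monitored):   # reuse both bases at once
    def __init__(self, nodes, data):
        super().__init__(nodes)
        self.data = data

pa = ParallelArray(4, [1, 2, 3])
print(pa.report())                  # behaviour inherited from Monitored
print([c.__name__ for c in ParallelArray.__mro__])
# ['ParallelArray', 'Distributed', 'Monitored', 'object'] — the method-
# resolution order a multiple-inheritance translator must linearize
```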
40

Distributed parallel computation using standard ML

Chattopadhyay, Vaishali, January 2007
Thesis (M.S. in computer science)--Washington State University, December 2007. Includes bibliographical references (p. 97-102).
