1. Parallelization of ECG template-based abnormality detection
Kratsas, Sherry L. January 2000
Thesis (M.S.)--West Virginia University, 2000. / Title from document title page. Document formatted into pages; contains vii, 62 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 61-62).

2. Establishing Linux Clusters for high-performance computing (HPC) at NPS
Daillidis, Christos. September 2004
Approved for public release; distribution is unlimited / S tasks. Discrete Event Simulation (DES) often involves repeated, independent runs of the same models with different input parameters. A system which is able to run many replications quickly is more useful than one in which a single monolithic application runs quickly. A loosely coupled parallel system is indicated. Inexpensive commodity hardware, high speed local area networking, and open source software have created the potential to create just such loosely coupled parallel systems. These systems are constructed from Linux-based computers and are called Beowulf clusters. This thesis presents an analysis of clusters in high-performance computing and establishes a testbed implementation at the MOVES Institute. It describes the steps necessary to create a cluster, factors to consider in selecting hardware and software, and describes the process of creating applications that can run on the cluster. Monitoring the running cluster and system administration are also addressed. / Major, Hellenic Army
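
The workload described in this abstract, many independent simulation runs with different inputs, is the classic embarrassingly parallel case for a Beowulf cluster. As a minimal illustration of that pattern (not code from the thesis), each MPI rank in the sketch below works through its own share of hypothetical replications and the results are gathered on one node; run_model(), the replication count, and the seeding are placeholder assumptions.

/* Sketch: farming independent DES replications across cluster nodes with MPI.
 * run_model() and NUM_REPLICATIONS are illustrative placeholders; the point is
 * that each rank runs its own subset of independent replications with distinct
 * input parameters, and only the final results are communicated. */
#include <mpi.h>
#include <stdio.h>

#define NUM_REPLICATIONS 1000

/* Placeholder for a single simulation run; returns some scalar result. */
static double run_model(int replication, unsigned int seed)
{
    return (double)replication * 0.5 + (double)(seed % 7);  /* dummy work */
}

int main(int argc, char **argv)
{
    int rank, size;
    double local_sum = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Cyclic distribution: rank r runs replications r, r+size, r+2*size, ... */
    for (int i = rank; i < NUM_REPLICATIONS; i += size)
        local_sum += run_model(i, 1000u + (unsigned int)i);

    /* Collect per-node results on rank 0. */
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("aggregate result over %d replications: %f\n",
               NUM_REPLICATIONS, total);

    MPI_Finalize();
    return 0;
}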

3. A descriptive performance model of small, low cost, diskless Beowulf clusters
Nielson, Curtis R. January 2003
Thesis (M.S.)--Brigham Young University. School of Technology, 2003. / Includes bibliographical references (p. 93-96).

4. Optimal Load Balancing in a Beowulf Cluster
Adams, Daniel Alan. January 2005
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: beowulf; load sharing; load balancing; PANTS. Includes bibliographical references (p. 28).

5. Establishing Linux Clusters for high-performance computing (HPC) at NPS
Daillidis, Christos. January 2004
Thesis (M.S. in Computer Science)--Naval Postgraduate School, Sept. 2004. / Thesis advisor(s): Don Brutzman, Don McGregor. Includes bibliographical references (p. 159-164). Also available online.

6. Optimal Load Balancing in a Beowulf Cluster
Adams, Daniel Alan. 02 May 2005
PANTS (PANTS Application Node Transparency System) is a suite of programs designed to add transparent load balancing to a Beowulf cluster, so that processes are transferred among the nodes of the cluster to improve performance. PANTS provides the option of using one of several different load balancing policies, each taking a different approach. This paper studies the scalability and performance of these policies on large clusters and under various workloads. We measure the performance of these policies on our current cluster and use that data to build simulations that test the policies on larger clusters and under differing workloads. Two policies, one deterministic and one non-deterministic, are presented which offer optimal steady-state performance. We also present best practices and discuss the major challenges of load balancing policy design.
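
As an aside on the deterministic/non-deterministic distinction drawn in this abstract: a deterministic policy might always send work to the least-loaded node, while a non-deterministic one might pick at random among nodes below a load threshold. The sketch below illustrates only that distinction, with hypothetical node loads and an arbitrary threshold; it is not PANTS source code.

/* Illustrative sketch of two load-balancing target-selection policies,
 * loosely in the spirit of the deterministic/non-deterministic split
 * described above. Hypothetical example, not PANTS source. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NUM_NODES 8

/* Deterministic: always choose the node with the lowest load. */
static int pick_least_loaded(const double load[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (load[i] < load[best])
            best = i;
    return best;
}

/* Non-deterministic: choose uniformly at random among nodes whose load is
 * below a threshold; fall back to the least-loaded node if none qualify. */
static int pick_random_underloaded(const double load[], int n, double threshold)
{
    int eligible[NUM_NODES], count = 0;
    for (int i = 0; i < n; i++)
        if (load[i] < threshold)
            eligible[count++] = i;
    if (count == 0)
        return pick_least_loaded(load, n);
    return eligible[rand() % count];
}

int main(void)
{
    double load[NUM_NODES] = {0.9, 0.2, 0.7, 0.1, 0.5, 0.95, 0.3, 0.6};
    srand((unsigned int)time(NULL));
    printf("deterministic target:     node %d\n", pick_least_loaded(load, NUM_NODES));
    printf("non-deterministic target: node %d\n",
           pick_random_underloaded(load, NUM_NODES, 0.5));
    return 0;
}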

7. Molecular simulation of vapour-liquid equilibrium using Beowulf clusters
01 November 2010
This work describes the installation of a Beowulf cluster at the University of KwaZulu-Natal. / Thesis (Ph.D.-Eng)--University of KwaZulu-Natal, 2006.

8. A Performance Study of LAM and MPICH on an SMP Cluster
Kearns, Brian Patrick. 01 December 2002
Many universities and research laboratories have developed low cost clusters, built from Commodity-Off-The-Shelf (COTS) components and running mostly free software. Research has shown that these types of systems are well-equipped to handle many problems requiring parallel processing. The primary components of clusters are hardware, networking, and system software. An important system software consideration for clusters is the choice of the message passing library. MPI (Message Passing Interface) has arguably become the most widely used message passing library on clusters and other parallel architectures, due in part to its existence as a standard. As a standard, MPI is open for anyone to implement, as long as the rules of the standard are followed. For this reason, a number of proprietary and freely available implementations have been developed. Of the freely available implementations, two have become increasingly popular: LAM (Local Area Multicomputer) and MPICH (MPI Chameleon). This thesis compares the performance of LAM and MPICH in an effort to provide performance data and analysis of the current releases of each to the cluster computing community. Specifically, the accomplishments of this thesis are: comparative testing of the High Performance Linpack benchmark (HPL); comparative testing of su3_rmd, an MPI application used in physics research; and a series of bandwidth comparisons involving eight MPI point-to-point communication constructs. All research was performed on a partition of the Wyeast SMP Cluster in the High Performance Computing Laboratory at Portland State University. We generate a vast amount of data, and show that LAM and MPICH perform similarly on many experiments, with LAM outperforming MPICH in the bandwidth tests and on a large problem size for su3_rmd. These findings, along with the findings of other research comparing the two libraries, suggest that LAM performs better than MPICH in the cluster environment. This conclusion may seem surprising, as MPICH has received more attention than LAM from MPI researchers. However, the two architectures are very different. LAM was originally designed for the cluster and networked workstation environments, while MPICH was designed to be portable across many different types of parallel architectures.
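
Point-to-point bandwidth comparisons of the kind described here are commonly done with a ping-pong test: two ranks bounce messages of increasing size and bandwidth is derived from the round-trip time. The sketch below shows that generic pattern only; the message sizes, repetition count, and output format are arbitrary choices, not the thesis's actual benchmark harness.

/* Minimal MPI ping-pong bandwidth sketch between ranks 0 and 1, illustrating
 * the kind of point-to-point measurement compared above. Parameters are
 * arbitrary; this is not the benchmark code used in the thesis. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int reps = 100;
    int rank, size;
    char *buf = calloc(1, 1 << 20);            /* up to 1 MiB messages */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    for (int bytes = 1; bytes <= (1 << 20); bytes *= 4) {
        double start = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double elapsed = MPI_Wtime() - start;
        if (rank == 0) {
            /* Each rep moves `bytes` in each direction; report effective
             * one-way bandwidth from the round-trip timing. */
            double mb_per_s = (2.0 * bytes * reps) / elapsed / 1.0e6;
            printf("%8d bytes: %10.2f MB/s\n", bytes, mb_per_s);
        }
    }

    free(buf);
    MPI_Finalize();
    return 0;
}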
