71

Improving the productivity of volunteer computing

Toth, David M. January 2008 (has links)
Dissertation (Ph.D.)--Worcester Polytechnic Institute. / Keywords: performance; volunteer computing. Includes bibliographical references (leaves 148-157).
72

Distributed configuration management for reconfigurable cluster computing

Jacob, Aju. January 2004 (has links)
Thesis (M.S.)--University of Florida, 2004. / Title from title page of source document. Document formatted into pages; contains 67 pages. Includes vita. Includes bibliographical references.
73

Establishing Linux Clusters for high-performance computing (HPC) at NPS /

Daillidis, Christos. January 2004 (has links) (PDF)
Thesis (M.S. in Computer Science)--Naval Postgraduate School, Sept. 2004. / Thesis advisor(s): Don Brutzman, Don McGregor. Includes bibliographical references (p. 159-164). Also available online.
74

Design and implementation of a computational cluster for high performance design and modeling of integrated circuits /

Gruener, Charles J. January 2009 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2009. / Typescript. Includes bibliographical references (leaves 109-111).
75

Toward high performance and highly reliable storage service /

Zhang, Ming, January 2004 (has links)
Thesis (Ph. D.)--University of Rhode Island, 2004. / Typescript. Includes bibliographical references (leaves 81-87).
76

Design and evaluation of a public resource computing framework

Baldassari, James D. January 2006 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: distributed systems, network computing, volunteer computing, public resource computing. Includes bibliographical references. (p.108-109)
77

Reduced-order modeling of multiscale turbulent convection: application to data center thermal management /

Rambo, Jeffrey D. January 2006 (has links)
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2006. / Marc Smith, Committee Member ; P.K. Yeung, Committee Member ; Benjamin Shapiro, Committee Member ; Sheldon Jeter, Committee Member ; Yogendra Joshi, Committee Chair.
78

A framework for scoring and tagging NetFlow data

Sweeney, Michael John January 2019 (has links)
With the increase in link speeds and the growth of the Internet, the volume of NetFlow data generated has increased significantly over time, and processing these volumes has become a challenge, more specifically a Big Data challenge. With the advent of technologies and architectures designed to handle Big Data volumes, researchers have investigated their application to the processing of NetFlow data. This work builds on prior work, in which a scoring methodology was proposed for identifying anomalies in NetFlow, by proposing and implementing a system that allows for automatic, real-time scoring through the adoption of Big Data stream processing architectures. The first part of the research looks at the means of event detection using the scoring approach, implementing it as a number of individual, standalone components, each responsible for detecting and scoring a single type of flow trait. The second part is the implementation of these scoring components in a framework, named Themis, capable of handling high volumes of data with low-latency processing times. This was tackled using tools, technologies and architectural elements from the world of Big Data stream processing. The framework was shown to sustain good flow throughput at low processing latencies on a single low-end host. This successful demonstration on a single host opens the way to leveraging the scaling capabilities afforded by the architectures and technologies used, and gives weight to the possibility of using the framework for real-time threat detection on NetFlow data from larger networked environments.
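The per-trait scoring idea described in the abstract can be sketched as a set of standalone components whose scores are summed per flow. This is a minimal illustration only: the trait rules, thresholds, weights, and flow-record fields below are assumptions for the sketch, not the thesis's actual definitions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Flow:
    """A simplified NetFlow-style record (illustrative fields only)."""
    src_port: int
    dst_port: int
    packets: int
    bytes: int

# Each scorer is a standalone component that inspects exactly one
# flow trait and returns a non-negative anomaly score.
Scorer = Callable[[Flow], float]

def score_tiny_flow(flow: Flow) -> float:
    # Single-packet flows are a common scan signature (assumed rule).
    return 1.0 if flow.packets <= 1 else 0.0

def score_low_port(flow: Flow) -> float:
    # Traffic to privileged ports gets a small weight (assumed rule).
    return 0.5 if flow.dst_port < 1024 else 0.0

def score_flow(flow: Flow, scorers: List[Scorer]) -> float:
    """Combine the independent trait scores into one flow score."""
    return sum(s(flow) for s in scorers)

scorers = [score_tiny_flow, score_low_port]
suspect = Flow(src_port=40000, dst_port=22, packets=1, bytes=40)
benign = Flow(src_port=40000, dst_port=8080, packets=120, bytes=90000)
print(score_flow(suspect, scorers))  # 1.5
print(score_flow(benign, scorers))   # 0.0
```

Because each scorer is independent and stateless here, the components map naturally onto parallel operators in a stream processing topology, which is the property the framework exploits.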
79

Towards brain-scale modelling of the human cerebral blood flow : hybrid approach and high performance computing

Peyrounette, Myriam 25 October 2017 (has links) (PDF)
The brain microcirculation plays a key role in cerebral physiology and neuronal activation. In the case of degenerative diseases such as Alzheimer’s, severe deterioration of the microvascular networks (e.g. vascular occlusions) limits blood flow, and thus the supply of oxygen and nutrients, to the cortex, eventually resulting in neuronal death. In addition to functional neuroimaging, modelling is a valuable tool to investigate the impact of structural variations of the microvasculature on blood flow and mass transfers. In the brain microcirculation, the capillary bed contains the smallest vessels (1-10 μm in diameter) and presents a mesh-like structure embedded in the cerebral tissue. It is the main site of molecular exchange between blood and neurons. The capillary bed is fed and drained by larger arteriolar and venular tree-like vessels (10-100 μm in diameter). Over the last decades, standard network approaches have significantly advanced our understanding of blood flow, mass transport and regulation mechanisms in the human brain microcirculation. By averaging flow equations over the vascular cross-sections, such approaches yield a one-dimensional model that involves far fewer variables than a full three-dimensional resolution of the flow. However, because of the high density of capillaries, such approaches are still computationally limited to relatively small volumes (<100 mm3). This constraint prevents applications at clinically relevant scales, since standard imaging techniques only yield much larger volumes (∼100 cm3), with a resolution of 1-10 mm3. To get around this computational cost, we present a hybrid approach for blood flow modelling in which the capillaries are replaced by a continuous medium. This substitution makes sense since the capillary bed is dense and space-filling over a cut-off length of ∼50 μm. In this continuum, blood flow is characterized by effective properties (e.g. permeability) at the scale of a much larger representative volume.
Furthermore, the domain is discretized on a coarse grid using the finite volume method, yielding a substantial computational gain. The arteriolar and venular trees cannot be homogenized because of their quasi-fractal structure, so the network approach is used to model blood flow in the larger vessels. The main difficulty of the hybrid approach is to develop a proper coupling model at the points where arteriolar or venular vessels are connected to the continuum. Indeed, high pressure gradients build up at capillary scale in the vicinity of the coupling points, and must be properly described at the continuum scale. Such multiscale coupling has never been discussed in the context of brain microcirculation. Taking inspiration from the Peaceman “well model” developed for petroleum engineering, our coupling model relies on analytical solutions of the pressure field in the neighbourhood of the coupling points. The resulting equations yield a single linear system to solve for both the network part and the continuum (strong coupling). The accuracy of the hybrid model is evaluated by comparison with a classical network approach, for both very simple synthetic architectures involving no more than two couplings, and more complex ones, with anatomical arteriolar and venular trees displaying a large number of couplings. We show that the present approach is very accurate, since relative pressure errors are lower than 6 %. This lays the groundwork for introducing additional levels of complexity in the future (e.g. non-uniform hematocrit). In the perspective of large-scale simulations and extension to mass transport, the hybrid approach has been implemented in a C++ code designed for High Performance Computing. It has been fully parallelized using Message Passing Interface standards and specialized libraries (e.g. PETSc). Since the present work is part of a larger project involving several collaborators, special care has been taken in developing efficient coding strategies.
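The strong-coupling idea in the abstract above — a finite-volume continuum fed by a vessel through a Peaceman-style well index, assembled into one linear system — can be illustrated in a toy 1-D setting. All numbers here (grid size, permeability, well index, vessel pressure) are illustrative assumptions; the actual model is 3-D with full anatomical trees.

```python
import numpy as np

# Toy 1-D continuum: steady Darcy flow, finite-volume discretization
n, K, h = 50, 1.0, 1.0 / 50        # cells, permeability, cell size
T = K / h                           # inter-cell transmissibility (unit area)

A = np.zeros((n, n))
b = np.zeros(n)

# Flux balance between neighbouring cells
for i in range(n - 1):
    A[i, i] += T; A[i, i + 1] -= T
    A[i + 1, i + 1] += T; A[i + 1, i] -= T

# Dirichlet boundaries p = 0 at both ends (half-cell transmissibility)
for i, p_bc in ((0, 0.0), (n - 1, 0.0)):
    A[i, i] += 2 * T
    b[i] += 2 * T * p_bc

# Peaceman-style coupling: a vessel at fixed pressure feeds one cell
# through a well index, q = WI * (p_vessel - p_cell).
WI, p_vessel, j = 5.0, 1.0, n // 2  # assumed well index, pressure, cell
A[j, j] += WI
b[j] += WI * p_vessel

# One linear solve covers the continuum and the coupling (strong coupling)
p = np.linalg.solve(A, b)
```

The solved pressure field peaks at the coupled cell and stays strictly below the vessel pressure, since a positive net flux must leave the coupling point toward the boundaries; in the full model the same structure holds, with the network unknowns appearing in the same system as the continuum unknowns.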
80

Introducing enhanced fully-adaptive routing decisions within Torus-Mesh and hypercube interconnect networks

Lydick, Christopher L. January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Don M. Gruenbacher / The method for communicating within an interconnection network, or fabric of connections between nodes, can be as diverse as the applications that utilize it. Because of dynamic traffic loads on these interconnection networks, fully-adaptive routing algorithms have been shown to exploit locality while balancing loads and softening the effects of hot-spots. One issue which has been overlooked is the impact of data traveling along the periphery of a selected minimal routable quadrant (MRQ) within these fully-adaptive algorithms. As data aligns with the destination in the x, y, and z dimensions, for instance, the data traverses the periphery of an MRQ. For each dimension in which this occurs, the data is given one fewer choice for routing around hotspots which could appear later along the path. By weighting next-hop selection to avoid the periphery of the selected MRQ, the data retains more options for avoiding hotspots. This work introduces a hybridized routing algorithm that borrows heavily from CQR, an efficient and stable fully-adaptive algorithm. Enhanced CQR with Periphery Avoidance weights the routing decision for a next hop using both output queues and proximity to the periphery of the MRQ. This fully-adaptive algorithm is tested using simulations and a laboratory research cluster with a USB interconnect in the hypercube topology, and is compared against other static, oblivious, and adaptive algorithms. Thor's Tack Hammer, the Kansas State University research cluster, is also benchmarked and discussed as an inexpensive and dependable parallel system.
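The periphery-avoidance weighting described above can be sketched for a 3-D mesh: among the minimal (MRQ) next hops, pick the one with the shortest output queue, but penalize a hop that would fully align a dimension with the destination, since that moves the packet onto the MRQ periphery and removes one adaptive choice. The penalty value and queue model below are illustrative assumptions, not the thesis's actual cost function.

```python
from typing import Dict, Optional, Tuple

Node = Tuple[int, int, int]

def next_hop(cur: Node, dst: Node,
             queues: Dict[Node, int],
             periphery_penalty: float = 2.0) -> Optional[Node]:
    """Pick a minimal next hop, weighting output-queue length against
    proximity to the periphery of the minimal routable quadrant."""
    best, best_cost = None, float("inf")
    for dim in range(3):
        delta = dst[dim] - cur[dim]
        if delta == 0:
            continue  # already aligned in this dimension
        step = 1 if delta > 0 else -1
        hop = tuple(c + (step if d == dim else 0)
                    for d, c in enumerate(cur))
        cost = float(queues.get(hop, 0))
        if abs(delta) == 1:
            # This hop would align the dimension: the packet lands on
            # the MRQ periphery and loses one adaptive choice later.
            cost += periphery_penalty
        if cost < best_cost:
            best, best_cost = hop, cost
    return best

# From (0,0,0) to (1,3,0): stepping in x would immediately align x,
# so the slightly busier y hop is still preferred.
print(next_hop((0, 0, 0), (1, 3, 0), {(1, 0, 0): 0, (0, 1, 0): 1}))
```

If the y-direction queue grows long enough to outweigh the penalty, the choice flips back to the x hop, which is the hybrid balance between congestion (as in CQR) and periphery avoidance that the algorithm's name describes.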
