1. Enabling system validation for the many-core supercomputer. Chen, Fei. January 2009.
Thesis (Ph.D.)--University of Delaware, 2009. Principal faculty advisor: Guang R. Gao, Dept. of Electrical & Computer Engineering. Includes bibliographical references.
2. The development of supercomputing tools in a global atmospheric chemistry model and its application on selected problems in global atmospheric chemistry modeling. Kindler, Thomas Paul. 05 1900.
No description available.
3. Failure analysis, modeling, and prediction for BlueGene/L. Liang, Yinglung. January 2007.
Thesis (Ph.D.)--Rutgers University, 2007. "Graduate Program in Electrical and Computer Engineering." Includes bibliographical references (p. 123-127).
4. Locality and parallel optimizations for parallel supercomputing. Harrison, Ian. January 2003.
Thesis (B.A.)--Haverford College, Dept. of Computer Science, 2003. Includes bibliographical references.
5. Parallel subdomain method for massively parallel computers. Su, (Philip) Shin-Chen. 12 1900.
No description available.
6. Optimizing the Fast Fourier Transform on a many-core architecture. Chen, Long. January 2008.
Thesis (M.S.)--University of Delaware, 2008. Principal faculty advisor: Guang R. Gao, Dept. of Electrical and Computer Engineering. Includes bibliographical references.
7. Express lanes modification to the data vortex photonic all-optical path interconnection network. Bozek, Matthew Peter. January 2008.
Thesis (M.S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008. Committee Chair: Wills, D. Scott; Committee Member: Keezer, David; Committee Member: Yalamanchili, Sudhakar.
8. Relational Computing Using HPC Resources: Services and Optimizations. Soundarapandian, Manikandan. 15 September 2015. Master of Science.
Computational epidemiology involves processing, analyzing, and managing large volumes of data. Such massive datasets cannot be handled efficiently by traditional standalone database management systems, which lack the computational efficiency and bandwidth to scale to these volumes. In this thesis, we address the management and processing of large volumes of data for modeling, simulation, and analysis in epidemiological studies. Traditionally, compute-intensive tasks are processed on high performance computing resources and supercomputers, whereas data-intensive tasks are delegated to standalone databases and custom programs. The DiceX framework is a one-stop solution for distributed database management and processing; its main mission is to leverage supercomputing resources for data-intensive computing, in particular relational data processing.
While standalone databases are always on and a user can submit queries at any time, supercomputing resources must be acquired and are available only for a limited time: they are relinquished either upon completion of execution or at the expiration of the allocated time period. This reservation-based usage style poses critical challenges, including building and launching a distributed data engine on the supercomputer, saving the engine and resuming it from the saved image, devising efficient optimization upgrades to the data engine, and enabling other applications to access the engine seamlessly. These challenges and requirements align our approach closely with the cloud computing paradigms of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). In this thesis, we propose cloud-computing-style workflows that use supercomputing resources to manage and process relational data-intensive tasks. We propose and implement several services, including database freeze, migrate, and resume; ad hoc resource addition; and table redistribution. These services support the workflows defined above.
We also propose an optimization upgrade to the query planning module of Postgres-XC, the core relational data processing engine of the DiceX framework. Using knowledge of domain semantics, we devised a more robust data distribution strategy that forces the most time-consuming SQL operations to be pushed down to the Postgres-XC data nodes, bypassing the query planner's default shippability criteria without compromising correctness. Forcing query push-down reduces query processing time by roughly 40-60% for certain complex spatio-temporal queries on our epidemiology datasets.
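A minimal sketch of the co-location idea behind such a distribution strategy, assuming a Postgres-XC coordinator reachable through psycopg2; the schema, distribution key, and connection details below are illustrative assumptions, not the thesis's actual setup or code. When both tables are hashed on the same domain key, the join is co-located and can be evaluated on the data nodes rather than at the coordinator.

    import psycopg2  # standard PostgreSQL driver; a Postgres-XC coordinator speaks the same protocol

    # Connection parameters are placeholders for a DiceX-style deployment.
    conn = psycopg2.connect(host="coordinator.example.org", dbname="epi", user="analyst")
    cur = conn.cursor()

    # Distribute both tables on the same domain key (here, a region identifier) so that
    # rows that join with each other land on the same data node. DISTRIBUTE BY HASH is
    # standard Postgres-XC DDL; the schema itself is hypothetical.
    cur.execute("""
        CREATE TABLE person (
            person_id bigint,
            region_id int,
            age       int
        ) DISTRIBUTE BY HASH (region_id)
    """)
    cur.execute("""
        CREATE TABLE contact (
            person_id bigint,
            region_id int,
            day       date
        ) DISTRIBUTE BY HASH (region_id)
    """)

    # Because the join key includes the distribution key, the aggregate join below is
    # shippable: each data node evaluates its fragment locally and only small per-region
    # partial results travel back to the coordinator.
    cur.execute("""
        SELECT p.region_id, count(*)
        FROM person p JOIN contact c USING (region_id, person_id)
        WHERE c.day BETWEEN %s AND %s
        GROUP BY p.region_id
    """, ("2014-01-01", "2014-01-31"))
    print(cur.fetchall())
    conn.commit()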
As part of this work, a generic broker service has also been implemented. It acts as an interface to the DiceX framework, exposing RESTful APIs that applications can use to submit queries and retrieve results irrespective of their programming language or environment.
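As an illustration only, here is a client-side sketch of how an application might call such a broker over HTTP, assuming a JSON-over-REST convention; the host, endpoint path, and payload fields are hypothetical, not the broker's documented interface.

    import requests  # generic HTTP client; any language with an HTTP library could do the same

    # Hypothetical broker endpoint and payload layout, for illustration only.
    BROKER_URL = "http://broker.example.org:8080/query"

    payload = {
        "database": "epi",
        "sql": "SELECT region_id, count(*) FROM contact GROUP BY region_id",
    }

    # The broker forwards the SQL to the DiceX engine running on the supercomputer
    # and returns the result rows, so the client needs no database driver at all.
    response = requests.post(BROKER_URL, json=payload, timeout=300)
    response.raise_for_status()
    for row in response.json()["rows"]:
        print(row)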
9. Breaking away from the OS shadow: a program execution model aware thread virtual machine for multicore architectures. Cuvillo, Juan del. January 2008.
Thesis (Ph.D.)--University of Delaware, 2008. Principal faculty advisor: Guang R. Gao, Dept. of Electrical and Computer Engineering. Includes bibliographical references.
10. The effects of microprocessor architecture on speedup in distributed memory supercomputers. Beane, Glen L. January 2004.
Thesis (M.S.) in Computer Science--University of Maine, 2004. Includes vita. Includes bibliographical references (leaves 53-54).