301
Evaluating and Enhancing FAIR Compliance in Data Resource Portal Development
Yiqing Qu (18437745), 01 May 2024
The arrival of the big-data era has created a critical need for better scientific data management. Motivated by the evolution and significance of the FAIR principles in contemporary research, this study focuses on the development and evaluation of a FAIR-compliant data resource portal. The challenge lies in translating the abstract FAIR principles into actionable technological implementations and in evaluating the result. After selecting a baseline, the study aims to benchmark against existing standards and outperform existing FAIR-compliant data resource portals. The proposed approach includes an assessment of existing portals, the interpretation of the FAIR principles into practical design considerations, and the integration of modern technologies in the implementation. A FAIR-ness evaluation framework was designed and applied to the implementation, through which the study evaluated and improved the FAIR compliance of the data resource portal. Specifically, the study identified the need for improved persistent identifiers, comprehensive descriptive metadata, enhanced metadata access methods, and adherence to community standards and formats. Evaluation of the resulting portal showed a significant improvement in FAIR compliance and, in turn, enhanced data discoverability, usability, and overall data management in academic research.
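As an illustration of how such recommendations map onto concrete portal metadata, the sketch below shows a minimal dataset record. It is not taken from the thesis; every field name, URL, and identifier in it is hypothetical, loosely following DataCite/schema.org conventions.

```python
# A minimal, illustrative dataset record (hypothetical values) showing how
# FAIR recommendations translate into concrete fields: a resolvable persistent
# identifier (F1), rich descriptive metadata (F2/R1), a standard access
# protocol (A1), and community vocabularies and formats (I1/R1.3).
fair_record = {
    "identifier": "https://doi.org/10.xxxx/example-dataset",  # hypothetical persistent identifier
    "title": "Example survey dataset",
    "creators": [{"name": "Doe, Jane", "orcid": "0000-0000-0000-0000"}],
    "description": "Plain-language summary of what the data contain and how they were collected.",
    "keywords": ["survey", "open data"],
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "access_url": "https://portal.example.org/api/datasets/1234",  # retrievable via standard HTTP(S)
    "format": "text/csv",  # community-standard, non-proprietary format
    "metadata_schema": "http://datacite.org/schema/kernel-4",
}
```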
302
Same principles, different practices: The many routes to a high performance work system
Perrett, Robert A., 2016
303
Development and Validation of Microemulsion High Performance Liquid Chromatography (MELC) Method for the Determination of Nifedipine in Pharmaceutical Preparation
Al-Jammal, M.K.H., Al Ayoub, Yuosef, Assi, Khaled H., 24 February 2015
Microemulsion is a stable, isotropic, clear solution consisting of an oil-based substance, water, a surfactant, and a cosurfactant. Two types of microemulsion are used as mobile phases: water in oil (w/o) and oil in water (o/w). Microemulsions have a strong ability to solubilize both hydrophobic and hydrophilic analytes, thereby reducing the sample pre-treatment needed for complex samples. Recent reports have found that separation of analytes by microemulsion high performance liquid chromatography can be achieved with superior speed and efficiency compared with conventional HPLC modes. In this work, an oil-in-water (o/w) microemulsion was used for the determination of nifedipine in a pharmaceutical preparation. The effect of each parameter on the separation process was examined. Samples were injected onto a C18 analytical column maintained at 30°C with a flow rate of 1 ml/min. The mobile phase was 87.1% aqueous orthophosphate buffer, 15 mM (adjusted to pH 3 with orthophosphoric acid), 0.8% octane as the oil, 4.5% SDS, and 7.6% 1-butanol, all w/w. The nifedipine and internal standard peaks were detected by UV at λmax 237 nm.
The calibration curve was linear (r² = 0.9995) over nifedipine concentrations ranging from 1 to 60 μg/ml (n = 6). The method has good sensitivity, with a limit of detection (LOD) of 0.33 μg/ml and a limit of quantitation (LOQ) of 1.005 μg/ml. It also has excellent accuracy, ranging from 99.11 to 101.64%. The intra-day and inter-day precisions (RSD%) were <0.45% and <0.9%, respectively.
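For orientation, detection and quantitation limits of this kind are commonly estimated from the calibration data using the ICH Q2 expressions below. The abstract does not state which method the authors used, so the formulas are given only as the standard definitions; note that the reported LOQ is roughly three times the reported LOD, which is consistent with them.

```latex
\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S}
```

Here σ is the standard deviation of the response (for example, of the calibration intercept or of blank measurements) and S is the slope of the calibration curve.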
304
Microbore HPLC methodology and temperature programmed microbore HPLC
Bowermaster, Jeffrey, January 1984
Small-diameter LC columns provide rapid thermal equilibration and are ideal candidates for temperature-programmed LC. Special instrumentation requirements are presented and details of column assembly are given to permit the preparation of highly efficient, stable microbore columns. Three LC temperature control systems are described and their individual strengths and weaknesses are discussed. Problems encountered in raising the temperature of an LC column are addressed and solutions are described. Experimental results of column and instrumentation evaluation are given, and the effects of temperature on speed, efficiency, stability, and retention of a broad range of samples are reported. Temperature and solvent programming are compared directly. / Ph.D.
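As background for retention studies of this kind (and only as the standard textbook relationship, not a result of this thesis), the temperature dependence of chromatographic retention is usually summarised by the van't Hoff equation:

```latex
\ln k = -\frac{\Delta H^{\circ}}{RT} + \frac{\Delta S^{\circ}}{R} + \ln \phi
```

Here k is the retention factor, ΔH° and ΔS° are the enthalpy and entropy of solute transfer from the mobile to the stationary phase, R is the gas constant, T is the absolute column temperature, and φ is the phase ratio; raising T typically lowers k and shortens analysis time.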
305
HPC-based Parallel Algorithms for Generating Random Networks and Some Other Network Analysis Problems
Alam, Md Maksudul, 06 December 2016
The advancement of modern technologies has resulted in an explosive growth of complex systems, such as the Internet, biological, social, and various infrastructure networks, which have, in turn, contributed to the rise of massive networks. During the past decade, the analysis and mining of these networks have become an emerging research area with many real-world applications. The most relevant problems in this area include collecting and managing networks, modeling and generating random networks, and developing network mining algorithms. In the era of big data, speed is no longer optional for the effective analysis of these massive systems; it is an absolute necessity. This motivates the need for parallel algorithms on modern high-performance computing (HPC) systems, including multi-core, distributed, and graphics processing unit (GPU)-based systems. In this dissertation, we present distributed-memory parallel algorithms for generating massive random networks and a novel GPU-based algorithm for index searching.
This dissertation is divided into two parts. In Part I, we present parallel algorithms for generating massive random networks using several widely-used models. We design and develop a novel parallel algorithm for generating random networks using the preferential-attachment model. This algorithm can generate networks with billions of edges in just a few minutes using a medium-sized computing cluster. We develop another parallel algorithm for generating random networks with a given sequence of expected degrees. We also design a new time- and space-efficient algorithmic method to generate random networks with any degree distribution. This method has been applied to generate random networks using other popular network models, such as the block two-level Erdős-Rényi and stochastic block models. Parallel algorithms for network generation pose many nontrivial challenges, such as dependencies among edges, avoidance of duplicate edges, and load balancing. We applied novel techniques to deal with these challenges. All of our algorithms scale very well to a large number of processors and provide almost linear speed-up.
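To make the generation task concrete, here is a minimal sequential sketch of preferential-attachment (Barabási-Albert-style) generation, written only to illustrate the underlying model. It is not the dissertation's parallel algorithm, whose contribution is precisely to distribute this edge-dependent process across many processors.

```python
import random

def preferential_attachment(n, m, seed=None):
    """Generate the edge list of a preferential-attachment network with n nodes,
    attaching each new node to m existing nodes chosen with probability
    proportional to their current degree (sequential illustration only)."""
    rng = random.Random(seed)
    edges = []
    # 'targets' holds one entry per endpoint of every edge placed so far, so
    # sampling uniformly from it is equivalent to degree-proportional sampling.
    targets = list(range(m))              # start from m initial nodes
    for new_node in range(m, n):
        chosen = set()
        while len(chosen) < m:            # avoid duplicate edges to the same target
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new_node, t))
        targets.extend(chosen)
        targets.extend([new_node] * m)
    return edges

# Example: a small network with 1,000 nodes and 3 edges per new node.
if __name__ == "__main__":
    print(len(preferential_attachment(1000, 3, seed=42)))   # (1000 - 3) * 3 edges
```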
Dealing with a large number of networks collected from a variety of fields requires efficient management systems such as graph databases. Finding a record in these databases is critical and is typically the main performance bottleneck. In Part II of the dissertation, we develop a GPU-based parallel algorithm for index searching. Our algorithm achieves the fastest throughput ever reported in the literature for various benchmarks. / Ph.D. / The advancement of modern technologies has resulted in an explosive growth of complex systems, such as the Internet, biological, social, and various infrastructure networks, which have, in turn, contributed to the rise of massive networks. During the past decade, the analysis and mining of these networks have become an emerging research area with many real-world applications. The most relevant problems in this area include collecting and managing networks, modeling and generating random networks, and developing network mining algorithms. As the networks are massive in size, we need faster algorithms for the quick and effective analysis of these systems. This motivates the need for parallel algorithms on modern high-performance computing (HPC) systems. In this dissertation, we present HPC-based parallel algorithms for generating massive random networks and managing large-scale network data.
This dissertation is divided into two parts. In Part I, we present parallel algorithms for generating massive random networks using several widely-used models, such as the preferential attachment model, the Chung-Lu model, the block two-level Erdős-Rényi model, and the stochastic block model. Our algorithms can generate networks with billions of edges in just a few minutes using a medium-sized HPC cluster. We applied novel load-balancing techniques to distribute workloads equally among the processors. As a result, all of our algorithms scale very well to a large number of processors and provide almost linear speed-up. In Part II of the dissertation, we develop a parallel algorithm for finding records by given keys. Dealing with large volumes of network data collected from a variety of fields requires efficient database management systems such as graph databases. Finding a record in these databases is critical and is typically the main performance bottleneck. Our algorithm achieves the fastest data lookup throughput ever reported in the literature for various benchmarks.
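To illustrate the kind of batched key lookup at issue in Part II, the sketch below uses NumPy's vectorised binary search as a CPU stand-in: each query follows its own independent comparison path, which is the same data-parallel structure a GPU kernel exploits by assigning one thread per query. It is not the dissertation's GPU algorithm, and all names in it are illustrative.

```python
import numpy as np

def batched_lookup(sorted_keys, queries):
    """Look up many keys at once against a sorted index.
    Returns the position of each query in sorted_keys, or -1 if absent."""
    pos = np.searchsorted(sorted_keys, queries)           # batched binary search
    pos_clamped = np.minimum(pos, len(sorted_keys) - 1)   # guard queries beyond the last key
    found = sorted_keys[pos_clamped] == queries
    return np.where(found, pos_clamped, -1)

# Example: look up a batch of keys against a small sorted index.
index = np.array([2, 3, 5, 7, 11, 13, 17, 19])
print(batched_lookup(index, np.array([5, 6, 19])))        # -> [ 2 -1  7]
```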
306
Multi-core processors and the future of parallelism in software
Youngman, Ryan Christopher, 01 January 2007
The purpose of this thesis is to examine multi-core technology. Multi-core architectures provide benefits such as lower power consumption, scalability, and improved application performance enabled by thread-level parallelism.
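As a minimal, hypothetical illustration of exploiting multiple cores in software (not code from the thesis), the sketch below splits a CPU-bound task into independent chunks and runs one worker per core; in CPython, processes rather than threads are used so the chunks genuinely execute in parallel.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-bound)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit, workers = 200_000, os.cpu_count() or 1
    # Split the range into one chunk per core; the chunks are independent,
    # so they can be processed in parallel with no synchronisation.
    step = limit // workers + 1
    chunks = [(i, min(i + step, limit)) for i in range(0, limit, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        print(sum(pool.map(count_primes, chunks)))
```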
307
A Heterogeneous, Purpose Built Computer Architecture For Accelerating Biomolecular Simulation
Madill, Christopher Andre, 09 June 2011
Molecular dynamics (MD) is a powerful computer simulation technique providing atomistic resolution across a broad range of time scales. In the past four decades, researchers have harnessed the exponential growth in computer power and applied it to the simulation of diverse molecular systems. Although MD simulations are playing an increasingly important role in biomedical research, sampling limitations imposed by both hardware and software constraints establish a de facto upper bound on the size and length of MD trajectories. While simulations are currently approaching the hundred-thousand-atom, millisecond-timescale mark using large-scale computing centres optimized for general-purpose data processing, many interesting research topics are still beyond the reach of practical computational biophysics efforts.
The purpose of this work is to design a high-speed MD machine which outperforms standard simulators running on commodity hardware or on large computing clusters. In pursuance of this goal, an MD-specific computer architecture is developed which tightly couples the fast processing power of Field-Programmable Gate Array (FPGA) computer chips with a network of high-performance CPUs. The development of this architecture is a multi-phase approach. Core MD algorithms are first analyzed and deconstructed to identify the computational bottlenecks governing the simulation rate. High-speed, parallel algorithms are subsequently developed to perform the most time-critical components in MD simulations on specialized hardware much faster than is possible with general-purpose processors. Finally, the functionality of the hardware accelerators is expanded into a fully-featured MD simulator through the integration of novel parallel algorithms running on a network of CPUs.
The developed architecture enabled the construction of various prototype machines running on a variety of hardware platforms which are explored throughout this thesis. Furthermore, simulation models are developed to predict the rate of acceleration using different architectural configurations and molecular systems.
With initial acceleration efforts focused primarily on expensive van der Waals and Coulombic force calculations, an architecture was developed whereby a single machine achieves the performance equivalent of an 88-core InfiniBand-connected network of CPUs. Finally, a methodology to successively identify and accelerate the remaining time-critical aspects of MD simulations is developed. This design leads to an architecture with a projected performance equivalent of nearly 150 CPU-cores, enabling supercomputing performance in a single computer chassis, plugged into a standard wall socket.
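Because the van der Waals and Coulombic terms above are evaluated over very many atom pairs, they dominate MD runtime and are the natural target for acceleration. The sketch below shows that pairwise force computation in plain Python purely for orientation; the parameter names, units, and constants are illustrative, and this is in no way the FPGA design described in the thesis.

```python
import math

def nonbonded_forces(positions, charges, epsilon, sigma, coulomb_k=332.0636):
    """Accumulate Lennard-Jones (van der Waals) and Coulombic forces over all
    atom pairs. This O(N^2) double loop is the bottleneck that hardware
    accelerators target; no cutoffs, exclusions, or periodic boundaries here,
    and a single epsilon/sigma is assumed for every pair (illustration only)."""
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = [positions[i][k] - positions[j][k] for k in range(3)]
            r2 = sum(d * d for d in dx)
            r = math.sqrt(r2)
            sr6 = (sigma * sigma / r2) ** 3
            # Force magnitudes divided by r, from U_LJ = 4*eps*[(s/r)^12 - (s/r)^6]
            # and U_C = k*qi*qj/r, so that multiplying by dx gives the vector force.
            f_lj = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r2
            f_coul = coulomb_k * charges[i] * charges[j] / (r2 * r)
            for k in range(3):
                f = (f_lj + f_coul) * dx[k]
                forces[i][k] += f   # Newton's third law: equal and opposite
                forces[j][k] -= f
    return forces
```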
308
NAS benchmark evaluation of HKU cluster of workstations
麥志華, Mak, Chi-wah, January 1999
Computer Science / Master of Philosophy
310
Design and evaluation of a technology-scalable architecture for instruction-level parallelism
Nagarajan, Ramadass
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.