301

Hiding Decryption Latency in Intel SGX using Metadata Prediction

Talapkaliyev, Daulet 20 January 2020 (has links)
Hardware-assisted Trusted Execution Environment technologies have become a crucial component in providing security for cloud-based computing. One such hardware-assisted countermeasure is Intel Software Guard Extensions (SGX). Using additional dedicated hardware and a new set of CPU instructions, SGX provides isolated execution of code within trusted hardware containers called enclaves. By utilizing private encrypted memory and various integrity authentication mechanisms, it can provide confidentiality and integrity guarantees for protected data. Despite the dedicated hardware, these extra layers of security add a significant performance overhead. Decryption of data using secret OTPs, which are generated by modified Counter Mode Encryption AES blocks, introduces significant latency that contributes to the overall SGX performance loss. This thesis introduces a metadata prediction extension to SGX based on local metadata releveling and prediction mechanisms. Correct prediction of metadata allows OTPs to be speculatively precomputed and used immediately to decrypt incoming ciphertext data. This hides a significant part of the decryption latency and results in faster SGX performance without any changes to the original SGX security guarantees. / Master of Science / With the exponential growth of cloud computing, where critical data processing happens on third-party computer systems, it is important to ensure data confidentiality and integrity against third-party access. That may include not only external attackers but also insiders, such as the cloud computing providers themselves. While software isolation using Virtual Machines is the most common method of achieving runtime security in cloud computing, numerous shortcomings of software-only countermeasures force companies to demand extra layers of security. Recently adopted general-purpose hardware-assisted technologies like Intel Software Guard Extensions (SGX) add that extra layer of security at a significant performance overhead. One of the major contributors to the SGX performance overhead is data decryption latency. This work proposes a novel algorithm to speculatively predict the metadata that is used during decryption. This allows the processor to hide a significant portion of the decryption latency, thus improving the overall performance of Intel SGX without compromising security.
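To illustrate the idea at a high level, the following Python sketch simulates counter-mode decryption in which a predicted counter value lets the one-time pad (OTP) be precomputed before the ciphertext arrives. The keystream function, the increment-by-one prediction rule, and the data layout are illustrative assumptions, not the thesis's actual SGX hardware design.

```python
import hashlib

def keystream(key: bytes, address: int, counter: int, length: int) -> bytes:
    """Toy stand-in for the AES counter-mode pad used by a memory
    encryption engine (illustrative only, not cryptographically equivalent)."""
    seed = key + address.to_bytes(8, "little") + counter.to_bytes(8, "little")
    pad = b""
    while len(pad) < length:
        pad += hashlib.sha256(seed + len(pad).to_bytes(4, "little")).digest()
    return pad[:length]

class MetadataPredictor:
    """Predicts the next counter for a memory line so the OTP can be
    precomputed speculatively before the ciphertext is fetched."""
    def __init__(self):
        self.last_counter = {}  # address -> last observed counter

    def predict(self, address: int) -> int:
        # Simple local rule: counters increment by one per write-back.
        return self.last_counter.get(address, 0) + 1

    def update(self, address: int, counter: int) -> None:
        self.last_counter[address] = counter

def decrypt_line(key, address, ciphertext, true_counter, predictor):
    predicted = predictor.predict(address)
    precomputed_pad = keystream(key, address, predicted, len(ciphertext))
    if predicted == true_counter:
        pad = precomputed_pad                      # hit: pad already available
    else:
        pad = keystream(key, address, true_counter, len(ciphertext))  # miss: recompute
    predictor.update(address, true_counter)
    plaintext = bytes(c ^ p for c, p in zip(ciphertext, pad))
    return plaintext, predicted == true_counter

# Example: a correct prediction lets decryption use the precomputed pad directly.
pred = MetadataPredictor()
pred.update(0x1000, 6)
pad = keystream(b"secret-key", 0x1000, 7, 16)
ct = bytes(b ^ p for b, p in zip(b"hello, enclave!!", pad))
print(decrypt_line(b"secret-key", 0x1000, ct, 7, pred))
```

On a correct prediction, the XOR with the precomputed pad can proceed as soon as the ciphertext arrives, which is the portion of the decryption latency the thesis aims to hide.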
302

Evaluating and Enhancing FAIR Compliance in Data Resource Portal Development

Yiqing Qu (18437745) 01 May 2024 (has links)
<p dir="ltr">There is a critical need for improvement in scientific data management when the big-data era arrives. Motivated by the evolution and significance of FAIR principles in contemporary research, the study focuses on the development and evaluation of a FAIR-compliant data resource portal. The challenge lies in translating the abstract FAIR principles into actionable, technological implementations and the evaluation. After baseline selection, the study aims to benchmark standards and outperform existing FAIR compliant data resource portals. The proposed approach includes an assessment of existing portals, the interpretation of FAIR principles into practical considerations, and the integration of modern technologies for the implementation. With a FAIR-ness evaluation framework designed and applied to the implementation, this study evaluated and improved the FAIR-compliance of data resource portal. Specifically, the study identified the need for improved persistent identifiers, comprehensive descriptive metadata, enhanced metadata access methods and adherence to community standards and formats. The evaluation of the FAIR-compliant data resource portal with FAIR implementation, showed a significant improvement in FAIR compliance, and eventually enhanced data discoverability, usability, and overall management in academic research.</p>
303

Same principles, different practices: The many routes to a high performance work system

Perrett, Robert A. 2016 May 1923 (has links)
No
304

Development and Validation of Micro Emulsion High Performance Liquid Chromatography (MELC) Method for the Determination of Nifedipine in Pharmaceutical Preparation

Al-Jammal, M.K.H., Al Ayoub, Yuosef, Assi, Khaled H. 24 February 2015 (has links)
Yes / Microemulsion is a stable, isotropic, clear solution consisting of an oil-based substance, water, surfactant and cosurfactant. Two types of microemulsion are used as mobile phases: water in oil (w/o) and oil in water (o/w). Microemulsion has a strong ability to solubilize both hydrophobic and hydrophilic analytes, thereby reducing the sample pre-treatment needed for complex samples. Recent reports found that separating analytes by microemulsion high performance liquid chromatography can be achieved with superior speed and efficiency compared to conventional HPLC modes. In this work, an oil-in-water (o/w) microemulsion was used for the determination of nifedipine in a pharmaceutical preparation. The effect of each parameter on the separation process was examined. The samples were injected onto a C18 analytical column maintained at 30°C with a flow rate of 1 ml/min. The mobile phase was 87.1% aqueous orthophosphate buffer, 15 mM (adjusted to pH 3 with orthophosphoric acid), 0.8% octane as oil, 4.5% SDS, and 7.6% 1-butanol, all w/w. The nifedipine and internal standard peaks were detected by UV detection at λmax 237 nm. The calibration curve was linear (r² = 0.9995) over nifedipine concentrations ranging from 1 to 60 μg/ml (n = 6). The method has good sensitivity, with a limit of detection (LOD) of 0.33 μg/ml and a limit of quantitation (LOQ) of 1.005 μg/ml. It also has excellent accuracy, ranging from 99.11% to 101.64%. The intra-day and inter-day precisions (RSD%) were <0.45% and <0.9%, respectively.
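For context, LOD and LOQ figures like those above are commonly derived from the calibration curve, for example via the ICH approach LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the standard deviation of the response and S the slope. The sketch below shows that calculation on made-up calibration data, since the thesis's raw measurements are not given here; it is not a reproduction of the reported validation.

```python
import statistics

# Hypothetical calibration data (concentration in µg/ml vs. peak-area ratio);
# these numbers are illustrative, not the study's measured values.
conc = [1, 5, 10, 20, 40, 60]
resp = [0.051, 0.252, 0.498, 1.003, 2.010, 2.991]

# Ordinary least-squares fit of the calibration line (Python 3.10+).
slope, intercept = statistics.linear_regression(conc, resp)

# Approximate residual standard deviation of the responses about the line.
residuals = [y - (slope * x + intercept) for x, y in zip(conc, resp)]
sigma = statistics.stdev(residuals)

lod = 3.3 * sigma / slope   # ICH limit of detection
loq = 10.0 * sigma / slope  # ICH limit of quantitation
print(f"LOD ≈ {lod:.2f} µg/ml, LOQ ≈ {loq:.2f} µg/ml")
```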
305

Microbore HPLC methodology and temperature programmed microbore HPLC

Bowermaster, Jeffrey January 1984 (has links)
Small diameter LC columns provide rapid thermal equilibration and are ideal candidates for temperature programmed LC. Special instrumentation requirements are presented and details of column assembly are given to permit the preparation of highly efficient, stable microbore columns. Three LC temperature control systems are described and their individual strengths and weaknesses are discussed. Problems encountered in raising the temperature of an LC column are addressed and solutions are described. Experimental results of column and instrumentation evaluation are given, and the effects of temperature on speed, efficiency, stability and retention for a broad range of samples are reported. Temperature and solvent programming are compared directly. / Ph. D.
306

Multi-core processors and the future of parallelism in software

Youngman, Ryan Christopher 01 January 2007 (has links)
The purpose of this thesis is to examine multi-core technology. Multi-core architecture provides benefits such as lower power consumption, scalability, and improved application performance enabled by thread-level parallelism.
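As a small, hedged illustration of the thread-level parallelism the abstract refers to, the Python sketch below splits an embarrassingly parallel, CPU-bound workload across cores. Note that CPython uses worker processes rather than threads here to sidestep the global interpreter lock, so this shows the parallelism concept rather than anything measured in the thesis.

```python
from concurrent.futures import ProcessPoolExecutor
import os, time

def count_primes(bounds):
    """CPU-bound work: count primes in [lo, hi) by trial division."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    chunks = [(i * 50_000, (i + 1) * 50_000) for i in range(8)]

    start = time.perf_counter()
    serial = sum(count_primes(c) for c in chunks)
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        parallel = sum(pool.map(count_primes, chunks))
    t_parallel = time.perf_counter() - start

    print(f"primes={serial} serial={t_serial:.2f}s parallel={t_parallel:.2f}s")
```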
307

A Heterogeneous, Purpose Built Computer Architecture For Accelerating Biomolecular Simulation

Madill, Christopher Andre 09 June 2011 (has links)
Molecular dynamics (MD) is a powerful computer simulation technique providing atomistic resolution across a broad range of time scales. In the past four decades, researchers have harnessed the exponential growth in computer power and applied it to the simulation of diverse molecular systems. Although MD simulations are playing an increasingly important role in biomedical research, sampling limitations imposed by both hardware and software constraints establish a de facto upper bound on the size and length of MD trajectories. While simulations are currently approaching the hundred-thousand-atom, millisecond-timescale mark using large-scale computing centres optimized for general-purpose data processing, many interesting research topics are still beyond the reach of practical computational biophysics efforts. The purpose of this work is to design a high-speed MD machine which outperforms standard simulators running on commodity hardware or on large computing clusters. In pursuance of this goal, an MD-specific computer architecture is developed which tightly couples the fast processing power of Field-Programmable Gate Array (FPGA) computer chips with a network of high-performance CPUs. The development of this architecture is a multi-phase approach. Core MD algorithms are first analyzed and deconstructed to identify the computational bottlenecks governing the simulation rate. High-speed, parallel algorithms are subsequently developed to perform the most time-critical components in MD simulations on specialized hardware much faster than is possible with general-purpose processors. Finally, the functionality of the hardware accelerators is expanded into a fully-featured MD simulator through the integration of novel parallel algorithms running on a network of CPUs. The developed architecture enabled the construction of various prototype machines running on a variety of hardware platforms which are explored throughout this thesis. Furthermore, simulation models are developed to predict the rate of acceleration using different architectural configurations and molecular systems. With initial acceleration efforts focused primarily on expensive van der Waals and Coulombic force calculations, an architecture was developed whereby a single machine achieves the performance equivalent of an 88-core InfiniBand-connected network of CPUs. Finally, a methodology to successively identify and accelerate the remaining time-critical aspects of MD simulations is developed. This design leads to an architecture with a projected performance equivalent of nearly 150 CPU-cores, enabling supercomputing performance in a single computer chassis, plugged into a standard wall socket.
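To make the bottleneck concrete, the sketch below shows the kind of pairwise non-bonded force loop (Lennard-Jones plus Coulomb) whose cost dominates an MD time step and which accelerators of this kind offload to specialized hardware. The parameter values and the O(N²) all-pairs form are illustrative simplifications; production simulators use cutoffs, neighbour lists, and Ewald-type electrostatics.

```python
import math

def nonbonded_forces(positions, charges, epsilon=0.238, sigma=3.4, coulomb_k=332.06):
    """All-pairs Lennard-Jones + Coulomb forces.

    Parameter values are illustrative (argon-like LJ terms, kcal/mol-based
    Coulomb constant); this is the O(N^2) kernel that dominates the cost of
    a naive MD time step.
    """
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = [positions[i][k] - positions[j][k] for k in range(3)]
            r2 = sum(d * d for d in dx)
            r = math.sqrt(r2)
            sr6 = (sigma * sigma / r2) ** 3
            # Scalar multipliers on the displacement vector for each term.
            f_lj = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r2
            f_coul = coulomb_k * charges[i] * charges[j] / (r2 * r)
            scale = f_lj + f_coul
            for k in range(3):
                forces[i][k] += scale * dx[k]   # force on i from j
                forces[j][k] -= scale * dx[k]   # Newton's third law
    return forces
```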
308

Nas benchmark evaluation of HKU cluster of workstations

麥志華, Mak, Chi-wah. January 1999 (has links)
published_or_final_version / abstract / toc / Computer Science / Master / Master of Philosophy
310

Design and evaluation of a technology-scalable architecture for instruction-level parallelism

Nagarajan, Ramadass, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
