291. Optimization of Highway Bridge Girders for Use with Ultra-High Performance Concrete (UHPC)
Woodworth, Michael Allen, 10 December 2008
Ultra-High Performance Concrete (UHPC) is a class of cementitious materials that share similar characteristics, including very high compressive strength, tensile strength greater than that of conventional concrete, and high durability. The material consists of finely graded cementitious particles and aggregates that develop a durable, dense matrix. The addition of steel fibers increases ductility such that the material develops usable tensile strength. The durability and strength of UHPC make it a desirable material for the production of highway bridge girders. However, UHPC's unique constituent materials make it more expensive than conventional concrete. The cost and the lack of appropriate design guidelines have limited its introduction into bridge products.
The investigation presented in this thesis developed several optimization formulations to determine a suitable bridge girder shape for use with UHPC. The goal of this optimization was to develop a methodology for using UHPC in highway bridge designs that is cost-competitive with conventional concrete solutions. Several surveys and field visits were performed to identify the important aspects of girder fabrication. Optimizations were formulated to develop optimized girder cross sections and full bridge design configurations that utilize UHPC. The results showed that for spans greater than 90 ft, UHPC used in the proposed girder shape was more economical than conventional girders. The optimizations and surveys resulted in a proposed method for employing UHPC in highway bridges with existing girder shapes and formwork. The proposed method consists of three simple calculations that transform an initial conventional design into an initial design using modified UHPC girders. / Master of Science
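The abstract does not reproduce the thesis's optimization formulations, but the general shape of such a problem can be sketched as a constrained cost minimization. The Python sketch below is purely illustrative, not the author's formulation: the load, allowable stress, unit cost, and idealized rectangular section are all hypothetical placeholders.

```python
# Illustrative sketch of a girder cost optimization; all numbers are
# hypothetical placeholders, not values from the thesis.
import numpy as np
from scipy.optimize import minimize

SPAN_FT = 100.0      # simple span, ft
W_KIP_FT = 2.5       # assumed distributed load, kip/ft
F_ALLOW_KSI = 10.0   # assumed allowable flexural stress for UHPC, ksi
UNIT_COST = 30.0     # assumed UHPC cost, $/ft^3

M_MAX = W_KIP_FT * SPAN_FT**2 / 8.0 * 12.0   # max midspan moment, kip-in

def cost(x):
    b, h = x                                  # section width and depth, in
    area_ft2 = (b * h) / 144.0
    return UNIT_COST * area_ft2 * SPAN_FT     # material cost over the span, $

def stress_margin(x):
    b, h = x
    section_modulus = b * h**2 / 6.0          # in^3 for a rectangle
    return F_ALLOW_KSI - M_MAX / section_modulus   # must remain >= 0

result = minimize(cost, x0=[12.0, 36.0],
                  constraints=[{"type": "ineq", "fun": stress_margin}],
                  bounds=[(6.0, 48.0), (12.0, 96.0)])
b_opt, h_opt = result.x
print(f"optimal section: b = {b_opt:.1f} in, h = {h_opt:.1f} in, "
      f"cost = ${result.fun:,.0f}")
```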
292. Automatic Scheduling of Compute Kernels Across Heterogeneous Architectures
Lyerly, Robert Frantz, 24 June 2014
The world of high-performance computing has shifted from increasing single-core performance to extracting performance from heterogeneous multi- and many-core processors, driven by the power, memory, and instruction-level parallelism walls. All trends point toward increased processor heterogeneity as a means of increasing application performance, from smartphones to servers. These architectures are designed for different types of applications: traditional "big" CPUs (like the Intel Xeon) are optimized for low latency, while other architectures (such as the NVIDIA Tesla K20X) are optimized for high throughput. These architectures have different tradeoffs and different performance profiles, which means substantial performance gains for the right types of applications. However, applications that are ill-suited to a given architecture may experience significant slowdown; it is therefore imperative that applications be scheduled onto the correct processor.
To perform this scheduling, applications must be analyzed to determine their execution characteristics. Traditionally, this application-to-hardware mapping was determined statically by the programmer. However, this requires intimate knowledge of the application and the underlying architecture, and it precludes load balancing by the system. We demonstrate and empirically evaluate a system for automatically scheduling compute kernels by extracting program characteristics and applying machine learning techniques. We develop a machine learning process that is system-agnostic and works in a variety of contexts (e.g., embedded, desktop/workstation, server). Finally, we perform scheduling for these compute kernels in a workload-aware and workload-adaptive manner. / Master of Science
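The abstract does not name the exact features or model the thesis uses, but the core idea of a learned kernel-to-device mapping can be sketched in a few lines. In the illustrative Python sketch below, the feature set, labels, and the choice of a random-forest classifier are assumptions, not details from the thesis.

```python
# Illustrative sketch of learned kernel-to-device scheduling; the feature
# set and model choice are assumptions, not details from the thesis.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical static kernel features: [arith ops, memory ops, branches, trip count]
kernel_features = np.array([
    [5000, 200, 10, 1_000_000],   # compute-heavy, regular -> GPU-friendly
    [120, 400, 90, 2_000],        # branchy, memory-bound -> CPU-friendly
    [8000, 150, 5, 500_000],
    [60, 300, 120, 1_000],
])
best_device = np.array(["gpu", "cpu", "gpu", "cpu"])  # labels from offline profiling

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(kernel_features, best_device)

# At runtime, a new kernel's extracted characteristics are fed to the model
new_kernel = np.array([[7000, 180, 8, 800_000]])
print("schedule on:", model.predict(new_kernel)[0])
```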
293. Quantitative HPTLC
Cleary, Maryanne Viola, 11 July 2009
Advances in thin layer chromatography (TLC), including smaller, more uniform particles, the use of a scanning spectrophotometer (densitometer), and sample application devices, led to the development of the High Performance Thin Layer Chromatography (HPTLC) technique. HPTLC allows quantitative as well as qualitative results from much smaller amounts of analyte, in some cases down to the picogram level. With these advancements, the limiting factor in detecting smaller concentrations has become the plate itself, and more specifically the preparation of the adsorbent and binder and the layering process.
This research evaluated HPTLC plates from several manufacturers for significant differences between manufacturers and between plates from each manufacturer. Several concentrations of three drugs of abuse were applied, developed, and quantitated. Both Rf and peak area were statistically evaluated for effects of manufacturer, specific plate within manufacturer, specific drug, concentration, and crossed or nested effects.
Significant differences were found between manufacturers for both Rf and peak area, with E. Merck and Baker plates giving the best overall results. All manufacturers were found to have some plates with obvious visual surface defects that made them unsuitable for use. For all manufacturers, the major source of variation was plate-to-plate variation rather than track-to-track deviations on any given plate. / Master of Science
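The study design, with plates nested within manufacturers and tracks nested within plates, is a classic nested-effects analysis. A minimal Python sketch of one way to separate plate-to-plate from track-to-track variation is given below; the column names, data values, and choice of a mixed model are illustrative assumptions, not the thesis's exact analysis.

```python
# Minimal sketch of a nested variance analysis for Rf values;
# all data values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "manufacturer": ["Merck"] * 6 + ["Baker"] * 6,
    "plate": ["m1", "m1", "m2", "m2", "m3", "m3",
              "b1", "b1", "b2", "b2", "b3", "b3"],
    "rf": [0.42, 0.43, 0.45, 0.44, 0.41, 0.42,
           0.40, 0.41, 0.47, 0.46, 0.43, 0.44],
})

# Manufacturer as a fixed effect; plate (nested within manufacturer) as a
# random effect, so plate-to-plate variance is separated from the residual
# track-to-track variation on each plate.
model = smf.mixedlm("rf ~ C(manufacturer)", df, groups=df["plate"]).fit()
print(model.summary())
```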
294. Single Straight Steel Fiber Pullout Characterization in Ultra-High Performance Concrete
Black, Valerie Mills, 18 July 2014
This thesis presents the results of an experimental investigation to characterize single straight steel fiber pullout in Ultra-High Performance Concrete (UHPC). Several parameters were explored, including the distance from fibers to the edge of the specimen, the distance between fibers, and the fiber volume in the matrix. The pullout load versus slip curve was recorded, from which the pullout work and maximum pullout load for each series of parameters were obtained. The curves were fitted to an existing fiber pullout model considering bond-fracture energy Gd, bond frictional stress τ0, and slip hardening-softening coefficient β. The representative load-slip curve characterizing the fiber pullout behavior will be implemented into a computational modeling protocol for concrete structures based on Lattice Discrete Particle Modeling (LDPM). The parametric study showed that distances over 12.7 mm from the edge of the specimen have no significant effect on the maximum pullout load and work. Edge distances of 3.2 mm decreased the average pullout work by 26% and the maximum pullout load by 24% for mixes with 0% fiber volume. The distance between fibers did not have a significant effect on pullout behavior within this study. Slight differences in pullout behavior between the 2% and 4% fiber volumes were observed, including a slight increase in maximum pullout load with increasing fiber volume. The suggested fitted parameters for modeling with 2% and 4% fiber volumes are a bond-fracture energy of zero, bond frictional stresses of 2.6 N/mm² and 2.9 N/mm², and slip-hardening coefficients of 0.21 and 0.18, respectively. / Master of Science
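With Gd = 0, frictional pullout models for a straight fiber reduce to a friction-plus-slip-hardening form; one commonly used version gives the pullout load as P(s) = π d_f τ0 (L_e − s)(1 + β s / d_f). The sketch below fits τ0 and β to a load-slip curve with scipy; the model form, fiber geometry, and "measured" data are assumptions for illustration, not the thesis's exact formulation or results.

```python
# Illustrative fit of a frictional fiber pullout model; the model form,
# fiber geometry, and "measured" data are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

D_F = 0.2    # fiber diameter, mm (assumed)
L_E = 6.5    # embedded length, mm (assumed)

def pullout_load(s, tau0, beta):
    """Pullout load (N) vs slip s (mm) for a straight fiber with bond
    frictional stress tau0 (N/mm^2) and slip-hardening coefficient beta."""
    return np.pi * D_F * tau0 * (L_E - s) * (1.0 + beta * s / D_F)

slip = np.linspace(0.01, 3.0, 30)                           # mm
load = pullout_load(slip, 2.6, 0.21)                        # synthetic "measurement"
load += np.random.default_rng(0).normal(0, 0.3, slip.size)  # measurement noise

(tau0_fit, beta_fit), _ = curve_fit(pullout_load, slip, load, p0=[1.0, 0.1])
print(f"tau0 = {tau0_fit:.2f} N/mm^2, beta = {beta_fit:.3f}")
```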
295. Retention trends of chemical classes using CCl₄ as a carrier solvent in normal-phase HPLC
Wang, Muh S., January 1985
Carbon tetrachloride (CCl₄) was closely evaluated as a carrier solvent in high-performance liquid chromatography (HPLC). The separation and retention trends of ninety-two selected compounds from eleven chemical classes (furans, thiophenes, aromatic hydrocarbons, ethers, esters, ketones, aldehydes, aromatic amines, azaarenes, alcohols, and phenols) on three analytical silica bonded-phase columns (amino (NH₂), cyano (CN), and polar amino-cyano (PAC)) were investigated with CCl₄ and refractive index (RI) detection. The sample capacity and column efficiency of the NH₂ and PAC columns were measured and compared. In addition, a method for determining unmeasurable capacity factors (k' values) was developed and illustrated. / M.S.
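The capacity factor is defined from the analyte retention time t_R and the column dead time t₀ as k' = (t_R − t₀)/t₀. The sketch below illustrates the definition, plus one common way (Snyder-Soczewinski-type log-log extrapolation) to estimate a k' too large to measure directly; the retention data are hypothetical and this is not necessarily the method the thesis developed.

```python
# Capacity factor calculation and one common extrapolation approach for
# peaks too retained to measure; all retention data are hypothetical.
import numpy as np

def capacity_factor(t_r, t_0):
    """k' = (t_R - t_0) / t_0, with both times in the same units."""
    return (t_r - t_0) / t_0

t_0 = 1.2                          # column dead time, min (assumed)
print(capacity_factor(8.4, t_0))   # k' = 6.0

# In normal-phase systems, log k' is roughly linear in the log of the polar
# modifier fraction, so an unmeasurably large k' at a weak mobile phase can
# be estimated from measurements at stronger ones.
modifier_fraction = np.array([0.20, 0.10, 0.05])   # measurable conditions
k_prime = np.array([1.5, 3.8, 9.5])
slope, intercept = np.polyfit(np.log10(modifier_fraction), np.log10(k_prime), 1)
k_unmeasured = 10 ** (slope * np.log10(0.01) + intercept)  # estimate at 1% modifier
print(f"estimated k' at 1% modifier: {k_unmeasured:.1f}")
```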
296. High Performance Computing Issues in Large-Scale Molecular Statics Simulations
Pulla, Gautam, 02 June 1999
Successful application of parallel high-performance computing to practical problems requires overcoming several challenges. These range from the need to make sequential and parallel improvements in programs to the implementation of software tools that create an environment for sharing high-performance hardware resources and that limit losses caused by hardware and software failures. In this thesis we describe our approach to meeting these challenges in the context of a Molecular Statics code. We describe sequential and parallel optimizations made to the code, as well as a suite of tools constructed to facilitate the execution of the Molecular Statics program on a network of parallel machines, with the aim of increasing resource sharing, fault tolerance, and availability. / Master of Science
297. Extraction of Additives from Polystyrene and Subsequent Analysis
Smith, Susan H., 19 June 1998
The supercritical carbon dioxide extraction of fifteen (15) polymer additives, used as antioxidants, UV stabilizers, process lubricants, flame retardants, and antistats, from eight formulations of polystyrene is demonstrated and compared to traditional dissolution/precipitation extractions. The purpose of the study was twofold: 1) to develop high performance liquid chromatography (HPLC) methods for the additives, and 2) to determine the viability of supercritical fluid extraction (SFE) of the additives from polystyrene.
Separation of some of the additives was achieved using reversed-phase liquid chromatography. Nine of the additives were assayed in this manner, while the remaining six could not be assayed using reversed-phase liquid chromatography. To develop an extraction method for the additives, the effects of static extraction time, CO₂ density, and temperature were first investigated. These preliminary extractions revealed that a static extraction period, which afforded an opportunity for the polymer to swell, combined with a high CO₂ density and an extraction temperature above the glass transition temperature (Tg), yielded quantitative recoveries of the additives. Triplicate extractions of the various polystyrene formulations matched the additive recoveries obtained by the traditional dissolution/precipitation method. / Master of Science
298. Green Schools - The Implementation and Practices of Environmental Education in LEED and USED Green Ribbon Public Schools in Virginia
Marable, Steve Alexander, 03 June 2014
The purpose of this study was to examine the environmental education curriculum utilized within Green Schools. For this study, the researcher defined Green Schools as educational facilities with Leadership in Energy and Environmental Design (LEED) certification or United States Education Department (USED) Green Ribbon recognition. Currently, there is no set standard for the implementation of environmental education in Green Schools or for schools that utilize the building as a teaching tool for students. This descriptive study surveyed Green Schools in the Commonwealth of Virginia in order to better understand what common programs and curricula were being utilized. The study also helps establish pedagogical best practices for environmental education while describing how LEED-certified buildings are currently used by educators as a teaching tool to support sustainable practices.
Overall, 14 Green Schools in the Commonwealth of Virginia agreed to participate in the study. Once principals gave consent for their school to participate, they were asked to respond to the survey instrument and to invite teachers to participate in the Green Schools eSurvey as well. The survey instrument consisted of 14 multiple-choice and open-response items. In total, 98 principals and staff participated in the survey. Multiple-choice survey questions served as the quantitative data for the research study. Quantitative data were examined to report descriptive statistics providing parameters for the sample population; the frequency and percentage for each category, the mean, and the mode were reported for each quantitative survey item. Qualitative data were examined for emerging themes in pedagogical strategies and programs.
The findings from the study indicated that teachers are employing practices consistent with current emphases in environmental education. The data also showed that educators take pride in their buildings and incorporate the facility as a teaching tool in a variety of instructional practices throughout the Commonwealth of Virginia. / Ed. D.
299. Scalable and Productive Data Management for High-Performance Analytics
Youssef, Karim Yasser Mohamed Yousri, 07 November 2023
Advancements in data acquisition technologies across different domains, from genome sequencing to satellite and telescope imaging to large-scale physics simulations, are leading to exponential growth in dataset sizes. Extracting knowledge from this wealth of data enables scientific discoveries at unprecedented scales. However, the sheer volume of the gathered datasets is a bottleneck for knowledge discovery. High-performance computing (HPC) provides a scalable infrastructure to extract knowledge from these massive datasets. However, multiple data management performance gaps exist between big data analytics software and HPC systems. These gaps arise from multiple factors, including the tradeoff between performance and programming productivity, data growth at a faster rate than memory capacity, and the high storage footprints of data analytics workflows. This dissertation bridges these gaps by combining productive data management interfaces with application-specific optimizations of data parallelism, memory operation, and storage management. First, we address the performance-productivity tradeoff by leveraging Spark and optimizing input data partitioning. Our solution optimizes programming productivity while achieving performance comparable to the Message Passing Interface (MPI) for scalable bioinformatics. Second, we address the operating system kernel's limitations for out-of-core data processing by autotuning memory management parameters in userspace. Finally, we address I/O and storage efficiency bottlenecks in data analytics workflows that iteratively and incrementally create and reuse persistent data structures such as graphs, data frames, and key-value datastores. / Doctor of Philosophy / Advancements in various fields, like genetics, satellite imaging, and physics simulations, are generating massive amounts of data. Analyzing this data can lead to groundbreaking scientific discoveries. However, the sheer size of these datasets presents a challenge. High-performance computing (HPC) offers a solution to process and understand this data efficiently. Still, several issues hinder the performance of big data analytics software on HPC systems. These problems include finding the right balance between performance and ease of programming, dealing with the challenges of handling massive amounts of data, and optimizing storage usage. This dissertation focuses on three areas to improve high-performance data analytics (HPDA). First, it demonstrates how Spark with optimized data partitioning can maximize programming productivity while achieving scalability similar to that of the Message Passing Interface (MPI) for bioinformatics. Second, it addresses the limitations of the operating system's memory management for processing data that is too large to fit entirely in memory. Lastly, it tackles the efficiency issues related to input/output operations and storage when dealing with data structures like graphs, data frames, and key-value datastores in iterative and incremental workflows.
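The first contribution centers on how input data is partitioned across a Spark cluster. A minimal PySpark sketch of the idea follows; the file path, record format, and partition-count heuristic are hypothetical, and the sketch shows only the kind of knob such an optimization controls, not the dissertation's actual tuning strategy.

```python
# Minimal PySpark sketch of input-partitioning control for a scalable
# analytics job; the path, data format, and partition count are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-tuning-sketch").getOrCreate()
sc = spark.sparkContext

# minPartitions hints how finely the input is split across executors;
# too few partitions underutilize the cluster, too many add scheduling overhead.
num_partitions = sc.defaultParallelism * 4
reads = sc.textFile("hdfs:///data/sequences.txt", minPartitions=num_partitions)

# A toy analytics stage: count records per leading key (e.g., a sequence id)
counts = (reads.map(lambda line: (line.split("\t")[0], 1))
               .reduceByKey(lambda a, b: a + b))
print(counts.take(5))
spark.stop()
```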
300. Hiding Decryption Latency in Intel SGX using Metadata Prediction
Talapkaliyev, Daulet, 20 January 2020
Hardware-assisted Trusted Execution Environment technologies have become a crucial component in providing security for cloud-based computing. One such hardware-assisted countermeasure is Intel Software Guard Extensions (SGX). Using additional dedicated hardware and a new set of CPU instructions, SGX is able to provide isolated execution of code within trusted hardware containers called enclaves. By utilizing private encrypted memory and various integrity authentication mechanisms, it can provide confidentiality and integrity guarantees for protected data. In spite of the dedicated hardware, these extra layers of security add a significant performance overhead. Decryption of data using secret OTPs, which are generated by modified counter-mode AES encryption blocks, results in a significant latency overhead that contributes to the overall SGX performance loss. This thesis introduces a metadata prediction extension to SGX based on local metadata releveling and prediction mechanisms. Correct prediction of metadata allows OTPs to be speculatively precomputed and used immediately to decrypt incoming ciphertext data. This hides a significant part of the decryption latency and results in faster SGX performance without any changes to the original SGX security guarantees. / Master of Science / With the exponential growth of cloud computing, where critical data processing happens on third-party computer systems, it is important to ensure data confidentiality and integrity against third-party access. That access may include not only external attackers but also insiders, such as the cloud computing providers themselves. While software isolation using virtual machines is the most common method of achieving runtime security in cloud computing, numerous shortcomings of software-only countermeasures force companies to demand extra layers of security. Recently adopted general-purpose hardware-assisted technologies like Intel Software Guard Extensions (SGX) add that extra layer of security at a significant performance overhead. One of the major contributors to the SGX performance overhead is data decryption latency. This work proposes a novel algorithm to speculatively predict the metadata used during decryption, which allows the processor to hide a significant portion of the decryption latency and thus improves the overall performance of Intel SGX without compromising security.
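The latency-hiding idea rests on a property of counter-mode encryption: the keystream (the OTP) depends only on the key and the counter metadata, not on the ciphertext, so a correctly predicted counter lets the pad be computed before the data arrives. Below is a minimal Python sketch of that property using the cryptography package; the key, nonce layout, and simple +1 prediction rule are illustrative assumptions, not Intel SGX's actual memory encryption engine design.

```python
# Sketch of counter prediction in CTR-mode decryption; the key, nonce layout,
# and +1 prediction rule are illustrative, not SGX's actual MEE design.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)

def keystream_pad(counter: int, nbytes: int = 16) -> bytes:
    """Encrypting zeros under AES-CTR yields the one-time pad for this counter."""
    nonce = counter.to_bytes(16, "big")
    enc = Cipher(algorithms.AES(KEY), modes.CTR(nonce)).encryptor()
    return enc.update(b"\x00" * nbytes)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

counter = 41  # counter metadata seen for the previous block

# Predictor guesses the next block's counter is counter + 1 and precomputes
# its pad while the memory read is still in flight.
predicted_pad = keystream_pad(counter + 1)

# The writer in fact encrypted the next block under counter 42, so the
# prediction is correct and decryption collapses to a single XOR on arrival.
ciphertext = xor(b"sixteen byte msg", keystream_pad(42))
plaintext = xor(ciphertext, predicted_pad)
print(plaintext)  # b'sixteen byte msg'
```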