291

Optimization of Highway Bridge Girders for Use with Ultra-High Performance Concrete (UHPC)

Woodworth, Michael Allen 10 December 2008 (has links)
Ultra-High Performance Concrete (UHPC) is a class of cementitious materials that share characteristics including very high compressive strength, tensile strength greater than that of conventional concrete, and high durability. The material consists of finely graded cementitious particles and aggregates that develop a dense, durable matrix. The addition of steel fibers increases ductility such that the material develops usable tensile strength. The durability and strength of UHPC make it a desirable material for the production of highway bridge girders. However, UHPC's unique constituent materials make it more expensive than conventional concrete. The cost and lack of appropriate design guidelines have limited its introduction into bridge products. The investigation presented in this thesis developed several optimization formulations to determine a suitable bridge girder shape for use with UHPC. The goal of this optimization was to develop a methodology for using UHPC in highway bridge designs that is cost competitive with conventional concrete solutions. Several surveys and field visits were performed to identify the important aspects of girder fabrication. Optimizations were formulated to develop optimized girder cross sections and full bridge design configurations that utilize UHPC. The results showed that for spans greater than 90 ft, UHPC used in the proposed girder shape was more economical than conventional girders. The optimizations and surveys resulted in a proposed method for using UHPC in highway bridges with existing girder shapes and formwork. The proposed method consists of three simple calculations that transform an initial conventional design into an initial design using modified UHPC girders. / Master of Science
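As a rough illustration of the kind of cost-driven girder optimization this abstract describes, the Python sketch below minimizes material cost subject to a flexural capacity constraint. The span, unit cost, strength, geometry bounds, and the capacity formula are all invented placeholders, not the formulation or values from the thesis.

```python
# Hypothetical sketch of a cost-driven girder optimization; every number and the
# capacity model are illustrative placeholders, not data from the study.
from scipy.optimize import minimize

SPAN_FT = 100.0            # assumed simple span
UHPC_COST_PER_FT3 = 75.0   # assumed unit cost of UHPC ($/ft^3)
FC_KSI = 24.0              # assumed UHPC compressive strength (ksi)
DEMAND_KIP_FT = 3500.0     # assumed factored moment demand (kip-ft)

def moment_capacity(x):
    """Very crude rectangular-section capacity estimate (illustrative only)."""
    width_ft, depth_ft = x
    return 0.85 * FC_KSI * 144.0 * width_ft * depth_ft**2 / 6.0 / 12.0  # kip-ft

def cost(x):
    """Material cost of a prismatic girder over the span."""
    width_ft, depth_ft = x
    return width_ft * depth_ft * SPAN_FT * UHPC_COST_PER_FT3

constraints = [{"type": "ineq", "fun": lambda x: moment_capacity(x) - DEMAND_KIP_FT}]
bounds = [(0.5, 4.0), (1.0, 8.0)]  # ft, arbitrary fabrication limits

result = minimize(cost, x0=[2.0, 4.0], bounds=bounds, constraints=constraints)
print("width, depth (ft):", result.x, "cost ($):", round(result.fun))
```

The thesis's actual formulations optimize full cross-section shapes and bridge configurations; this sketch only shows the general structure of such a constrained minimization.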
292

HPC-based Parallel Algorithms for Generating Random Networks and Some Other Network Analysis Problems

Alam, Md Maksudul 06 December 2016 (has links)
The advancement of modern technologies has resulted in an explosive growth of complex systems, such as the Internet, biological, social, and various infrastructure networks, which have, in turn, contributed to the rise of massive networks. During the past decade, analyzing and mining these networks has become an emerging research area with many real-world applications. The most relevant problems in this area include collecting and managing networks, modeling and generating random networks, and developing network mining algorithms. In the era of big data, speed is no longer optional for the effective analysis of these massive systems; it is an absolute necessity. This motivates the need for parallel algorithms on modern high-performance computing (HPC) systems, including multi-core, distributed, and graphics processing unit (GPU) based systems. In this dissertation, we present distributed-memory parallel algorithms for generating massive random networks and a novel GPU-based algorithm for index searching. The dissertation is divided into two parts. In Part I, we present parallel algorithms for generating massive random networks using several widely used models. We design and develop a novel parallel algorithm for generating random networks using the preferential-attachment model. This algorithm can generate networks with billions of edges in just a few minutes on a medium-sized computing cluster. We develop another parallel algorithm for generating random networks with a given sequence of expected degrees. We also design a new time- and space-efficient algorithmic method to generate random networks with any degree distribution. This method has been applied to generate random networks using other popular network models, such as the block two-level Erdős–Rényi and stochastic block models. Parallel algorithms for network generation pose many nontrivial challenges, such as dependencies between edges, avoiding duplicate edges, and load balancing. We applied novel techniques to deal with these challenges. All of our algorithms scale very well to a large number of processors and provide almost linear speedup. Dealing with the large number of networks collected from a variety of fields requires efficient management systems such as graph databases. Finding a record in those databases is critical and is typically the main performance bottleneck. In Part II of the dissertation, we develop a GPU-based parallel algorithm for index searching. Our algorithm achieves the fastest throughput ever reported in the literature on various benchmarks. / Ph. D.
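For context, the sketch below shows a minimal sequential version of the preferential-attachment process that the dissertation parallelizes. The dissertation's distributed-memory algorithms are far more involved; this only illustrates the underlying degree-proportional attachment rule, with parameter values chosen arbitrarily.

```python
# Minimal sequential sketch of the preferential-attachment (Barabasi-Albert) model.
import random

def preferential_attachment(n, m, seed=0):
    """Generate an n-node network where each new node attaches to m earlier
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = set()
    # Endpoint list: each node appears once per incident edge, so sampling
    # uniformly from it realizes degree-proportional selection.
    endpoints = list(range(m))  # m seed nodes, listed once each to bootstrap selection
    for v in range(m, n):
        targets = set()
        while len(targets) < m:          # draw m distinct earlier nodes
            targets.add(rng.choice(endpoints))
        for u in targets:
            edges.add((u, v))
            endpoints.extend([u, v])     # both endpoints gain one degree
    return edges

g = preferential_attachment(n=10_000, m=4)
print(len(g), "edges")
```

Parallelizing this process is nontrivial precisely because each new edge depends on the degrees produced by all earlier edges, one of the dependency challenges the abstract mentions.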
293

Automatic Scheduling of Compute Kernels Across Heterogeneous Architectures

Lyerly, Robert Frantz 24 June 2014 (has links)
The world of high-performance computing has shifted from increasing single-core performance to extracting performance from heterogeneous multi- and many-core processors due to the power, memory, and instruction-level parallelism walls. All trends point toward increased processor heterogeneity as a means of increasing application performance, from smartphones to servers. These various architectures are designed for different types of applications: traditional "big" CPUs (like the Intel Xeon) are optimized for low latency, while other architectures (such as the NVidia Tesla K20x) are optimized for high throughput. These architectures have different tradeoffs and different performance profiles, offering substantial performance gains for the right types of applications. However, applications that are ill-suited for a given architecture may experience significant slowdown; therefore, it is imperative that applications are scheduled onto the correct processor. In order to perform this scheduling, applications must be analyzed to determine their execution characteristics. Traditionally, this application-to-hardware mapping was determined statically by the programmer. However, this requires intimate knowledge of the application and underlying architecture, and it precludes load balancing by the system. We demonstrate and empirically evaluate a system for automatically scheduling compute kernels by extracting program characteristics and applying machine learning techniques. We develop a machine learning process that is system-agnostic and works in a variety of contexts (e.g., embedded, desktop/workstation, server). Finally, we perform scheduling in a workload-aware and workload-adaptive manner for these compute kernels. / Master of Science
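The core idea, learning a mapping from extracted kernel characteristics to the best target device, can be sketched as below. The feature names, training rows, and model choice are illustrative assumptions; the thesis builds its own feature extraction and learning pipeline.

```python
# Hypothetical sketch: classify which device a kernel should run on from
# static program features. Data and features are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [arithmetic intensity, fraction of branch instructions, parallel work items]
features = [
    [8.0, 0.02, 1_000_000],   # compute-heavy, wide  -> measured fastest on GPU
    [0.5, 0.30, 64],          # branchy, narrow      -> measured fastest on CPU
    [6.5, 0.05, 500_000],
    [0.8, 0.25, 128],
]
best_device = ["gpu", "cpu", "gpu", "cpu"]   # offline measurements (hypothetical)

model = DecisionTreeClassifier().fit(features, best_device)

new_kernel = [[5.0, 0.04, 250_000]]
print("schedule on:", model.predict(new_kernel)[0])
```

A real scheduler would also weigh current device load (the workload-aware part), not just per-kernel features.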
294

Quantitative HPTLC

Cleary, Maryanne Viola 11 July 2009 (has links)
Advances in thin layer chromatography (TLC), including smaller, more uniform particles, the use of a scanning spectrophotometer (densitometer), and sample application devices, led to the development of the High Performance Thin Layer Chromatography (HPTLC) technique. HPTLC allows quantitative as well as qualitative results from much smaller amounts, in some cases down to the picogram level. With these advancements, the limiting factor in detection of smaller concentrations has become the plate itself, and more specifically the preparation of the adsorbent and binder and the layering process. This research evaluated HPTLC plates from several manufacturers for significant differences between manufacturers and between plates from each manufacturer. Several concentrations of three drugs of abuse were applied, developed, and quantitated. Both Rf and peak area were statistically evaluated for effects of manufacturer, specific plate within a manufacturer, specific drug, concentration, and cross-nested effects. Significant differences were found between manufacturers for both Rf and peak area, with E. Merck and Baker plates having the best overall results. All manufacturers were found to have some plates with obvious visual surface defects that were not suitable for use. The major source of variation for all manufacturers was plate-to-plate variation rather than track-to-track deviations on any given plate. / Master of Science
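The plate-to-plate versus track-to-track comparison amounts to separating between-plate and within-plate variance. The sketch below shows a simple one-way variance-component decomposition of that kind; the peak-area numbers are invented stand-ins, and the thesis's nested analysis covers more factors than this.

```python
# Illustrative variance-component split: plate-to-plate vs track-to-track.
import numpy as np

# peak areas: rows = plates, columns = tracks on that plate (hypothetical values)
areas = np.array([
    [1020, 1035, 1012, 1028],
    [ 980,  990,  975,  985],
    [1060, 1052, 1070, 1058],
])
k, n = areas.shape                                   # k plates, n tracks per plate

ms_within = areas.var(axis=1, ddof=1).mean()         # track-to-track mean square
ms_between = n * areas.mean(axis=1).var(ddof=1)      # plate-to-plate mean square
var_track = ms_within
var_plate = max((ms_between - ms_within) / n, 0.0)   # plate variance component

print(f"plate variance component: {var_plate:.1f}, track variance: {var_track:.1f}")
```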
295

Single Straight Steel Fiber Pullout Characterization in Ultra-High Performance Concrete

Black, Valerie Mills 18 July 2014 (has links)
This thesis presents results of an experimental investigation to characterize single straight steel fiber pullout in Ultra-High Performance Concrete (UHPC). Several parameters were explored, including the distance of fibers to the edge of the specimen, the distance between fibers, and the fiber volume in the matrix. The pullout load versus slip curve was recorded, from which the pullout work and maximum pullout load for each series of parameters were obtained. The curves were fitted to an existing fiber pullout model considering bond-fracture energy, Gd, bond frictional stress, τ₀, and slip hardening-softening coefficient, β. The representative load-slip curve characterizing the fiber pullout behavior will be implemented into a computational modeling protocol for concrete structures based on Lattice Discrete Particle Modeling (LDPM). The parametric study showed that distances over 12.7 mm from the edge of the specimen have no significant effect on the maximum pullout load and pullout work. An edge distance of 3.2 mm decreased the average pullout work by 26% and the maximum pullout load by 24% for mixes with 0% fiber volume. The distance between fibers did not have a significant effect on the pullout behavior within this study. Slight differences in pullout behavior between the 2% and 4% fiber volumes were observed, including a slight increase in the maximum pullout load with increasing fiber volume. The suggested fitted parameters for modeling with 2% and 4% fiber volumes are a bond-fracture energy of zero, bond frictional stresses of 2.6 N/mm² and 2.9 N/mm², and slip-hardening coefficients of 0.21 and 0.18, respectively. / Master of Science
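The fitting step described above can be illustrated with a least-squares fit of a simplified frictional-sliding pullout law. The functional form, fiber dimensions, and data points below are placeholders, not the exact model or measurements from the thesis; they are chosen only so the fit lands near plausible τ₀ and β values.

```python
# Illustrative fit of a pullout load-slip curve to a simplified sliding model
# P(s) = tau0 * pi * d_f * (L_e - s) * (1 + beta * s / d_f).
import numpy as np
from scipy.optimize import curve_fit

D_F = 0.2      # assumed fiber diameter (mm)
L_E = 6.35     # assumed embedded length (mm)

def pullout_load(slip, tau0, beta):
    """Pullout load (N) during frictional sliding of a straight fiber."""
    return tau0 * np.pi * D_F * (L_E - slip) * (1.0 + beta * slip / D_F)

# hypothetical measured slip (mm) and load (N) after full debonding
slip = np.array([0.1, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
load = np.array([11.3, 14.6, 17.9, 22.0, 22.7, 20.0, 13.8])

(tau0_fit, beta_fit), _ = curve_fit(pullout_load, slip, load, p0=[2.5, 0.2])
print(f"fitted tau0 = {tau0_fit:.2f} N/mm^2, beta = {beta_fit:.3f}")
```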
296

Retention trends of chemical classes using CCl₄ as a carrier solvent in normal-phase HPLC

Wang, Muh S. January 1985 (has links)
Carbon tetrachloride (CCl₄) was closely evaluated as a carrier solvent in high-performance liquid chromatography (HPLC). The separation and retention trends of ninety-two selected compounds from eleven chemical classes (furans, thiophenes, aromatic hydrocarbons, ethers, esters, ketones, aldehydes, aromatic amines, azaarenes, alcohols, and phenols) on three analytical silica-bonded-phase columns (amino (NH₂), cyano (CN), and polar amino-cyano (PAC)) were investigated with CCl₄ and refractive index (RI) detection. The sample capacity and column efficiency of the NH₂ and PAC columns were measured and compared. In addition, a method for determining unmeasurable capacity factors (k' values) was developed and illustrated. / M.S.
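For readers unfamiliar with the quantity, the capacity (retention) factor underlying these retention trends is k' = (t_R - t_0) / t_0, where t_R is the solute retention time and t_0 the column dead time. The small sketch below just applies that standard definition; the times are illustrative, not measurements from the thesis, and it does not reproduce the thesis's method for otherwise unmeasurable k' values.

```python
# Standard capacity-factor calculation with illustrative retention times.
def capacity_factor(t_r_min, t_0_min):
    """Capacity (retention) factor k' from retention time and dead time."""
    return (t_r_min - t_0_min) / t_0_min

t_0 = 1.8  # assumed column dead time (min)
for name, t_r in [("furan", 2.4), ("phenol", 9.6)]:
    print(f"{name}: k' = {capacity_factor(t_r, t_0):.2f}")
```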
297

High Performance Computing Issues in Large-Scale Molecular Statics Simulations

Pulla, Gautam 02 June 1999 (has links)
Successful application of parallel high-performance computing to practical problems requires overcoming several challenges. These range from making sequential and parallel improvements to programs, to implementing software tools that create an environment which aids sharing of high-performance hardware resources and limits losses caused by hardware and software failures. In this thesis we describe our approach to meeting these challenges in the context of a Molecular Statics code. We describe sequential and parallel optimizations made to the code, as well as a suite of tools constructed to facilitate the execution of the Molecular Statics program on a network of parallel machines with the aim of increasing resource sharing, fault tolerance, and availability. / Master of Science
298

Extraction of Additives from Polystyrene and Subsequent Analysis

Smith, Susan H. 19 June 1998 (has links)
The extraction with supercritical carbon dioxide of fifteen (15) polymer additives, used as antioxidants, UV stabilizers, process lubricants, flame retardants, and antistats, from eight formulations of polystyrene is demonstrated and compared to traditional dissolution/precipitation extractions. The purpose of the study was twofold: 1) the development of high performance liquid chromatography (HPLC) methods for the additives and 2) the determination of the viability of supercritical fluid extraction (SFE) of the additives from polystyrene. Separation of some of the additives was achieved using reversed-phase liquid chromatography. Nine of the additives were assayed in this manner, while the remaining six could not be assayed using reversed-phase liquid chromatography. In order to develop an extraction method for the additives, the effects of static extraction time, CO2 density, and temperature were first investigated. These preliminary extractions revealed that a static extraction period, which afforded an opportunity for the polymer to swell, combined with a high CO2 density and an extraction temperature above the glass transition temperature (Tg), yielded quantitative recoveries of the additives. Triplicate extractions of the various polystyrene formulations matched the additive recoveries obtained by the traditional dissolution/precipitation method. / Master of Science
299

Green Schools - The Implementation and Practices of Environmental Education in LEED and USED Green Ribbon Public Schools in Virginia

Marable, Steve Alexander 03 June 2014 (has links)
The purpose of this study was to examine the environmental education curriculum utilized within Green Schools. For this study, the researcher defined Green Schools as educational facilities with Leadership in Energy and Environmental Design (LEED) certification or United States Education Department (USED) Green Ribbon recognition. Currently, there is no set standard for the implementation of environmental education in Green Schools or for schools that utilize the building as a teaching tool for students. This descriptive study surveyed Green Schools in the Commonwealth of Virginia in order to better understand what common programs and curricula were being utilized. The study will also assist in establishing pedagogical best practices for environmental education while describing how LEED-certified buildings are currently being used by educators as a teaching tool to support sustainable practices. Overall, 14 Green Schools in the Commonwealth of Virginia agreed to participate in the study. Once principals gave consent for their school to participate, they were asked to respond to the survey instrument and to invite teachers to participate in the Green Schools eSurvey as well. The survey instrument consisted of 14 multiple-choice and open-response items. In total, 98 principals and staff members participated in the survey. Multiple-choice survey questions served as the quantitative data for the research study. Quantitative data were examined to report descriptive statistics providing parameters for the sample population. The frequency and percentage for each category, as well as the mean and mode, were reported for each quantitative survey item. Qualitative data were examined for emerging themes related to pedagogical strategies and programs. The findings from the study indicated that teachers are employing practices that are consistent with current emphases on environmental education. The data also showed that educators take pride in their buildings and incorporate the facility as a teaching tool in a variety of instructional practices throughout the Commonwealth of Virginia. / Ed. D.
300

Scalable and Productive Data Management for High-Performance Analytics

Youssef, Karim Yasser Mohamed Yousri 07 November 2023 (has links)
Advancements in data acquisition technologies across different domains, from genome sequencing to satellite and telescope imaging to large-scale physics simulations, are leading to an exponential growth in dataset sizes. Extracting knowledge from this wealth of data enables scientific discoveries at unprecedented scales. However, the sheer volume of the gathered datasets is a bottleneck for knowledge discovery. High-performance computing (HPC) provides a scalable infrastructure to extract knowledge from these massive datasets. However, multiple data management performance gaps exist between big data analytics software and HPC systems. These gaps arise from multiple factors, including the tradeoff between performance and programming productivity, data growth at a faster rate than memory capacity, and the high storage footprints of data analytics workflows. This dissertation bridges these gaps by combining productive data management interfaces with application-specific optimizations of data parallelism, memory operations, and storage management. First, we address the performance-productivity tradeoff by leveraging Spark and optimizing input data partitioning. Our solution optimizes programming productivity while achieving performance comparable to the Message Passing Interface (MPI) for scalable bioinformatics. Second, we address the operating system kernel's limitations for out-of-core data processing by autotuning memory management parameters in userspace. Finally, we address I/O and storage efficiency bottlenecks in data analytics workflows that iteratively and incrementally create and reuse persistent data structures such as graphs, data frames, and key-value datastores. / Doctor of Philosophy / Advancements in various fields, like genetics, satellite imaging, and physics simulations, are generating massive amounts of data. Analyzing this data can lead to groundbreaking scientific discoveries. However, the sheer size of these datasets presents a challenge. High-performance computing (HPC) offers a solution to process and understand this data efficiently. Still, several issues hinder the performance of big data analytics software on HPC systems. These problems include finding the right balance between performance and ease of programming, dealing with the challenges of handling massive amounts of data, and optimizing storage usage. This dissertation focuses on three areas to improve high-performance data analytics (HPDA). First, it demonstrates how using Spark and optimized data partitioning can improve programming productivity while achieving scalability similar to that of the Message Passing Interface (MPI) for scalable bioinformatics. Second, it addresses the limitations of the operating system's memory management for processing data that is too large to fit entirely in memory. Lastly, it tackles the efficiency issues related to input/output operations and storage when dealing with data structures like graphs, data frames, and key-value datastores in iterative and incremental workflows.
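The first contribution, controlling how Spark partitions its input so parallelism matches the cluster, can be sketched as below with a toy k-mer count. The file path, partition count, and task are hypothetical and are not the bioinformatics pipeline from the dissertation; only the general pattern of tuning input partitioning is illustrated.

```python
# Minimal PySpark sketch: set input partitioning explicitly, then run a toy
# k-mer count. Path and partition count are placeholders.
from pyspark import SparkContext

sc = SparkContext(appName="kmer-count-sketch")

K = 21
NUM_PARTITIONS = 512   # assumed: roughly total cores times a small oversubscription factor

reads = sc.textFile("hdfs:///data/reads.txt", minPartitions=NUM_PARTITIONS)

kmers = (reads
         .flatMap(lambda seq: (seq[i:i + K] for i in range(len(seq) - K + 1)))
         .map(lambda kmer: (kmer, 1))
         .reduceByKey(lambda a, b: a + b))

print(kmers.take(5))
sc.stop()
```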
