281

Efficient Storage Middleware Design in InfiniBand Clusters for High End Computing

Ouyang, Xiangyong 19 June 2012 (has links)
No description available.
282

A Map-Reduce-Like System for Programming and Optimizing Data-Intensive Computations on Emerging Parallel Architectures

Jiang, Wei 27 August 2012 (has links)
No description available.
283

The Effects of High Performance Work Systems on Operational Performance in Different Manufacturing Environments: Improving the “Fit” of HRM Practices in Mass Customization

Leffakis, Zachary M. 23 September 2009 (has links)
No description available.
284

High-performance liquid chromatographic studies of a 60-kilodalton oncofetal tumor marker /

Sutherland, Donald Eugene January 1986 (has links)
No description available.
285

Biochemical system simulation on a heterogeneous multicore processor

Al Assaad, Sevin January 2009 (has links)
No description available.
286

Failure Prediction using Machine Learning in a Virtualised HPC System and application

Bashir, Mohammed, Awan, Irfan U., Ugail, Hassan, Muhammad, Y. 21 March 2019 (has links)
Yes / Failure is an increasingly important issue in high performance computing and cloud systems. As large-scale systems continue to grow in scale and complexity, mitigating the impact of failure and providing accurate predictions with sufficient lead time remains a challenging research problem. Traditional fault-tolerance strategies such as regular check-pointing and replication are not adequate given the emerging complexity of high performance computing systems. This underscores the need for an effective and proactive failure management approach aimed at minimizing the effect of failure within the system. With the advent of machine learning techniques, the ability to learn from past information to predict future patterns of behaviour makes it possible to predict potential system failures more accurately. Thus, in this paper, we explore the predictive abilities of machine learning by applying a number of algorithms to improve the accuracy of failure prediction. We have developed a failure prediction model using time series and machine learning, and performed comparison-based tests on the prediction accuracy. The primary algorithms we considered are Support Vector Machines (SVM), Random Forest (RF), k-Nearest Neighbours (KNN), Classification and Regression Trees (CART) and Linear Discriminant Analysis (LDA). Experimental results indicate that the average prediction accuracy of our model using SVM is 90%, making it more accurate and effective than the other algorithms. This finding implies that our method can effectively predict all possible future system and application failures within the system. / Petroleum Technology Development Fund (PTDF) funding support under the OSS scheme with grant number (PTDF/E/OSS/PHD/MB/651/14)
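The record above only names the algorithms compared; the sketch below is not taken from the thesis. It shows one plausible way to run the same five-classifier comparison with scikit-learn, where the synthetic failure dataset, the feature construction, and the cross-validation setup are all assumptions made purely for illustration.

```python
# Hypothetical sketch: comparing the five classifiers named in the abstract on a
# synthetic "failure vs. normal" dataset. Feature extraction from real HPC
# monitoring time series is assumed, not taken from the thesis.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier            # CART
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-in for features extracted from node health time series
# (e.g. rolling means of CPU load, memory pressure, I/O error counts).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)  # y = 1 marks an impending failure

models = {
    "SVM":  make_pipeline(StandardScaler(), SVC()),
    "RF":   RandomForestClassifier(n_estimators=200, random_state=0),
    "KNN":  make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "CART": DecisionTreeClassifier(random_state=0),
    "LDA":  LinearDiscriminantAnalysis(),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name:5s} mean accuracy: {scores.mean():.3f}")
```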
287

Enhanced Capabilities of the Spike Algorithm and a New Spike-OpenMP Solver

Spring, Braegan S 07 November 2014 (has links) (PDF)
SPIKE is a parallel algorithm for solving block tridiagonal linear systems. In this work, two useful improvements to the algorithm are proposed. A flexible threading strategy is developed to overcome limitations of the recursive reduced-system method: allocating multiple threads to some of the tasks created by the SPIKE algorithm removes the previous restriction that recursive SPIKE may only use a number of threads equal to a power of two. Additionally, a method of solving transpose problems is shown. This method matches the performance of the non-transpose solve while reusing the original factorization.
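As a rough illustration of the underlying algorithm (not the thesis's recursive, block-based OpenMP implementation), the sketch below applies the basic SPIKE factorization A = D·S with two partitions to a scalar tridiagonal system. The partition count, the dense NumPy solves, and the test matrix are all assumptions chosen for clarity.

```python
# Minimal two-partition SPIKE sketch on a scalar tridiagonal system (NumPy only).
# The thesis targets block tridiagonal systems with a recursive, multithreaded
# solver; this toy version only illustrates the basic D*S factorization idea.
import numpy as np

def spike_solve_2part(A, f):
    """Solve A x = f for a tridiagonal A of even size using two partitions."""
    n = A.shape[0]
    m = n // 2
    A1, A2 = A[:m, :m], A[m:, m:]           # diagonal partitions of D
    c = A[m - 1, m]                          # coupling element above the split
    b = A[m, m - 1]                          # coupling element below the split

    # Spikes: columns of D^{-1} * (off-diagonal coupling), D = diag(A1, A2).
    V = np.linalg.solve(A1, c * np.eye(m)[:, m - 1])   # lives in partition 1
    W = np.linalg.solve(A2, b * np.eye(m)[:, 0])       # lives in partition 2

    # g = D^{-1} f; the two solves are independent -> parallel in real SPIKE.
    g = np.concatenate([np.linalg.solve(A1, f[:m]), np.linalg.solve(A2, f[m:])])

    # Reduced 2x2 system in the interface unknowns x[m-1] and x[m].
    R = np.array([[1.0, V[m - 1]],
                  [W[0], 1.0]])
    x_if = np.linalg.solve(R, g[m - 1:m + 1])

    # Retrieve the remaining unknowns from the spikes.
    x = g.copy()
    x[:m] -= V * x_if[1]
    x[m:] -= W * x_if[0]
    x[m - 1], x[m] = x_if
    return x

# Quick check against a dense solve on a diagonally dominant tridiagonal matrix.
rng = np.random.default_rng(0)
n = 8
A = (np.diag(4 + rng.random(n)) + np.diag(rng.random(n - 1), 1)
     + np.diag(rng.random(n - 1), -1))
f = rng.random(n)
assert np.allclose(spike_solve_2part(A, f), np.linalg.solve(A, f))
```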
288

Digestion of inositol phosphates by dairy cows: Method development and application

Ray, Partha Pratim 05 June 2012 (has links)
Successful implementation of dietary P management strategies demands an improved understanding of P digestion dynamics in ruminants, and this is not possible without a reliable and accurate phytate (Pp) quantification method. The objective of the first study was to develop a robust, accurate, and sensitive method to extract and quantify phytate in feeds, ruminant digesta, and feces. Clean-up procedures were developed for acid and alkaline extracts of feed, ruminant digesta, and feces, and the clarified extracts were analyzed for Pp using high performance ion chromatography (HPIC). The quantified Pp in acid and alkaline extracts was comparable for feed, but alkaline extraction yielded greater estimates of Pp content for digesta and feces than did acid extraction. The extract clean-up procedures successfully removed sample matrix interferences, making alkaline extraction compatible with HPIC. The developed method was applied to investigate the disappearance of Pp from the large intestine of dairy heifers. Eight ruminally- and ileally-cannulated crossbred dairy heifers were used; each heifer was infused ileally with 0, 5, 15, or 25 g/d Pp, and total fecal collection was conducted. On average, 15% of the total Pp entering the large intestine was degraded, but the amount of infused Pp did not influence Pp degradability. Net absorption of P from the large intestine was observed. A feeding trial was conducted to investigate the effect of dietary Pp supply on ruminal and post-ruminal Pp digestion. Six ruminally- and ileally-cannulated crossbred lactating cows were used, and the dietary treatments were low (0.10%), medium (0.18%), and high (0.29%) Pp, plus a high inorganic P treatment (Pi; 0.11% Pp; same total P content as high Pp). Ruminal Pp digestibility increased linearly with dietary Pp. As in the infusion study, net disappearance of Pp from the large intestine was only 16% of the total Pp entering the large intestine and was not influenced by dietary Pp. Fecal P excretion increased linearly with increasing dietary Pp but was not affected by the form of dietary P. In lactating cows, Pp digestibility was not affected by dietary Pp, and fecal P excretion was regulated by total dietary P rather than by the form of dietary P. / Ph. D.
289

Using Workload Characterization to Guide High Performance Graph Processing

Hassan, Mohamed Wasfy Abdelfattah 24 May 2021 (has links)
Graph analytics represents an important application domain widely used in many fields such as web graphs, social networks, and Bayesian networks. The sheer size of graph data sets, combined with the irregular nature of the underlying problem, poses a significant challenge for the performance, scalability, and power efficiency of graph processing. With the exponential growth in the size of graph datasets, there is an ever-growing need for faster, more power-efficient graph solvers. Graph processing can take advantage of FPGAs' power efficiency and customizable architecture paired with CPUs' general-purpose processing power and sophisticated cache policies. CPU-FPGA hybrid systems have the potential to support performant and scalable graph solvers if both devices can work coherently to make up for each other's deficits. This study aims to optimize graph processing on heterogeneous systems through interdisciplinary research that would impact both the graph processing community and the FPGA/heterogeneous computing community. On one hand, this research explores how to harness the computational power of FPGAs and how they can work cooperatively in a CPU-FPGA hybrid system. On the other hand, graph applications have a data-driven execution profile; hence, this study explores how to take advantage of information about the graph input properties to optimize the performance of graph solvers. The introduction of High Level Synthesis (HLS) tools has made FPGAs accessible to a much broader audience, but HLS designs are yet to be performant and efficient, especially in the case of irregular graph applications. Therefore, this dissertation proposes automated frameworks to help integrate FPGAs into mainstream computing. This is achieved by first exploring the optimization space of HLS-FPGA designs, then devising a domain-specific performance model that is used to build an automated framework to guide the optimization process. Moreover, the architectural strengths of both CPUs and FPGAs are exploited to maximize graph processing performance via an automated framework for workload distribution on the available hardware resources. / Doctor of Philosophy / Graph processing is a very important application domain, as many real-world problems can be represented as graphs. For instance, on the web, pages can be represented as graph vertices while the hyperlinks between them represent the edges. Analyzing such graphs is used for web search engines, website ranking, and network analysis, among other uses. However, graph processing is computationally demanding and very challenging to optimize. This is due to the irregular nature of graph problems, which is characterized by frequent indirect memory accesses. Such a memory access pattern depends on the input data and is impossible to predict, which renders CPUs' sophisticated caching policies ineffective. With the rise of heterogeneous computing and hardware accelerators, a new research area was born that attempts to maximize performance by utilizing all of the available hardware devices in a heterogeneous ecosystem. This dissertation aims to improve the efficiency of utilizing such heterogeneous systems when targeting graph applications. More specifically, this research focuses on the collaboration of CPUs and FPGAs (Field Programmable Gate Arrays) in a CPU-FPGA hybrid system.
Innovative ideas are presented to exploit the strengths of each available device in such a heterogeneous system, as well as to address some of the inherent challenges of graph processing. Automated frameworks are introduced to efficiently utilize the FPGA devices and to distribute and schedule the workload across multiple devices so as to maximize the performance of graph applications.
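To make the "frequent indirect memory accesses" point concrete, here is a small sketch that is not from the dissertation: a pull-style PageRank step over a CSR graph, where the gather through col_idx and the rank array is data-dependent and therefore hard for a CPU cache to anticipate. The CSR layout, the pull formulation, and the toy graph are assumptions chosen only for illustration.

```python
# Illustrative sketch (not from the dissertation): a single pull-style PageRank
# iteration over a CSR graph. The two indirect lookups -- col_idx[e] and then
# rank[u] -- are data-dependent, which is the irregular access pattern that
# defeats CPU prefetching and motivates the CPU-FPGA co-design studied here.
import numpy as np

def pagerank_step(row_ptr, col_idx, out_degree, rank, damping=0.85):
    n = len(row_ptr) - 1
    new_rank = np.full(n, (1.0 - damping) / n)
    for v in range(n):                          # regular, streaming access
        acc = 0.0
        for e in range(row_ptr[v], row_ptr[v + 1]):
            u = col_idx[e]                      # indirect: depends on the graph
            acc += rank[u] / out_degree[u]      # indirect: gathers scattered data
        new_rank[v] += damping * acc
    return new_rank

# Tiny 4-vertex example in CSR form: each row lists a vertex's in-neighbours.
row_ptr = np.array([0, 2, 3, 4, 6])
col_idx = np.array([1, 2, 0, 3, 0, 1])          # in-neighbours of vertices 0..3
out_degree = np.array([2, 2, 1, 1])
rank = np.full(4, 0.25)
print(pagerank_step(row_ptr, col_idx, out_degree, rank))
```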
290

Development and characterization of novel detectors for use in flow injection analysis or liquid chromatography

Roush, John Albert 06 June 2008 (has links)
A rapid-scanning square-wave voltammetric detector has been developed for use with high performance liquid chromatography (HPLC). The electrochemical cell used in the detector was designed so that the HPLC effluent flows through the center of a large-diameter platinum disk electrode and is then forced to flow radially across the electrode surface. The arrangement of the electrodes in the cell was intended to produce large analytical currents while minimizing electrical resistance and analyte band spreading in the detection zone. The detector was evaluated in terms of its minimum detectable quantity (MDQ), linear dynamic range, electrochemical efficiency, and analyte band spreading. The MDQ was found to be in the low parts-per-billion range for hydroquinone. The detector was shown to provide data qualitatively superior to that obtained by amperometric detection and to be compatible with gradient elution HPLC over a broad range of solvent compositions. A sensor based on the quartz crystal microbalance (QCM) was also developed for use in flowing solvent streams. Quartz crystals were treated with various compounds to produce close-packed monolayer coatings that could interact with solutes entering the flow cell. The solute capacity was determined for one of the monolayer coatings, and various factors that influence the magnitude of the QCM signal were investigated, including the solvent flow rate, solvent strength, solute molecular structure, and bonded-phase molecular structure. The QCM sensor was found to be a convenient probe for surface adsorption studies, and the molar free energy of adsorption was determined for several chemically related solutes on an amine-modified crystal. / Ph. D.
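The record does not say how the molar free energy of adsorption was obtained; studies of this kind typically compute it from a measured adsorption equilibrium constant K via the standard relation, which is assumed here rather than confirmed by the abstract:

\[
\Delta G^{\circ}_{\mathrm{ads}} = -RT \ln K
\]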
