361. Chromatographic and Spectroscopic Studies on Aquatic Fulvic Acid. Chang, David Juan-Yuan, 08 1900 (has links)
High Performance Liquid Chromatography (HPLC) was used to investigate the utility of this technique for the analytical and preparative separation of components of aquatic fulvic acids (FA). Three modes of HPLC, namely adsorption, anion exchange, and reversed phase, were evaluated. Aquatic fulvic acids were either extracted from surface water and sediment samples collected from the southwestern U.S. or provided in high-purity form by the USGS. In the adsorption mode, a major fraction of aquatic fulvic acid was isolated on a semipreparative scale and subjected to carbon-13 NMR and FAB mass spectrometry. Results indicated that (1) the analyzed fraction of fulvic acid contains more aliphatic than aromatic moieties; (2) methoxy, carboxylic acid, and ester groups are well-defined moieties of the macromolecule; and (3) phenolic components of the macromolecules were not detected in the carbon-13 NMR spectrum, possibly because of the presence of stable free radicals. Results of the anion exchange mode showed that at least three types of acidic functionalities in aquatic fulvic acid can be separated. Results also indicated that aquatic fulvic acid can be progressively fractionated by applying successive modes of HPLC. Results of the reversed phase mode showed that (1) the fractionation of aquatic fulvic acid by RP-HPLC is essentially controlled by the polarity and/or pH of the carrier solvent system; (2) under different RP-HPLC conditions, aquatic fulvic acids from several locations are fractionated into the same major components; (3) fulvic acids extracted from water and sediment at the same site are more similar than those extracted from different sites; and (4) cationic and anionic ion-pair reagents indicated the presence of amphoteric compounds within the polymeric structure of fulvic acid. Each mode of HPLC provided a characteristic profile of fulvic acid. The results of this research provide basic information on the behavior of aquatic fulvic acids under three modes of HPLC. Such information is a prerequisite for further investigation by spectroscopic methods.
362. Optimization of High Performance Liquid Chromatographic Separations. Nguyen, Khanh Thi, 08 1900 (has links)
This study had a twofold purpose. The first was to investigate the usefulness of the simplex algorithm as a rapid method of optimization in high performance liquid chromatographic separations. The second was to test a modified simplex method. The volume fractions of the mobile phase components were chosen as the variable factors in the optimization process. Four test cases were performed, covering the separation of cholesterol esters, naphthalene and its derivatives, polycyclic aromatic compounds, and thiane compounds. The criterion for accepting an optimum was based on the baseline separation of two adjacent peaks and the analysis time. In addition to successful separations, the correlation between the separation and the chemical characteristics of the mobile phase compositions was calculated; this could then be used for further modification of the simplex search strategy.
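As a concrete illustration of the search strategy, the sketch below implements a basic sequential simplex over two mobile-phase volume fractions. The objective function `quality` is a hypothetical stand-in: in the actual experiments the response would be scored from a real chromatogram by baseline resolution of adjacent peaks and analysis time.

```python
# A minimal sketch of the basic sequential simplex search over mobile-phase
# volume fractions. The objective `quality` is hypothetical: a real run
# would score an actual chromatogram, not a synthetic response surface.

def quality(fractions):
    # Hypothetical response surface: penalize distance from a fictitious
    # optimum composition (60%/40%). Lower is better.
    opt = (0.6, 0.4)
    return sum((f - o) ** 2 for f, o in zip(fractions, opt))

def normalize(v):
    # Volume fractions must stay non-negative and sum to 1.
    v = [max(f, 0.0) for f in v]
    s = sum(v) or 1.0
    return tuple(f / s for f in v)

def simplex_search(vertices, steps=50):
    # vertices: initial simplex, one more vertex than there are factors.
    vertices = [normalize(v) for v in vertices]
    for _ in range(steps):
        ranked = sorted(vertices, key=quality)   # best first
        worst, keep = ranked[-1], ranked[:-1]
        # Centroid of the retained vertices.
        centroid = [sum(v[i] for v in keep) / len(keep) for i in range(len(worst))]
        # Reflect the worst vertex through the centroid.
        reflected = normalize(tuple(2 * c - w for c, w in zip(centroid, worst)))
        if quality(reflected) < quality(worst):
            vertices = keep + [reflected]
        else:
            # Reflection failed: shrink the simplex toward the best vertex.
            best = ranked[0]
            vertices = [normalize(tuple((v[i] + best[i]) / 2 for i in range(len(v))))
                        for v in vertices]
    return min(vertices, key=quality)

print(simplex_search([(0.9, 0.1), (0.5, 0.5), (0.7, 0.3)]))
```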
363. Vývoj výkonnosti evropských juniorských medailistek z let 2000-2008 v plavání / Evolution of the performance of European junior medalists from the years 2000-2008 in swimming. Brothánková, Tereza, January 2015 (has links)
Title: Evolution of the performance of European junior medalists from the years 2000-2008 in swimming. Objectives: The aim of the thesis is to find out whether swimmers who won a medal at the European Junior Championships in the period 2000-2008 continued their swimming careers for the next five years. The five-year period was chosen so that all competitors reached adulthood and the length of observation was uniform across all of them. Methods: To obtain the information, we chose the method of document study based on scorecards. In line with the theme of the thesis, data were collected from European Junior Championship scorecards from the years 2000-2008, from the European Championships in short course (SC) from 2001-2013, and from the Olympic Games in 2004, 2008 and 2012. Results: Our research showed that more than half of the competitors who succeeded at the European Junior Championships in 2000-2008 failed to qualify for any subsequent European SC Championships. In the final evaluation of the collected information, we found that fewer than one fifth of the 2000-2008 competitors were able to qualify for these championships. Based on our research, we can conclude that most medalists from the European Junior Championships do not continue their swimming careers into adulthood. Keywords: swimming career, high performance, swimming...
364. Scalable data-management systems for Big Data / Sur le passage à l'échelle des systèmes de gestion des grandes masses de données. Tran, Viet-Trung, 21 January 2013 (has links)
Big Data can be characterized by three V's: Big Volume refers to the unprecedented growth in the amount of data; Big Velocity refers to the growth in the speed at which data move in and out of management systems; Big Variety refers to the growth in the number of different data formats. Managing Big Data requires fundamental changes in the architecture of data management systems. Data storage must continue to be innovated in order to adapt to the growth of data: systems need to be scalable while maintaining high-performance data access. This thesis focuses on building scalable data management systems for Big Data. Our first and second contributions address the challenge of providing efficient support for Big Volume of data in data-intensive high performance computing (HPC) environments. In particular, we address the shortcoming of existing approaches in handling atomic, non-contiguous I/O operations in a scalable fashion. We propose and implement a versioning-based mechanism that can be leveraged to offer isolation for non-contiguous I/O without the need to perform expensive synchronizations. In the context of parallel array processing in HPC, we introduce Pyramid, a large-scale, array-oriented storage system. It revisits the physical organization of data in distributed storage systems for scalable performance. Pyramid favors multidimensional-aware data chunking, which closely matches the access patterns generated by applications. Pyramid also favors distributed metadata management and versioning concurrency control to eliminate synchronization under concurrency. Our third contribution addresses Big Volume at the scale of geographically distributed environments. We consider BlobSeer, a distributed versioning-oriented data management service, and we propose BlobSeer-WAN, an extension of BlobSeer optimized for such geographically distributed environments. BlobSeer-WAN takes the latency hierarchy into account by favoring local metadata accesses. BlobSeer-WAN features asynchronous metadata replication and a vector-clock implementation for collision resolution. To cope with the Big Velocity characteristic of Big Data, our last contribution features DStore, an in-memory document-oriented store that scales vertically by leveraging the large memory capacity of multicore machines. DStore demonstrates fast and atomic processing of complex transactions in data writing, while maintaining high-throughput read access. DStore follows a single-threaded execution model that executes update transactions sequentially, while relying on versioning concurrency control to enable a large number of simultaneous readers.
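The versioning concurrency control that recurs throughout these contributions can be illustrated with a minimal sketch: a single writer publishes immutable snapshots sequentially, while readers pin a version and proceed lock-free. All names below are illustrative, not DStore's actual interface.

```python
# A minimal sketch of versioning-based concurrency control in the spirit of
# BlobSeer/Pyramid/DStore: one writer applies updates sequentially, each
# update publishing a new immutable snapshot; readers work lock-free against
# whichever version they pinned. Names are illustrative, not the real API.

import threading

class VersionedStore:
    def __init__(self):
        self._versions = [{}]          # version 0: empty snapshot
        self._lock = threading.Lock()  # serializes writers only

    def write(self, updates):
        # Single-writer discipline: build the next snapshot by copying the
        # latest one, so published versions are never mutated in place.
        with self._lock:
            snapshot = dict(self._versions[-1])
            snapshot.update(updates)
            self._versions.append(snapshot)
            return len(self._versions) - 1   # new version number

    def latest_version(self):
        return len(self._versions) - 1

    def read(self, key, version=None):
        # Readers never block the writer and never see partial updates:
        # a version is immutable once published.
        v = self.latest_version() if version is None else version
        return self._versions[v].get(key)

store = VersionedStore()
v1 = store.write({"doc:1": "draft"})
v2 = store.write({"doc:1": "final", "doc:2": "new"})
print(store.read("doc:1", version=v1))  # "draft": pinned readers are isolated
print(store.read("doc:1", version=v2))  # "final"
```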
365. DNS library for a high-performance DNS server. Slovák, Ľuboš, January 2011 (has links)
In this thesis I design and implement a high-performance library for developing authoritative name server software. The library supports all basic as well as several advanced features of the DNS protocol, such as EDNS0, DNSSEC, and zone transfers. It is designed to be modular, extensible, and easy to use. The library was integrated into an experimental server implementation used for testing and benchmarking. Its performance is evaluated and shown to be superior to prevalent implementations in most cases. The thesis also provides theoretical background and a deep analysis of the task, together with a detailed description of the implemented solutions.
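As a flavor of the wire-format handling such a library must implement, the sketch below parses the fixed 12-byte DNS header defined in RFC 1035, section 4.1.1. It is an illustrative example only, not the API of the thesis library.

```python
# Parsing the fixed 12-byte DNS message header (RFC 1035, sec. 4.1.1).
# Illustrative only; unrelated to the thesis library's actual interface.

import struct

def parse_dns_header(packet: bytes) -> dict:
    # The header packs ID, flags, and four section counts as six
    # big-endian 16-bit fields.
    ident, flags, qdcount, ancount, nscount, arcount = struct.unpack(
        "!6H", packet[:12])
    return {
        "id": ident,
        "qr": (flags >> 15) & 1,        # 0 = query, 1 = response
        "opcode": (flags >> 11) & 0xF,
        "aa": (flags >> 10) & 1,        # authoritative answer
        "rcode": flags & 0xF,
        "qdcount": qdcount, "ancount": ancount,
        "nscount": nscount, "arcount": arcount,
    }

# A hand-built query header: ID 0x1234, flags 0x0100 (recursion desired),
# one question, no other records.
query = struct.pack("!6H", 0x1234, 0x0100, 1, 0, 0, 0)
print(parse_dns_header(query))
```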
366. The effects of hardware acceleration on power usage in basic high-performance computing. Amsler, Christopher, January 1900 (has links)
Master of Science / Department of Electrical Engineering / Dwight Day / Power consumption has become a large concern in many systems, including portable electronics and supercomputers. Creating efficient hardware that can do more computation with less power is highly desirable. This project proposes a possible avenue toward this goal: hardware-accelerating a conjugate gradient solve using a Field Programmable Gate Array (FPGA). The method makes frequent use of three basic operations: dot product, weighted vector addition, and sparse matrix-vector multiply. Each operation was accelerated on the FPGA. A power monitor was also implemented to measure the power consumption of the FPGA during each operation under several different implementations. Results showed that hardware-accelerating the dot product reduces execution time relative to a software-only approach. However, the more memory-intensive operations were slower under the current hardware-acceleration architecture.
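For reference, minimal software versions of the three kernels might look like the sketch below, with the sparse matrix held in the common CSR (compressed sparse row) layout; the project offloads these same operations to the FPGA.

```python
# Plain-software reference versions of the three conjugate-gradient kernels
# named above. The CSR layout and example matrix are illustrative.

def dot(x, y):
    # Inner product: the reduction kernel.
    return sum(xi * yi for xi, yi in zip(x, y))

def waxpby(alpha, x, beta, y):
    # Weighted vector addition: w = alpha*x + beta*y.
    return [alpha * xi + beta * yi for xi, yi in zip(x, y)]

def spmv(values, col_idx, row_ptr, x):
    # Sparse matrix-vector multiply in CSR form: row i owns the entries
    # values[row_ptr[i]:row_ptr[i+1]] at columns col_idx[same range].
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 2x2 example: the matrix [[4, 1], [1, 3]] stored in CSR.
values, col_idx, row_ptr = [4.0, 1.0, 1.0, 3.0], [0, 1, 0, 1], [0, 2, 4]
x = [1.0, 2.0]
print(spmv(values, col_idx, row_ptr, x))   # [6.0, 7.0]
print(dot(x, x))                           # 5.0
print(waxpby(2.0, x, -1.0, [1.0, 1.0]))    # [1.0, 3.0]
```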
367. Effect of memory access and caching on high performance computing. Groening, James, January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Dwight Day / High-performance computing is often limited by memory access. As speeds increase, processors are often left waiting on data transfers to and from memory. Classic memory controllers focus on delivering sequential memory as quickly as possible, which increases the performance of instruction reads and of sequential data reads and writes. However, many applications in high-performance computing include random memory access, which can limit the performance of the system. Techniques such as scatter/gather can improve performance by allowing non-sequential data to be written and read in a single operation. Caching can also improve performance by storing some of the data in memory local to the processor.

In this project, we evaluate the benefits of different cache configurations, varying both the cache line size and the total cache size. Although a range of benchmarks is typically used to test performance, we focused on a conjugate gradient solver, HPCCG. HPCCG incorporates many of the elements of common benchmarks used in high-performance computing and relates better to a real-world problem. Results show that the performance of a cache configuration can depend on the size of the problem: smaller problems can benefit more from a larger cache, while a smaller cache may be sufficient for larger problems.
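The interplay between line size, cache size, and access pattern can be illustrated with a toy direct-mapped cache model; the sketch below is for intuition only and is not the measurement setup used in the project.

```python
# A toy direct-mapped cache model: count hit rates for a given line size and
# total cache size under sequential versus random access. Sketch for
# intuition, not the project's simulation infrastructure.

import random

def hit_rate(addresses, cache_bytes, line_bytes):
    n_lines = cache_bytes // line_bytes
    tags = [None] * n_lines           # direct-mapped: one tag per line
    hits = 0
    for addr in addresses:
        block = addr // line_bytes    # which memory block this byte is in
        index = block % n_lines       # which cache line it maps to
        if tags[index] == block:
            hits += 1
        else:
            tags[index] = block       # miss: fill the line
    return hits / len(addresses)

random.seed(0)
seq = list(range(0, 1 << 20, 8))                   # sequential 8-byte reads
rnd = [random.randrange(0, 1 << 20) for _ in seq]  # random reads, same count

for line in (32, 64, 128):
    print(f"line={line:4d}B  seq={hit_rate(seq, 32 * 1024, line):.2f}  "
          f"rand={hit_rate(rnd, 32 * 1024, line):.2f}")
```

Running this shows longer lines sharply improving the sequential hit rate while doing little for random access, mirroring the trade-offs discussed above.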
368. Risk Management in Sustainable Projects in the Construction Industry: Cases of Swedish Companies. Apine, Anete; Escobar Valdés, Francisco José, January 2017 (has links)
Sustainable construction projects are expanding in the market, and green codes and standards are advancing, providing the ground for the development of the technologies and materials applied. With every new material and technology utilised in the field, risks are also growing. The importance of risk management in sustainable construction projects is thus increasing, and more experience and expertise are needed. The purpose of this thesis is therefore to examine and gain a deeper understanding of project-related risks in sustainable construction projects in Swedish companies operating in the built environment. It is crucial to gain knowledge of good practices within the industry in order to propose further investigation of the subject that could improve existing risk management and the goals of sustainable construction projects.

This thesis examines the existing theory of the risk management process and sustainable projects by shedding light on trends within the construction industry. The intention of the thesis is to address the gap in the theory which suggests that the construction industry is exposed to more risks and uncertainty than perhaps other industries, and that introducing sustainability adds further uncertainties and risks. This phenomenon is claimed to be due to the lack of knowledge and experience in the area, and thus practitioners seek new ways to tackle the arising issues. This thesis attempts to display how Swedish companies working with green and high-performance buildings identify and deal with risks.

Two Swedish companies operating in the built environment were chosen in order to investigate different ways of dealing with risks and the trend of sustainability in construction. Those in charge of risk and sustainability within the companies were interviewed using semi-structured interviews, and additional information was gathered from multiple sources, such as annual reports, web pages and other documents. The thesis has an exploratory and qualitative research design and applies an abductive approach suited to the purpose and the nature of the phenomena.

The findings showed the different tools with which risk management is applied in the companies and how it relates to the risks faced in green building construction. The results showed the importance of the tools the companies have applied and added to their processes for tackling sustainable construction projects, in order to manage uncertainties that could occur if these processes were not implemented. As regards the generalisability of the findings, more companies could be added, and future research could also take the maturity of the companies into account to make the findings more precise. However, after consideration of the processes learnt from the companies, the proposed model for achieving successful sustainable construction projects can be followed and applied by other companies operating in this industry.
369. A Parallel Implementation of an Agent-Based Brain Tumor Model. Skjerven, Brian M., 05 June 2007 (has links)
"The complex growth patterns of malignant brain tumors can present challenges in developing accurate models. In particular, the computational costs associated with modeling a realistically sized tumor can be prohibitive. The use of high-performance computing (HPC) and novel mathematical techniques can help to overcome this barrier. This paper presents a parallel implementation of a model for the growth of glioma, a form of brain cancer, and discusses how HPC is being used to take a first step toward realistically sized tumor models. Also, consideration is given to the visualization process involved with large-scale computing. Finally, simulation data is presented with a focus on scaling."
370. An integrated component selection framework for system level design. Unknown Date (has links)
Increasing system design complexity is negatively impacting overall design productivity by raising the cost and time of product development. One key to overcoming these challenges is exploiting Component-Based Engineering practices. However, it is challenging to select an optimal component from a component library that satisfies all system functional and non-functional requirements, due to varying performance parameters and quality-of-service requirements. In this thesis we propose an integrated framework for component selection. The framework is a two-phase approach comprising a system modeling and analysis phase and a component selection phase. Three component selection algorithms have been implemented for selecting components for a Network-on-Chip architecture. Two algorithms are based on a standard greedy method, with one enhanced to produce more intelligent behavior. The third algorithm is based on simulated annealing. Further, a prototype was developed to evaluate the proposed framework and compare the performance of all the algorithms. / by Chad Calvert. / Thesis (M.S.C.S.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
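The simulated-annealing selector can be sketched as follows; the component library, cost model, and parameters here are hypothetical stand-ins rather than the thesis's actual Network-on-Chip models.

```python
# A hedged sketch of a simulated-annealing component selector. The library,
# cost model, and requirement weighting are hypothetical illustrations.

import math
import random

library = {  # hypothetical components: (performance, power) per slot choice
    "router":  [(10, 5.0), (14, 8.0), (8, 3.5)],
    "link":    [(6, 2.0), (9, 4.0)],
    "adapter": [(4, 1.0), (7, 2.5), (5, 1.5)],
}

def cost(selection):
    # Lower is better: penalize power, reward performance. A real cost model
    # would also encode hard functional requirements as constraints.
    perf = sum(library[s][i][0] for s, i in selection.items())
    power = sum(library[s][i][1] for s, i in selection.items())
    return power - 0.5 * perf

def anneal(temp=10.0, cooling=0.95, steps=500):
    random.seed(1)
    current = {slot: 0 for slot in library}
    best = dict(current)
    for _ in range(steps):
        # Propose a neighbor: re-pick the component for one random slot.
        slot = random.choice(list(library))
        neighbor = dict(current)
        neighbor[slot] = random.randrange(len(library[slot]))
        delta = cost(neighbor) - cost(current)
        # Always accept improvements; accept regressions with probability
        # exp(-delta/temp), so early high temperature escapes local optima.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = neighbor
            if cost(current) < cost(best):
                best = dict(current)
        temp *= cooling
    return best, cost(best)

print(anneal())
```

The greedy baselines in the thesis would instead pick the locally best component per slot in one pass; annealing trades extra iterations for the chance to back out of locally attractive but globally poor selections.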