501

Recovery, separation and characterization of phenolic compounds and flavonoids from maple products

Deslauriers, Isabelle. January 2000 (has links)
Comparative high-performance liquid chromatography (HPLC) and gas-liquid chromatography (GC) analyses of selected phenolic and flavonoid standards were developed using a wide range of detectors, including ultraviolet diode-array (UV-DAD) and electrochemical (EC) detectors for HPLC and a flame ionization detector (FID) and mass spectrometry (MS) for GC. The results demonstrated that the limits of detection obtained with HPLC-EC analysis were 10 to 500 times higher for phenolic acid standards and 2 to 50 times higher for flavonoid standards than those obtained with HPLC-UV analysis. HPLC-EC was more sensitive than GC/FID for all investigated standards, especially for vanillin and syringaldehyde. The results indicated that GC/FID/MS analysis of phenolic and flavonoid standards was more efficient than HPLC analysis, providing a fast analysis with better resolution and baseline separation of all standards with minimal co-elution. The only co-elution encountered in GC/FID was between coniferol and p-coumaric acid. In the HPLC analysis, (-)-epicatechin, caffeic acid and homovanillic acid co-eluted at 28.04 min, and sinapic and ferulic acids co-eluted at 34.57 min. Phenolic compounds and flavonoids were extracted from maple sap and maple syrup with ethyl acetate, and the recovered compounds were subjected to HPLC and GC analyses. Tentative identification of phenolic compounds and flavonoids in maple sap and maple syrup indicated the presence of protocatechuic acid, hydroxycinnamic acid derivatives, (+)-catechin, (-)-epicatechin, vanillin, coniferol, syringaldehyde, and flavanol- and dihydroflavonol-related compounds. In addition, protocatechuic acid, vanillin, syringaldehyde, coniferol and p-coumaric acid were identified by GC/MS by comparing the mass spectral characteristics of individual peaks in the total ion chromatogram (TIC) with those of standard compounds. The seasonal variation of selected phenolic compounds and flavonoids present in maple sap and maple syrup was also investigated.
502

HPLC-AAS interfaces for the determination of ionic alkyllead, arsonium and selenonium compounds

Blais, Jean-Simon January 1990 (has links)
Three direct interfaces for coupling high performance liquid chromatography (HPLC) with atomic absorption spectrometry (AAS) were developed and optimized for the determination of ionic organolead, organoselenium and organoarsenic compounds. The first all-quartz interface consisted of a thermospray nebulizer and a flame microatomizer in which ionic alkyllead analytes (RnPb(4-n)+; R = CH3, C2H5) were atomized by a methanol (from the HPLC eluent)-oxygen kinetic flame and channeled into a quartz tube (atom keeper) mounted in the AAS optical beam. Alternatively, the classical electrothermal atomization technique for organolead species (quartz furnace under hydrogen atmosphere) was coupled with a post-column derivatization-volatilization apparatus based on the ethylation of ionic alkylleads by sodium tetraethylborate. The limits of detection provided by these two approaches were 1.0-3.4 ng and 0.10-0.15 ng, respectively. Arsonium ((CH3)3RAs+; R = CH3, CH2CH2OH, CH2COOH) and selenonium ((CH3)2RSe+; R = CH3, CH2CH2OH) species were quantified using a novel HPLC-AAS approach based on a direct coupling of three processes: thermospray nebulization, thermochemical hydride generation using hydrogen gas, and diffuse flame atomization. Direct evidence for the thermochemical hydride generation process was obtained by injecting (CH3)3SeI and SeO2 into the interface and capturing the gaseous end products in liquid chemical traps specific for SeH2 and Se(IV). Both analytes were derivatized to SeH2 only in the presence of hydrogen in the interface. Reverse- and normal-phase high pressure liquid chromatographic methods were also developed and adapted for the HPLC-AAS analyses of alkyllead, arsonium and selenonium compounds in real samples. The limits of detection for the arsonium and selenonium cations were 7.6-13.3 ng and 31.0-43.9 ng, respectively.
503

KernTune: self-tuning Linux kernel performance using support vector machines.

Yi, Long. January 2006 (has links)
Self-tuning has been an elusive goal for operating systems and is becoming a pressing issue for modern operating systems. Well-trained system administrators are able to tune an operating system to achieve better system performance for a specific system class. Unfortunately, the system class can change when the running applications change. The model for a self-tuning operating system is based on a monitor-classify-adjust loop: the system continuously monitors certain performance metrics and, whenever these change, determines the new system class and dynamically adjusts tuning parameters for that class. This thesis describes KernTune, a prototype tool that identifies the system class and improves system performance automatically. A key aspect of KernTune is the notion of artificial-intelligence-oriented performance tuning: it uses a support vector machine to identify the system class, and tunes the operating system for that specific system class. The thesis presents design and implementation details for KernTune, and shows how it identifies a system class and tunes the operating system for improved performance.
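The monitor-classify-adjust loop described above maps naturally onto a small piece of code. The sketch below is illustrative only: the metric vectors, system classes and tuning tables are hypothetical, and it is not KernTune code, which classifies real workloads and adjusts real Linux kernel parameters.

```python
# Illustrative monitor-classify-adjust loop with an SVM classifier.
# All metric names, classes and tuning values below are hypothetical.
import time
import random
from sklearn.svm import SVC

# Hypothetical training data: rows of normalized performance metrics
# (e.g. I/O wait, context-switch rate, cache-miss rate) with a class label.
X_train = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.3], [0.2, 0.1, 0.9]]
y_train = ["web", "db", "compute"]

classifier = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Hypothetical per-class tuning tables (sysctl-style parameters).
TUNINGS = {
    "web":     {"vm.swappiness": 10, "net.core.somaxconn": 4096},
    "db":      {"vm.swappiness": 1,  "vm.dirty_ratio": 15},
    "compute": {"vm.swappiness": 60, "kernel.sched_migration_cost_ns": 500000},
}

def monitor():
    """Stand-in for real metric collection (e.g. from /proc)."""
    return [random.random() for _ in range(3)]

def adjust(params):
    """Stand-in for applying kernel tuning parameters."""
    for name, value in params.items():
        print(f"would set {name} = {value}")

# Re-tune only when the predicted system class changes.
current_class = None
for _ in range(3):
    metrics = monitor()
    new_class = classifier.predict([metrics])[0]
    if new_class != current_class:
        current_class = new_class
        adjust(TUNINGS[current_class])
    time.sleep(0.1)
```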
504

Extensible message layers for resource-rich cluster computers

Ulmer, Craig D. 12 1900 (has links)
No description available.
505

Distributed multi-processing for high performance computing

Algire, Martin. January 2000 (has links)
Parallel computing can take many forms. From a user's perspective, it is important to consider the advantages and disadvantages of each methodology. The following project attempts to provide some perspective on the methods of parallel computing and to indicate where the trade-offs lie along the continuum. Problems that are parallelizable enable researchers to maximize the computing resources available for a problem, and thus push the limits of the problems that can be solved. Solving any particular problem in parallel will require some very important design decisions to be made; these decisions may dramatically affect the portability, performance, and cost of implementing a software solution to the problem. The results gained from this work indicate that, although performance improvements are indeed possible, they are heavily dependent on the application in question and may require much more programming effort and expertise to implement.
506

McMPI : a managed-code message passing interface library for high performance communication in C#

Holmes, Daniel John January 2012 (has links)
This work endeavours to achieve technology transfer between established best-practice in academic high-performance computing and current techniques in commercial high-productivity computing. It shows that a credible high-performance message-passing communication library, with semantics and syntax following the Message-Passing Interface (MPI) Standard, can be built in pure C# (one of the .Net suite of computer languages). Message-passing has been the dominant paradigm in high-performance parallel programming of distributed-memory computer architectures for three decades. The MPI Standard originally distilled architecture-independent and language-agnostic ideas from existing specialised communication libraries and has since been enhanced and extended. Object-oriented languages can increase programmer productivity, for example by allowing complexity to be managed through encapsulation. Both the C# computer language and the .Net common language runtime (CLR) were originally developed by Microsoft Corporation but have since been standardised by the European Computer Manufacturers Association (ECMA) and the International Standards Organisation (ISO), which facilitates portability of source-code and compiled binary programs to a variety of operating systems and hardware. Combining these two open and mature technologies enables mainstream programmers to write tightly-coupled parallel programs in a popular standardised object-oriented language that is portable to most modern operating systems and hardware architectures. This work also establishes that a thread-to-thread delivery option increases shared-memory communication performance between MPI ranks on the same node. This suggests that the thread-as-rank threading model should be explicitly specified in future versions of the MPI Standard and then added to existing MPI libraries for use by thread-safe parallel codes. This work also ascertains that the C# socket object suffers from undesirable characteristics that are critical to communication performance and proposes ways of improving the implementation of this object.
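For readers unfamiliar with the message-passing style that McMPI implements, the fragment below is a minimal sketch of MPI point-to-point semantics using the unrelated mpi4py Python bindings; it is not McMPI code (which is pure C#), and the tag and payload values are arbitrary.

```python
# Minimal MPI point-to-point example with mpi4py (illustrative only).
# Run with, e.g.: mpiexec -n 2 python this_script.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Rank 0 sends a Python object to rank 1 (pickled under the hood).
    comm.send({"step": 1, "payload": [1.0, 2.0, 3.0]}, dest=1, tag=7)
elif rank == 1:
    msg = comm.recv(source=0, tag=7)
    print(f"rank 1 received {msg}")
```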
507

Gravitational Microlensing: An automated high-performance modelling system

McDougall, Alistair January 2014 (has links)
Nightly surveys of the skies detect thousands of new gravitational microlensing events every year. With the increasing number of telescopes and advances in the technologies used, the detection rate is growing. Of these events, those that display the characteristics of a binary lens are of particular interest. They require special attention, with follow-up observations where possible, as such events can lead to new planetary detections. To characterise a new planetary event, high-cadence, accurate observations are optimal. However, without the possibility of repeat observations, the identification that an event may be planetary needs to happen before it finishes. I have developed a system that automatically retrieves all microlensing survey data and follow-up observations, models the events as single lenses, and publishes the results live to a web site. With minimal human interaction, the modelling system is able to identify and initialize binary events and perform a thorough search of the seven-dimensional parameter space of a binary lens. These results are also presented live through the web site, giving observers an up-to-date view of the latest binary solutions. The real-time modelling enables prompt analysis of ongoing events, providing observers with the information needed to determine whether further observations are desired for the modelled events. An archive of all modelled binary-lens events is maintained and accessible through the web site. To date the archive contains binary-lens solutions for 68 unique events from the 2014 observing season. The system has been validated through model comparisons against previously published work, and is in use during the current observing season. This year it has played a role in identifying new planetary candidate events, confirming proposed solutions, and providing alternative viable solutions to previously presented ones.
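As context for the single-lens modelling stage mentioned above, the sketch below evaluates the standard point-source point-lens (Paczynski) magnification curve; the parameter values are arbitrary illustrations and are not results from the thesis.

```python
# Standard single-lens (point-source point-lens) microlensing magnification.
import numpy as np

def single_lens_magnification(t, t0, u0, tE):
    """Magnification A(t) of a point-source point-lens event.

    t0: time of closest approach, u0: impact parameter (Einstein radii),
    tE: Einstein-radius crossing time.
    """
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

# Illustrative light curve: magnification peaks at t = t0.
times = np.linspace(-30.0, 30.0, 7)   # days relative to t0
print(single_lens_magnification(times, t0=0.0, u0=0.1, tE=20.0))
```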
508

High performance reconfigurable architectures for biological sequence alignment

Isa, Mohammad Nazrin January 2013 (has links)
Bioinformatics and computational biology (BCB) is a rapidly developing multidisciplinary field which encompasses a wide range of domains, including genomic sequence alignment. Sequence alignment is a fundamental tool in molecular biology for searching for homology between sequences. It is currently receiving close attention because of its impact on quality of life, for example in facilitating early disease diagnosis, identifying the characteristics of newly discovered sequences, and drug engineering. With the vast growth of genomic data, searching for sequence homology over huge databases (often measured in gigabytes) cannot produce results within a realistic time, hence the need for acceleration. Since the exponential increase in biological databases that followed the human genome project (HGP), supercomputers and other parallel architectures such as special-purpose Very Large Scale Integration (VLSI) chips, Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) have become popular acceleration platforms. Nevertheless, there are always trade-offs between area, speed, power, cost, development time and reusability when selecting an acceleration platform. FPGAs generally offer more flexibility, higher performance and lower overheads. However, they suffer from a relatively low-level programming model compared with off-the-shelf devices such as standard microprocessors and GPUs. Because of these limitations, the need has arisen for optimized FPGA core implementations, which are crucial for this technology to become viable in high performance computing (HPC). This research proposes the use of state-of-the-art reprogrammable system-on-chip technology on FPGAs to accelerate three widely used sequence alignment algorithms: the Smith-Waterman algorithm with affine gap penalty, the profile hidden Markov model (HMM) algorithm and the Basic Local Alignment Search Tool (BLAST) algorithm. The three novel aspects of this research are, firstly, that the algorithms are designed and implemented in hardware, with each core achieving the highest performance compared to the state of the art. Secondly, an efficient scheduling strategy based on the double-buffering technique is adopted in the hardware architectures: when the alignment matrix computation is overlapped with PE configuration in a folded systolic array, the overall throughput of the core is significantly increased, owing to the bounded PE configuration time and the parallel PE configuration approach irrespective of the number of PEs in the systolic array. In addition, the use of only two configuration elements per PE optimizes hardware resources and enables PE systolic arrays to scale without relying on restricted on-board memory resources. Finally, a new performance metric is devised which facilitates effective comparison of design performance between different FPGA devices and families. The normalized performance indicator (speed-up per area per process technology) factors out the area and lithography-technology advantages of any FPGA, resulting in fairer comparisons. The cores have been designed in Verilog HDL and prototyped on the Alpha Data ADM-XRC-5LX card with the Virtex-5 XC5VLX110-3FF1153 FPGA.
The implementation results show that the proposed architectures achieved giga cell updates per second (GCUPS) performances of 26.8, 29.5 and 24.2 for the acceleration of the Smith-Waterman algorithm with affine gap penalty, the profile HMM algorithm and the BLAST algorithm, respectively. In terms of speed-up, the designed cores were compared against their corresponding software implementations and against reported FPGA implementations. In comparison with equivalent software execution, acceleration of the optimal alignment algorithm in hardware yielded an average speed-up of 269x over the SSEARCH 35 software. For the profile HMM-based sequence alignment, the designed core achieved speed-ups of 103x and 8.3x over HMMER 2.0 and the latest version of HMMER (version 3.0), respectively. The implementation of gapped BLAST with the two-hit method in hardware achieved a greater than tenfold speed-up compared to the latest NCBI BLAST software. For comparison against other reported FPGA implementations, the proposed normalized performance indicator was used to evaluate the designed architectures fairly. The results showed that the first architecture achieved more than a 50 percent improvement, while acceleration of the profile HMM sequence alignment in hardware gained a normalized speed-up of 1.34. In the case of gapped BLAST with the two-hit method, the designed core achieved an 11x speed-up after factoring out the advantages of the Virtex-5 FPGA. Further analysis was conducted in terms of cost and power: the core achieved 0.46 MCUPS per dollar spent and 958.1 MCUPS per watt. This shows that FPGAs can be an attractive platform for high performance computation, offering a smaller area footprint and an economical 'green' solution compared to other acceleration platforms. Higher throughput can be achieved by redeploying the cores on newer, bigger and faster FPGAs with minimal design effort.
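As a reference point for what each processing element in the systolic array computes, here is a plain-software sketch of the Smith-Waterman recurrence with affine gap penalties (the Gotoh formulation); the scoring parameters are arbitrary, and the code makes no attempt to model the FPGA architecture itself.

```python
# Smith-Waterman local alignment score with affine gap penalties (Gotoh).
def smith_waterman_affine(a, b, match=2, mismatch=-1, gap_open=3, gap_extend=1):
    """Return the best local alignment score of strings a and b."""
    n, m = len(a), len(b)
    NEG = float("-inf")
    # H: best score ending in a match/mismatch cell;
    # E/F: best score ending in a gap in sequence a (E) or sequence b (F).
    H = [[0] * (m + 1) for _ in range(n + 1)]
    E = [[NEG] * (m + 1) for _ in range(n + 1)]
    F = [[NEG] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(E[i][j - 1] - gap_extend, H[i][j - 1] - gap_open)
            F[i][j] = max(F[i - 1][j] - gap_extend, H[i - 1][j] - gap_open)
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best

print(smith_waterman_affine("GATTACA", "GCATGCA"))
```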
509

Middleware for online scientific data analytics at extreme scale

Zheng, Fang 22 May 2014 (has links)
Scientific simulations running on High End Computing machines in domains like Fusion, Astrophysics, and Combustion now routinely generate terabytes of data in a single run, and these data volumes are only expected to increase. Since such massive simulation outputs are key to scientific discovery, the ability to rapidly store, move, analyze, and visualize data is critical to scientists' productivity. Yet there are already serious I/O bottlenecks on current supercomputers, and movement toward the Exascale is further accelerating this trend. This dissertation is concerned with the design, implementation, and evaluation of middleware-level solutions to enable high performance and resource efficient online data analytics to process massive simulation output data at large scales. Online data analytics can effectively overcome the I/O bottleneck for scientific applications at large scales by processing data as it moves through the I/O path. Online analytics can extract valuable insights from live simulation output in a timely manner, better prepare data for subsequent deep analysis and visualization, and gain improved performance and reduced data movement cost (both in time and in power) compared to the conventional post-processing paradigm. The thesis identifies the key challenges for online data analytics based on the needs of a variety of large-scale scientific applications, and proposes a set of novel and effective approaches to efficiently program, distribute, and schedule online data analytics along the critical I/O path. In particular, its solution approach i) provides a high performance data movement substrate to support parallel and complex data exchanges between simulation and online data analytics, ii) enables placement flexibility of analytics to exploit distributed resources, iii) for co-placement of analytics with simulation codes on the same nodes, it uses fine-grained scheduling to harvest idle resources for running online analytics with minimal interference to the simulation, and finally, iv) it supports scalable efficient online spatial indices to accelerate data analytics and visualization on the deep memory hierarchies of high end machines. Our middleware approach is evaluated with leadership scientific applications in domains like Fusion, Combustion, and Molecular Dynamics, and on different High End Computing platforms. Substantial improvements are demonstrated in end-to-end application performance and in resource efficiency at scales of up to 16384 cores, for a broad range of analytics and visualization codes. The outcome is a useful and effective software platform for online scientific data analytics facilitating large-scale scientific data exploration.
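To make the contrast with post-processing concrete, the following sketch (purely illustrative and unrelated to the dissertation's actual middleware) reduces each simulation timestep as it is produced instead of storing raw output for later analysis.

```python
# Illustrative contrast: post-processing vs. online (in-transit) analytics.
import numpy as np

def simulation(steps=5, cells=1_000):
    """Stand-in for a simulation emitting one array of field values per timestep."""
    rng = np.random.default_rng(0)
    for step in range(steps):
        yield step, rng.normal(size=cells)

# Post-processing style: keep every timestep, analyse afterwards (high I/O cost).
stored = [field for _, field in simulation()]
print("post-hoc max:", max(field.max() for field in stored))

# Online style: reduce each timestep as it flows through, never storing raw data.
running_max = float("-inf")
for step, field in simulation():
    running_max = max(running_max, field.max())   # analytics applied in the I/O path
print("online max:", running_max)
```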
510

Further characterization of the direct injection nebulizer for flow injection analysis and liquid chromatography with inductively coupled plasma spectrometric detection

Avery, Thomas W. January 1988 (has links)
A direct injection nebulizer (DIN) was constructed in our laboratory and was evaluated as an interface between a liquid chromatography column and an inductively coupled plasma-atomic emission spectrometer (ICP-AES). Optimum operating conditions, detection limits and reproducibility of the DIN closely matched literature data for a somewhat different commercial device. In addition, when using the DIN for sample introduction, the ICP detection exhibited uniform response towards phosphorus compounds of different volatilities. / Department of Chemistry
