About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

High speed comprehensive two-dimensional gas chromatography/mass spectrometry

Samiveloo, Silverraji, Chemistry, Faculty of Science, UNSW, January 2005
The use of short columns, higher carrier-gas velocities and fast temperature programs in comprehensive two-dimensional gas chromatography coupled to time-of-flight mass spectrometry (GC x GC/TOFMS) is expected to increase the speed of analysis by up to several orders of magnitude compared with conventional gas chromatography (GC) or gas chromatography/mass spectrometry (GC/MS). A systematic evaluation of the GC x GC/TOFMS configuration for high-speed applications has received little attention in the literature. This thesis investigates the feasibility of high-speed comprehensive two-dimensional gas chromatography coupled to mass spectrometry (high-speed GC x GC/MS) for complex mixtures. A particular focus was placed on comparing a conventional scanning quadrupole mass spectrometer (qMS) with a newly available non-scanning time-of-flight instrument (TOFMS). Experiments were carried out using GC/qMS, GC x GC/qMS, GC/TOFMS and GC x GC/TOFMS at both normal (slow) and fast temperature-programming rates, coupled with high-frequency modulation in GC x GC. Initially, a complex mixture consisting of 24 semivolatile compounds was used as the analyte for this purpose. In these initial experiments, parameters such as the acquisition rate and duty cycle of the qMS were determined to evaluate the effectiveness of the instrument for fast analysis. The practical duty cycle obtained for the qMS was only about 18% for a single ion and one compound at a dwell time of 10 ms in SIM mode. With both high-speed GC/qMS and high-speed GC x GC/qMS, only about 40% of the components in the complex mixture were found to be well separated. The acquisition rate of scanning instruments such as the qMS is inadequate for the fast-eluting peaks of high-speed GC. TOFMS, with an acquisition rate of several hundred spectra per second, offers the potential to define fast GC peaks accurately. The high-quality spectra from TOFMS also enable deconvolution of coeluting peaks in complex mixtures, and the advantage of automated spectral deconvolution is demonstrated for the identification of such coeluting peaks. Coelution of peaks was also observed with the high-speed GC/TOFMS technique. High-speed GC x GC/TOFMS was additionally tested with two different analyte systems, a pesticide mixture and platformate (an aromatic mixture), to evaluate its suitability for the high-speed analysis of complex mixtures. Poor resolution was observed for the pesticide mixture in the two-dimensional plane, and there appeared to be almost no orthogonal separation in the second dimension; the platformate mixture displayed a better two-dimensional separation. Chromatographic peak resolution is not a primary requirement for locating and identifying coeluting compounds in high-speed GC x GC/TOFMS. However, even high-speed GC x GC/TOFMS had difficulty unscrambling the mass spectra of compounds with similar structures that share the same unique masses.
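To make the sampling argument concrete, the short Python sketch below estimates how many spectra each detector records across a fast GC peak and what the SIM duty cycle of a scanning quadrupole looks like. The peak width, scan rates and per-cycle overhead used here are illustrative assumptions, not values taken from the thesis.

# All numbers below are assumptions chosen for illustration; they are not
# values measured or reported in the thesis.

def points_across_peak(peak_width_s, acquisition_rate_hz):
    # Number of spectra recorded across the base width of one chromatographic peak.
    return peak_width_s * acquisition_rate_hz

def sim_duty_cycle(dwell_time_s, ions_monitored, overhead_s):
    # Fraction of each SIM cycle actually spent measuring a given ion.
    cycle_time_s = ions_monitored * dwell_time_s + overhead_s
    return dwell_time_s / cycle_time_s

peak_width = 0.2      # s, assumed base width of a fast GC x GC peak
qms_rate = 10.0       # spectra/s, assumed full-scan rate of a scanning qMS
tofms_rate = 500.0    # spectra/s, assumed acquisition rate of a TOFMS

print(f"qMS points per peak:   {points_across_peak(peak_width, qms_rate):.0f}")
print(f"TOFMS points per peak: {points_across_peak(peak_width, tofms_rate):.0f}")
# One ion, 10 ms dwell, ~45 ms assumed per-cycle overhead -> duty cycle of roughly 18 %.
print(f"qMS SIM duty cycle:    {sim_duty_cycle(0.010, 1, 0.045):.0%}")

With these assumed numbers, a quadrupole scanning 10 spectra per second places only about two points across a 0.2 s peak, too few to define its shape, while a TOFMS at 500 spectra per second places about a hundred; a 10 ms dwell combined with roughly 45 ms of per-cycle overhead gives a duty cycle of about 18%, the same order as the value reported above.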
2

Parallel distributed-memory particle methods for acquisition-rate segmentation and uncertainty quantifications of large fluorescence microscopy images

Afshar, Yaser, 08 November 2016
Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in the computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit in the main memory of a single computer. Another issue is the information loss during image acquisition due to limitations of the optical imaging system. Analysis of the acquired images may therefore find multiple solutions (or none) owing to imaging noise, blurring, and other uncertainties introduced during acquisition. In this thesis, we address the processing-time and memory issues by developing a distributed parallel algorithm for the segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm (Cardinale et al., 2012), which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solution of the global segmentation problem. This not only enables the segmentation of large images (we test images of up to 10^10 pixels) but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data inspection and interactive experiments. Second, we estimate the segmentation uncertainty on large images that do not fit in the main memory of a single computer. We therefore develop a distributed parallel algorithm for efficient Markov chain Monte Carlo Discrete Region Sampling (Cardinale, 2013). The parallel algorithm provides a measure of segmentation uncertainty in a statistically unbiased way, approximating the posterior probability density over the high-dimensional space of segmentations around the previously found segmentation.
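The following Python sketch illustrates the kind of domain decomposition and boundary communication described above. The image is split into horizontal strips owned by different MPI ranks, neighbouring ranks exchange one-row halos, and each rank then processes only its own strip. The use of mpi4py and NumPy, the strip layout and the halo width are assumptions made for illustration, and a simple threshold stands in for the segmentation step, so this is a sketch of the communication pattern rather than the thesis's implementation.

# Sketch of distributed image decomposition with halo exchange.
# mpi4py and NumPy are assumed; a global threshold stands in for the
# Discrete Region Competition update, so this is not the thesis's code.
# Run with e.g.:  mpiexec -n 4 python decompose_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

H, W, HALO = 1024, 1024, 1            # global image size and halo width (assumed)
rows = H // size                      # rows owned by this rank (H divisible by size assumed)
# Owned strip plus one halo row above and below; random data stands in for a real image tile.
local = np.random.rand(rows + 2 * HALO, W)

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange halo rows with the neighbouring ranks: send our first owned row up and
# receive the upper neighbour's last owned row into our top halo, and vice versa below.
comm.Sendrecv(sendbuf=local[HALO], dest=up, recvbuf=local[0], source=up)
comm.Sendrecv(sendbuf=local[-HALO - 1], dest=down, recvbuf=local[-HALO], source=down)

# Each rank segments only its owned rows; here a fixed threshold replaces the
# Discrete Region Competition energy minimisation used in the thesis.
labels = (local[HALO:-HALO] > 0.5).astype(np.int32)

# A reduction yields a whole-image quantity without gathering the image on one machine.
foreground = comm.allreduce(int(labels.sum()), op=MPI.SUM)
if rank == 0:
    print(f"foreground pixels across all ranks: {foreground}")

In the thesis's method the local step is the Discrete Region Competition minimisation rather than a threshold; the halo exchange here merely stands in for the network communication through which the computers collectively solve the global segmentation problem without assembling the full image on any single machine.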
