851 |
Use of liquid chromatography for assay of flavonoids as key constituents and antibiotics as trace elements in propolis. Investigation into the application of a range of liquid chromatography techniques for the analysis of flavonoids and antibiotics in propolis; and extraction studies of flavonoids in propolis
Kamble, Ujjwala Kerba January 2016 (has links)
Propolis is an approved food additive containing flavonoids as major active constituents. Variability has been found in the composition of propolis from different regions, and there are recognized limitations in its analysis. In this study, the identification of ten flavonoids and of residual antibiotics in propolis was investigated using several liquid chromatography techniques, including reversed-phase high-performance liquid chromatography (RP-HPLC), microemulsion LC (MELC) and ultra-performance LC (UPLC). The ten flavonoids selected for this research were rutin, myricetin, quercetin, apigenin, kaempferol, pinocembrin, CAPE, chrysin, galangin and acacetin, while chlortetracycline, oxytetracycline and doxycycline were selected to examine residual antibiotics in propolis. For the analysis of the selected flavonoids, a routine RP-HPLC method was found to be the best approach, while the MELC technique proved more efficient for the analysis of the selected antibiotics. Solid-phase extraction with an HLB sorbent was used for propolis clean-up in the antibiotic analysis. Method development for both flavonoids and antibiotics followed a one-factor-at-a-time (OFAT) approach. The final optimised methods for the analysis of flavonoids and antibiotics were validated according to the ICH guidelines, examining linearity, selectivity, accuracy, recovery, robustness and stability. The development of an efficient conventional method for the extraction of flavonoids from propolis was also studied extensively, using different extraction techniques, namely maceration, hot extraction and ultrasound-assisted extraction. Among all extraction experiments, ethanolic extraction using the ultrasound-assisted method was the most efficient approach.
This thesis shows that, in general, the performance of O/W MELC is superior to that of conventional HPLC for the determination of residual antibiotics in propolis. UPLC was not suitable for the analysis of the flavonoids and antibiotics. Conventional LC was the only technique able to separate all ten flavonoids, although MELC separated nine of them with a faster analysis time and cheaper solvents. This considerable saving in both cost and time will potentially improve efficiency within quality control. / Social Justice Department, Government of Maharashtra, India.
|
852 |
Interpolants, Error Bounds, and Mathematical Software for Modeling and Predicting Variability in Computer Systems
Lux, Thomas Christian Hansen 23 September 2020 (has links)
Function approximation is a fundamental problem. This work presents applications of interpolants to modeling random variables; specifically, it studies the prediction of distributions of random variables as applied to computer system throughput variability. Existing approximation methods, including multivariate adaptive regression splines, support vector regressors, multilayer perceptrons, Shepard variants, and the Delaunay mesh, are investigated in the context of computer variability modeling. New approximation methods using Box splines, Voronoi cells, and Delaunay triangulations for interpolating distributions of data with moderately high dimension are presented and compared with existing approaches. Novel theoretical error bounds are constructed for piecewise linear interpolants over functions with a Lipschitz continuous gradient. Finally, mathematical software that constructs monotone quintic spline interpolants for distribution approximation from data samples is proposed. / Doctor of Philosophy / It is common for scientists to collect data on something they are studying. Often scientists want to create a (predictive) model of that phenomenon based on the data, but how best to model the data is a difficult question. This work proposes methods for modeling data that operate under very few assumptions and are broadly applicable across science. Finally, a software package is proposed that allows scientists to better understand the true distribution of their data given relatively few observations.
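To illustrate the flavor of such error bounds, the classical one-dimensional result (a standard textbook bound, not the dissertation's multivariate theorem) reads: if f has a derivative that is Lipschitz continuous with constant L and p is the linear interpolant of f at the endpoints of [a, b], then

```latex
% Classical error bound for linear interpolation of a function f whose
% derivative is Lipschitz continuous with constant L on [a, b].
% (Illustrative 1D case only; the dissertation derives multivariate analogues.)
\[
  \max_{x \in [a,b]} \bigl| f(x) - p(x) \bigr| \;\le\; \frac{L\,(b-a)^2}{8}.
\]
```

The dissertation's bounds for piecewise linear interpolants over simplices can be expected to show a similar dependence on the Lipschitz constant and on the size of the interpolation element.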
|
853 |
Investigation of Transfer Length, Development Length, Flexural Strength and Prestress Loss Trend in Fully Bonded High Strength Lightweight Prestressed Girders
Nassar, Adil J. 26 June 2002 (has links)
Encouraged by the performance of high-performance normal weight composite girders, the Virginia Department of Transportation has sought to exploit high strength lightweight composite concrete (HSLWC) girders to achieve the economies brought about by the reduction of dead loads in bridges. Transfer length measurements conducted on two AASHTO Type IV HSLWC prestressed girders resulted in an average transfer length of 17 inches, well below the AASHTO and ACI guidance.
Two girders composed of HSLWC AASHTO Type II girders and a 48" x 8" normal weight 4000-psi concrete deck were produced. The HSLWC Type II girders were cast of concrete with a compressive strength of 6380 psi and a unit weight of 114 pcf. Full-scale testing of the girders was conducted to evaluate development length and flexural strength in HSLWC composite girders. Embedment lengths of five, six and eight feet were evaluated. Tests indicated a development length of about 72 inches, marginally below the ACI and AASHTO stipulation. Four of the eight strands in the girders showed general bond failure; nevertheless, the tested girders exceeded their theoretical flexural capacity by 24 to 30 percent.
A third composite girder consisted of a high strength normal weight concrete (HSNWC) Type II girder topped with a 48" x 8" normal weight 4000-psi concrete deck. This girder was intended as a control specimen, contrasting its test results with those of the HSLWC composite girders. The targeted compressive strength of both the HSLWC and HSNWC AASHTO beams was 8000 psi. The compressive strength of the HSNWC mixture, however, was about 8990 psi, compared to 6380 psi for the HSLWC mixture.
Prestress losses in HSLWC AASHTO Type IV girders monitored over a nine-month period were found to be less than those calculated using the ACI and PCI models. Furthermore, the ACI model indicated that the effective prestresses retained in the HSLWC girders after 30 years are greater than 50% of the specified tensile strength of the strands. / Master of Science
|
854 |
The Production of 2-Keto-L-Gulonic Acid by Different Gluconobacter Strains
Nassif, Lana Amine 14 February 1997 (has links)
Vitamin C is industrially produced by the Reichstein method, which uses gluconobacters to oxidize sorbitol to sorbose and then a chemical process to convert sorbose to 2-keto-L-gulonic acid (2-KLG). The establishment of a more extensively microbial process for 2-KLG production would translate into less expensive and more efficient production of vitamin C. I examined pure strains and mixed cultures for their ability to produce 2-KLG using thin-layer and high-performance liquid chromatography. The DSM 4027 mixed culture produced the highest yield, 25 g/L of 2-KLG from 100 g/L of sorbose, while the gram-negative rods isolated from DSM 4027 produced 8.8 g/L, and the B. megaterium isolated from DSM 4027 produced 1.4 g/L. Thus, the gram-negative rods in the mixed culture were the primary 2-KLG producers, but the B. megaterium in the DSM 4027 mixture enhanced this synthesis. Authentic pure cultures of Gluconobacter oxydans IFO strain 3293 and ATCC strain 621 produced 3.4 g/L and 5.7 g/L, respectively. Attempts to co-culture the isolated B. megaterium with the isolated gram-negative rods and authentic Gluconobacter strains did not increase 2-KLG production, nor did growing the cultures on B. megaterium spent media. Bacillus megaterium produced an unidentified keto-compound detected on the TLC chromatograms, which suggested that B. megaterium converts sorbose to an intermediate that may then be converted by the gram-negative rods in DSM 4027 to 2-KLG. Limited phenotypic tests suggested that the gram-negative rods in the DSM 4027 mixture are not gluconobacters. / Master of Science
|
855 |
Characterization of Sparsity-aware Optimization Paths for Graph Traversal on FPGA
Gondhalekar, Atharva 25 May 2023 (links)
Breadth-first search (BFS) is a fundamental building block in many graph-based applications, but it is difficult to optimize for a field-programmable gate array (FPGA) due to its irregular memory-access patterns. Prior work, based on hardware description languages (HDLs) and high-level synthesis (HLS), addresses the memory-access bottleneck of BFS by using techniques such as data alignment and compute-unit replication on FPGAs. The efficacy of such optimizations depends on factors such as the sparsity of the target graph datasets; optimizations intended for sparse graphs may not work as effectively for dense graphs on an FPGA, and vice versa. This thesis presents two sets of FPGA optimization strategies for BFS: one for near-hypersparse graphs and the other designed for sparse to moderately dense graphs. For near-hypersparse graphs, a queue-based kernel with maximal use of the FPGA's local memory is implemented. For denser graphs, an array-based kernel with compute-unit replication is implemented.
Across a diverse collection of graphs, our OpenCL optimization strategies for near-hypersparse graphs deliver a 5.7x to 22.3x speedup over a state-of-the-art OpenCL implementation when evaluated on an Intel Stratix 10 FPGA. The optimization strategies for sparse to moderately dense graphs deliver a 1.1x to 2.3x speedup over a state-of-the-art OpenCL implementation on the same FPGA. Finally, this work uses graph metrics such as average degree and Gini coefficient to observe the impact of graph properties on the performance of the proposed optimization strategies. / M.S. / A graph is a data structure that typically consists of two sets: a set of vertices and a set of edges representing connections between the vertices. Graphs are used in a broad set of application domains such as the testing and verification of digital circuits, data mining of social networks, and analysis of road networks.
In such application areas, breadth-first search (BFS) is a fundamental building block.
BFS identifies the minimum number of edges that must be traversed from a source vertex to one or many destination vertices. In recent years, several attempts have been made to optimize the performance of BFS on reconfigurable architectures such as field-programmable gate arrays (FPGAs). However, the optimization strategies for BFS are not necessarily applicable to all types of graphs, and their efficacy often depends on the sparsity of the input graphs. To that end, this work presents optimization strategies for graphs with varying levels of sparsity. Furthermore, this work shows that by tailoring the BFS design to the sparsity of the input graph, significant performance improvements are obtained over state-of-the-art BFS implementations on an FPGA.
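For concreteness, the traversal being optimized can be sketched in a few lines of Python (a minimal sequential reference with an assumed adjacency-list input; the thesis's FPGA kernels implement this same logic in OpenCL):

```python
from collections import deque

def bfs_levels(adj, source):
    """Return the minimum edge count from `source` to each reachable
    vertex of a graph given as an adjacency list {vertex: [neighbors]}."""
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()          # FIFO order explores level by level
        for v in adj.get(u, ()):        # the irregular memory accesses happen here
            if v not in dist:           # visit each vertex exactly once
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

# Example: vertex 3 is two edges away from the source vertex 0.
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```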
|
856 |
Synthesis and Characterization of Wholly Aromatic, Water-Soluble Polyimides and Poly(amic acid)s Towards Fire Suppression Foams
Stovall, Benjamin Joseph 28 May 2021 (has links)
Polyimides epitomize one of the most versatile classes of high-performance engineering polymers. Polyimides are inherently mechanically robust, chemically inert, and thermooxidatively stable to 400+ °C depending on their chemical structure, enabling their function in numerous aerospace, electronic, medical, and flame-retardant applications. Polyimides are highly modular even within synthetic limitations, which promotes and sustains innovative research. One recent interest concerns the innovation of fire suppression foams. Aqueous film-forming foams (AFFFs) are regularly sought when engaging liquid fuel (gasoline, jet fuel) fires. AFFFs utilize perfluorinated compounds (PFCs) such as perfluorooctanesulfonic acid (PFOS) and perfluorooctanoic acid (PFOA), which exhibit toxicity, bioaccumulation, and persistence, leaving fluorosurfactant chemicals in the environment through either direct exposure or secondary exposure via chemical migration. Recently, the USEPA has detected PFAS in drinking water at hundreds of military training facilities and civilian airports. While fluorinated compounds provide desirable thermooxidative stability and excellent fire retardancy, their environmental impact strongly encourages research targeting the complete removal of PFCs from conventional formulations. This thesis focuses on the fundamental development of water-soluble sulfonated polyimide (sPI) and poly(amic acid) (sPAA) systems for next-generation polymer-based fire suppression foams. The use of sulfonated monomers and poly(amic acid) salt formation enables tunable structures and water solubilities. The polymers maintain thermal stabilities competitive with conventional polyimides and, when combined with readily available, non-toxic surfactants (SDS), produce stable foams. The MIL-F-24385F performance requirement evaluates foam quality/stability, drainage time, and burnback resistance to assess viability and provide comparison with other systems; preliminary testing shows that sPI/sPAA formulations perform well. Solution rheology offers insights into fundamental scaling relationships of specific viscosity vs. concentration in both salt and salt-free solutions that are important to future foam development. Additionally, the structural nature of the sPIs/sPAAs allows for their modification with phosphonium moieties or siloxanes, which are expected to have positive effects on performance. Overall, these sPIs and sPAAs provide a promising platform for the future direction of fire suppression foams. / Master of Science / High-performance polymers are used in the most demanding engineering applications. Polyimides represent one of the most versatile high-performance polymers: mechanically strong, chemically inert, and resistant to extreme temperatures depending on their chemical structure, allowing their use in numerous aerospace, electronic, medical, and flame-retardant applications. Polyimides are synthetically versatile, which enables the discovery of new uses even after decades of research. One newly targeted application is fire suppression foams. Aqueous film-forming foams (AFFFs) are the standard when battling liquid fuel (gasoline, jet fuel) fires. AFFFs contain perfluorinated compounds (PFCs), which are toxic and persist in the environment; they migrate easily and affect indirectly exposed ecosystems. Recently, the USEPA has detected PFAS in drinking water at hundreds of military training facilities and civilian airports.
While AFFFs with PFCs are highly effective, replacement materials are needed. This thesis focuses on the fundamental development of water-soluble sulfonated polyimide (sPI) and poly(amic acid) (sPAA) systems for fire suppression foams. The polymers remain thermally stable and, when combined with readily available surfactants (SDS), produce stable foams. Preliminary fire testing shows that sPI/sPAA formulations perform well against military specifications. Solution rheology (the study of flow) explores the solution behavior of the sPIs, offering insights into fundamental concentration-viscosity relationships that are important to future foam development. Additionally, the structural nature of the sPIs/sPAAs allows for their modification with phosphonium groups or siloxanes, which changes their characteristics. Overall, these sPIs and sPAAs are initially promising for the future direction of fire suppression foams.
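For background, the concentration-viscosity relationships mentioned above are typically expressed through the specific viscosity; the scaling predictions below are classical textbook polyelectrolyte results (the Fuoss law and neutral-chain scaling), quoted as orientation rather than the exponents measured in this thesis. Here $\eta$ is the solution viscosity and $\eta_s$ the solvent viscosity:

```latex
% Specific viscosity and classical semidilute scaling predictions.
% Exponents are textbook values (Fuoss law for salt-free polyelectrolytes;
% neutral-chain-like scaling in excess salt), not this thesis's data.
\[
  \eta_{\mathrm{sp}} = \frac{\eta - \eta_s}{\eta_s}, \qquad
  \eta_{\mathrm{sp}} \sim
  \begin{cases}
    c^{1/2} & \text{salt-free polyelectrolyte (semidilute, unentangled)}\\[2pt]
    c^{5/4} & \text{excess added salt (neutral-chain-like scaling)}
  \end{cases}
\]
```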
|
857 |
Dosage optimization and bolted connections for UHPFRC ties
Camacho Torregrosa, Esteban Efraím 07 January 2014 (has links)
Concrete technology has been in continuous evolution since the time of the Roman Empire, and the pace of technological progress increased markedly from the second half of the twentieth century. Advances in the development of new cements, the appearance of fibers as reinforcement for structural applications, and especially the great progress in the field of water-reducing admixtures enabled the emergence of several types of special concretes. One of the latest is Ultra High Performance Fiber Reinforced Concrete (UHPFRC), which incorporates advances from Self-Compacting Concrete (SCC), Fiber-Reinforced Concrete (FRC) and Ultra High Strength Concrete (UHSC) technology. This exclusive material requires a detailed analysis of component compatibility and tight control of materials and processes. Mainly patented products have been used for the few structural elements built so far, and their cost casts doubt on the development of many other potential applications.
Accordingly, a simplification of the UHPFRC components and processes is needed. This is the first main goal of this Ph.D. thesis, which emphasizes the use of locally available components and simpler mixing processes. Moreover, the singular properties of this material, intermediate between ordinary concrete and steel, allow not only the realization of more slender structures but also the viability of new concepts unthinkable with ordinary concrete. The second part of the Ph.D. thesis focuses on this field, developing a bolted connection system between UHPFRC elements.
This research first summarizes the subfamilies belonging to the HPC-UHPC family of materials. Afterwards, a detailed comparison is provided between the dosages and properties of more than a hundred mixtures proposed by several authors over the last ten years. This comparison is a useful tool for recognizing correlations between dosages and properties and for validating, or refuting, preconceived ideas about this material.
Based on this state-of-the-art analysis, mixtures were subsequently developed in Chapter 4, which analyzes the effect of using simpler components and processes on UHPFRC. The main idea was to use local components available on the Spanish market, identifying the combinations that provide the best rheological and mechanical properties. Steam curing was avoided, since process simplification was intended. Different dosages were developed, adapted to various levels of performance, always trying to be as economical as possible. The concretes designed were self-compacting and mainly combined two fiber types (hybrid), as flexural performance was of greater relevance. The compressive strengths obtained ranged between 100 and 170 MPa (cube, L = 100 mm), and the flexural strengths between 15 and 45 MPa (prism, 100 x 100 x 500 mm). Some of the components introduced are very rarely used in UHPFRC, such as limestone coarse aggregate or FC3R, a white active residue from the petroleum industry. As a result of the research, some simple and practical tips are provided for designers of UHPFRC dosages. At the end of this chapter, five dosages are characterized as examples of useful concretes for applications with different requirements.

In the second part, the idea of a bolted joint connection between UHPFRC elements was proposed.
The connection system would be especially useful for strut-and-tie elements, such as truss structures. The possible UHPFRC failure modes were introduced, and two different types of tests were designed and performed to evaluate the joint capacity. The geometry of the UHPFRC elements was varied in order to correlate it with the failure mode and the maximum load reached. A linear finite element analysis was also performed to analyze the connection of the UHPFRC elements; this supported the results of the experimental tests in deducing formulations that predict the maximum load for each failure mode. Finally, a real-size truss structure was assembled with bolted joints and tested to verify the good structural behavior of these connections.
To conclude, some applications designed and developed at the Universitat Politècnica de València with the methods and knowledge acquired on UHPFRC are summarized. In many of them the material was mixed and poured in a traditional precast concrete plant, providing adequate rheological and mechanical results. This showed the viability of a simpler UHPFRC technology, enabling some of the first applications in Spain with this material. / Camacho Torregrosa, EE. (2013). Dosage optimization and bolted connections for UHPFRC ties [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34790
|
858 |
High-dimensional Data in Scientific Visualization: Representation, Fusion and Difference
Mohammed, Ayat Mohammed Naguib 14 July 2017 (has links)
Visualization has proven to be an effective means for analyzing high-dimensional data, especially Multivariate Multidimensional (MVMD) scientific data. Scientific visualization deals with data that have a natural spatial mapping, such as maps, building interiors, or even physiological body parts, while information visualization involves abstract, non-spatial data. Visual analytics uses either type of visualization to draw deep inferences about scientific data or information. In recent years, a variety of techniques have been developed that combine statistical and visual analysis tools to represent data of different types in one view and thereby enable data fusion. One vital feature of such visualization tools is support for comparison: showing the differences between two or more objects. This feature is called visual differencing, or discrimination. Visual differencing is a common requirement across research domains, helping analysts compare different objects in a data set or compare different attributes of the same object.
From a visual analytics point of view, this research examines humans' predictable biases in interpreting visual-spatial and spatiotemporal information, and in inference-making in scientific visualization. Practically, I examined case studies from different domains, such as land suitability in agriculture, spectrum sensing in software-defined radio networks, raster images in remote sensing, pattern recognition in point clouds, airflow distribution in aerodynamics, galaxy catalogs in astrophysics, and protein-membrane interaction in molecular dynamics. Each case required different computing power, ranging from a personal computer to a high-performance cluster.
Based on this experience across application domains, I propose a high-performance visualization paradigm for scientific visualization that supports three key features of scientific data analysis: representation, fusion, and visual discrimination. This paradigm is informed by practical work with multiple high-performance computing and visualization platforms, from desktop displays to immersive CAVE displays. In order to evaluate the applicability of the proposed paradigm, I carried out two user studies. The first user study addressed data fusion with multivariate maps, and the second addressed visual differencing with three multi-view management techniques. The high-performance visualization paradigm and the results of these studies contribute to our knowledge of efficient MVMD designs and provide scientific visualization developers with a framework to mitigate the trade-offs of scalable visualization design, such as data mappings, computing power, and output modality. / Ph. D. / Visualization has proven to be an effective means for analyzing big data such as Multivariate Multidimensional (MVMD) scientific data. Scientific visualization deals with data that have a natural spatial mapping, such as maps, building interiors, or even physiological body parts, while information visualization involves abstract, non-spatial data. Visual analytics uses visualization to interactively manipulate data and gain deep inferences about scientific data or information. A variety of techniques combining statistical and visual analysis tools have been developed in recent years; one of the most interesting is Information-Rich Virtual Environments (IRVEs). Visual differencing, also called discrimination or interpretation, is a vital feature that shows the differences between two or more objects when comparison is needed. It is widely needed across research domains to help analysts identify different objects in a data set or different attributes of the same object. From a visual analytics point of view, this research examines humans' predictable biases in interpreting visual-spatial (1D, 2D, and 3D) and spatiotemporal (space and time dimensions) information and inference-making in scientific visualization. It also seeks to develop and evaluate new techniques that mitigate the trade-off between proximity and occlusion in visualization scenes and enable analysts to explore high-dimensional scientific data sets, combining powerful computational techniques with natural human interaction and visual communication. The proposed high-performance visualization paradigm supports different representations of scientific data, fusion of different data types, and visual discrimination that lets users visually find the differences between multiple objects in a scene. I examined case studies from different domains, such as land suitability in agriculture, spectrum sensing in software-defined radio networks, raster images in remote sensing, pattern recognition in point clouds, airflow distribution in aerodynamics, galaxy catalogs in astrophysics, and protein-membrane interaction in molecular dynamics. Each case required different computing power, ranging from a personal computer to a high-performance cluster, and different rendering venues, from desktop displays to immersive CAVE displays.
|
859 |
Parallel Algorithms for Switching Edges and Generating Random Graphs from Given Degree Sequences using HPC Platforms
Bhuiyan, Md Hasanuzzaman 09 November 2017 (has links)
Networks (or graphs) are an effective abstraction for representing many real-world complex systems. Analyzing various structural properties of, and dynamics on, such networks reveals valuable insights about the behavior of those systems. In today's data-rich world, we are deluged by massive amounts of heterogeneous data from various sources, such as the web, infrastructure, and online social media. Analyzing this huge amount of data may take a prohibitively long time, and the data may not even fit into the main memory of a single processing unit, motivating the need for efficient parallel algorithms on various high-performance computing (HPC) platforms. In this dissertation, we present distributed and shared memory parallel algorithms for some important network analytic problems.
First, we present distributed memory parallel algorithms for switching edges in a network. Edge switch is an operation on a network in which two edges are selected at random and one end vertex of each is swapped with the other. This operation is repeated either a given number of times or until a specified criterion is satisfied. It has diverse real-world applications, such as generating simple random networks with a given degree sequence and modeling and studying various dynamic networks. One step of our edge switch algorithm requires generating multinomial random variables in parallel; we also present the first non-trivial parallel algorithm for generating multinomial random variables.
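As a sketch of the core operation (a minimal sequential version for illustration; the thesis's contribution is the distributed-memory parallelization, which this does not capture):

```python
import random

def edge_switch(edges, num_switches, seed=42):
    """Degree-preserving edge switches on a simple undirected graph.
    Each switch picks two edges (u, v) and (x, y) at random and rewires
    them to (u, y) and (x, v); switches that would create a self-loop
    or a parallel edge are skipped so the graph stays simple."""
    rng = random.Random(seed)
    norm = lambda a, b: (a, b) if a < b else (b, a)  # canonical undirected key
    edges = [norm(*e) for e in edges]
    edge_set = set(edges)
    for _ in range(num_switches):
        i, j = rng.sample(range(len(edges)), 2)      # two distinct edges
        (u, v), (x, y) = edges[i], edges[j]
        e1, e2 = norm(u, y), norm(x, v)
        if u == y or x == v or e1 in edge_set or e2 in edge_set:
            continue  # reject: self-loop or duplicate edge
        edge_set -= {edges[i], edges[j]}
        edge_set |= {e1, e2}
        edges[i], edges[j] = e1, e2
    return edges

# Example: a 4-cycle; every vertex keeps degree 2 after switching.
print(edge_switch([(0, 1), (1, 2), (2, 3), (0, 3)], num_switches=10))
```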
Next, we present efficient algorithms for assortative edge switch in a labeled network. Assuming each vertex has a label, an assortative edge switch imposes an extra constraint: two randomly selected edges have their end vertices swapped only if the labels of the end vertices of the edges remain the same as before. It can be used to study the effect of network structural properties on dynamics over a network. Although assortative edge switch seems similar to (regular) edge switch, the constraint on vertex labels introduces a new difficulty, which needs to be addressed by an entirely new algorithmic approach. We first present a novel sequential algorithm for assortative edge switch; we then present an efficient distributed memory parallel algorithm based on it.
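The extra label constraint can be grafted onto the sketch above by accepting a switch only when it preserves the multiset of edge label pairs (an illustrative reading of the constraint, not the thesis's algorithm):

```python
def assortative_switch_ok(u, v, x, y, label):
    """Accept swapping (u, v), (x, y) -> (u, y), (x, v) only if the
    label pairs of the rewired edges match those of the original edges,
    so the graph's joint label-pair counts are preserved."""
    pair = lambda a, b: tuple(sorted((label[a], label[b])))
    before = sorted([pair(u, v), pair(x, y)])
    after = sorted([pair(u, y), pair(x, v)])
    return before == after
```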
Finally, we present efficient shared memory parallel algorithms for generating random networks with an exactly prescribed degree sequence using a direct graph construction method, which involves computing a candidate list for creating an edge incident on a vertex using the Erdos-Gallai characterization and then randomly creating edges from the candidates. / Ph. D. / Network analysis has become a popular topic in many disciplines, including the social sciences, epidemiology, biology, and business, as it provides valuable insights about many real-world systems represented as networks. The recent advancement of science and technology has resulted in massive growth of such networks, and mining and processing them poses significant challenges that can be addressed by high-performance computing (HPC) platforms. In this dissertation, we present parallel algorithms for a few network analytic problems using HPC platforms.
Random networks are widely used for modeling many complex real-world systems such as the Internet, biological, social, and infrastructure networks. Most prior work on generating random graphs involves sequential algorithms, which can be broadly categorized into two classes: (i) edge switching and (ii) stub-matching. We present parallel algorithms for generating random graphs using both the edge switching and stub-matching methods. Our parallel algorithms for switching edges can generate random networks with billions of edges in a few minutes using 1024 processors. We have studied several load balancing methods to distribute the workload equally among processors and achieve the best performance. The parallel algorithm for generating random graphs using the stub-matching method also shows good speedup for medium-sized networks. We believe the proposed parallel algorithms will prove useful in the analysis and mining of emerging networks.
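For reference, the Erdos-Gallai characterization used in the direct construction method above is the classical graphicality condition (a standard theorem, quoted as background): a non-increasing sequence of nonnegative integers d_1 >= d_2 >= ... >= d_n with even sum is the degree sequence of a simple graph if and only if, for every k from 1 to n,

```latex
% Erdos-Gallai condition: the k largest degrees can be absorbed by
% edges among the first k vertices plus edges to the remaining vertices.
\[
  \sum_{i=1}^{k} d_i \;\le\; k(k-1) \;+\; \sum_{i=k+1}^{n} \min(d_i,\, k).
\]
```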
|
860 |
Potential and limits of Raman spectroscopy for carotenoid detection in microorganisms: implications for astrobiology
Jehlička, J., Edwards, Howell G.M., Osterrothova, K., Novotna, J., Nedbalova, L., Kopecky, J., Nemec, I., Oren, A. 13 December 2014 (has links)
In this paper, it is demonstrated how Raman spectroscopy can be used to detect different carotenoids as possible biomarkers in various groups of microorganisms. The question arising from previous studies concerns how unambiguously carotenoids can be discriminated using common Raman microspectrometers. A series of laboratory-grown microorganisms of different taxonomic affiliation was investigated, including halophilic heterotrophic bacteria, cyanobacteria, anoxygenic phototrophs, non-halophilic heterotrophs, and eukaryotes (Ochrophyta, Rhodophyta and Chlorophyta). The data presented show that Raman spectroscopy is a suitable tool for assessing the presence of carotenoids in cultures of these organisms. Comparison is made with the high-performance liquid chromatography approach of analysing pigments in extracts. Direct measurements on cultures provide fast and reliable identification of the pigments. Some of the carotenoids studied are proposed as tracers for halophiles, in contrast with others that can be considered biomarkers of other genera. The limits of application of Raman spectroscopy are discussed for a few cases where the current approach does not allow discrimination of structurally very similar carotenoids. The database reported can be used in geobiology and exobiology for the detection of pigment signals in natural settings.
|