321 |
Electrical activity in neurons exposed to low-level electromagnetic fields: theory and experiments. Mesirca, Pietro <1972> 17 May 2007 (has links)
No description available.
|
322 |
Development of a multi-energy tomograph for the pre-clinical study of new diagnostic methods aimed at the early detection of tumor pathology. Masetti, Simone <1970> 12 June 2008 (has links)
A new multi-energy CT scanner for small animals is being developed at the Physics Department of the University of Bologna, Italy. The system makes use of a set of quasi-monochromatic X-ray beams, with energy tunable in a range from 26 keV to 72 keV. These beams are produced by Bragg diffraction on a Highly Oriented Pyrolytic Graphite crystal. With quasi-monochromatic sources it is possible to perform multi-energy investigations more effectively than with conventional X-ray tubes.
Multi-energy techniques allow the extraction of physical quantities, such as the effective atomic number, mass thickness, and density, that can be used to distinguish and quantitatively characterize the irradiated tissues. The aim of the system is the investigation and development of new pre-clinical methods for the early detection of tumors in small animals.
An innovative technique, Triple-Energy Radiography with Contrast Medium (TER), has been successfully implemented on our system.
TER consists in combining a set of three quasi-monochromatic images of an object to obtain a corresponding set of three single-tissue images, which are the mass-thickness maps of three reference materials. TER can be applied to the quantitative reconstruction of the mass-thickness map of a contrast medium, because it completely removes the signal due to the other tissues (i.e., the structural background noise). The technique is very sensitive to the contrast medium and insensitive to the superposition of different materials, which makes it a good candidate for the early detection of tumor angiogenesis in mice.
In this work we describe the tomographic system, with a particular focus on the quasi-monochromatic source. Moreover, the TER method is presented together with some preliminary results on small-animal imaging.
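As a sketch of the TER combination step: at each detector pixel the three log-attenuation measurements at the three energies form a 3x3 linear system whose unknowns are the mass thicknesses of the three reference materials. The attenuation coefficients and thicknesses below are made-up illustrative values, not measured data.

```python
# Toy TER decomposition: invert a 3x3 Beer-Lambert system per pixel.
# Mass-attenuation coefficients are illustrative, NOT real data.

def solve3(A, b):
    """Solve a 3x3 linear system A x = b with Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(A)
    xs = []
    for col in range(3):
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][col] = b[r]
        xs.append(det(Ac) / D)
    return xs

# Rows: the three beam energies; columns: three reference materials
# (e.g. soft tissue, bone, iodinated contrast medium).
mu = [[0.20, 0.45, 5.0],
      [0.18, 0.30, 2.0],
      [0.17, 0.25, 0.9]]

true_t = [5.0, 1.0, 0.02]   # mass thicknesses (g/cm^2), made up
logatt = [sum(mu[i][j] * true_t[j] for j in range(3)) for i in range(3)]
recovered = solve3(mu, logatt)   # single-material mass-thickness values
```

Repeating this solve for every pixel yields the three single-tissue maps; the contrast-medium map is then free of the structural background mentioned above.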
|
323 |
New approaches to open problems in gene expression microarray data. Marconi, Daniela <1979> 12 June 2008 (has links)
In the past decade, the advent of efficient genome sequencing tools and high-throughput
experimental biotechnology has led to enormous progress in the life sciences. Among
the most important innovations is microarray technology, which allows the expression of
thousands of genes to be quantified simultaneously by measuring the hybridization from a
tissue of interest to probes on a small glass or plastic slide. The characteristics of these
data include a fair amount of random noise, a predictor dimension in the thousands, and
a sample size in the dozens.
One of the most exciting areas to which microarray technology has been applied is
the challenge of deciphering complex diseases such as cancer. In these studies, samples are
taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes that are strongly correlated with the grouping of the individuals. Even though methods to analyze these data are now well developed and approaching a
standard organization (through the efforts of international projects such as the
Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to encounter
a clinician's question for which no compelling statistical method is available.
The contribution of this dissertation to deciphering disease is
the development of new approaches aimed at handling open problems posed by clinicians
in specific experimental designs.
Chapter 1, starting from the necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the
production of the array to quality control, ending with the preprocessing steps
used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical
review of standard analysis methods, stressing the problems that remain open.
Chapter 3 introduces a method to address the issue of unbalanced designs in
microarray experiments. In microarray experiments, the experimental design is a crucial
starting point for obtaining reasonable results. In a two-class problem, an equal or
similar number of samples should be collected for the two classes. However, in
some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose
to address this issue by applying a modified version of SAM [2]. MultiSAM consists of
a reiterated application of a SAM analysis, comparing the less populated class (LPC)
with 1,000 random samplings of the same size from the more populated class (MPC). A
list of the differentially expressed genes is generated for each SAM application. After
1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000, based on its
recurrence in the 1,000 lists as differentially expressed. The performance of MultiSAM
was compared to that of SAM and LIMMA [3] over two simulated data
sets generated from beta and exponential distributions. The results of all three algorithms over low-noise data sets are acceptable. However, on a real unbalanced two-channel data set
regarding chronic lymphocytic leukemia, LIMMA finds no significant probe and SAM finds
23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical
clustering. We also report extra-assay validation in terms of differentially expressed
genes. Although the standard algorithms perform well over low-noise simulated data sets,
MultiSAM seems to be the only one able to reveal subtle differences in gene expression
profiles on real unbalanced data.
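The MultiSAM resampling scheme can be sketched as follows, with a plain two-sample t statistic standing in for the SAM statistic (an assumption made for brevity) and fewer iterations than the 1,000 used in the thesis:

```python
import math
import random

def tstat(a, b):
    """Plain Welch-style two-sample t statistic."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b) + 1e-12)

def multisam_scores(lpc, mpc, n_iter=200, thresh=2.0, seed=0):
    """lpc, mpc: genes x samples expression matrices. Each iteration
    subsamples the MPC down to the LPC size; every gene exceeding the
    significance threshold gains one point of recurrence score."""
    rng = random.Random(seed)
    n = len(lpc[0])                                # LPC sample size
    scores = [0] * len(lpc)
    for _ in range(n_iter):
        cols = rng.sample(range(len(mpc[0])), n)   # balanced MPC subsample
        for g in range(len(lpc)):
            sub = [mpc[g][c] for c in cols]
            if abs(tstat(lpc[g], sub)) > thresh:
                scores[g] += 1
    return scores

rng = random.Random(1)
lpc = [[5.0 + rng.gauss(0.0, 0.3) for _ in range(6)],   # gene 0: shifted
       [rng.gauss(0.0, 0.3) for _ in range(6)]]         # gene 1: null
mpc = [[rng.gauss(0.0, 0.3) for _ in range(40)],
       [rng.gauss(0.0, 0.3) for _ in range(40)]]
scores = multisam_scores(lpc, mpc)
```

Genes that recur across almost all subsampled comparisons, like gene 0 here, accumulate a near-maximal score; null genes only score sporadically.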
Chapter 4 describes a method to address similarity evaluation in a three-class problem by
means of the Relevance Vector Machine [4]. In fact, when looking at microarray data in
a prognostic and diagnostic clinical framework, differences are not the only features that
can play a crucial role: in some cases similarities can give useful, and sometimes even more
important, information. Given three classes, the goal could be to establish, with a certain level
of confidence, whether the third one is more similar to the first or to the second. In this work
we show that the Relevance Vector Machine (RVM) [4] could be a possible solution to the
limitations of standard supervised classification. In fact, RVM offers many advantages
compared, for example, with its well-known precursor, the Support Vector Machine (SVM)
[3]. Among these advantages, the estimate of the posterior probability of class membership
is a key feature for addressing the similarity issue. This is a highly important, but
often overlooked, option of any practical pattern recognition system. We focused on
a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of
grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate
G1 from G3, and then to evaluate the third class G2 as a test set, obtaining for each
G2 sample the probability of belonging to class G1 or class G3. The analysis showed that breast
cancer samples of grade 2 have a molecular profile more similar to that of grade 1 samples.
This result had been conjectured in the literature, but no measure of significance had
previously been given.
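The similarity evaluation can be sketched with any probabilistic binary classifier; here plain logistic regression stands in for the RVM (an assumption, since the point being illustrated is the use of posterior probabilities, not the specific model). Train on G1 vs G3, then read off the mean posterior for held-out G2 samples:

```python
import math
import random

def sigmoid(z):
    z = max(-30.0, min(30.0, z))          # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Stochastic gradient descent for binary logistic regression."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi                     # gradient of the log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def posterior(w, b, x):
    """P(second class | x)."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

rng = random.Random(0)
g1 = [[rng.gauss(0.0, 0.5)] for _ in range(30)]   # class G1, centered at 0
g3 = [[rng.gauss(3.0, 0.5)] for _ in range(30)]   # class G3, centered at 3
g2 = [[rng.gauss(1.0, 0.5)] for _ in range(30)]   # G2: nearer to G1 by design
w, b = train_logreg(g1 + g3, [0] * 30 + [1] * 30)
mean_p_g3 = sum(posterior(w, b, x) for x in g2) / len(g2)
# a mean posterior well below 0.5 reads as "G2 is more similar to G1"
```

The same read-out (posterior class-membership probabilities on the held-out class) is what makes a probabilistic model such as the RVM attractive for this task.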
|
324 |
Computational methods for genome screening. Montanucci, Ludovica <1978> 12 June 2008 (has links)
Motivation: An issue of great current interest, from both a theoretical and an
applicative perspective, is the analysis of biological sequences to disclose the information
they encode. The development of new genome sequencing technologies
in recent years has opened new fundamental problems, since huge amounts of biological
data still await interpretation. Indeed, sequencing is only the first step
of the genome annotation process, which consists in the assignment of biological information
to each sequence. Hence, given the large amount of available data, in silico
methods have become useful and necessary for extracting relevant information from
sequences. The availability of data from genome projects gave rise to new strategies
for tackling the basic problems of computational biology, such as the determination of
the three-dimensional structures of proteins, their biological function and their reciprocal
interactions.
Results: The aim of this work has been the implementation of predictive methods
that allow the extraction of information on the properties of genomes and proteins
starting from the nucleotide and amino acid sequences, taking advantage of the
information provided by the comparison of genome sequences from different species.
In the first part of the work, a comprehensive large-scale comparison of the genomes of 599
organisms is described. 2.6 million sequences coming from 551 prokaryotic and 48
eukaryotic genomes were aligned and clustered on the basis of their sequence identity.
This procedure led to the identification of classes of proteins that are peculiar to the
different groups of organisms. Moreover, the adopted similarity threshold produced
clusters that are homogeneous from the structural point of view and that can be used
for the structural annotation of uncharacterized sequences.
The second part of the work focuses on the characterization of thermostable proteins
and on the development of tools able to predict the thermostability of a protein
starting from its sequence. By means of Principal Component Analysis, the codon
composition of a non-redundant database comprising 116 prokaryotic genomes has
been analyzed, and it has been shown that a cross-genomic approach allows the
extraction of common determinants of thermostability at the genome level, leading
to an overall accuracy of 95% in discriminating thermophilic coding sequences.
This result outperforms those obtained in previous studies. Moreover, we investigated
the effect of multiple mutations on protein thermostability. This issue is of great importance
in the field of protein engineering, since thermostable proteins are generally
more suitable than their mesophilic counterparts for technological applications. A Support
Vector Machine-based method has been trained to predict whether a set of mutations
can enhance the thermostability of a given protein sequence. The developed predictor
achieves 88% accuracy.
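The genome-level feature underlying the PCA step can be sketched as a 64-dimensional codon-composition vector per coding sequence; the toy ORF below is illustrative:

```python
# Codon-composition vector of a coding sequence: the per-genome or
# per-gene input that PCA (and later an SVM) would operate on.
from itertools import product

CODONS = [''.join(c) for c in product('ACGT', repeat=3)]   # 64 codons

def codon_composition(cds):
    """Relative codon frequencies of an in-frame coding sequence."""
    counts = {c: 0 for c in CODONS}
    n = 0
    for i in range(0, len(cds) - len(cds) % 3, 3):
        codon = cds[i:i + 3].upper()
        if codon in counts:        # skip codons with ambiguous bases
            counts[codon] += 1
            n += 1
    return [counts[c] / n for c in CODONS] if n else [0.0] * 64

vec = codon_composition("ATGGCTGCTAAATAA")   # toy 5-codon ORF
```

Stacking one such vector per genome (or per coding sequence) gives the matrix on which the cross-genomic PCA of codon usage is performed.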
|
325 |
An experimental and theoretical approach to correct for the scattered radiation in an X-ray computed tomography system for industrial applications. Miceli, Alice <1978> 12 June 2008 (has links)
The main problem connected to cone beam computed tomography (CT) systems for
industrial applications employing 450 kV X-ray tubes is the high amount of scattered
radiation which is added to the primary radiation (signal). This stray radiation leads to
a significant degradation of the image quality. A better understanding of the scattering
and methods to reduce its effects are therefore necessary to improve the image quality.
Several studies have been carried out in the medical field at lower energies, whereas
studies in industrial CT, especially for energies up to 450 kV, are lacking. Moreover,
the studies reported in the literature do not consider the scattered radiation generated by
the CT system structure and by the walls of the X-ray room (environmental scatter). In
order to investigate the scattering in CT projections, a GEANT4-based Monte Carlo
(MC) model was developed. The model, which has been validated against
experimental data, has enabled the calculation of the scattering including the
environmental scatter, the optimization of an anti-scatter grid suitable for the CT
system, and the optimization of the hardware components of the CT system. The
investigation of multiple scattering in the CT projections showed that its contribution
can be as high as 2.3 times that of the primary radiation for certain objects. The results on the
environmental scatter showed that it is the major component of the scattering for
aluminum box objects with a front size of 70 x 70 mm^2, and that it strongly depends on the
thickness of the object and therefore on the projection. For that reason, its correction is
one of the key factors for achieving high-quality images. The anti-scatter grid
optimized by means of the developed MC model was found to reduce the scatter-to-primary
ratio in the reconstructed images by 20%. The object and environmental
scatter calculated by means of the simulation were used to improve the scatter
correction algorithm, which could then be patented by Empa. The results showed that the
cupping effect in the corrected image is strongly reduced. The developed CT
simulation is a powerful tool to optimize the design of the CT system and to evaluate
the contribution of the scattered radiation to the image. Besides, it has offered the basis
for a new scatter correction approach with which it has been possible to achieve images with the same spatial resolution as state-of-the-art, well-collimated fan-beam CT, with a
gain in reconstruction time of a factor of 10. This result has a high economic impact
in non-destructive testing and evaluation, and in reverse engineering.
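The core of the scatter-correction idea can be sketched in a few lines: the measured detector signal is primary plus scatter, and a scatter estimate (e.g. Monte Carlo derived) is subtracted before the logarithmic transform used in reconstruction. All numbers below are made up for illustration:

```python
import math

# One detector row of a toy projection (arbitrary units).
primary = [100.0, 40.0, 15.0, 40.0, 100.0]   # ideal, scatter-free signal
scatter = [30.0, 30.0, 34.5, 30.0, 30.0]     # estimate, e.g. from an MC model
measured = [p + s for p, s in zip(primary, scatter)]

i0 = 130.0   # unattenuated (air) reading

# Line integrals entering the reconstruction, with and without correction.
naive     = [-math.log(m / i0) for m in measured]
corrected = [-math.log((m - s) / i0) for m, s in zip(measured, scatter)]
ideal     = [-math.log(p / i0) for p in primary]
# the uncorrected values underestimate attenuation most at the object
# center, which is exactly the cupping artifact mentioned above
```

Subtracting the simulated scatter restores the primary-only line integrals, which is why an accurate object-plus-environment scatter model is the key ingredient of the correction.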
|
326 |
Elastic propagation in random media: applications to the imaging of volcano structures. Tramelli, Anna <1979> 20 June 2008 (has links)
High-frequency seismograms contain features that reflect the random inhomogeneities of the earth. In this work I use an imaging method to locate high-contrast small-scale heterogeneities with respect to the background earth medium. This method was first
introduced by Nishigami (1991) and then applied to different volcanic and tectonically
active areas (Nishigami, 1997; 2000; 2006).
The scattering imaging method is applied to two volcanic areas: Campi Flegrei
and Mt. Vesuvius. Volcanic and seismically active areas are often characterized
by complex velocity structures, due to the presence of rocks with different elastic
properties. I introduce some modifications to the original method in order to make it
suitable for small and highly complex media. In particular, for very complex media
the single-scattering approximation assumed by Nishigami (1991) is not applicable, as
the mean free path becomes short, and the multiple-scattering or diffusive approximation
becomes closer to reality. In this thesis, differently from the ordinary Nishigami
method (Nishigami, 1991), I use the mean of the recorded coda envelopes as the reference
curve and calculate the variations from this average envelope. In this way I implicitly
make no assumption about the scattering regime of the "average" scattered radiation,
whereas I consider the variations as due to waves that are singly scattered by
the strongest heterogeneities. The imaging method is applied to a relatively small area
(20 x 20 km), a choice justified by the short length of the analyzed codas of
the low-magnitude earthquakes.
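The modified reference-curve step can be sketched numerically: the mean coda envelope over all records serves as the reference, and each record's fractional deviation from it is what the imaging then maps onto scattering strength. The toy envelopes below are illustrative:

```python
def envelope_residuals(envelopes):
    """envelopes: equal-length coda energy envelopes (one per record).
    Returns each record's fractional deviation from the mean envelope."""
    n = len(envelopes)
    nt = len(envelopes[0])
    mean = [sum(e[t] for e in envelopes) / n for t in range(nt)]
    return [[(e[t] - mean[t]) / mean[t] for t in range(nt)]
            for e in envelopes]

# three toy coda envelopes sampled at three lapse times
envs = [[4.0, 2.0, 1.0],      # energetic record: positive residuals
        [2.0, 1.0, 0.5],      # weak record: negative residuals
        [3.0, 1.5, 0.75]]     # record equal to the mean: zero residuals
res = envelope_residuals(envs)
```

Positive residuals along a given source-receiver path are then back-projected onto the grid cells the scattered waves may have visited, which is the mapping step of the method.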
I apply the unmodified Nishigami method to the volcanic area of Campi Flegrei
and compare the results with the other tomographies carried out in the same area. The
scattering images, obtained with waves of frequency around 18 Hz, show the presence
of strong scatterers in correspondence with the submerged caldera rim in the southern
part of the Pozzuoli bay. Strong scattering is also found below the Solfatara crater,
characterized by the presence of densely fractured, fluid-filled rocks and by a strong
thermal anomaly.
The modified Nishigami technique is applied to the Mt. Vesuvius area. Results
show a low scattering area just below the central cone and a high scattering area
around it. The high scattering zone seems to be due to the contrast between the high
rigidity body located beneath the crater and the low rigidity materials located around
it. The central low scattering area overlaps the hydrothermal reservoirs located below
the central cone.
An interpretation of the results in terms of the geological properties of the medium
is also supplied, aiming to find a correspondence between the scattering properties and
the geological nature of the materials.
A complementary result reported in this thesis is that the strong heterogeneity
of the volcanic medium creates a phenomenon called "coda localization". It has been
verified that the shape of the seismograms recorded at the stations located at the top
of the volcanic edifice of Mt. Vesuvius differs from the shape of those
recorded at the bottom. This behavior is explained by the observation that the coda
energy is not uniformly distributed within a region surrounding the source at large
lapse times.
|
327 |
Retrieval of trace gas vertical profiles in the lower atmosphere combining Differential Optical Absorption Spectroscopy with radiative transfer models. Palazzi, Elisa <1978> 27 June 2008 (has links)
The motivation for the work presented in this thesis is to retrieve profile
information for the atmospheric trace constituents nitrogen dioxide (NO2)
and ozone (O3) in the lower troposphere from remote sensing measurements.
The remote sensing technique used, referred to as Multiple AXis Differential
Optical Absorption Spectroscopy (MAX-DOAS), is a recent technique that
represents a significant advance on the well-established DOAS, especially as
concerns the study of tropospheric trace constituents.
NO2 is an important trace gas in the lower troposphere because it is involved
in the production of tropospheric ozone; ozone and nitrogen dioxide are key
factors in determining air quality, with consequences, for example, for human
health and the growth of vegetation. To understand NO2 and ozone chemistry in
more detail, not only the concentrations at the ground but also the vertical
distribution must be acquired. In fact, the budget of nitrogen oxides and ozone
in the atmosphere is determined both by local emissions and by non-local chemical
and dynamical processes (i.e. diffusion and transport at various scales) that
greatly impact their vertical and temporal distribution: a tool to resolve the
vertical profile information is therefore very important.
Useful measurement techniques for atmospheric trace species should fulfill
at least two main requirements. First, they must be sufficiently sensitive to
detect the species under consideration at their ambient concentration levels.
Second, they must be specific, which means that the results of the measurement
of a particular species must be neither positively nor negatively
influenced by any other trace species simultaneously present in the probed
volume of air. Air monitoring by spectroscopic techniques has proven to be
a very useful tool that fulfills these desirable requirements and offers a number
of other important properties. During the last decades, many such instruments
have been developed which are based on the absorption properties of
the constituents in various regions of the electromagnetic spectrum, ranging
from the far infrared to the ultraviolet. Among them, Differential Optical
Absorption Spectroscopy (DOAS) has played an important role.
DOAS is an established remote sensing technique for probing atmospheric trace
gases, which identifies and quantifies the trace gases in the atmosphere
by taking advantage of their molecular absorption structures in the near-UV
and visible regions of the electromagnetic spectrum (from 0.25 μm
to 0.75 μm). Passive DOAS, in particular, can detect the presence of a trace
gas in terms of its concentration integrated over the atmospheric path from
the sun to the receiver (the so-called slant column density). The receiver
can be located at the ground, as well as on board an aircraft or a satellite platform.
Passive DOAS therefore has a flexible measurement configuration
that allows multiple applications.
The ability to properly interpret passive DOAS measurements of atmospheric
constituents depends crucially on how well the optical path of the light
collected by the system is understood. This is because the final product of
DOAS is the concentration of a particular species integrated along the path
that the radiation covers in the atmosphere. This path is not known a priori and
can only be evaluated by Radiative Transfer Models (RTMs). These models
are used to calculate the so-called vertical column density of a given trace
gas, which is obtained by dividing the measured slant column density by the
so-called air mass factor (AMF), which quantifies the enhancement of the
light path length within the absorber layers.
In the case of the standard DOAS set-up, in which radiation is collected
along the vertical direction (zenith-sky DOAS), calculations of the air mass
factor have been made using "simple" single-scattering radiative transfer
models. This configuration has its highest sensitivity in the stratosphere,
in particular during twilight. This is the result of the large enhancement of the
stratospheric light path at dawn and dusk combined with a relatively short
tropospheric path.
In order to increase the sensitivity of the instrument to tropospheric
signals, measurements with the telescope pointing towards the horizon (off-axis
DOAS) have to be performed. In these circumstances, the light path in the
lower layers can become very long and necessitates the use of radiative transfer
models including multiple scattering and the full treatment of atmospheric
sphericity and refraction.
In this thesis, a recent development in the well-established DOAS technique
is described, referred to as Multiple AXis Differential Optical Absorption
Spectroscopy (MAX-DOAS). MAX-DOAS consists in the simultaneous
use of several off-axis directions near the horizon: with this configuration,
not only is the sensitivity to tropospheric trace gases greatly improved,
but vertical profile information can also be retrieved by combining the simultaneous
off-axis measurements with sophisticated RTM calculations and
inversion techniques.
In particular, there is a need for an RTM capable of dealing with
all the processes intervening along the light path, supporting all the DOAS geometries
used, and treating multiple scattering events with the varying phase
functions involved. To achieve these multiple goals, a statistical approach
based on the Monte Carlo technique should be used. A Monte Carlo RTM
generates an ensemble of random photon paths between the light source and
the detector, and uses these paths to reconstruct a remote sensing measurement.
Within the present study, the Monte Carlo radiative transfer
model PROMSAR (PROcessing of Multi-Scattered Atmospheric Radiation)
has been developed and used to correctly interpret the slant column densities
obtained from MAX-DOAS measurements.
In order to derive the vertical concentration profile of a trace gas from
its slant column measurements, the AMF is only one part of the quantitative
retrieval process. An indispensable requirement is a robust approach to
inverting the measurements and obtaining the unknown concentrations, the air
mass factors being known. For this purpose, in the present thesis, we have
used the Chahine relaxation method.
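A minimal sketch of the Chahine relaxation idea follows, under the simplifying assumptions that each measurement geometry is paired with one atmospheric layer and that the forward model is a linear weighting-function matrix; the numbers are illustrative, not PROMSAR output:

```python
def chahine(y_meas, K, x0, n_iter=200):
    """Chahine relaxation: multiplicative update of layer amounts until the
    forward model K x reproduces the measured slant columns. Measurement i
    is paired with layer i."""
    x = list(x0)
    for _ in range(n_iter):
        y_calc = [sum(K[i][j] * x[j] for j in range(len(x)))
                  for i in range(len(y_meas))]
        # relax each layer by the measured-to-computed ratio of its
        # paired measurement
        x = [x[j] * y_meas[j] / y_calc[j] for j in range(len(x))]
    return x

# Two viewing geometries and two layers; the weights play the role of
# (illustrative) air-mass-factor-like sensitivities.
K = [[4.0, 1.0],       # off-axis view: long path in the lowest layer
     [1.0, 2.0]]       # near-zenith view
x_true = [2.0, 5.0]    # layer concentrations, arbitrary units
y = [sum(K[i][j] * x_true[j] for j in range(2)) for i in range(2)]
x_ret = chahine(y, K, [1.0, 1.0])
```

Because each geometry here is most sensitive to its paired layer (a diagonally dominant kernel), the iteration converges to the true layer amounts; in practice the ratios are computed with the RTM-derived slant columns.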
Ground-based Multiple AXis DOAS, combined with appropriate radiative
transfer models and inversion techniques, is a promising tool for atmospheric
studies in the lower troposphere and boundary layer, including the retrieval of
profile information with a good degree of vertical resolution. This thesis has
presented an application of this powerful comprehensive tool for the study of
a preserved natural Mediterranean area (the Castel Porziano Estate, located
20 km South-West of Rome) where pollution is transported from remote
sources.
Application of this tool in densely populated or industrial areas is beginning
to look particularly fruitful and represents an important subject for future
studies.
|
328 |
A dynamical system approach to data assimilation in chaotic models. Pilolli, Massimo <1966> 27 June 2008 (has links)
The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi
in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast
errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which
may be thought of as a system forced by observations. In the AUS scheme the assimilation is
obtained by confining the analysis increment to the unstable subspace of the forecast-analysis
cycle system, so that it has the same structure as the dominant instabilities of the system.
The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-
BDAS has already been tested in realistic models and observational configurations, including a
Quasi-Geostrophic model and a high-dimensional, primitive equation ocean model; the experiments
include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly
reduces the analysis error, with reasonable computational costs for data assimilation compared,
for example, with a prohibitive full Extended Kalman Filter.
This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly
nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a
perfect model setting, as well as with two types of model error: random and systematic. In the
different configurations examined in the perfect model setting, AUS once again shows better
efficiency than other advanced data assimilation schemes. In the present study, we develop an
iterative scheme that leads to a significant improvement of the overall assimilation performance
with respect to standard AUS as well. In particular, it boosts the efficiency of tracking regime
changes, at a low computational cost.
Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned
for the specific model at hand. In Numerical Weather Prediction models, the tuning of parameters,
and in particular the estimate of the model error covariance matrix, may turn out to be quite
difficult. Our proposed approach, instead, may be easier to implement in operational models.
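As an illustration of the observation-forced-system viewpoint in this model, the sketch below integrates two copies of the Lorenz 1963 equations and nudges the model copy towards noiseless observations of x. Nudging is a much simpler scheme than AUS-BDAS and stands in for it here only to show the idea of a forecast cycle forced by observations; all numbers are illustrative.

```python
def lorenz_step(s, dt=0.01, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 equations."""
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (r * x - y - x * z),
            z + dt * (x * y - b * z))

def mean_x_error(model0, steps=2000, gain=0.1, tail=500):
    """Mean |x_truth - x_model| over the last `tail` steps; the model is
    relaxed towards observations of the true x with the given gain."""
    truth, model = (1.0, 1.0, 1.0), model0
    errs = []
    for k in range(steps):
        truth = lorenz_step(truth)
        model = lorenz_step(model)
        # analysis step: nudge the model x towards the observed x
        model = (model[0] + gain * (truth[0] - model[0]),
                 model[1], model[2])
        if k >= steps - tail:
            errs.append(abs(truth[0] - model[0]))
    return sum(errs) / len(errs)

err_assim = mean_x_error((1.5, 1.0, 1.0), gain=0.1)
err_free = mean_x_error((1.5, 1.0, 1.0), gain=0.0)
# with gain=0 the chaotic model decorrelates from the truth, while the
# observation-forced run stays close to it
```

AUS-BDAS improves on such blind nudging by projecting the correction onto the bred unstable directions instead of applying it isotropically in observation space.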
|
329 |
High frequency seismic and underwater acoustic wave propagation and imaging techniques. Stabile, Tony Alfredo <1977> 30 June 2008 (has links)
No description available.
|
330 |
Computational methods for the analysis of protein structure and function. Bartoli, Lisa <1980> 21 May 2009 (has links)
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it became urgent to develop new methods that are able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein folding recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances related to the specific type of amino acid pair that are encoded in the so-called contact map. An interesting new way of analyzing those structures came out when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps and for applications
in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is
estimated to account for 5-10% of the protein sequences in the various genomes.
Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments, to predict coiled-coil regions and to discriminate coiled-coil
sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for the coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of protein sequences and structures available poses new fundamental problems that still deserve an interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would
be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently, approximately only 20% of annotated proteins
in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on the "inheritance through homology" based on the notion that similar sequences share
similar functions and structures. This procedure consists in the assignment of sequences to a specific group of functionally related sequences, grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, ensuring cluster homogeneity in terms of sequence length. A high level of coverage of structure templates on the length of protein
sequences within clusters ensures that multi-domain proteins when present can be templates for sequences of similar length. This annotation procedure includes
the possibility of reliably transferring statistically validated functions and structures to sequences considering information available in the present data bases of
molecular functions and structures.
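The contact-map construction underlying the first part of this work can be sketched as follows; the cutoff value and the toy coordinates are illustrative assumptions (a C-alpha cutoff around 8 Å is common), and the resulting binary matrix is what network metrics such as the characteristic path length and clustering coefficient are computed on:

```python
import math

def contact_map(coords, cutoff=8.0):
    """Binary residue-residue contact map: two residues are in contact
    when their (here, C-alpha) distance is below the cutoff in angstroms."""
    n = len(coords)
    cmap = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) <= cutoff:
                cmap[i][j] = cmap[j][i] = 1
    return cmap

# Toy chain: consecutive C-alphas ~3.8 A apart, folded back on itself,
# so the two ends come into spatial contact.
coords = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0),
          (7.6, 3.8, 0.0), (3.8, 3.8, 0.0), (0.0, 3.8, 0.0)]
cmap = contact_map(coords)
```

Treating `cmap` as the adjacency matrix of a graph then gives direct access to the small-world descriptors discussed above.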
|