591 |
Airborne Particles in Indoor Residential Environment: Source Contribution, Characteristics, Concentration, and Time Variability / He, Congrong. January 2005 (has links)
The understanding of human exposure to indoor particles of all sizes is important for exposure control and reduction, especially for smaller particles, since these have a higher probability of penetrating into the deeper parts of the respiratory tract and also carry higher levels of trace elements and toxins. Due to the limited understanding of the relationship between particle size and health effects, as well as instrument limitations, the available information on submicrometer (d < 1.0 µm) particles indoors, in terms of both mass and number concentrations, is still relatively limited. This PhD project was conducted as part of the South-East Queensland Air Quality program and the Queensland Housing Study, aimed at providing a better understanding of ambient particle concentrations within the indoor environment, with a focus on exposure assessment and control. The project was designed to investigate comprehensively the sources and sinks of indoor aerosol particles; the relationships between indoor and outdoor aerosol particles, and between particles and gaseous pollutants; and the associations between indoor air pollutants and house characteristics. It did so by using, analysing and interpreting existing experimental data collected before the project commenced, together with data from additional experiments designed and conducted for the purpose of this project. The focus of this research was on submicrometer particles with diameters between 0.007 and 0.808 µm. The main outcomes of this project may be summarised as follows:
* A comprehensive review of particle concentration levels and size distribution characteristics in residential and non-industrial workplace environments was conducted. This review included only those studies in which more general trends were investigated, or could be concluded based on information provided in the papers. The review comprised four parts: 1) outdoor particles and their effect on indoor environments; 2) the relationship between indoor and outdoor concentration levels in the absence of indoor sources for naturally ventilated buildings; 3) indoor sources of particles: their contribution to indoor concentration levels and their effect on I/O ratios for naturally ventilated buildings; and 4) the indoor/outdoor relationship in mechanically ventilated buildings.
* The relationship between indoor and outdoor airborne particles was investigated for sixteen residential houses in Brisbane, Australia, in the absence of operating indoor sources. Comparison of the ratios of indoor to outdoor particle concentrations revealed that while instantaneous values of the ratio varied over a broad range, from 0.2 to 2.5, for both lower and higher ventilation conditions, average values of the ratios were very close to one regardless of ventilation conditions and particle size range. The ratios ranged from 0.78 to 1.07 for submicrometer particles, from 0.95 to 1.0 for supermicrometer particles and from 1.01 to 1.08 for the PM2.5 fraction. Comparison of the time series of indoor and outdoor particle concentrations showed a clear positive relationship for many houses under normal ventilation conditions (estimated at about 2 h⁻¹ and above), but not under minimum ventilation conditions (estimated at about 1 h⁻¹ and below).
These results suggest that under normal ventilation conditions and in the absence of operating indoor sources, outdoor particle concentrations could be used to predict instantaneous indoor particle concentrations, but not under minimum ventilation, unless the air exchange rate is known, thus allowing estimation of the "delay constant".
* Diurnal variation of indoor submicrometer particle number and particle mass (approximation of PM2.5) concentrations was investigated in fifteen of the houses. The results show clear diurnal variations in both particle number and approximated PM2.5 concentrations for all the investigated houses. The pattern of diurnal variation varied from house to house; however, there was always a close relationship between concentration and human indoor activities. The average number and mass concentrations during indoor activities were (18.2±3.9)×10³ particles cm⁻³ and (15.5±7.9) µg m⁻³, respectively, and under non-activity conditions, (12.4±2.7)×10³ particles cm⁻³ and (11.1±2.6) µg m⁻³, respectively. In general, there was a poor correlation between mass and number concentrations, and the correlation coefficients were highly variable from day to day and from house to house. This implies that conclusions cannot be drawn about either the number or the mass concentration characteristics of indoor particles based on measurement of the other. The study also showed that it is unlikely that particle concentrations indoors could be represented by measurements conducted at a fixed monitoring station, due to the large impact of indoor and local sources.
* Emission characteristics of indoor particle sources in fourteen residential houses were quantified. In addition, particles resulting from cooking conducted in an identical way in all the houses were characterized. All events of elevated particle concentrations were linked to indoor activities using house occupants' diary entries, and catalogued into 21 different types of indoor activity. This enabled quantification of the effect of indoor sources on indoor particle concentrations, as well as quantification of emission rates from the sources. For example, the study found that frying, grilling, stove use, toasting, cooking pizza, smoking, candle vaporizing eucalyptus oil and fan heater use could elevate indoor submicrometer particle number concentration levels by more than 5 times, while PM2.5 concentrations could be up to 3, 30 and 90 times higher than background levels during smoking, frying and grilling, respectively.
* Indoor deposition rates of size-classified particles in the size range from 0.015 to 6 µm were quantified. Particle size distributions resulting from cooking, repeated under two different ventilation conditions in 14 houses, as well as changes to the size distribution as a function of time, were measured using a scanning mobility particle sizer (SMPS), an aerodynamic particle sizer (APS) and a DustTrak. Deposition rates were determined by regression fitting of the measured size-resolved particle number and PM2.5 concentration decay curves, accounting for the air exchange rate (a minimal fitting sketch appears after this list). The measured deposition rates were particle size dependent and varied from house to house. The lowest deposition rates were found for particles in the size range from 0.2 to 0.3 µm under both minimum (air exchange rate: 0.61±0.45 h⁻¹) and normal (air exchange rate: 3.00±1.23 h⁻¹) ventilation conditions.
Statistical analysis indicated that ventilation condition (measured in terms of air exchange rate) was an important factor affecting deposition rates for particles in the size range from 0.08 to 1.0 µm, but not for particles smaller than 0.08 µm or larger than 1.0 µm. Particle coagulation was assessed to be negligible compared with the two other removal processes, ventilation and deposition. This study of particle deposition rates, the largest conducted so far in terms of the number of residential houses investigated, demonstrated trends in deposition rates comparable with previously reported studies, which usually covered significantly smaller samples of houses (often only one). The results compare best with studies which, like this one, investigated cooking as a particle source (sources investigated in other studies included general activity, cleaning, artificial particles, etc.).
* Residential indoor and outdoor 48 h average levels of nitrogen dioxide (NO2), together with 48 h indoor submicrometer particle number concentrations and approximated PM2.5 concentrations, were measured simultaneously for fourteen houses. Statistical analyses of the correlations between indoor and outdoor pollutants (NO2 and particles) and of the associations between house characteristics and indoor pollutants were conducted. The average indoor and outdoor NO2 levels were 13.8 ± 6.3 ppb and 16.7 ± 4.2 ppb, respectively. The indoor/outdoor NO2 concentration ratio ranged from 0.4 to 2.3, with a median of 0.82. Despite statistically significant correlations between outdoor and fixed-site NO2 monitoring station concentrations (p = 0.014, p = 0.008), there was no significant correlation between indoor and outdoor NO2 concentrations (p = 0.428), or between indoor and fixed-site NO2 monitoring station concentrations (p = 0.252, p = 0.465). However, there were significant correlations between indoor NO2 concentration and indoor submicrometer aerosol particle number concentration (p = 0.001), and between indoor PM2.5 and outdoor NO2 (p = 0.004). These results imply that outdoor or fixed-site monitoring concentrations alone are poor predictors of indoor NO2 concentration.
* Analysis of variance indicated no significant association between indoor PM2.5 and any of the house characteristics investigated (p > 0.05). However, associations between indoor submicrometer particle number concentration and some house characteristics (stove type, water heater type, number of cars and condition of paintwork) were significant at the 5% level. Associations between indoor NO2 and some house characteristics (house age, stove type, heating system, water heater type and floor type) were also significant (p < 0.05). These analyses thus strongly suggest that gas stoves, gas heating systems and gas water heaters are the main indoor sources of indoor submicrometer particles and NO2 in the studied residential houses.
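A minimal Python sketch of the decay-curve regression described above; the variable names are assumed, and a negligible background concentration during the decay is assumed too:

    import numpy as np

    def deposition_rate(t_hours, conc, air_exchange_per_h):
        # After a source stops, C(t) ~ C0 * exp(-(a + k_dep) * t), so the
        # slope of ln C(t) against t estimates -(a + k_dep); subtracting the
        # separately measured air exchange rate a leaves the deposition rate.
        slope, _ = np.polyfit(t_hours, np.log(conc), 1)
        return -slope - air_exchange_per_h  # k_dep in h^-1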
The significant contributions of this PhD project to the knowledge of indoor particles included: 1) an improved understanding of indoor particle behaviour in residential houses, especially for submicrometer particles; 2) an improved understanding of indoor particle source and sink characteristics, as well as their effects on indoor particle concentration levels in residential houses; and 3) an improved understanding of the relationship between indoor and outdoor particles, the relationship between particle mass and particle number, the correlation between indoor NO2 and indoor particles, and the associations between indoor particles, NO2 and house characteristics.
|
592 |
The transfer of distributions by LULU smoothers / Butler, Pieter-Willem. 12 1900 (has links)
Thesis (MSc (Mathematics))--Stellenbosch University, 2008. / LULU smoothers are a class of nonlinear smoothers composed of the maximum and minimum operators. By analogy with the discrete Fourier transform and the discrete wavelet transform, one can use LULU smoothers to create a nonlinear multiresolution analysis of a sequence with pulses. This tool is known as the Discrete Pulse Transform (DPT). Some research has been done on the distributional properties of the LULU smoothers, and results exist on the distribution transfers of the basic LULU smoothers, which are the building blocks of the discrete pulse transform. The output distributions of the further smoothers used in the DPT, in terms of input distributions, have remained a challenging problem. We motivate the use of these smoothers by first considering linear filters as well as the median smoother, which has been very popular in signal and image processing. We give an overview of the attractive properties of the LULU smoothers, after which we tackle their output distributions. The main result is the proof of a recursive formula for the output distribution of compositions of LULU smoothers in terms of a given input distribution.
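A minimal Python sketch of the basic smoothers, following Rohwer's standard definitions since the abstract does not reproduce them: L_n removes upward pulses of width at most n, U_n removes downward ones, and the DPT is built from their compositions.

    import numpy as np

    def _window_extrema(x, n, fn):
        # extrema over all length-(n+1) windows of x, edges padded by replication
        xp = np.concatenate([np.repeat(x[0], n), x, np.repeat(x[-1], n)])
        return fn(np.lib.stride_tricks.sliding_window_view(xp, n + 1), axis=1)

    def L(x, n):
        # (L_n x)_i = max over i-n <= j <= i of (min over j <= k <= j+n of x_k)
        mins = _window_extrema(x, n, np.min)
        return np.array([mins[i:i + n + 1].max() for i in range(len(x))])

    def U(x, n):
        # (U_n x)_i = min over i-n <= j <= i of (max over j <= k <= j+n of x_k)
        maxs = _window_extrema(x, n, np.max)
        return np.array([maxs[i:i + n + 1].min() for i in range(len(x))])

For example, with x = np.array([0, 0, 5, 0, 0, -3, 0, 0]), L(x, 1) removes the upward pulse of width 1 (returning [0, 0, 0, 0, 0, -3, 0, 0]) while U(x, 1) removes the downward one (returning [0, 0, 5, 0, 0, 0, 0, 0]).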
|
593 |
Inferência em modelos de mistura via algoritmo EM estocástico modificado / Inference on mixture models via modified stochastic EM algorithm / Assis, Raul Caram de. 02 June 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / We present the topic and theory of mixture models, reviewing theoretical aspects and interpretations of such mixtures, and develop the theory in both the maximum likelihood and the Bayesian inference contexts. We cover existing clustering methods in both contexts, with emphasis on two methods: the stochastic EM algorithm in the maximum likelihood context and the Dirichlet Process Mixture Model in the Bayesian context. We propose a new method, a modification of the stochastic EM algorithm, which can be used to estimate the parameters of a mixture while allowing solutions with a varying number of groups.
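A minimal Python sketch of one pass of the plain stochastic EM algorithm for a univariate Gaussian mixture; this is the generic SEM the thesis builds on, not the proposed modification:

    import numpy as np

    def sem_step(x, w, mu, sigma, rng):
        K = len(w)
        # E-step: posterior membership probabilities
        dens = np.stack([w[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                         / (sigma[k] * np.sqrt(2 * np.pi)) for k in range(K)])
        post = dens / dens.sum(axis=0)
        # S-step: draw one hard label per observation (what makes SEM stochastic)
        z = np.array([rng.choice(K, p=post[:, i]) for i in range(len(x))])
        # M-step: closed-form updates from the completed data
        for k in range(K):
            xk = x[z == k]
            if xk.size > 1:  # guard against emptied components
                w[k], mu[k], sigma[k] = xk.size / x.size, xk.mean(), xk.std()
        w /= w.sum()  # keep the weights normalized
        return w, mu, sigma, z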
|
594 |
Les techniques Monte Carlo par chaînes de Markov appliquées à la détermination des distributions de partons / Markov chain Monte Carlo techniques applied to parton distribution functions determination: proof of concept / Gbedo, Yémalin Gabin. 22 September 2017 (has links)
We have developed a new approach to determine parton distribution functions and quantify their experimental uncertainties, based on Markov chain Monte Carlo methods. The main interest of such a study is that the standard χ² minimization with MINUIT can be replaced by procedures grounded in statistical methods, and in Bayesian inference in particular, thus offering additional insight into the rich field of PDF determination. After reviewing these Markov chain Monte Carlo techniques, we introduce the algorithm we have chosen to implement, namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for lattice quantum chromodynamics, turns out to be very interesting when applied to parton distribution function determination by global analyses; we have shown that it circumvents the technical difficulties due to the high dimensionality of the problem, in particular concerning the acceptance rate. The feasibility study performed and presented in this thesis indicates that the Markov chain Monte Carlo method can successfully be applied to the extraction of PDFs and of their experimental uncertainties.
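Since the abstract describes the algorithm only in words, here is a minimal generic Python sketch of one Hybrid (Hamiltonian) Monte Carlo update; the thesis applies the same leapfrog-plus-Metropolis scheme to its χ²-based posterior over PDF parameters, which is not reproduced here:

    import numpy as np

    def hmc_step(theta, log_post, grad_log_post, eps, n_steps, rng):
        # theta is assumed to be a 1-D parameter array
        p = rng.normal(size=theta.shape)                # fresh auxiliary momenta
        th, pn = theta.copy(), p.copy()
        pn += 0.5 * eps * grad_log_post(th)             # half momentum step
        for _ in range(n_steps - 1):
            th += eps * pn                              # full position step
            pn += eps * grad_log_post(th)               # full momentum step
        th += eps * pn
        pn += 0.5 * eps * grad_log_post(th)             # closing half step
        # Metropolis test on the Hamiltonian keeps the target exact
        log_acc = (log_post(th) - 0.5 * pn @ pn) - (log_post(theta) - 0.5 * p @ p)
        return th if np.log(rng.uniform()) < log_acc else theta

The fresh momenta and the joint Hamiltonian acceptance test are what keep the acceptance rate high in large dimension, the property the abstract highlights.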
|
595 |
Estudo computacional de líquidos iônicos do tipo imidazólio com substituintes insaturados / Computational study of imidazolium tetrafluorborates ionic liquids with unsaturated side chains / Böes, Elvis Sidnei. January 2012 (has links)
Computational quantum chemistry methods were used to study the molecular structures and interaction energies of the cations and anions that compose some functionalized ionic liquids derived from imidazolium. The objective of the study was to compare and relate the effects of functionalization of the substituents at positions 1 and 3 of the imidazolium cation on the properties of these ionic liquids. Such functionalization can occur through the presence of unsaturations, aromatic groups, ethers, alcohols, thiols, amines, nitriles and other groups in the side chains. This thesis reports studies of the complexes formed by tetrafluoroborate anions and imidazolium cations bearing methyl, ethyl, propyl, butyl, isobutyl, vinyl, propargyl, allyl, crotyl and methallyl side chains, thereby assessing the effect of unsaturated side chains, compared with saturated ones, on the structures, charge distributions, interaction energies and physicochemical properties of these systems. Strong polarization and anion-cation charge-transfer effects were observed in these systems. Several relations were found between ionic volumes, interaction energies of the ions and the transport properties of the respective ionic liquids.
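The abstract does not give the working equations; a common supramolecular definition of the ion-pair interaction energy consistent with this description is, in LaTeX form,

    \Delta E_{\mathrm{int}} = E_{\mathrm{complex}} - \left( E_{\mathrm{cation}} + E_{\mathrm{anion}} \right),

usually corrected for basis-set superposition error (e.g. by the counterpoise scheme) when computed with finite basis sets.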
|
596 |
Characterizing and modeling visual persistence, search strategies and fixation times / Amor, Tatiana María Alonso. January 2017 (has links)
AMOR, T. M. A. Characterizing and modeling visual persistence, search strategies and fixation times. 2017. 114 f. Tese (Doutorado em Física) – Centro de Ciências, Universidade Federal do Ceará, Fortaleza, 2017.
To gather information from the world around us, we move our eyes constantly. On different occasions we find ourselves performing visual searches, such as trying to find someone in a crowd or a book on a shelf. While searching, our eyes "jump" from one location to another, giving rise to a wide repertoire of patterns exhibiting distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we show that the probability distributions of these measures reveal a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) on the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent H ∼ 0.7, which indicates the presence of long-range power-law positive correlations (statistical persistence). Motivated by the experimental findings from the study of the distribution of intersaccadic angles, we developed a simple visual search model that quantifies the wide variety of possible search strategies. From our experiments we know that when searching for a target within an image our brain can adopt different strategies; the question, then, is which one it chooses. We present a simple two-parameter visual search model (VSM) based on a persistent random walk and the experimental intersaccadic angle distribution. The model captures the basic observed visual search strategies, which range from systematic or reading-like to completely random. We compare the results of the model to the experimental data by measuring the space-filling efficiency of the searches. Within the parameter space of the model, we are able to quantify the strategies used by different individuals in three search tasks and show how the average search strategy changes across these three groups. Even though participants tend to explore a vast range of parameters, when all the items are placed on a regular lattice participants are more likely to perform a systematic search, whereas in a more complex field the search trajectories resemble a random walk. In this way we can discern with high sensitivity the relation between the visual landscape and the average strategy, disclosing how small variations in the image induce strategy changes. Finally, we move beyond visual search and study fixation time distributions across different visual tasks. Fixation times are commonly associated with cognitive processing, as it is in these intervals that most of the visual information is gathered. However, the distribution of fixation durations exhibits certain similarities across a wide range of visual tasks and foveated species. We studied how similar these distributions are and found that, even though they share some common properties, such as similar mean values, most of them are statistically different. Because fixation durations can be controlled by two different mechanisms, cognitive and ocular, we focus our research on finding a model for the fixation time distribution flexible enough to capture the behaviors observed in experiments that tested these concepts. At the same time, the candidate function for modeling the distribution needs to be the response of some very robust inner mechanism present in all the aforementioned scenarios. Hence, we discuss a model based on microsaccadic inter-event-time statistics, resulting in a sum of Gamma distributions, each related to the presence of a distinct number of microsaccades in a fixation.
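A minimal Python sketch of this closing idea, with illustrative (not fitted) parameter values: the fixation-duration density as a sum of Gamma components, one per number of microsaccades in a fixation. The weights, shapes and scale below are assumptions for illustration only.

    import numpy as np
    from scipy import stats

    weights = np.array([0.5, 0.3, 0.2])   # assumed P(k microsaccades), k = 1, 2, 3
    shapes = np.array([2.0, 4.0, 6.0])    # assumed Gamma shape for each k
    scale = 0.05                          # assumed common scale, in seconds

    def fixation_density(t):
        # mixture-of-Gammas density for fixation durations
        return sum(w * stats.gamma.pdf(t, a, scale=scale)
                   for w, a in zip(weights, shapes))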
|
597 |
Statistical modeling and processing of high frequency ultrasound images: application to dermatologic oncology / Modélisation et traitement statistiques d'images d'ultrasons de haute fréquence. Application à l'oncologie dermatologique. / Pereyra, Marcelo. 04 July 2012 (has links)
This thesis studies statistical image processing of high frequency ultrasound imaging, with application to in-vivo exploration of human skin and noninvasive lesion assessment. More precisely, Bayesian methods are considered in order to perform tissue segmentation in ultrasound images of skin. It is established that ultrasound signals backscattered from skin tissues converge to a complex Lévy flight random process with non-Gaussian α-stable statistics. The envelope signal follows a generalized (heavy-tailed) Rayleigh distribution. Based on these results, it is proposed to model the distribution of multiple-tissue ultrasound images as a spatially coherent finite mixture of heavy-tailed Rayleigh distributions. Spatial coherence inherent to biological tissues is modeled by a Potts Markov random field. An original Bayesian algorithm combined with a Markov chain Monte Carlo (MCMC) method is then proposed to jointly estimate the mixture parameters and a label vector associating each voxel with a tissue. The proposed method is successfully applied to the segmentation of in-vivo skin tumors in high frequency 2D and 3D ultrasound images. This method is subsequently extended by including the estimation of the Potts regularization parameter β within the MCMC algorithm. Standard MCMC methods cannot be applied to this problem because the likelihood of β is intractable. This difficulty is addressed by using a likelihood-free Metropolis-Hastings algorithm based on the sufficient statistic of the Potts model. The resulting unsupervised segmentation method is successfully applied to three-dimensional ultrasound images. Finally, the problem of computing the Cramér-Rao bound (CRB) of β is studied. The CRB depends on the derivatives of the intractable normalizing constant of the Potts model. This is resolved by proposing an original Monte Carlo algorithm, which is successfully applied to compute the CRB of the Ising and Potts models.
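A minimal Python sketch of a likelihood-free (exchange-type) Metropolis-Hastings update for β, assuming a flat prior and an exact-sampling routine for the auxiliary Potts field (in practice a long Gibbs run approximates the exact draw); this is a generic sketch, not the thesis' exact implementation:

    import numpy as np

    def potts_stat(z):
        # sufficient statistic: number of equal-label neighbour pairs on a 2-D grid
        return np.sum(z[1:, :] == z[:-1, :]) + np.sum(z[:, 1:] == z[:, :-1])

    def update_beta(beta, z_obs, sample_potts, step, rng):
        beta_prop = abs(beta + step * rng.normal())     # reflected random walk
        z_aux = sample_potts(beta_prop, z_obs.shape)    # auxiliary field at beta_prop
        # the intractable normalizing constants cancel in this ratio:
        log_alpha = (beta_prop - beta) * (potts_stat(z_obs) - potts_stat(z_aux))
        return beta_prop if np.log(rng.uniform()) < log_alpha else beta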
|
598 |
Comparação da capacidade preditiva de modelos heterocedásticos através da estimação do value-at-risk / Predictive ability comparison of heteroskedastic models by estimating the value-at-risk / Amaro, Raphael Silveira. 22 July 2016 (has links)
In an increasingly competitive economic environment, as in the current global context, risk management becomes essential for the survival of companies and investment portfolio managers. Both need a model able to quantify the risks inherent in their investments as well as possible, in order to guide decision making towards the highest expected return. Currently there are many heterogeneous models that seek to quantify risk, making the choice of a particular model quite complex. In order to confront candidate models and find those that quantify risk efficiently, the objective of this research is to compare the predictive ability of five conditional heteroskedasticity models through the estimation of Value-at-Risk, considering eight different statistical probability distributions, for the financial index series of the capital markets of the five largest emerging countries: Brazil, Russia, India, China and South Africa, over the period from February 26, 2001 to December 31, 2015. To this end, Value-at-Risk forecasts were produced for 50 steps ahead for all competing models, with the parameters re-estimated at every step. Once all the forecasts had been computed, the predictive ability of the competing models was compared by means of several loss functions. The evidence suggests that the heteroskedastic Component GARCH model is preferable to all other competing models for forecasting Value-at-Risk, although the statistical probability distribution this model uses strongly influences the forecasts it produces. The data for each financial index studied proved to fit a different type of probability density function, with no distribution emerging as superior to all others. Thus, the results do not provide a single, ideal tool to be used for risk measurement across all the capital markets of the emerging countries studied; they only provide specific tools to be used for each financial index individually. The results can be used for the purposes described above, or to build statistical formulas that combine the different estimated models in order to obtain better volatility forecasts and thereby measure market risk more precisely.
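A minimal Python sketch of the rolling forecast exercise for one index, assuming the third-party `arch` package and with a GARCH(1,1)-Student-t model standing in for the five models compared in the thesis; window length and quantile level are illustrative:

    import numpy as np
    from arch import arch_model
    from scipy import stats

    def rolling_var(returns, alpha=0.05, window=1000, steps=50):
        # re-fit at every step, then turn the one-step-ahead mean and
        # volatility forecasts into a Value-at-Risk estimate
        out = []
        for i in range(steps):
            res = arch_model(returns[i:window + i], vol="Garch", p=1, q=1,
                             dist="t").fit(disp="off")
            f = res.forecast(horizon=1)
            mu = f.mean.values[-1, 0]
            sigma = np.sqrt(f.variance.values[-1, 0])
            nu = res.params["nu"]
            q = stats.t.ppf(alpha, nu) * np.sqrt((nu - 2) / nu)  # standardized-t quantile
            out.append(-(mu + sigma * q))   # VaR reported as a positive loss
        return np.array(out)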
|
599 |
Hipoelipticidade global de campos vetoriais no toro TN / Global hypoellipticity of vector fields on the torus TN / Nascimento, Moisés Aparecido do. 21 June 2010 (has links)
In this work we show that if the transpose of a smooth real vector field L defined on the N-dimensional torus, regarded as a linear differential operator with coefficients in C∞(TN), is globally hypoelliptic, then there exists a constant-coefficient vector field L0 such that L and L0 are C∞-conjugate, with the constants satisfying a condition called Diophantine (*). We also show the converse: if there is a coordinate system in which L has constant coefficients satisfying the Diophantine condition (*), then its transpose L* is globally hypoelliptic. We will see that the Diophantine condition implies that the flow generated by the field, regarded as a dynamical system, is minimal.
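The abstract does not state condition (*) explicitly. For a constant-coefficient field L_0 = \sum_j a_j \partial_{x_j} on T^N, the standard Diophantine (non-Liouville) condition used in this setting reads, in LaTeX form,

    \exists\, C, \gamma > 0 \ \text{such that}\quad \Bigl|\sum_{j=1}^{N} a_j k_j\Bigr| \;\geq\; \frac{C}{|k|^{\gamma}} \quad \text{for all } k \in \mathbb{Z}^N \setminus \{0\},

which prevents the Fourier symbol i\,a \cdot k of L_0 from approaching zero too fast, so the formal Fourier solution of the transposed equation converges in C^\infty.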
|
600 |
Inferência em modelos de mistura via algoritmo EM estocástico modificado / Inference on Mixture Models via Modified Stochastic EM / Raul Caram de Assis. 02 June 2017 (has links)
We present the topic and theory of mixture models in a context of maximum likelihood and Bayesian inference. We cover clustering methods in both contexts, with emphasis on the stochastic EM algorithm and the Dirichlet Process Mixture Model. We propose a new method, a modified stochastic EM algorithm, which can be used to estimate the parameters of a mixture model and the number of components.
|