121

Development of stopping rule methods for the MLEM and OSEM algorithms used in PET image reconstruction / Ανάπτυξη κριτηρίων παύσης των αλγορίθμων MLEM και OSEM που χρησιμοποιούνται στην ανακατασκευή εικόνας σε PET

Γαϊτάνης, Αναστάσιος 11 January 2011 (has links)
The aim of this Thesis is the development of stopping rule methods for the MLEM and OSEM algorithms used in image reconstruction for positron emission tomography (PET). The development of the stopping rules is based on the study of the properties of both algorithms. Analyzing their mathematical expressions, it can be observed that the pixel updating coefficients (PUC) play a key role in updating the reconstructed image from iteration k to k+1. For the analysis of the properties of the PUC, a PET scanner geometry was simulated using Monte Carlo methods. For image reconstruction using iterative techniques, the calculation of the transition matrix is essential; it depends fully on the geometrical characteristics of the PET scanner. The MLEM and OSEM algorithms were used to reconstruct the projection data. In order to compare the reconstructed and true images, two figures of merit (FOM) were used: a) the Normalized Root Mean Square Deviation (NRMSD) and b) the chi-square (χ2). The behaviour of the PUC values for zero and non-zero pixels in the phantom image was analyzed, and it was found to differ between the two types of pixels. Based on this observation, the vector of C values was analyzed for all non-zero pixels of the reconstructed image, and it was found that the histograms of the PUC values have two components: one component around C(i)=1.0 and a tail component for values C(i)<1.0. In this way, a variable Cmin(k) = min{C(i), i = 1, ..., I : pixel i is non-zero} has been defined, where I is the total number of pixels in the image and k is the iteration number; Cmin(k) is the minimum value of the vector of pixel updating coefficients among the non-zero pixels of the reconstructed image at iteration k. Further work was performed to determine the dependence of Cmin on the image characteristics, image topology and activity level. The analysis shows that the parameterization of Cmin is reliable and allows the establishment of a robust stopping rule for the MLEM algorithm. Furthermore, following a different approach, a new stopping rule using the log-likelihood properties of the MLEM algorithm has been developed. The two rules were evaluated using the independent Digimouse phantom. The study revealed that both stopping rules produce reconstructed images with similar properties. The same study was performed for the OSEM algorithm, and a stopping rule dedicated to each number of subsets was developed. / Σκοπός της διατριβής είναι η ανάπτυξη κριτηρίων παύσης για τους επαναληπτικούς αλγόριθμους (MLEM και OSEM) που χρησιμοποιούνται στην ανακατασκευή ιατρικής εικόνας στους τομογράφους εκπομπής ποζιτρονίου (PET). Η ανάπτυξη των κριτηρίων παύσης βασίστηκε στη μελέτη των ιδιοτήτων των αλγόριθμων MLEM & OSEM. Απο τη μαθηματική έκφραση των δύο αλγορίθμων προκύπτει ότι οι συντελεστές αναβάθμισης (ΣΑ) των pixels της εικόνας παίζουν σημαντικό ρόλο στην ανακατασκευή της απο επανάληψη σε επανάληψη. Για την ανάλυση ένας τομογράφος PET προσομοιώθηκε με τη χρήση των μεθόδων Μόντε Κάρλο. Για την ανακατασκευή της εικόνας με τη χρήση των αλγόριθμων MLEM και OSEM, υπολογίστηκε ο πίνακας μετάβασης. Ο πίνακας μετάβασης εξαρτάται απο τα γεωμετρικά χαρακτηριστικά του τομογράφου PET και για τον υπολογισμό του χρησιμοποιήθηκαν επίσης μέθοδοι Μόντε Κάρλο. Ως ψηφιακά ομοιώματα χρησιμοποιήθηκαν το ομοίωμα εγκεφάλου Hoffman και το 4D MOBY. Για κάθε ένα απο τα ομοιώματα δημιουργήθηκαν προβολικά δεδομένα σε διαφορετικές ενεργότητες.
Για τη σύγκριση της ανακατασκευασμένης και της αρχικής εικόνας χρησιμοποιήθηκαν δύο ξεχωριστοί δείκτες ποιότητας, το NRMSD και το chi square. Η ανάλυση έδειξε ότι οι ΣΑ για τα μη μηδενικά pixels της εικόνας τείνουν να λάβουν την τιμή 1.0 με την αύξηση των επαναλήψεων, ενώ για τα μηδενικά pixels αυτό δε συμβαίνει. Αναλύοντας περισσότερο το διάνυσμα των ΣΑ για τα μη μηδενικά pixels της ανακατασκευασμένης εικόνας διαπιστώθηκε ότι αυτό έχει δύο μέρη: α) Μια κορυφή για τιμές των ΣΑ = 1.0 και β) μια ουρά με τιμές των ΣΑ<1.0. Αυξάνοντας τις επαναλήψεις, ο αριθμός των pixels με ΣΑ=1.0 αυξάνονταν ενώ ταυτόχρονα η ελάχιστη τιμή του διανύσματος των ΣΑ μετακινούνταν προς το 1.0. Με αυτό τον τρόπο προσδιορίστηκε μια μεταβλητή της μορφής Cmin(k), όπου N είναι ο αριθμός των pixels της εικόνας, k η επανάληψη και Cmin(k) η ελάχιστη τιμή του διανύσματος των ΣΑ στα μη μηδενικά pixels. Η ανάλυση που έγινε έδειξε ότι η μεταβλητή Cmin συσχετίζεται μόνο με την ενεργότητα της εικόνας και όχι με το είδος ή το μέγεθός της. Η παραμετροποίηση αυτής της σχέσης οδήγησε στην ανάπτυξη του κριτηρίου παύσης για τον MLEM αλγόριθμο. Μια άλλη προσέγγιση βασισμένη στις ιδιότητες πιθανοφάνειας του MLEM αλγόριθμου, οδήγησε στην ανάπτυξη ενός διαφορετικού κριτηρίου παύσης του MLEM. Τα δύο κριτήρια αποτιμήθηκαν με τη χρήση του ομοιώματος Digimouse και βρέθηκε να παράγουν παρόμοιες εικόνες. Η ίδια μελέτη έγινε και για τον OSEM αλγόριθμο και αναπτύχθηκε κριτήριο παύσης για διαφορετικό αριθμό subsets.
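The abstract above defines the pixel updating coefficients (PUC) of MLEM and a quantity Cmin, the smallest coefficient over the non-zero pixels, which drifts towards 1.0 as the iterations proceed. The sketch below is a minimal NumPy illustration of that mechanism; the function name, the uniform initialization and the fixed tolerance `tol` are illustrative assumptions, not the thesis' actual parameterized rule (which relates Cmin to the image activity level).

```python
import numpy as np

def mlem_with_cmin_stop(A, p, max_iter=200, tol=1e-3):
    """Illustrative MLEM reconstruction with a C_min-based stopping rule.

    A   -- (n_bins, n_pixels) system (transition) matrix
    p   -- (n_bins,) measured projection data
    tol -- hypothetical threshold: stop when the smallest pixel updating
           coefficient among non-zero pixels is within tol of 1.0
    """
    A = np.asarray(A, dtype=float)
    p = np.asarray(p, dtype=float)
    f = np.ones(A.shape[1])                  # uniform initial image
    sens = np.maximum(A.sum(axis=0), 1e-12)  # sensitivity image, sum_i a_ij
    for k in range(1, max_iter + 1):
        proj = A @ f                          # forward projection
        ratio = np.divide(p, proj, out=np.zeros_like(p), where=proj > 0)
        C = (A.T @ ratio) / sens              # pixel updating coefficients
        f = f * C                             # multiplicative MLEM update
        nonzero = f > 0
        c_min = C[nonzero].min() if nonzero.any() else 1.0
        if abs(1.0 - c_min) < tol:            # C_min has drifted up to ~1.0
            break
    return f, k
```

An OSEM variant would apply the same multiplicative update over ordered subsets of the projection bins and, as the abstract indicates, would need a stopping threshold tuned to each number of subsets.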
122

Análise espacial do potencial fotovoltaico em telhados de residências usando modelagem hierárquica bayesiana / Análisis espacial del potencial fotovoltaico en tejados de residencias usando modelamiento jerárquico bayesiano

Villavicencio Gastelu, Joel [UNESP] 01 March 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / No presente trabalho tem-se como objetivo estimar o potencial fotovoltaico devido à instalação de sistemas fotovoltaicos em telhados de áreas residenciais. Na estimação desse potencial foram consideradas quatro grandezas: o nível de irradiação solar, a área aproveitável de telhado para a instalação dos sistemas fotovoltaicos, a eficiência de conversão dos sistemas fotovoltaicos e as probabilidades de instalação dos sistemas fotovoltaicos, que caracterizam as preferências dos habitantes à instalação desses sistemas. Um modelo hierárquico bayesiano foi proposto para o cálculo das probabilidades de instalação dos sistemas fotovoltaicos. Nesse modelo bayesiano é estabelecida uma relação entre as probabilidades de instalação, as variáveis socioeconômicas e as interações entre as subáreas, através de um modelo linear generalizado misto. O cálculo do valor esperado das probabilidades de instalação foi realizado usando o método de Monte Carlo via cadeias de Markov. Os resultados do potencial fotovoltaico são apresentados através de mapas temáticos, que permitem a visualização da distribuição espacial do seu valor esperado. Esta informação pode ajudar as concessionárias de distribuição no planejamento e expansão de suas redes elétricas em regiões com maior potencial de geração fotovoltaica. / The present work aims to estimate the photovoltaic potential of installing solar panels on the rooftops of residential areas.
The estimation of this potential considers four quantities: the solar radiation level, the rooftop area available for the installation of photovoltaic systems, the conversion efficiency of the photovoltaic systems, and the installation probabilities of photovoltaic systems, which characterize the inhabitants' preferences towards installing such systems. A Bayesian hierarchical model is proposed to calculate the installation probabilities of photovoltaic systems. This Bayesian model establishes a relation among the installation probabilities, socioeconomic variables and interactions between subareas through a generalized linear mixed model. The calculation of the expected value of the installation probabilities in each subarea is performed using the Markov chain Monte Carlo method. Photovoltaic potential results are presented through thematic maps that allow visualization of the spatial distribution of its expected value. This information can help distribution utilities plan and expand their networks in regions with the greatest potential for photovoltaic generation.
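The abstract computes expected installation probabilities with MCMC under a generalized linear mixed model. The sketch below is a deliberately reduced stand-in: a random-walk Metropolis sampler for a plain logit model of installation counts against socioeconomic covariates, with the spatial (subarea-interaction) random effects omitted. All names (`X`, `y`, `n`, `step`) are illustrative assumptions, not the thesis' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(beta, X, y, n, prior_sd=10.0):
    """Log-posterior of a simplified logit model: y_s ~ Binomial(n_s, p_s)
    with logit(p_s) = X_s @ beta; vague Gaussian prior on beta."""
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    loglik = np.sum(y * np.log(p + 1e-12) + (n - y) * np.log(1.0 - p + 1e-12))
    logprior = -0.5 * np.sum((beta / prior_sd) ** 2)
    return loglik + logprior

def metropolis(X, y, n, n_samples=5000, step=0.05):
    """Random-walk Metropolis sampler for the regression coefficients."""
    beta = np.zeros(X.shape[1])
    lp = log_posterior(beta, X, y, n)
    chain = np.empty((n_samples, beta.size))
    for i in range(n_samples):
        proposal = beta + step * rng.standard_normal(beta.size)
        lp_prop = log_posterior(proposal, X, y, n)
        if np.log(rng.random()) < lp_prop - lp:     # accept/reject step
            beta, lp = proposal, lp_prop
        chain[i] = beta
    return chain
```

Averaging the sigmoid of `X_s @ beta` over the chain gives a posterior-mean installation probability per subarea, the quantity that feeds the expected-potential maps described in the abstract.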
123

Méthodes conjointes de détection et suivi basé-modèle de cibles distribuées par filtrage non-linéaire dans les données lidar à balayage / Joint detection and model-based tracking methods of extended targets in scanning laser rangefinder data using non-linear filtering techniques

Fortin, Benoît 22 November 2013 (has links)
Dans les systèmes de perception multicapteurs, un point central concerne le suivi d'objets multiples. Dans mes travaux de thèse, le capteur principal est un télémètre laser à balayage qui perçoit des cibles étendues. Le problème de suivi multi-objets se décompose généralement en plusieurs étapes (détection, association et suivi) réalisées de manière séquentielle ou conjointe. Mes travaux ont permis de proposer des alternatives à ces méthodes en adoptant une approche "track-before-detect" sur cibles distribuées qui permet d'éviter la succession des traitements en proposant un cadre global de résolution de ce problème d'estimation. Dans une première partie, nous proposons une méthode de détection travaillant directement en coordonnées naturelles (polaires) qui exploite les propriétés d'invariance géométrique des objets suivis. Cette solution est ensuite intégrée dans le cadre des approches JPDA et PHD de suivi multicibles résolues grâce aux méthodes de Monte-Carlo séquentielles. La seconde partie du manuscrit vise à s'affranchir du détecteur pour proposer une méthode dans laquelle le modèle d'objet est directement intégré au processus de suivi. C'est sur ce point clé que les avancées ont été les plus significatives permettant d'aboutir à une méthode conjointe de détection et de suivi. Un processus d'agrégation a été développé afin de permettre une formalisation des données qui évite tout prétraitement sous-optimal. Nous avons finalement proposé un formalisme général pour les systèmes multicapteurs (multilidar, centrale inertielle, GPS). D'un point de vue applicatif, ces travaux ont été validés dans le domaine du suivi de véhicules pour les systèmes d'aide à la conduite. / In multi-sensor perception systems, an active topic concerns multiple object tracking methods. In this work, the main sensor is a scanning laser rangefinder perceiving extended targets. Tracking methods are generally composed of a three-step scheme (detection, association and tracking) which is jointly or sequentially implemented. This work proposes alternative solutions by considering a track-before-detect approach on extended targets. It avoids the classic procedures by proposing a global framework to solve this estimation problem. Firstly, we propose a detection method dealing with measurements in natural (polar) coordinates which is founded on geometrical invariance properties of the tracked objects. This solution is then integrated into the JPDA and PHD multi-target tracking frameworks solved with sequential Monte Carlo methods. The second part of this thesis aims at avoiding the detection step to propose an approach where the object model is directly embedded in the tracking process. This leads to a novel joint detection and tracking approach. An aggregation process was developed to construct a measurement model that avoids any suboptimal preprocessing. We finally proposed a general framework for multi-sensor systems (multiple lidars, inertial sensor, GPS). These methods were applied in the area of multiple vehicle tracking for Advanced Driver Assistance Systems.
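The abstract relies on sequential Monte Carlo (particle) filtering to solve the JPDA/PHD and track-before-detect formulations. The skeleton below is a generic bootstrap (SIR) particle filter, not the thesis' extended-target tracker; the motion model, measurement likelihood and prior are callables supplied by the user, and the commented usage shows a toy 1D random-walk target.

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_particle_filter(z_seq, n_particles, propagate, likelihood, sample_prior):
    """Generic bootstrap (SIR) particle filter skeleton.

    propagate(x)     -- samples x_k from the motion model given x_{k-1}
    likelihood(z, x) -- evaluates p(z_k | x_k) for every particle, e.g. an
                        extended-target measurement model over a lidar scan
    sample_prior(n)  -- draws the initial particle set, shape (n, dim)
    """
    particles = sample_prior(n_particles)
    estimates = []
    for z in z_seq:
        particles = propagate(particles)                        # prediction
        w = likelihood(z, particles)                            # weighting
        w = w / w.sum()
        estimates.append((w[:, None] * particles).sum(axis=0))  # MMSE estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)    # resampling
        particles = particles[idx]
    return np.array(estimates)

# Example with a 1D random-walk target and Gaussian measurement noise:
# est = sir_particle_filter(z_seq, 500,
#         propagate=lambda x: x + rng.normal(0, 0.5, x.shape),
#         likelihood=lambda z, x: np.exp(-0.5 * ((z - x[:, 0]) / 1.0) ** 2),
#         sample_prior=lambda n: rng.normal(0, 5, (n, 1)))
```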
124

Quantifying the impact of contact tracing on Ebola spreading

Montazeri Shahtori, Narges January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Faryad Darabi Sahneh / The recent experience of the 2014 Ebola outbreak highlighted the importance of immediate response to impede Ebola transmission at its very early stage. To this aim, efficient and effective allocation of limited resources is crucial. Among standard interventions is the practice of following up with the physical contacts of individuals diagnosed with Ebola virus disease, known as contact tracing. In an effort to objectively understand the effect of possible contact tracing protocols, we explicitly develop a model of Ebola transmission incorporating contact tracing. Our modeling framework has several features to suit early-stage Ebola transmission: 1) the network model is patient-centric, because when the number of infected cases is small, only the myopic networks of infected individuals matter and the rest of the possible social contacts are irrelevant; 2) the Ebola disease model is individual-based and stochastic, because at the early stages of spread, random fluctuations are significant and must be captured appropriately; 3) the contact tracing model is parameterizable to analyze the impact of critical aspects of contact tracing protocols. Notably, we propose an activity-driven network approach to contact tracing, and develop a Monte Carlo method to compute the basic reproductive number of the disease spread in different scenarios. Exhaustive simulation experiments suggest that while contact tracing is important in stopping the Ebola spread, it does not need to be done too urgently. This result is due to the rather long incubation period of Ebola virus disease. However, immediate hospitalization of infected cases is crucial and requires the most attention and resource allocation. Moreover, to investigate the impact of mitigation strategies in the 2014 Ebola outbreak, we consider reported data in Guinea, one of the three West African countries that experienced the Ebola virus disease outbreak. We formulate a multivariate sequential Monte Carlo filter that utilizes mechanistic models for Ebola virus propagation to simultaneously estimate the disease progression states and the model parameters according to reported incidence data streams. This method has the advantage of performing the inference online as new data becomes available and of estimating the evolution of the basic reproductive ratio R₀(t) throughout the Ebola outbreak. Our analysis identifies a peak in the basic reproductive ratio close to the time of Ebola case reports in Europe and the USA.
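The abstract compares the urgency of contact tracing against that of hospitalization via Monte Carlo estimates of the basic reproductive number. The sketch below illustrates only that comparison with a crude branching-process calculation: a case infects others at a constant rate until it is isolated, either through hospitalization or through tracing, whichever comes first. All rates, delays and distributional choices are illustrative assumptions, not the calibrated activity-driven network model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_r0(beta, mean_hosp_delay, p_traced, mean_trace_delay, n_runs=100_000):
    """Crude Monte Carlo estimate of the mean number of secondary cases
    generated by one infectious individual.

    beta             -- transmission rate while infectious in the community
    mean_hosp_delay  -- mean delay from infectiousness to hospitalization
    p_traced         -- probability the case is reached by contact tracing
    mean_trace_delay -- mean delay until a traced case is isolated
    """
    secondary = np.empty(n_runs)
    for i in range(n_runs):
        t_removed = rng.exponential(mean_hosp_delay)          # isolated at hospital
        if rng.random() < p_traced:                           # possibly found earlier
            t_removed = min(t_removed, rng.exponential(mean_trace_delay))
        secondary[i] = rng.poisson(beta * t_removed)          # Poisson infections
    return secondary.mean()

# Comparing scenarios, e.g. faster hospitalization vs. faster tracing:
# estimate_r0(0.3, mean_hosp_delay=5.0, p_traced=0.5, mean_trace_delay=7.0)
```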
125

Análise espacial do potencial fotovoltaico em telhados de residências usando modelagem hierárquica bayesiana /

Villavicencio Gastelu, Joel January 2016 (has links)
Orientador: Antônio Padilha Feltrin / Resumo: No presente trabalho tem-se como objetivo estimar o potencial fotovoltaico devido à instalação de sistemas fotovoltaicos em telhados de áreas residenciais. Na estimação desse potencial foram consideradas quatro grandezas: o nível de irradiação solar, a área aproveitável de telhado para a instalação dos sistemas fotovoltaicos, a eficiência de conversão dos sistemas fotovoltaicos e as probabilidades de instalação dos sistemas fotovoltaicos, que caracterizam as preferências dos habitantes à instalação desses sistemas. Um modelo hierárquico bayesiano foi proposto para o cálculo das probabilidades de instalação dos sistemas fotovoltaicos. Nesse modelo bayesiano é estabelecida uma relação entre as probabilidades de instalação, as variáveis socioeconômicas e as interações entre as subáreas, através de um modelo linear generalizado misto. O cálculo do valor esperado das probabilidades de instalação foi realizado usando o método de Monte Carlo via cadeias de Markov. Os resultados do potencial fotovoltaico são apresentados através de mapas temáticos, que permitem a visualização da distribuição espacial do seu valor esperado. Esta informação pode ajudar as concessionárias de distribuição no planejamento e expansão de suas redes elétricas em regiões com maior potencial de geração fotovoltaica. / Abstract: The present work aims to estimate the photovoltaic potential of installing solar panels on the rooftops of residential areas. The estimation of this potential considers four quantities: the solar radiation level, the rooftop area available for the installation of photovoltaic systems, the conversion efficiency of the photovoltaic systems, and the installation probabilities of photovoltaic systems, which characterize the inhabitants' preferences towards installing such systems. A Bayesian hierarchical model is proposed to calculate the installation probabilities of photovoltaic systems. This Bayesian model establishes a relation among the installation probabilities, socioeconomic variables and interactions between subareas through a generalized linear mixed model. The calculation of the expected value of the installation probabilities in each subarea is performed using the Markov chain Monte Carlo method. Photovoltaic potential results are presented through thematic maps that allow visualization of the spatial distribution of its expected value. This information can help distribution utilities plan and expand their networks in regions with the greatest potential for photovoltaic generation. / Mestre
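This record repeats the abstract shown earlier, so instead of another sampler, the helper below illustrates the complementary aggregation step: combining the four quantities named in the abstract into an expected potential per subarea. The simple product form and all argument names are illustrative assumptions.

```python
import numpy as np

def expected_pv_potential(irradiation, roof_area, efficiency, p_install_samples):
    """Expected photovoltaic potential per subarea.

    irradiation       -- (n_subareas,) solar irradiation per subarea
    roof_area         -- (n_subareas,) usable rooftop area per subarea
    efficiency        -- scalar conversion efficiency of the PV systems
    p_install_samples -- (n_mcmc, n_subareas) posterior draws of the
                         installation probabilities (e.g. from an MCMC chain)
    """
    p_expected = p_install_samples.mean(axis=0)        # posterior mean per subarea
    return irradiation * roof_area * efficiency * p_expected
```

The resulting vector is what a thematic map of the spatial distribution of the expected potential would display.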
126

Simulações numéricas de Monte Carlo aplicadas no estudo das transições de fase do modelo de Ising dipolar bidimensional / Numerical Monte Carlo simulations applied to the study of phase transitions in the two-dimensional dipolar Ising model

Leandro Gutierrez Rizzi 24 April 2009 (has links)
O modelo de Ising dipolar bidimensional inclui, além da interação ferromagnética entre os primeiros vizinhos, interações de longo alcance entre os momentos de dipolo magnético dos spins. A presença da interação dipolar muda completamente o sistema, apresentando um rico diagrama de fase, cujas características têm originado inúmeros estudos na literatura. Além disso, a possibilidade de explicar fenômenos observados em filmes magnéticos ultrafinos, os quais possuem diversas aplicações em áreas tecnológicas, também motiva o estudo deste modelo. O estado fundamental ferromagnético do modelo de Ising puro é alterado para uma série de fases do tipo faixas, as quais consistem em domínios ferromagnéticos de largura $h$ com magnetizações opostas. A largura das faixas depende da razão $\delta$ das intensidades dos acoplamentos ferromagnético e dipolar. Através de simulações de Monte Carlo e técnicas de repesagem em histogramas múltiplos identificamos as temperaturas críticas de tamanho finito para as transições de fase quando $\delta=2$, o que corresponde a $h=2$. Calculamos o calor específico e a susceptibilidade do parâmetro de ordem, no intervalo de temperaturas onde as transições são observadas, para diferentes tamanhos de rede. As técnicas de repesagem permitem-nos explorar e identificar máximos distintos nessas funções da temperatura e, desse modo, estimar as temperaturas críticas de tamanho finito com grande precisão. Apresentamos evidências numéricas da existência de uma fase nemática de Ising para tamanhos grandes de rede. Em nossas simulações, observamos esta fase para tamanhos de rede a partir de $L=48$. Para verificar o quanto a interação dipolar de longo alcance afeta as estimativas físicas, nós calculamos o tempo de autocorrelação integrado nas séries temporais da energia. Inferimos daí quão severo é o critical slowing down (decaimento lento crítico) para esse sistema próximo às transições de fase termodinâmicas. Os resultados obtidos utilizando um algoritmo de atualização local foram comparados com os resultados obtidos utilizando o algoritmo multicanônico. / The two-dimensional spin model with nearest-neighbor ferromagnetic interaction and long-range dipolar interactions exhibits a rich phase diagram, whose characteristics have motivated several studies in the recent literature. Furthermore, the possibility of explaining observed phenomena in ultrathin magnetic films, which have many technological applications, also motivates the study of this model. The presence of the dipolar interaction term changes the ferromagnetic ground state expected for the pure Ising model to a series of striped phases, which consist of ferromagnetic domains of width $h$ with opposite magnetization. The width of the stripes depends on the ratio $\delta$ of the ferromagnetic and dipolar couplings. Monte Carlo simulations and multiple-histogram reweighting techniques allow us to identify the finite-size critical temperatures of the phase transitions when $\delta=2$, which corresponds to $h=2$. We calculate, for different lattice sizes, the specific heat and the susceptibility of the order parameter around the transition temperatures by means of reweighting techniques. This allows us to identify in these observables, as functions of temperature, the distinct maxima and thereby to estimate the finite-size critical temperatures with high precision. We present numerical evidence of the existence of an Ising nematic phase for large lattice sizes.
Our results show that simulations need to be performed for lattice sizes at least as large as $L=48$ to clearly observe the Ising nematic phase. To assess how the long-range dipolar interaction may affect physical estimates, we also evaluate the integrated autocorrelation time in the energy time series. This allows us to infer how severe the critical slowing down is for this system with long-range interactions close to the thermodynamic phase transitions. The results obtained using a local update algorithm are compared with results obtained using the multicanonical algorithm.
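The abstract compares a local-update algorithm with a multicanonical one for the dipolar Ising model. The sketch below shows a bare-bones local Metropolis update under one common convention for the Hamiltonian, H = -delta * sum over nearest neighbours of s_i s_j + sum over pairs of s_i s_j / r_ij^3, on an open-boundary lattice; the exact convention, boundary treatment and any histogram reweighting of the thesis are not reproduced, and the code mainly shows where the O(L^2) long-range cost per spin flip comes from.

```python
import numpy as np

rng = np.random.default_rng(3)

def dipolar_ising_metropolis(L, delta, T, n_sweeps):
    """Minimal Metropolis sketch for a 2D dipolar Ising model
    (open boundaries; production studies use periodic images and
    multicanonical or cluster updates)."""
    s = rng.choice([-1, 1], size=(L, L))
    xs, ys = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    for _ in range(n_sweeps * L * L):
        i, j = rng.integers(L), rng.integers(L)
        # short-range ferromagnetic contribution to the local field
        nn = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:
                nn += s[ni, nj]
        # long-range dipolar contribution to the local field
        r2 = ((xs - i) ** 2 + (ys - j) ** 2).astype(float)
        r2[i, j] = np.inf                         # exclude the spin itself
        dip = np.sum(s / r2 ** 1.5)
        dE = 2.0 * s[i, j] * (delta * nn - dip)   # energy change of a flip
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]                    # accept the flip
    return s
```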
127

Estimation Bayésienne non Paramétrique de Systèmes Dynamiques en Présence de Bruits Alpha-Stables / Nonparametric Bayesian Estimation of Dynamical Systems in the Presence of Alpha-Stable Noise

Jaoua, Nouha 06 June 2013 (has links)
Dans un nombre croissant d'applications, les perturbations rencontrées s'éloignent fortement des modèles classiques qui les modélisent par une gaussienne ou un mélange de gaussiennes. C'est en particulier le cas des bruits impulsifs que nous rencontrons dans plusieurs domaines, notamment celui des télécommunications. Dans ce cas, une modélisation mieux adaptée peut reposer sur les distributions alpha-stables. C'est dans ce cadre que s'inscrit le travail de cette thèse dont l'objectif est de concevoir de nouvelles méthodes robustes pour l'estimation conjointe état-bruit dans des environnements impulsifs. L'inférence est réalisée dans un cadre bayésien en utilisant les méthodes de Monte Carlo séquentielles. Dans un premier temps, cette problématique a été abordée dans le contexte des systèmes de transmission OFDM en supposant que les distorsions du canal sont modélisées par des distributions alpha-stables symétriques. Un algorithme de Monte Carlo séquentiel a été proposé pour l'estimation conjointe des symboles OFDM émis et des paramètres du bruit $\alpha$-stable. Ensuite, cette problématique a été abordée dans un cadre applicatif plus large, celui des systèmes non linéaires. Une approche bayésienne non paramétrique fondée sur la modélisation du bruit alpha-stable par des mélanges de processus de Dirichlet a été proposée. Des filtres particulaires basés sur des densités d'importance efficaces sont développés pour l'estimation conjointe du signal et des densités de probabilité des bruits. / In the signal processing literature, noise sources are often assumed to be Gaussian. However, in many fields the conventional Gaussian noise assumption is inadequate and can lead to a loss of resolution and/or accuracy. This is particularly the case for noise that exhibits an impulsive nature. The latter is found in several areas, especially telecommunications. $\alpha$-stable distributions are suitable for modeling this type of noise. In this context, the main focus of this thesis is to propose novel methods for the joint estimation of the state and the noise in impulsive environments. Inference is performed within a Bayesian framework using sequential Monte Carlo methods. First, this issue has been addressed within an OFDM transmission link assuming a symmetric alpha-stable model for channel distortions. For this purpose, a particle filter is proposed to perform the joint estimation of the transmitted OFDM symbols and the noise parameters. Then, this problem has been tackled in the more general context of nonlinear dynamic systems. A flexible Bayesian nonparametric model based on Dirichlet Process Mixtures is introduced to model the alpha-stable noise. Moreover, sequential Monte Carlo filters based on efficient importance densities are implemented to perform the joint estimation of the state and the unknown measurement noise density.
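The abstract models impulsive measurement noise with symmetric alpha-stable distributions. The helper below generates such samples with the standard Chambers-Mallows-Stuck construction for the symmetric (beta = 0) case, which is useful for testing a state-space simulation; the particle filters and Dirichlet process mixture model of the thesis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def symmetric_alpha_stable(alpha, scale, size):
    """Draw symmetric alpha-stable noise via the Chambers-Mallows-Stuck
    method (beta = 0 case). alpha = 2 corresponds to the Gaussian family
    (up to scaling); smaller alpha gives heavier tails and more impulses."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit exponential
    if np.isclose(alpha, 1.0):
        x = np.tan(v)                              # Cauchy special case
    else:
        x = (np.sin(alpha * v) / np.cos(v) ** (1.0 / alpha)
             * (np.cos(v - alpha * v) / w) ** ((1.0 - alpha) / alpha))
    return scale * x

# Impulsive measurement noise for a toy state-space simulation, e.g.:
# noise = symmetric_alpha_stable(alpha=1.5, scale=0.1, size=1000)
```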
128

Étude et simulation des processus de diffusion biaisés / Study and simulation of skew diffusion processes

Lenôtre, Lionel 27 November 2015 (has links)
Nous considérons les processus de diffusion biaisés et leur simulation. Notre étude se divise en quatre parties et se concentre majoritairement sur les processus à coefficients constants par morceaux dont les discontinuités se trouvent le long d'un hyperplan simple. Nous commençons par une étude théorique dans le cas de la dimension un pour une classe de coefficients plus large. Nous donnons en particulier un résultat sur la structure des densités des résolvantes associées à ces processus et obtenons ainsi une méthode de calcul. Lorsque cela est possible, nous effectuons une inversion de Laplace de ces densités et donnons quelques fonctions de transition. Nous nous concentrons ensuite sur la simulation des processus de diffusions biaisées. Nous construisons un schéma numérique utilisant la densité de la résolvante pour tout processus de Feller. Avec ce schéma et les densités calculées dans la première partie, nous obtenons une méthode de simulation des processus de diffusions biaisées en dimension un. Après cela, nous regardons le cas de la dimension supérieure. Nous effectuons une étude théorique et calculons des fonctionnelles des processus de diffusions biaisées. Ceci nous permet d'obtenir entre autres la fonction de transition du processus marginal orthogonal à l'hyperplan de discontinuité. Enfin, nous abordons la parallélisation des méthodes particulaires et donnons une stratégie permettant de simuler de grands lots de trajectoires de processus de diffusions biaisées sur des architectures massivement parallèles. Une propriété de cette stratégie est de permettre de simuler à nouveau quelques trajectoires des précédentes simulations. / We consider skew diffusion processes and their simulation. This study is divided into four parts and concentrates mainly on processes whose coefficients are piecewise constant with discontinuities along a simple hyperplane. We start with a theoretical study of the one-dimensional case when the coefficients belong to a broader class. We particularly give a result on the structure of the resolvent densities of these processes and obtain a computational method. When it is possible, we perform a Laplace inversion of these densities and provide some transition functions. Then we concentrate on the simulation of skew diffusion processes. We build a numerical scheme using the resolvent density for any Feller process. With this scheme and the resolvent densities computed in the previous part, we obtain a simulation method for skew diffusion processes in dimension one. After that, we consider the multidimensional case. We provide a theoretical study and compute some functionals of the skew diffusion processes. This allows us to obtain, among other things, the transition function of the marginal process orthogonal to the hyperplane of discontinuity. Finally, we consider the parallelization of Monte Carlo methods. We provide a strategy which allows the simulation of large batches of skew diffusion process sample paths on massively parallel architectures. An interesting feature is the possibility of replaying some of the sample paths of previous simulations.
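The thesis simulates skew diffusions through resolvent densities, which is not reproduced here. As a point of comparison, the sketch below uses the classical Harrison-Shepp random-walk construction of skew Brownian motion, the simplest skew diffusion: an ordinary symmetric walk everywhere except at the discontinuity point 0, where the step is biased upward with probability (1 + beta)/2.

```python
import numpy as np

rng = np.random.default_rng(5)

def skew_bm_walk(beta, n_steps, dt):
    """Harrison-Shepp random-walk approximation of skew Brownian motion.
    The +/-1 lattice walk is symmetric away from 0 and biased at 0;
    rescaled by sqrt(dt), it converges in law to SBM of parameter beta."""
    pos = np.zeros(n_steps + 1, dtype=int)        # integer lattice positions
    for k in range(n_steps):
        p_up = (1.0 + beta) / 2.0 if pos[k] == 0 else 0.5
        pos[k + 1] = pos[k] + (1 if rng.random() < p_up else -1)
    return np.sqrt(dt) * pos                      # approximate SBM on a dt-grid

# One path biased upward at the interface:
# path = skew_bm_walk(beta=0.7, n_steps=10_000, dt=1e-4)
```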
129

Rare event simulation for statistical model checking / Simulation d'événements rares pour le model checking statistique

Jegourel, Cyrille 19 November 2014 (has links)
Dans cette thèse, nous considérons deux problèmes auxquels le model checking statistique doit faire face. Le premier concerne les systèmes hétérogènes qui introduisent complexité et non-déterminisme dans l'analyse. Le second problème est celui des propriétés rares, difficiles à observer et donc à quantifier. Pour le premier point, nous présentons des contributions originales pour le formalisme des systèmes composites dans le langage BIP. Nous en proposons une extension stochastique, SBIP, qui permet le recours à l'abstraction stochastique de composants et d'éliminer le non-déterminisme. Ce double effet a pour avantage de réduire la taille du système initial en le remplaçant par un système dont la sémantique est purement stochastique sur lequel les algorithmes de model checking statistique sont définis. La deuxième partie de cette thèse est consacrée à la vérification de propriétés rares. Nous avons proposé le recours à un algorithme original d'échantillonnage préférentiel pour les modèles dont le comportement est décrit à travers un ensemble de commandes. Nous avons également introduit les méthodes multi-niveaux pour la vérification de propriétés rares et nous avons justifié et mis en place l'utilisation d'un algorithme multi-niveau optimal. Ces deux méthodes poursuivent le même objectif de réduire la variance de l'estimateur et le nombre de simulations. Néanmoins, elles sont fondamentalement différentes, la première attaquant le problème au travers du modèle et la seconde au travers des propriétés. / In this thesis, we consider two problems that statistical model checking must cope with. The first problem concerns heterogeneous systems, which naturally introduce complexity and non-determinism into the analysis. The second problem concerns rare properties, which are difficult to observe and therefore to quantify. Regarding the first point, we present original contributions for the formalism of composite systems in the BIP language. We propose SBIP, a stochastic extension, and define its semantics. SBIP allows recourse to the stochastic abstraction of components and eliminates non-determinism. This double effect has the advantage of reducing the size of the initial system by replacing it by a system whose semantics is purely stochastic, a necessary requirement for standard statistical model checking algorithms to be applicable. The second part of this thesis is devoted to the verification of rare properties in statistical model checking. We present an importance sampling algorithm for models described by a set of guarded commands. Lastly, we motivate the use of importance splitting for statistical model checking and set up an optimal splitting algorithm. Both methods pursue the common goal of reducing the variance of the estimator and the number of simulations. Nevertheless, they are fundamentally different, the first tackling the problem through the model and the second through the properties.
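The abstract motivates importance splitting for rare properties. The sketch below shows the basic fixed-level splitting idea on a toy problem, estimating the probability that a negatively drifting random walk exceeds a high level by chaining conditional crossing probabilities; the levels, effort per stage and the toy model itself are illustrative assumptions, not the thesis' optimal splitting algorithm or its BIP/SBIP setting.

```python
import numpy as np

rng = np.random.default_rng(6)

def splitting_estimate(levels, n_per_stage=2000, drift=-0.3, horizon=200):
    """Fixed-level importance splitting for the rare event
    'a drifting Gaussian random walk exceeds levels[-1] before the horizon'.
    The product of the stage-wise conditional probabilities estimates the
    rare-event probability; crude Monte Carlo would almost never see it."""
    starts = [(0, 0.0)] * n_per_stage           # (time, position) restart states
    estimate = 1.0
    for level in levels:
        hits = []                               # states at first crossing of `level`
        for _ in range(n_per_stage):
            t, x = starts[rng.integers(len(starts))]
            while t < horizon and x < level:
                x += drift + rng.standard_normal()
                t += 1
            if x >= level:
                hits.append((t, x))
        if not hits:
            return 0.0                          # all trajectories died out
        estimate *= len(hits) / n_per_stage     # conditional crossing probability
        starts = hits                           # restart next stage from crossings
    return estimate

# e.g. splitting_estimate(levels=[2.0, 4.0, 6.0, 8.0])
```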
130

Algorithmes de restauration bayésienne mono- et multi-objets dans des modèles markoviens / Single and multiple object(s) Bayesian restoration algorithms for Markovian models

Petetin, Yohan 27 November 2013 (has links)
Cette thèse est consacrée au problème d'estimation bayésienne pour le filtrage statistique, dont l'objectif est d'estimer récursivement des états inconnus à partir d'un historique d'observations, dans un modèle stochastique donné. Les modèles stochastiques considérés incluent principalement deux grandes classes de modèles : les modèles de Markov cachés et les modèles de Markov à sauts conditionnellement markoviens. Ici, le problème est abordé sous sa forme générale dans la mesure où nous considérons le problème du filtrage mono- et multi objet(s), ce dernier étant abordé sous l'angle de la théorie des ensembles statistiques finis et du filtre « Probability Hypothesis Density ». Tout d'abord, nous nous intéressons à l'importante classe d'approximations que constituent les algorithmes de Monte Carlo séquentiel, qui incluent les algorithmes d'échantillonnage d'importance séquentiel et de filtrage particulaire auxiliaire. Les boucles de propagation mises en jeu dans ces algorithmes sont étudiées et des algorithmes alternatifs sont proposés. Les algorithmes de filtrage particulaire dits « localement optimaux », c'est-à-dire les algorithmes d'échantillonnage d'importance avec densité d'importance conditionnelle optimale et de filtrage particulaire auxiliaire pleinement adapté sont comparés statistiquement, en fonction des paramètres du modèle donné. Ensuite, les méthodes de réduction de variance basées sur le théorème de Rao-Blackwell sont exploitées dans le contexte du filtrage mono- et multi-objet(s). Ces méthodes, utilisées principalement en filtrage mono-objet lorsque la dimension du vecteur d'état à estimer est grande, sont dans un premier temps étendues pour les approximations Monte Carlo du filtre Probability Hypothesis Density. D'autre part, des méthodes de réduction de variance alternatives sont proposées : bien que toujours basées sur le théorème de Rao-Blackwell, elles ne se focalisent plus sur le caractère spatial du problème mais plutôt sur son caractère temporel. Enfin, nous abordons l'extension des modèles probabilistes classiquement utilisés. Nous rappelons tout d'abord les modèles de Markov couple et triplet dont l'intérêt est illustré à travers plusieurs exemples pratiques. Ensuite, nous traitons le problème de filtrage multi-objets, dans le contexte des ensembles statistiques finis, pour ces modèles. De plus, les propriétés statistiques plus générales des modèles triplet sont exploitées afin d'obtenir de nouvelles approximations de l'estimateur bayésien optimal (au sens de l'erreur quadratique moyenne) dans les modèles à sauts classiquement utilisés; ces approximations peuvent produire des estimateurs de performances comparables à celles des approximations particulaires, mais ont l'avantage d'être moins coûteuses sur le plan calculatoire. / This thesis focuses on the Bayesian estimation problem for statistical filtering, which consists in recursively estimating hidden states from a history of observations over time in a given stochastic model. The considered models mainly include two classes: the popular Hidden Markov Chain models and the Jump Markov State Space Systems. In addition, the filtering problem is addressed in its general form, that is to say we consider both the mono- and multi-object filtering problems, the latter being addressed in the Random Finite Sets and Probability Hypothesis Density contexts. First, we focus on the class of particle filtering algorithms, which essentially include the sequential importance sampling and auxiliary particle filter algorithms.
We explore the recursive loops for computing the filtering probability density function, and alternative particle filtering algorithms are proposed. The "locally optimal" filtering algorithms, i.e. sequential importance sampling with the optimal conditional importance distribution and the fully adapted auxiliary particle filtering algorithm, are statistically compared as a function of the parameters of a given stochastic model. Next, variance reduction methods based on the Rao-Blackwell theorem are exploited in the mono- and multi-object filtering contexts. More precisely, these methods are mainly used in mono-object filtering when the dimension of the hidden state is large; so we first extend them to Monte Carlo approximations of the Probability Hypothesis Density filter. In addition, alternative variance reduction methods are proposed. Although we still use the Rao-Blackwell decomposition, our methods no longer focus on the spatial aspect of the problem but rather on its temporal one. Finally, we discuss the extension of the classical stochastic models. We first recall pairwise and triplet Markov models and illustrate their interest through several practical examples. We next address the multi-object filtering problem for such models in the random finite sets context. Moreover, the statistical properties of the more general triplet Markov models are used to build new approximations of the optimal Bayesian estimate (in the sense of the mean square error) in Jump Markov State Space Systems. These new approximations can produce estimates with performance comparable to that of particle filters, but with a lower computational cost.
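The abstract's variance reduction rests on the Rao-Blackwell theorem: replacing a Monte Carlo integrand by its conditional expectation, computed analytically, can only lower the variance. The toy sketch below illustrates that principle on a problem with a closed-form conditional expectation; it does not reproduce the thesis' Rao-Blackwellized Probability Hypothesis Density filter.

```python
import numpy as np

rng = np.random.default_rng(7)

def crude_vs_rao_blackwell(n=100_000):
    """Estimate E[Y^2] with X ~ N(0,1) and Y | X ~ N(X, 1) (true value 2).
    The crude estimator averages Y^2; the Rao-Blackwellized estimator
    averages E[Y^2 | X] = X^2 + 1, which has a strictly smaller variance."""
    x = rng.standard_normal(n)
    y = x + rng.standard_normal(n)
    crude = y ** 2                    # plain Monte Carlo integrand
    rb = x ** 2 + 1.0                 # conditional expectation, closed form
    return (crude.mean(), crude.std(ddof=1) / np.sqrt(n),
            rb.mean(), rb.std(ddof=1) / np.sqrt(n))

# Running it shows both means near 2, with the Rao-Blackwellized
# standard error roughly half the crude one on this toy problem.
```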
