1

Modelling and inference for a class of doubly stochastic point processes

Wei, Gang January 1995
No description available.
2

Processos de Cox com intensidade difusiva afim / Cox Processes with Affine Intensity

Dario, Alan de Genaro 24 August 2011
This thesis deals with the Cox process when its intensity belongs to a family of affine diffusions. The form of the probability density function of the Cox process is obtained when the intensity is described by an arbitrary d-dimensional affine diffusion. Coupling and convergence results are also established for a general Cox process with affine intensity. To illustrate the results, the intensity of the process is assumed to be governed by a Feller diffusion, for which more detailed results are obtained. Additionally, the parameters of the underlying intensity process are estimated by means of the Kalman filter in conjunction with quasi-maximum likelihood estimation.
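To make the construction concrete, here is a minimal sketch (not the author's code; all parameter values are illustrative assumptions) of a Cox process whose intensity follows a Feller (CIR-type) diffusion, simulated by Euler discretisation of the intensity path followed by thinning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Feller (CIR-type) diffusion: d lambda_t = kappa*(theta - lambda_t) dt
#                                          + sigma*sqrt(lambda_t) dW_t
kappa, theta, sigma = 2.0, 5.0, 1.0   # illustrative parameters
T, dt = 10.0, 1e-3
n = int(T / dt)
lam = np.empty(n + 1)
lam[0] = theta
for i in range(n):
    dw = rng.normal(0.0, np.sqrt(dt))
    drift = kappa * (theta - lam[i]) * dt
    lam[i + 1] = max(lam[i] + drift + sigma * np.sqrt(max(lam[i], 0.0)) * dw, 0.0)

# Thinning: propose from a homogeneous Poisson process at rate lam_max,
# accept each proposal with probability lambda_t / lam_max.
lam_max = lam.max()
proposals = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
idx = np.minimum((proposals / dt).astype(int), n)
events = proposals[rng.uniform(size=proposals.size) < lam[idx] / lam_max]
print(f"{events.size} events on [0, {T}], mean intensity {lam.mean():.2f}")
```

The thinning step is valid here because the proposal rate lam_max dominates the discretised intensity path everywhere on the grid.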
3

Bayesian Analysis of Spatial Point Patterns

Leininger, Thomas Jeffrey January 2014
We explore the posterior inference available for Bayesian spatial point process models. In the literature, discussion of such models is usually focused on model fitting and rejecting complete spatial randomness, with model diagnostics and posterior inference often left as an afterthought. Posterior predictive point patterns are shown to be useful in performing model diagnostics and model selection, as well as providing a wide array of posterior model summaries. We prescribe Bayesian residuals and methods for cross-validation and model selection for Poisson processes, log-Gaussian Cox processes, Gibbs processes, and cluster processes. These novel approaches are demonstrated using existing datasets and simulation studies. / Dissertation
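The posterior predictive idea in this abstract can be illustrated with a deliberately simple stand-in for the richer models studied in the dissertation. A sketch (synthetic data; a homogeneous Poisson process with a conjugate gamma prior on the intensity):

```python
import numpy as np

rng = np.random.default_rng(5)

obs = rng.uniform(0, 1, (60, 2))   # "observed" pattern on the unit square
n_obs = len(obs)

# Conjugate Gamma(a, b) prior on the Poisson intensity lambda (window area 1):
a, b = 1.0, 0.01
post_a, post_b = a + n_obs, b + 1.0   # posterior is Gamma(a + n, b + |W|)

# Posterior predictive replicates, summarised by the count in a subregion.
def quadrat_count(pts):
    return np.sum((pts[:, 0] < 0.5) & (pts[:, 1] < 0.5))

reps = []
for _ in range(1000):
    lam = rng.gamma(post_a, 1.0 / post_b)            # draw intensity
    rep = rng.uniform(0, 1, (rng.poisson(lam), 2))   # draw a replicate pattern
    reps.append(quadrat_count(rep))

p = np.mean(np.array(reps) >= quadrat_count(obs))
print(f"posterior predictive p-value for the quadrat count: {p:.2f}")
```

The same recipe extends to the richer summaries the abstract mentions (residuals, K-functions, held-out scores) by swapping out the statistic.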
4

Etude de consistance et applications du modèle Poisson-gamma : modélisation d'une dynamique de recrutement multicentrique / Consistency study and applications of the Poisson-gamma model: modelling of a multicentric recruitment dynamic

Minois, Nathan 07 November 2016
A clinical trial is a biomedical research study on human subjects whose objective is to consolidate and improve biological or medical knowledge. The required number of patients is the minimal number to include in the trial in order to guarantee a given statistical power for detecting a given effect. To recruit them, several investigating centres are opened. The period between the opening of the first investigating centre and the recruitment of the last patient is called the recruitment period, which we aim to model. The first models go back almost 50 years, to the work of Lee, Williford et al. and Morgan, already with the idea of modelling the recruitment dynamic by Poisson processes. A problem emerges in multicentric recruitment from the lack of characterisation of the sources of variability acting on the different recruitment dynamics. The so-called Poisson-gamma model, based on Poisson processes whose centre-specific intensities are treated as a sample from a gamma distribution (a Cox process), allows this variability to be studied; it is at the heart of our project. Several objectives motivated this thesis. The first question concerns the validity of these models: it is established asymptotically, and a simulation study provides precise information on the model's validity in specific settings (as a function of the number of centres, the recruitment duration, and the mean rates). Analysis of real databases then showed that pauses in recruitment occur during certain phases. A question naturally arises: how, and whether, should this information be taken into account in the model of the recruitment dynamic? Simulation studies show that incorporating these data does not improve the predictive performance of the model when the sources of interruption are random but their law is unchanged over time. Another issue observable in the data and inherent to patient recruitment is that of screening failures and study drop-outs. An empirical Bayesian technique analogous to that used for the recruitment process can be introduced to model them. The two models couple very well and make it possible to estimate the recruitment duration, accounting for screening failures, as well as the probability of dropping out of the study, based on the recruitment data of an interim analysis, yielding predictions about the randomisation process. The recruitment dynamic involves many factors other than the recruitment time, and these fundamental aspects coupled with the Poisson-gamma model provide relevant indicators for trial monitoring. It is thus possible to adjust the number of centres during the trial according to predefined objectives, to model and forecast the drug supply chain needed for the trial, and to predict the effect of randomising patients by region on the power of the trial's test. The model also allows patients to be followed after randomisation, making it possible to adjust the number of patients in case of significant losses, or to abandon a trial if the preliminary results are too weak relative to the known and observed risks. The recruitment dynamic can moreover be coupled with the dynamic of the study itself when the latter is longitudinal. The independence of the two processes allows easy estimation of the different parameters; the result is a global model of the patient's pathway through the trial. Two key examples of such situations are survival data, where the model estimates the duration of a trial whose stopping criterion is the number of observed events, and Markov models, where the model estimates the number of patients in a given state after a given time.
5

Modely kótovaných bodových procesů / Models of marked point processes

Héda, Ivan January 2016
In the first part of the thesis, we present the necessary theoretical background as well as the definitions of the functional characteristics used to examine marked point patterns. The second part is dedicated to a review of known marking strategies. The core of the thesis lies in the study of intensity-marked point processes. A general formula for the characteristics is proven for this marking strategy, and a general class of models with analytically computable characteristics is introduced; this class generalizes some known models. The theoretical results are used for real data analysis in the last part of the thesis. Keywords: marked point process, marked log-Gaussian Cox process, intensity-marked point process
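A toy illustration of the intensity-marking idea (a simple parametric intensity stands in for the log-Gaussian Cox processes treated in the thesis): simulate an inhomogeneous Poisson process by thinning and attach the driving intensity at each accepted point as its mark:

```python
import numpy as np

rng = np.random.default_rng(6)

lam = lambda x, y: 100.0 * np.exp(-3.0 * x)  # driving intensity on [0, 1]^2
lam_max = 100.0                              # dominating rate for thinning

n = rng.poisson(lam_max)                     # candidates from homogeneous PPP
cand = rng.uniform(0.0, 1.0, (n, 2))
vals = lam(cand[:, 0], cand[:, 1])
keep = rng.uniform(0.0, lam_max, n) < vals   # accept w.p. lam/lam_max
points, marks = cand[keep], vals[keep]       # mark = intensity at the point
print(f"{len(points)} points, mean mark {marks.mean():.1f}")
```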
6

Stochastic Geometry for Vehicular Networks

Chetlur Ravi, Vishnu Vardhan 11 September 2020
Vehicular communication networks are essential to the development of intelligent navigation systems and the improvement of road safety. Unlike most terrestrial networks of today, vehicular networks are characterized by stringent reliability and latency requirements. In order to design efficient networks that meet these requirements, it is important to understand the system-level performance of vehicular networks. Stochastic geometry has recently emerged as a powerful tool for the modeling and analysis of wireless communication networks. However, canonical spatial models such as the 2D Poisson point process (PPP) do not capture the peculiar spatial layout of vehicular networks, where the locations of vehicular nodes are restricted to roadways. Motivated by this, we consider a doubly stochastic spatial model that captures the spatial coupling between the vehicular nodes and the roads, and analyze the performance of vehicular communication networks. We model the spatial layout of roads by a Poisson line process (PLP) and the locations of nodes on each line (road) by a 1D PPP, thereby forming a Cox process driven by a PLP, or Poisson line Cox process (PLCP). In this dissertation, we develop the theory of the PLCP and apply it to study key performance metrics, such as coverage probability and rate coverage, for vehicular networks under different scenarios. First, we compute the signal-to-interference-plus-noise ratio (SINR)-based success probability of the typical communication link in a vehicular ad hoc network (VANET). Using this result, we also compute the area spectral efficiency (ASE) of the network. Our results show that the optimum transmission probability that maximizes the ASE of the network under the Cox process differs significantly from that of the conventional 1D and 2D PPP models. Second, we calculate the signal-to-interference ratio (SIR)-based downlink coverage probability of the typical receiver in a vehicular network for the cellular network model, in which each receiver node connects to its closest transmitting node in the network. The conditioning on the serving node imposes constraints on the spatial configuration of interfering nodes and also on the underlying distribution of lines. We carefully handle these constraints using various fundamental distance properties of the PLCP and derive the exact expression for the coverage probability. Third, building further on the above works, we consider a more complex cellular vehicle-to-everything (C-V2X) communication network in which the vehicular nodes are served by roadside units (RSUs) as well as cellular macro base stations (MBSs). For this setup, we present the downlink coverage analysis of the typical receiver in the presence of shadowing effects. We address the technical challenges induced by the inclusion of shadowing effects by leveraging the asymptotic behavior of the Cox process. These results help us gain useful insights into the behavior of the networks as a function of key network parameters, such as the densities of the nodes and the selection bias. Fourth, we characterize the load on the MBSs due to vehicular users, defined as the number of vehicular nodes that are served by the MBS. Since the limited network resources are shared by multiple users, the load distribution is a key indicator of the demand for network resources. We first compute the distribution of the load on MBSs due to vehicular users in a single-tier vehicular network.
Building on this, we characterize the load on both MBSs and RSUs in a heterogeneous C-V2X network. Using these results, we also compute the rate coverage of the typical receiver in the network. Fifth and finally, we explore applications of the PLCP that extend beyond vehicular communications. We derive the exact distribution of the shortest path distance between the typical point and its nearest neighbor in the sense of path distance in a Manhattan Poisson line Cox process (MPLCP), a special variant of the PLCP. The analytical framework developed in this work allows us to answer several important questions pertaining to transportation networks, urban planning, and personnel deployment. / Doctor of Philosophy / Vehicular communication networks are essential to the development of intelligent transportation systems (ITS) and the improvement of road safety. As in-vehicle sensors can assess only their immediate environment, vehicular nodes exchange information about critical events, such as accidents and sudden braking, with other vehicles, pedestrians, roadside infrastructure, and cellular base stations in order to make critical decisions in a timely manner. Considering the time-sensitive nature of this information, it is of paramount importance to design efficient communication networks that can support the exchange of this information over reliable and high-speed wireless links. Typically, prior to actual deployment, any design of a wireless network is subjected to extensive analysis under various operational scenarios using computer simulations. However, it is not viable to rely entirely on simulations for the system design of highly complex systems such as vehicular networks. Hence, it is necessary to develop analytical methods that can complement simulators and also serve as a benchmark. One approach that has gained popularity in recent years for the modeling and analysis of large-scale wireless networks is the use of tools from stochastic geometry. In this approach, we endow the locations of wireless nodes with some distribution and analyze various aspects of the network by leveraging the properties of the distribution. Traditionally, wireless networks have been studied using simple spatial models in which the wireless nodes can lie anywhere on the domain of interest (often a 1D or 2D plane). However, vehicular networks have a unique spatial geometry, because the locations of vehicular nodes are restricted to roadways. Therefore, in order to model the locations of vehicular nodes in the network, we have to first model the underlying road systems, and we should also account for the randomness in the locations of vehicles on each road. So, we consider a doubly stochastic model called the Poisson line Cox process (PLCP), in which the spatial layout of roads is modeled by random lines and the locations of vehicles on the roads are modeled by random sets of points on these lines. As is usually the case in wireless networks, multiple vehicular nodes and roadside units (RSUs) operate at the same frequency due to the limited availability of radio frequency spectrum, which causes interference. Therefore, any receiver in the network obtains a signal that is a mixture of the desired signal from the intended transmitter and the interfering signals from the other transmitters.
The ratio of the power of the desired signal to the aggregate power of the interfering signals, called the signal-to-interference ratio (SIR), depends on the locations of the transmitters with respect to the receiver. A receiver in the network is said to be in coverage if the SIR measured at the location of the receiver exceeds the threshold required to successfully decode the message. The probability of this event is referred to as the coverage probability, and it is one of the fundamental metrics used to characterize the performance of a wireless network. In our work, we have analytically characterized the coverage probability of the typical vehicular node in the network. This was the first work to present the coverage analysis of a vehicular network using the aforementioned doubly stochastic model. In addition to coverage probability, we have also explored other performance metrics, such as data rate, which is the number of bits that can be successfully communicated per unit time, and spectral efficiency. Our analysis has revealed interesting trends in the coverage probability as a function of key system parameters, such as the density of roads in a region (total length of roads per unit area) and the density of vehicles on the roads. We have shown that vehicular nodes in areas with a high density of roads have lower coverage than those in areas with sparsely distributed roads. On the other hand, the coverage probability of a vehicular node improves as the density of vehicles on the roads increases. Such insights are quite useful in the design and deployment of network infrastructure. While our research was primarily focused on communication networks, the utility of the spatial models considered in these works extends to other areas of engineering. For a special variant of the PLCP, we have derived the distribution of the shortest path distance between an arbitrary point and its nearest neighbor in the sense of path distance. The analytical framework developed in this work allows us to answer several important questions pertaining to infrastructure planning and personnel deployment.
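A minimal sketch of the PLCP construction described above (all densities are illustrative assumptions): roads drawn as a Poisson line process hitting a disc, vehicles as independent 1D PPPs on each chord:

```python
import numpy as np

rng = np.random.default_rng(2)

R = 5.0        # observation disc radius (km)
mu_l = 1.0     # line (road) density
lam_v = 2.0    # vehicle density per km of road

# Lines hitting the disc: (theta, r) uniform on [0, pi) x [-R, R], with the
# number of lines Poisson with mean mu_l * pi * 2R.
n_lines = rng.poisson(mu_l * np.pi * 2 * R)
theta = rng.uniform(0.0, np.pi, n_lines)
r = rng.uniform(-R, R, n_lines)

points = []
for th, rr in zip(theta, r):
    half = np.sqrt(R**2 - rr**2)           # half chord length inside the disc
    n_pts = rng.poisson(lam_v * 2 * half)  # 1D PPP on the chord
    t = rng.uniform(-half, half, n_pts)    # positions along the line
    foot = np.array([rr * np.cos(th), rr * np.sin(th)])
    direc = np.array([-np.sin(th), np.cos(th)])
    points.append(foot + t[:, None] * direc)

pts = np.concatenate(points) if points else np.empty((0, 2))
print(f"{n_lines} roads, {len(pts)} vehicles in the disc")
```

Conditioned on the lines, the vehicle locations form a Poisson process, which is exactly the doubly stochastic (Cox) structure the dissertation exploits analytically.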
7

Heterogeneous Sensor Data based Online Quality Assurance for Advanced Manufacturing using Spatiotemporal Modeling

Liu, Jia 21 August 2017
Online quality assurance is crucial for elevating product quality and boosting process productivity in advanced manufacturing. However, the inherent complexity of advanced manufacturing, including nonlinear process dynamics, multiple process attributes, and low signal-to-noise ratios, poses severe challenges both for maintaining stable process operations and for establishing efficacious online quality assurance schemes. To address these challenges, four different advanced manufacturing processes, namely fused filament fabrication (FFF), binder jetting, chemical mechanical planarization (CMP), and the slicing process in wafer production, are investigated in this dissertation for applications of online quality assurance, with utilization of various sensors such as thermocouples, infrared temperature sensors, and accelerometers. The overarching goal of this dissertation is to develop innovative integrated methodologies tailored to these individual manufacturing processes while addressing their common challenges, in order to achieve satisfactory performance in online quality assurance based on heterogeneous sensor data. Specifically, three new methodologies are created and validated using actual sensor data: (1) real-time process monitoring methods using a Dirichlet process (DP) mixture model for timely detection of process changes and identification of different process states for FFF and CMP; the proposed methodology is capable of tackling non-Gaussian data from heterogeneous sensors in these advanced manufacturing processes for successful online quality assurance. (2) A spatial Dirichlet process (SDP) for modeling complex multimodal wafer thickness profiles and exploring their clustering effects; the SDP-based statistical control scheme can effectively detect out-of-control wafers and achieve wafer thickness quality assurance for the slicing process with high accuracy. (3) An augmented spatiotemporal log-Gaussian Cox process (AST-LGCP) quantifying the spatiotemporal evolution of porosity in binder jetting parts, capable of predicting high-risk areas on consecutive layers; this work fills the long-standing research gap of rigorous layer-wise porosity quantification for parts made by additive manufacturing (AM), and provides the basis for facilitating corrective actions for product quality improvement in a prognostic way. These methodologies surmount common challenges of advanced manufacturing that paralyze traditional online quality assurance methods, and embody key components for implementing effective online quality assurance with various sensor data. There is promising potential to extend them to other manufacturing processes in the future. / Ph. D.
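As one concrete reading of methodology (1), here is a sketch that uses scikit-learn's truncated Dirichlet-process mixture in place of the dissertation's own sampler, on hypothetical two-dimensional sensor features: fit the mixture to in-control data, then flag observations whose likelihood falls below a control limit:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)

# In-control data: a two-mode (non-Gaussian overall) feature distribution.
baseline = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 0.5, (200, 2))])

dpgmm = BayesianGaussianMixture(
    n_components=10,   # truncation level; unused components get ~zero weight
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(baseline)

# Control limit: e.g. the 1st percentile of in-control log-likelihood scores.
limit = np.percentile(dpgmm.score_samples(baseline), 1)

new = np.array([[0.2, -0.1], [8.0, 8.0]])   # second point simulates a fault
flags = dpgmm.score_samples(new) < limit
print(flags)   # expect [False  True]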
8

Statistical methods for variant discovery and functional genomic analysis using next-generation sequencing data

Tang, Man 03 January 2020
The development of high-throughput next-generation sequencing (NGS) techniques produces massive amounts of data, allowing the identification of biomarkers for early disease diagnosis and driving the transformation of most disciplines in biology and medicine. Greater focus is needed on developing novel, powerful, and efficient tools for NGS data analysis. This dissertation focuses on modeling "omics" data in various NGS applications, with the primary goal of developing novel statistical methods to identify sequence variants, find transcription factor (TF) binding patterns, and decode the relationship between TF binding and gene expression levels. Accurate and reliable identification of sequence variants, including single nucleotide polymorphisms (SNPs) and insertion-deletion polymorphisms (INDELs), plays a fundamental role in NGS applications. Existing methods for calling these variants often make the simplifying assumption of positional independence and fail to leverage the dependence of genotypes at nearby loci induced by linkage disequilibrium. We propose vi-HMM, a hidden Markov model (HMM)-based method for calling SNPs and INDELs in mapped short-read data. Simulation experiments show that, under various sequencing depths, vi-HMM outperforms existing methods in terms of sensitivity and F1 score. When applied to human whole-genome sequencing data, vi-HMM demonstrates higher accuracy in calling SNPs and INDELs. One important NGS application is chromatin immunoprecipitation followed by sequencing (ChIP-seq), which characterizes protein-DNA relations through genome-wide mapping of TF binding sites. Multiple TFs binding to DNA sequences often show complex binding patterns, which indicate how TFs with similar functionalities work together to regulate the expression of target genes. To help uncover the transcriptional regulation mechanism, we propose a novel nonparametric Bayesian method to detect the clustering pattern of multiple TF bindings from ChIP-seq datasets. A simulation study demonstrates that our method performs best with regard to precision, recall, and F1 score, in comparison to traditional methods. We also apply the method to real data and observe several TF clusters that have been recognized previously in mouse embryonic stem cells. Recent advances in ChIP-seq and RNA sequencing (RNA-seq) technologies provide more reliable and accurate characterization of TF binding sites and gene expression measurements, which serve as a basis for studying the regulatory functions of TFs on gene expression. We propose a log-Gaussian Cox process with a wavelet-based functional model to quantify the relationship between TF binding site locations and gene expression levels. Through a simulation study, we demonstrate that our method performs well, especially with large sample sizes and small variance. It also shows a remarkable ability to distinguish real local features in the function estimates. / Doctor of Philosophy / The development of high-throughput next-generation sequencing (NGS) techniques produces massive amounts of data and drives innovation in biology and medicine. Greater focus is needed on developing novel, powerful, and efficient tools for NGS data analysis. In this dissertation, we mainly focus on three problems closely related to NGS and its applications: (1) how to improve variant calling accuracy, (2) how to model transcription factor (TF) binding patterns, and (3) how to quantify the contribution of TF binding to gene expression.
We develop novel statistical methods to identify sequence variants, find TF binding patterns, and explore the relationship between TF binding and gene expression. We expect our findings to be helpful in promoting a better understanding of disease causality and facilitating the design of personalized treatments.
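The HMM machinery behind a variant caller like vi-HMM can be sketched in a few lines. The following Viterbi decoder uses toy transition and emission values (not vi-HMM's actual model) over hypothetical genotype states; the dependence between nearby loci enters through the transition matrix:

```python
import numpy as np

states = ["ref", "SNP", "INDEL"]
log_trans = np.log(np.array([   # sticky transitions encode nearby-locus
    [0.990, 0.007, 0.003],      # dependence (toy values)
    [0.300, 0.650, 0.050],
    [0.300, 0.050, 0.650],
]))
log_start = np.log(np.array([0.98, 0.015, 0.005]))

# Toy per-locus log-likelihoods of the observed reads under each state.
log_emit = np.log(np.array([
    [0.95, 0.04, 0.01],
    [0.10, 0.85, 0.05],
    [0.10, 0.80, 0.10],
    [0.90, 0.08, 0.02],
]))

n, k = log_emit.shape
dp = np.full((n, k), -np.inf)
ptr = np.zeros((n, k), dtype=int)
dp[0] = log_start + log_emit[0]
for t in range(1, n):
    scores = dp[t - 1][:, None] + log_trans   # scores[i, j]: i -> j
    ptr[t] = scores.argmax(axis=0)
    dp[t] = scores.max(axis=0) + log_emit[t]

path = [int(dp[-1].argmax())]                 # backtrack the best path
for t in range(n - 1, 0, -1):
    path.append(ptr[t][path[-1]])
print([states[s] for s in reversed(path)])    # e.g. ['ref', 'SNP', 'SNP', 'ref']
```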
9

Bodové procesy v čase a prostoru / Point processes in time and space

Koubek, Antonín January 2013
In this work we present an introduction to the theory of point processes in space and time, with a focus on the space-time shot-noise Cox process. Further, from a theoretical point of view, we study its simulation, space-time separability, kernel estimation of the intensity function, and non-parametric estimation of some summary statistics using edge corrections. For two ambit models and one space-time separable model, we carry out numerical calculations using the presented theory and Wolfram Mathematica 9.0. For these three models we run simulations, select the best bandwidth for the kernel estimate of the intensity function, and calculate some theoretical summary statistics, including the pair correlation function.
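A minimal sketch of a purely spatial shot-noise Cox process with illustrative parameters: because the driving intensity is a sum of Gaussian kernels centred at Poisson parent points, the process is a Thomas-type cluster process and can be simulated cluster by cluster (a space-time version, as studied in the thesis, would additionally give each shot a temporal component):

```python
import numpy as np

rng = np.random.default_rng(4)

kappa = 10.0   # parent (shot centre) intensity on the unit square
mu = 15.0      # mean number of points contributed per shot
sd = 0.03      # Gaussian kernel bandwidth

n_par = rng.poisson(kappa)
parents = rng.uniform(0, 1, (n_par, 2))

pts = []
for c in parents:
    m = rng.poisson(mu)                        # points in this cluster
    pts.append(c + rng.normal(0.0, sd, (m, 2)))
pts = np.concatenate(pts) if pts else np.empty((0, 2))
pts = pts[(pts >= 0).all(axis=1) & (pts <= 1).all(axis=1)]  # clip to window
print(f"{n_par} parents, {len(pts)} points after edge clipping")
```

Note that this sketch ignores edge effects from parents just outside the window; a careful simulation would extend the parent window, which is one reason the edge corrections mentioned in the abstract matter.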
