
Models of Marked Point Processes

Héda, Ivan January 2016 (has links)
Title: Models of Marked Point Processes Author: Ivan Héda Department: Department of Probability and Mathematical Statistics Supervisor: doc. RNDr. Zbyněk Pawlas, Ph.D. Abstract: In the first part of the thesis, we present the necessary theoretical background as well as the definition of the functional characteristics used for examining marked point patterns. The second part reviews some known marking strategies. The core of the thesis lies in the study of intensity-marked point processes. A general formula for the characteristics is proven for this marking strategy, and a general class of models with analytically computable characteristics is introduced; this class generalizes some known models. The theoretical results are applied to real data analysis in the last part of the thesis. Keywords: marked point process, marked log-Gaussian Cox process, intensity-marked point process
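For readers unfamiliar with the construction, the intensity-marking strategy this abstract describes can be sketched in a few lines (our own illustrative code, not the author's; the intensity function and noise scale are hypothetical): simulate an inhomogeneous Poisson process on the unit square by thinning, then mark each retained point with the local intensity value plus noise.

```python
import numpy as np

def simulate_intensity_marked_pp(intensity, lam_max, seed=0):
    """Simulate an inhomogeneous Poisson process on [0,1]^2 by thinning a
    dominating homogeneous process of rate lam_max, then mark each retained
    point with the local intensity plus Gaussian noise (intensity marking)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam_max)                      # dominating process count
    pts = rng.uniform(0.0, 1.0, size=(n, 2))
    keep = rng.uniform(0.0, lam_max, size=n) < intensity(pts[:, 0], pts[:, 1])
    pts = pts[keep]
    marks = intensity(pts[:, 0], pts[:, 1]) + rng.normal(0.0, 0.1, size=len(pts))
    return pts, marks

# Example: intensity increasing in x, so marks correlate with the x-coordinate.
pts, marks = simulate_intensity_marked_pp(lambda x, y: 50.0 * (0.5 + x), 75.0)
```

Under this marking strategy the mark distribution is driven entirely by the (random or deterministic) intensity at the point, which is what makes characteristics analytically tractable for suitable intensity models.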

Quantifying the strength of evidence in forensic fingerprints

Forbes, Peter G. M. January 2014 (has links)
Part I presents a model for fingerprint matching using Bayesian alignment on unlabelled point sets. An efficient Monte Carlo algorithm is developed to calculate the marginal likelihood ratio between the hypothesis that an observed fingerprint and fingermark pair originate from the same finger and the hypothesis that they originate from different fingers. The model achieves good performance on the NIST-FBI fingerprint database of 258 matched fingerprint pairs, though the computed likelihood ratios are implausibly extreme due to oversimplification in our model. Part II moves to a more theoretical study of proper scoring rules. The chapters in this section are designed to be independent of each other. Chapter 9 uses proper scoring rules to calibrate the implausible likelihood ratios computed in Part I. Chapter 10 defines the class of compatible weighted proper scoring rules. Chapter 11 derives new results for the score matching estimator, which can quickly generate point estimates for a parametric model even when the normalization constant of the distribution is intractable. It is used to find an initial value for the iterative maximization procedure in §3.3. Appendix A describes a novel algorithm to efficiently sample from the posterior of a von Mises distribution. It is used within the fingerprint model sampling procedure described in §5.6. Appendix B includes various technical results which would otherwise disrupt the flow of the main dissertation.

Rate Estimators for Non-stationary Point Processes

Tatara, Anna N. 11 June 2019 (has links)
Non-stationary point processes are often used to model systems whose rates vary over time. Estimating the underlying rate functions is important as input to discrete-event simulation and for various statistical analyses. We study nonparametric estimators for the marked point process, the infinite-server queueing model, and the transitory queueing model, and conduct statistical inference for these estimators by establishing a number of asymptotic results.

For the marked point process, we consider estimating the offered load to the system over time. With direct observations of the offered load sampled at fixed intervals, we establish asymptotic consistency, rates of convergence, and asymptotic covariance through a Functional Strong Law of Large Numbers, a Functional Central Limit Theorem, and a Law of the Iterated Logarithm. We also show that there exists an asymptotically optimal interval width as the sample size approaches infinity.

The infinite-server queueing model is central to many stochastic models. In particular, the mean number of busy servers can be used as an estimator for the total load faced by a multi-server system with time-varying arrivals, and in many other applications. Through an omniscient estimator based on observing both the arrival times and service requirements for n samples of an infinite-server queue, we show asymptotic consistency and the rate of convergence. We then establish the asymptotics for a nonparametric estimator based on observations of the busy servers at fixed intervals.

The transitory queueing model is crucial when studying a transitory system, which arises when the time horizon or population is finite. We assume we observe arrival counts at fixed intervals. We first consider a natural estimator that assumes an underlying nonhomogeneous Poisson process. Although the estimator is asymptotically unbiased, a correction term is required to obtain an accurate asymptotic covariance. Next, we consider a nonparametric estimator that exploits the maximum likelihood estimator of a multinomial distribution and show that it converges appropriately to a Brownian bridge.
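The fixed-interval counting idea underlying such rate estimators can be sketched as follows (our simplified illustration, not the dissertation's estimator; function and parameter names are ours): estimate the rate over each interval by the event count divided by the interval width.

```python
import numpy as np

def piecewise_rate(event_times, horizon, n_bins):
    """Piecewise-constant nonparametric rate estimate from event times
    observed on [0, horizon]: events per bin divided by the bin width."""
    edges = np.linspace(0.0, horizon, n_bins + 1)
    counts, _ = np.histogram(event_times, bins=edges)
    width = horizon / n_bins
    return counts / width, edges

# Three events on [0, 1] split into two bins: two early events, one late.
rates, edges = piecewise_rate([0.1, 0.2, 0.9], horizon=1.0, n_bins=2)
# rates -> [4.0, 2.0] events per unit time
```

Averaging such per-interval estimates across independent replications of the process, and letting the interval width shrink as the sample size grows, is the regime in which consistency results of the kind described in the abstract are established; the bin width here plays the role of the abstract's interval width.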

On the separation of preferences among marked point process wager alternatives

Park, Jee Hyuk 15 May 2009 (has links)
A wager is a one-time bet, staking money on one among a collection of alternatives having uncertain reward. Wagers represent a common class of engineering decision, where “bets” are placed on the design, deployment, and/or operation of technology. Often such wagers are characterized by alternatives whose value evolves according to some future cash flow. Here, the values of specific alternatives are derived from a cash flow modeled as a stochastic marked point process. A principal difficulty with these engineering wagers is that the probability laws governing the dynamics of the random cash flow typically are not (completely) available; hence, separating the gambler’s preference among wager alternatives is quite difficult. In this dissertation, we investigate a computational approach for separating preferences among alternatives of a wager whose values evolve according to a marked point process. We are particularly concerned with separating a gambler’s preferences when the probability laws on the available alternatives are not completely specified.

A Study of the Calibration Regression Model with Censored Lifetime Medical Cost

Lu, Min 03 August 2006 (has links)
Medical cost has received increasing interest recently in biostatistics and public health. Statistical analysis and inference of lifetime medical cost are complicated by the fact that survival times are censored for some study subjects, so their subsequent costs are unknown. Huang (2002) proposed the calibration regression model, a semiparametric regression tool for studying medical cost in relation to covariates. In this thesis, an inference procedure is investigated using the empirical likelihood ratio method. Unadjusted and adjusted empirical likelihood confidence regions are constructed for the regression parameters. We compare the proposed empirical likelihood methods with the normal approximation based method. Simulation results show that the proposed empirical likelihood ratio method outperforms the normal approximation based method in terms of coverage probability. In particular, the adjusted empirical likelihood performs best, overcoming the undercoverage problem.
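As a toy illustration of the empirical likelihood machinery the abstract refers to, the following sketch (our own simplified code for the mean of an uncensored sample; the thesis treats regression parameters with censored costs, which this omits) computes the -2 log empirical likelihood ratio statistic by bisection on the Lagrange multiplier:

```python
import numpy as np

def el_ratio_stat(x, mu, tol=1e-10):
    """-2 log empirical likelihood ratio for testing that the mean of a
    sample x equals mu. Solves sum(d / (1 + lam*d)) = 0 for the Lagrange
    multiplier lam by bisection, where d = x - mu."""
    x = np.asarray(x, dtype=float)
    d = x - mu
    if d.max() <= 0 or d.min() >= 0:
        return np.inf                      # mu outside the convex hull of x
    lo = -1.0 / d.max() + tol              # keep 1 + lam*d > 0 for all i
    hi = 1.0 / -d.min() - tol
    g = lambda lam: np.sum(d / (1.0 + lam * d))   # decreasing in lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * d))
```

The statistic is approximately zero at the sample mean and grows as mu moves away from it; calibrating it against a chi-squared quantile yields the (unadjusted) confidence region, and the adjusted variant the abstract favors modifies this statistic to correct undercoverage.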

Remotely Sensed Data Segmentation under a Spatial Statistics Framework

Li, Yu 08 January 2010 (has links)
In remote sensing, segmentation is the procedure of partitioning the domain of a remotely sensed dataset into meaningful regions that correspond to different land use and land cover (LULC) classes or parts of them. Remotely sensed data segmentation remains one of the most challenging problems addressed by the remote sensing community, partly because of the availability of remotely sensed data from diverse sensors on various platforms with very high spatial resolution (VHSR). There is therefore strong motivation to propose a sophisticated data representation that can capture the significant amount of detail present in a VHSR dataset, and to search for a more powerful scheme suitable for segmenting multiple kinds of remotely sensed data. This thesis focuses on the development of a segmentation framework for multiple VHSR remotely sensed data, with emphasis on the VHSR data model and the segmentation strategy. Starting with the domain partition of a given remotely sensed dataset, a hierarchical data model characterizing the structures hidden in the dataset locally, regionally, and globally is built from three random fields: a Markov random field (MRF), a strictly stationary random field (RF), and a label field. After defining prior probability distributions that capture and characterize general and scene-specific knowledge about model parameters and the contextual structure of accurate segmentations, a Bayesian segmentation framework, which leads to algorithmic implementations for multiple remotely sensed data, is developed by integrating the data model and the prior knowledge. To verify the applicability and effectiveness of the proposed framework, segmentation algorithms for different types of remotely sensed data are designed within it. The first application concerns SAR intensity image processing, including segmentation and dark-spot detection by a marked point process. In the second application, algorithms for LiDAR point cloud segmentation and building detection are developed. Finally, texture and colour texture segmentation problems are tackled within the same framework. All applications demonstrate that the proposed data model provides efficient representations for the hierarchical structures hidden in remotely sensed data, and that the developed segmentation framework leads to successful data processing algorithms for multiple data types and tasks such as segmentation and object detection.
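The Bayesian label-field idea behind such frameworks can be illustrated with a toy iterated-conditional-modes (ICM) optimizer for a two-class Potts model (our own drastically simplified sketch, not the thesis's framework: a Gaussian data term per class plus a neighbour-agreement prior):

```python
import numpy as np

def icm_segment(image, n_labels=2, beta=1.0, n_iter=5):
    """Toy Bayesian segmentation: per-class Gaussian data term plus a Potts
    smoothness prior, minimized greedily by iterated conditional modes."""
    means = np.linspace(image.min(), image.max(), n_labels)
    labels = np.argmin(np.abs(image[..., None] - means), axis=-1)
    rows, cols = image.shape
    for _ in range(n_iter):
        # re-estimate class means from the current labelling
        for k in range(n_labels):
            if np.any(labels == k):
                means[k] = image[labels == k].mean()
        # greedy per-pixel update of the label field
        for i in range(rows):
            for j in range(cols):
                energies = []
                for k in range(n_labels):
                    e = (image[i, j] - means[k]) ** 2
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols:
                            e += beta * (labels[ni, nj] != k)
                    energies.append(e)
                labels[i, j] = int(np.argmin(energies))
    return labels

# Toy example: two constant regions corrupted by Gaussian noise.
rng = np.random.default_rng(0)
img = np.zeros((20, 20))
img[:, 10:] = 1.0
img = img + rng.normal(0.0, 0.2, img.shape)
seg = icm_segment(img)
```

The prior term (weighted by beta) is what encodes contextual knowledge in this miniature; the thesis's hierarchical model plays the analogous role at local, regional, and global scales.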

Curvilinear structure modeling and its applications in computer vision

Jeong, Seong-Gyun 23 November 2015 (has links)
In this dissertation, we propose curvilinear structure reconstruction models based on stochastic modeling and a ranking learning system. We assume that the entire line network can be decomposed into a set of line segments with variable lengths and orientations. This assumption enables us to reconstruct arbitrary shapes of curvilinear structure for different types of datasets. We compute curvilinear feature descriptors based on image gradient profiles and morphological profiles. For the stochastic model, we propose prior constraints that define the spatial interaction of line segments. To obtain an optimal configuration corresponding to the latent curvilinear structure, we combine multiple line hypotheses computed by MCMC sampling with different parameter sets. Moreover, we learn a ranking function that predicts the correspondence of a given line segment with the latent curvilinear structures. A novel graph-based method is proposed to infer the underlying curvilinear structure using the output rankings of the line segments. We apply our models to analyze curvilinear structure in static images. Experimental results on a wide range of datasets demonstrate that the proposed curvilinear structure models outperform state-of-the-art techniques.

Curvilinear Structure Detection in Images by Connected-Tube Marked Point Process and Anomaly Detection in Time Series

Li, Tianyu 26 April 2023 (has links)
Curvilinear structure detection in images has been investigated for decades. In general, the detection of curvilinear structures involves two aspects: binary segmentation of the image and inference of the graph representation of the curvilinear network. In our work, we propose a connected-tube model based on a marked point process (MPP) to address both issues. The proposed tube model is applied to fiber detection in microscopy images by combining connected-tube and ellipse models. Moreover, a tube-based segmentation algorithm is proposed to improve segmentation accuracy. Experiments on fiber-reinforced polymer images, satellite images, and retinal vessel images are presented. Additionally, we extend the 2D tube model to a 3D tube model, with each tube modeled as a cylinder. To investigate supervised curvilinear structure detection, we focus on the application of road detection in satellite images and propose a two-stage learning strategy for road segmentation. A probability map is generated in the first stage by a selected neural network; we then attach the probability map to the original RGB images and feed the resulting four-channel images to a U-Net-like network in the second stage to obtain a refined result.

Anomaly detection in time series is a key step in diagnosing abnormal behavior in some systems. Long Short-Term Memory networks (LSTMs) have been demonstrated to be useful for anomaly detection in time series due to their predictive power. However, for a system with thousands of different time sequences, a single LSTM predictor may not perform well for all of them. To enhance adaptability, we propose a stacked predictor framework. We also propose a novel dynamic thresholding algorithm based on the prediction errors to extract potential anomalies. To further improve the accuracy of anomaly detection, we propose a post-detection verification method based on a fast and accurate time series subsequence matching algorithm.

To detect anomalies in multi-channel time series, a bi-directional transformer-based predictor is applied to generate the prediction error sequences, and a statistical model referred to as an anomaly marked point process (Anomaly-MPP) is proposed to extract the anomalies from the error sequences. The effectiveness of our methods is demonstrated on a variety of time series datasets.
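The dynamic-thresholding idea on prediction errors can be sketched as follows (our own minimal illustration, not the dissertation's algorithm; the window size and multiplier k are hypothetical parameters): flag time points whose error exceeds the trailing-window mean plus k standard deviations.

```python
import numpy as np

def dynamic_threshold_anomalies(errors, window=50, k=3.0):
    """Flag indices whose prediction error exceeds the trailing-window mean
    plus k standard deviations (a small floor avoids zero-variance windows)."""
    errors = np.asarray(errors, dtype=float)
    flags = np.zeros(len(errors), dtype=bool)
    for t in range(window, len(errors)):
        w = errors[t - window:t]
        flags[t] = errors[t] > w.mean() + k * max(w.std(), 1e-8)
    return np.flatnonzero(flags)

# A flat error sequence with one injected spike.
errs = np.ones(100)
errs[80] = 10.0
anoms = dynamic_threshold_anomalies(errs)
# anoms -> [80]
```

Because the threshold is recomputed from recent errors at every step, it adapts to slow drifts in the predictor's error level, which is the property that motivates dynamic rather than fixed thresholding.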

Valuation, hedging and the risk management of insurance contracts

Barbarin, Jérôme 03 June 2008 (has links)
This thesis aims to contribute to the study of the valuation of insurance liabilities and the management of the assets backing those liabilities. It consists of four parts, each devoted to a specific topic. In the first part, we study the pricing of a classical single-premium life insurance contract with profit, in terms of a guaranteed rate on the premium and a participation rate on the (terminal) financial surplus. We argue that, given the asset allocation of the insurer, these technical parameters should be determined by explicitly taking into account the risk management policy of the insurance company, in terms of a risk measure such as the value-at-risk or the conditional value-at-risk. We then design a methodology that allows us to fix both parameters so that the contract is fairly priced and simultaneously exhibits a risk consistent with the risk management policy. In the second part, we focus on the management of the surrender option embedded in most life insurance contracts. In Chapter 2, we argue that the surrender time should be modelled as a random time not adapted to the filtration generated by the financial asset prices, instead of assuming that it is an optimal stopping time, as is usual in the actuarial literature. We then study the valuation of insurance contracts with a surrender option in such a model, following the financial literature on default risk and, in particular, reduced-form models. In Chapters 3 and 4, we study the hedging strategies of such insurance contracts: Chapter 3 studies their risk-minimizing strategies and Chapter 4 focuses on their "locally risk-minimizing" strategies. As a by-product, we study the impact of a progressive enlargement of filtration on the so-called "minimal martingale measure". The third part is devoted to systematic mortality risk. Due to its systematic nature, this risk cannot be diversified away by increasing the size of the portfolio.
It is thus also important to study the hedging strategies an insurer should follow to mitigate its exposure to this risk. In Chapter 5, we study the risk-minimizing strategies for a life insurance contract when no mortality-linked financial assets are traded on the financial market. We extend Dahl and Moller's results and show that the risk-minimizing strategy of a life insurance contract is given by a weighted average of risk-minimizing strategies of purely financial claims, where the weights are the (stochastic) survival probabilities. In Chapter 6, we first study the application of the HJM methodology to the modelling of a longevity bond market and describe a coherent theoretical setting in which longevity bond prices can be properly defined. We then study the risk-minimizing strategies for pure endowment and annuity portfolios when these longevity bonds are traded. Finally, the fourth part deals with the design of ALM strategies for a non-life insurance portfolio; in particular, it studies the risk-minimizing strategies for a non-life insurance company when inflation risk and interest rate risk are taken into account. We derive the general form of these strategies when the cumulative payments of the insurer are described by an arbitrary increasing process adapted to the natural filtration of a general marked point process, and when inflation and the term structure of interest rates are simultaneously described by the HJM model of Jarrow and Yildirim. We then systematically apply this result to four specific models of insurance claims: first two "collective" models, then two "individual" models in which claims are notified at a random time and settled over time.
