61

Extensions to Gaussian copula models

Fang, Yan 01 May 2012 (has links)
A copula is a representation of a multivariate distribution. Copulas are used to model multivariate data in many fields. Recent developments include copula models for spatial data and for discrete marginals. We present a new methodological approach for modeling discrete spatial processes and for predicting the process at unobserved locations. We employ Bayesian methodology for both estimation and prediction. Comparisons between the new method and the Generalized Additive Model (GAM) are made to test the prediction performance. Although there exists a large variety of copula functions, only a few are practically manageable, and in certain problems one would like to choose the Gaussian copula to model the dependence. Furthermore, most copulas are exchangeable, thus implying symmetric dependence. However, none of them is flexible enough to capture tailed (upper-tailed or lower-tailed) distributions as well as elliptical distributions. An elliptical copula is the copula corresponding to an elliptical distribution by Sklar's theorem, so it can be used appropriately and effectively only to fit elliptical distributions. In reality, however, data may be better described by a "fat-tailed" or "tailed" copula than by an elliptical copula. This dissertation proposes a novel pseudo-copula (the modified Gaussian pseudo-copula) based on the Gaussian copula to model dependencies in multivariate data. Our modified Gaussian pseudo-copula differs from the standard Gaussian copula in that it can model tail dependence. The modified Gaussian pseudo-copula captures properties from both elliptical copulas and Archimedean copulas. The modified Gaussian pseudo-copula and its properties are described. We focus on issues related to the dependence of extreme values. We give our pseudo-copula characteristics in the bivariate case, which can be extended to multivariate cases easily. The proposed pseudo-copula is assessed by estimating the measure of association from two real data sets, one from finance and one from insurance. A simulation study is done to test the goodness-of-fit of this new model. / Graduation date: 2012
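As a concrete illustration of the standard Gaussian copula construction that the dissertation starts from, here is a minimal Python sketch: sample correlated normals, map them to uniforms, and attach arbitrary marginals via Sklar's theorem. The correlation value, marginals, and sample size are illustrative assumptions; the proposed modified pseudo-copula is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7                                        # assumed copula correlation
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1) Correlated standard normals.
z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)

# 2) The normal CDF maps each margin to Uniform(0,1): these are copula samples.
u = stats.norm.cdf(z)

# 3) Sklar's theorem: plug the uniforms into any inverse marginal CDFs.
x1 = stats.expon.ppf(u[:, 0], scale=2.0)         # exponential margin
x2 = stats.t.ppf(u[:, 1], df=4)                  # heavy-tailed Student-t margin

# Kendall's tau of a Gaussian copula is (2/pi)*arcsin(rho), about 0.494 here;
# the Gaussian copula has zero tail dependence, which motivates the modification.
print("sample Kendall's tau:", stats.kendalltau(x1, x2)[0])
```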
62

Cornish-Fisher Expansion and Value-at-Risk method in application to risk management of large portfolios

Sjöstrand, Maria, Aktaş, Özlem January 2011 (has links)
One of the major problems faced by banks is how to manage the risk exposure in large portfolios. According to the Basel II regulation, banks have to measure the risk using Value-at-Risk with a confidence level of 99%. However, this regulation does not specify how to calculate Value-at-Risk. The easiest way to calculate Value-at-Risk is to assume that portfolio returns are normally distributed. Although this is the most common way to calculate Value-at-Risk, other methods also exist. The previous crisis showed that the regular methods are unfortunately not always enough to prevent bankruptcy. This paper compares the classical methods of estimating risk with other methods such as the Cornish-Fisher Expansion (CFVaR) and the assumption of a generalized hyperbolic distribution. For this study, we estimate the risk in a large portfolio consisting of ten stocks. These stocks are chosen from the NASDAQ 100 list in order to have highly liquid stocks (blue chips). The stocks are chosen from different sectors to make the portfolio well-diversified. To investigate the impact of dependence between the stocks in the portfolio, we remove the two most correlated stocks and consider the resulting eight-stock portfolio as well. In both portfolios we put equal weight on the included stocks. The results show that for a well-diversified large portfolio none of the risk measures is violated. However, for a portfolio consisting of only one highly volatile stock we show that there is a violation under the classical methods but not under the modern methods mentioned above.
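A minimal sketch of the two estimators being compared, using simulated fat-tailed returns as a stand-in for the thesis's NASDAQ data: the delta-normal VaR and the Cornish-Fisher adjusted quantile at the Basel II 99% level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Fat-tailed stand-in for daily portfolio returns (the thesis uses NASDAQ stocks).
returns = 0.01 * stats.t.rvs(df=4, size=2500, random_state=rng)

z = stats.norm.ppf(0.01)                  # 1% lower-tail normal quantile (~ -2.326)
mu, sigma = returns.mean(), returns.std(ddof=1)
S = stats.skew(returns)                   # sample skewness
K = stats.kurtosis(returns)               # sample excess kurtosis

# Cornish-Fisher adjustment of the normal quantile.
z_cf = (z + (z**2 - 1) * S / 6
          + (z**3 - 3 * z) * K / 24
          - (2 * z**3 - 5 * z) * S**2 / 36)

var_normal = -(mu + sigma * z)            # delta-normal 99% VaR
var_cf = -(mu + sigma * z_cf)             # Cornish-Fisher 99% VaR
print(f"normal VaR: {var_normal:.4f}, Cornish-Fisher VaR: {var_cf:.4f}")
```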
63

Speech Enhancement Using Nonnegative Matrix Factorization and Hidden Markov Models

Mohammadiha, Nasser January 2013 (has links)
Reducing interference noise in a noisy speech recording has been a challenging task for many years, yet it has a variety of applications, for example, in hands-free mobile communications, in speech recognition, and in hearing aids. Traditional single-channel noise reduction schemes, such as Wiener filtering, do not work satisfactorily in the presence of non-stationary background noise. Alternatively, supervised approaches, where the noise type is known in advance, lead to higher-quality enhanced speech signals. This dissertation proposes supervised and unsupervised single-channel noise reduction algorithms. We consider two classes of methods for this purpose: approaches based on nonnegative matrix factorization (NMF) and methods based on hidden Markov models (HMM). The contributions of this dissertation can be divided into three main (overlapping) parts. First, we propose NMF-based enhancement approaches that use temporal dependencies of the speech signals. In standard NMF, the important temporal correlations between consecutive short-time frames are ignored. We propose both continuous and discrete state-space nonnegative dynamical models. These approaches are used to describe the dynamics of the NMF coefficients or activations. We derive optimal minimum mean squared error (MMSE) or linear MMSE estimates of the speech signal using the probabilistic formulations of NMF. Our experiments show that using temporal dynamics in NMF-based denoising systems improves the performance greatly. Additionally, this dissertation proposes an approach to learn the noise basis matrix online from the noisy observations. This relaxes the assumption of an a priori specified noise type and enables us to use the NMF-based denoising method in an unsupervised manner. Our experiments show that the proposed approach with online noise basis learning considerably outperforms state-of-the-art methods in different noise conditions. Second, this thesis proposes two methods for NMF-based separation of sources with similar dictionaries. We suggest a nonnegative HMM (NHMM) for babble noise that is derived from a speech HMM. In this approach, speech and babble signals share the same basis vectors, whereas the activations of the basis vectors differ for the two signals over time. We derive an MMSE estimator for the clean speech signal using the proposed NHMM. The objective evaluations and the subjective listening test performed show that the proposed babble model and the final noise reduction algorithm outperform the conventional methods noticeably. Moreover, the dissertation proposes another solution to separate a desired source from a mixture with arbitrarily low artifacts. Third, an HMM-based algorithm to enhance the speech spectra using super-Gaussian priors is proposed. Our experiments show that speech discrete Fourier transform (DFT) coefficients have super-Gaussian rather than Gaussian distributions, even if we limit the speech data to come from a specific phoneme. We derive a new MMSE estimator for the speech spectra that uses super-Gaussian priors. The results of our evaluations using the developed noise reduction algorithm support the super-Gaussianity hypothesis. / QC 20130916
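To illustrate the NMF baseline that the dissertation's dynamical models extend, here is a minimal sketch of supervised NMF denoising with a Wiener-style mask. The dictionaries, spectrogram, and update count are toy placeholders; the temporal-dynamics and online-learning contributions are not reproduced.

```python
import numpy as np

def nmf_activations(V, W, n_iter=200, eps=1e-10):
    """Fit nonnegative activations H with the dictionary W fixed
    (multiplicative updates for the KL divergence)."""
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
    return H

# Toy dimensions: F frequency bins, T frames; the dictionaries stand in for
# bases trained offline on clean speech and on noise.
rng = np.random.default_rng(2)
F, T, Ks, Kn = 129, 50, 8, 4
W_speech, W_noise = rng.random((F, Ks)), rng.random((F, Kn))
V_noisy = rng.random((F, T))                     # noisy magnitude spectrogram

W = np.hstack([W_speech, W_noise])
H = nmf_activations(V_noisy, W)

# Wiener-style mask: the speech part of the model over the full model.
speech_hat, noise_hat = W_speech @ H[:Ks], W_noise @ H[Ks:]
mask = speech_hat / (speech_hat + noise_hat + 1e-10)
V_enhanced = mask * V_noisy
print("mask range:", mask.min(), mask.max())
```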
64

Statistical analysis of multiuser and narrowband interference and superior system designs for impulse radio ultra-wide bandwidth wireless

Shao, Hua Unknown Date
No description available.
65

Generating Generalized Inverse Gaussian Random Variates

Hörmann, Wolfgang, Leydold, Josef January 2013 (has links) (PDF)
The generalized inverse Gaussian distribution has become quite popular in financial engineering. The most popular random variate generator is due to Dagpunar (1989). It is an acceptance-rejection algorithm based on the ratio-of-uniforms method. However, it is not uniformly fast, as it has a prohibitively large rejection constant when the distribution is close to the gamma distribution. Recently some papers have discussed universal methods that are suitable for this distribution. However, these methods require an expensive setup and are therefore not suitable for the varying-parameter case which occurs in, e.g., Gibbs sampling. In this paper we analyze the performance of Dagpunar's algorithm and combine it with a new rejection method which ensures a uniformly fast generator. As its setup is rather short, it is particularly suitable for the varying-parameter case. (authors' abstract) / Series: Research Report Series / Department of Statistics and Mathematics
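For reference, a minimal sketch of the plain ratio-of-uniforms construction that generators such as Dagpunar's refine, applied to the two-parameter GIG density f(x) proportional to x^(lam-1) * exp(-omega*(x + 1/x)/2) for x > 0. The parameter values are illustrative, and this basic version exhibits exactly the non-uniform rejection cost the paper addresses.

```python
import numpy as np
from scipy import stats

def gig_rou(lam, omega, size, rng=None):
    """Basic ratio-of-uniforms sampler for the standardized GIG density."""
    rng = rng or np.random.default_rng(0)
    qf = lambda x: x**(lam - 1) * np.exp(-0.5 * omega * (x + 1.0 / x))
    # Bounding rectangle: u_max from the mode of f, v_max from the mode of x^2 f.
    m_u = ((lam - 1) + np.sqrt((lam - 1)**2 + omega**2)) / omega
    m_v = ((lam + 1) + np.sqrt((lam + 1)**2 + omega**2)) / omega
    u_max, v_max = np.sqrt(qf(m_u)), m_v * np.sqrt(qf(m_v))
    out, n = np.empty(size), 0
    while n < size:
        u = rng.uniform(0.0, u_max, size)
        v = rng.uniform(0.0, v_max, size)
        x = v / u
        acc = x[u * u <= qf(x)]           # accept when u^2 <= f(v/u)
        k = min(size - n, acc.size)
        out[n:n + k] = acc[:k]
        n += k
    return out

samples = gig_rou(lam=0.5, omega=1.5, size=100_000)
# scipy's geninvgauss(p, b) uses the same standardized parametrization.
print("sample mean:", samples.mean(), "exact mean:", stats.geninvgauss.mean(0.5, 1.5))
```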
66

Degradation modeling for reliability analysis with time-dependent structure based on the inverse Gaussian distribution / Modelagem de degradação para análise de confiabilidade com estrutura dependente do tempo baseada na distribuição gaussiana inversa

Morita, Lia Hanna Martins 07 April 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Conventional reliability analysis techniques focus on the occurrence of failures over time. However, in situations where failures are rare or absent, estimation of the quantities that describe the failure process is compromised. In this context degradation models were developed, which take as experimental data not the failure itself but some quality characteristic attached to it. Degradation analysis can provide information about the components' lifetime distribution without actually observing failures. In this thesis we propose different methodologies for degradation data based on the inverse Gaussian distribution. First, we introduce the inverse Gaussian deterioration rate model for degradation data and study its asymptotic properties with simulated data. We then propose an inverse Gaussian process model with frailty as a feasible tool to explore the influence of unobserved covariates, together with a comparative study against the traditional inverse Gaussian process based on simulated data. We also present a mixture inverse Gaussian process model for burn-in tests, whose main interest is to determine the burn-in time and the optimal cutoff point that screens out the weak units from the normal ones in a production line; a misspecification study was carried out with the Wiener and gamma processes. Finally, we consider a more flexible model with a set of cutoff points, wherein the misclassification probabilities are obtained either by an exact method using the bivariate inverse Gaussian distribution or by an approximate method based on copula theory. The methodology is illustrated with three real datasets from the literature: the degradation of LASER components, locomotive wheels, and cracks in metals.
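A minimal sketch of the inverse Gaussian process setup described above, assuming a linear mean function and illustrative parameters: degradation paths are built from independent IG increments, and pseudo-failure times are read off where a path crosses a threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, eta = 1.0, 4.0                 # assumed mean-rate and shape parameters
threshold = 8.0                    # assumed failure threshold
t = np.linspace(0.0, 10.0, 101)
dt = np.diff(t)

# IG(m, lam) with mean m and shape lam is scipy's invgauss(m/lam, scale=lam).
def ig_increments(m, lam, rng):
    return stats.invgauss.rvs(m / lam, scale=lam, size=m.shape, random_state=rng)

for path in range(3):
    inc = ig_increments(mu * dt, eta * dt**2, rng)   # independent IG increments
    D = np.concatenate([[0.0], np.cumsum(inc)])      # monotone degradation path
    if D.max() >= threshold:
        print(f"path {path}: pseudo-failure at t = {t[np.argmax(D >= threshold)]:.1f}")
    else:
        print(f"path {path}: threshold not reached by t = {t[-1]}")
```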
67

Efficient high-dimension Gaussian sampling based on matrix splitting: application to Bayesian inversion / Échantillonnage gaussien en grande dimension basé sur le principe du matrix splitting : application à l'inversion bayésienne

Bărbos, Andrei-Cristian 10 January 2018 (has links)
The thesis deals with the problem of high-dimensional Gaussian sampling. Such a problem arises, for example, in Bayesian inverse problems in imaging, where the number of variables easily reaches an order of 10^6 to 10^9. The complexity of the sampling problem is inherently linked to the structure of the covariance matrix. Different solutions to tackle this problem have already been proposed, among which we emphasize the Hogwild algorithm, which runs local Gibbs sampling updates in parallel with periodic global synchronisation. Our algorithm makes use of the connection between a class of iterative samplers and iterative solvers for systems of linear equations. It does not target the required Gaussian distribution; instead it targets an approximate distribution. However, we are able to control how far off the approximate distribution is with respect to the required one by means of a single tuning parameter. We first compare the proposed sampling algorithm with the Gibbs and Hogwild algorithms on moderately sized problems for different target distributions. Our algorithm manages to outperform the Gibbs and Hogwild algorithms in most of the cases. Note that the performance of our algorithm depends on the tuning parameter. We then compare the proposed algorithm with the Hogwild algorithm on a large-scale real application, namely image deconvolution-interpolation. The proposed algorithm enables us to obtain good results, whereas the Hogwild algorithm fails to converge. Note that for small values of the tuning parameter our algorithm fails to converge as well. Notwithstanding, a suitably chosen value of the tuning parameter enables our sampler to converge and to deliver good results.
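As a sketch of the baseline the thesis compares against, here is a component-wise Gibbs sampler for a Gaussian N(mu, Q^-1) specified by its precision matrix Q, written with the Gauss-Seidel-like sweep that underlies the matrix-splitting view. The thesis's approximate sampler and its tuning parameter are not reproduced; the dimension and precision matrix below are toy assumptions.

```python
import numpy as np

def gibbs_gaussian(Q, mu, n_iter=2000, rng=None):
    """Component-wise Gibbs sampler for N(mu, Q^{-1}) given the precision Q."""
    rng = rng or np.random.default_rng(4)
    d = Q.shape[0]
    x = np.zeros(d)
    samples = np.empty((n_iter, d))
    for it in range(n_iter):
        for i in range(d):
            # x_i | x_{-i} ~ N(mu_i - (1/Q_ii) * sum_{j != i} Q_ij (x_j - mu_j), 1/Q_ii)
            r = Q[i] @ (x - mu) - Q[i, i] * (x[i] - mu[i])
            x[i] = mu[i] - r / Q[i, i] + rng.standard_normal() / np.sqrt(Q[i, i])
        samples[it] = x
    return samples

d = 5
A = np.random.default_rng(5).standard_normal((d, d))
Q = A @ A.T + d * np.eye(d)                 # diagonally dominated SPD precision
samples = gibbs_gaussian(Q, np.zeros(d))
print("empirical vs exact variance of x_0:",
      samples[500:, 0].var(), np.linalg.inv(Q)[0, 0])
```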
68

Defective models for cure rate modeling

Rocha, Ricardo Ferreira da 01 April 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Modeling of a cure fraction, also known as long-term survivors, is a part of survival analysis. It studies cases where supposedly there are observations not susceptible to the event of interest. Such cases require special theoretical treatment, in a way that the modeling assumes the existence of such observations. We need to use some strategy to make the survival function converge to a value p ∈ (0, 1) representing the cure rate. One way to model cure rates is to use defective distributions. These distributions are characterized by probability density functions that integrate to values less than one when the domain of some of their parameters differs from the usual one. There is not much literature on these distributions. At least two distributions in the literature can be used for defective modeling: the Gompertz and the inverse Gaussian distribution. Defective models have the advantage of not needing the assumption that immune individuals are present in the data set. In order to use defective distribution theory competitively, we need a larger variety of these distributions. Therefore, the main objective of this work is to increase the number of defective distributions that can be used in cure rate modeling. We investigate how to extend baseline models using certain families of distributions. In addition, we derive a property of the Marshall-Olkin family of distributions that allows one to generate new defective models.
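To make the defective-distribution idea concrete, a minimal sketch with the Gompertz case mentioned above: for shape a < 0 the survival function no longer tends to zero but to a cure fraction p = exp(b/a) in (0, 1). The parameter values are illustrative.

```python
import numpy as np

def gompertz_survival(t, a, b):
    """S(t) = exp((b/a) * (1 - exp(a*t))); defective (improper) when a < 0."""
    return np.exp((b / a) * (1.0 - np.exp(a * t)))

a, b = -0.5, 0.3                         # illustrative values with a < 0
t = np.array([0.0, 1.0, 5.0, 20.0, 100.0])
print("S(t):", gompertz_survival(t, a, b))
print("cure fraction p = exp(b/a):", np.exp(b / a))  # limit of S(t) as t -> infinity
```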
69

Uma extensão da distribuição Birnbaum-Saunders baseada na distribuição gaussiana inversa / An extension of the Birnbaum-Saunders distribution based on the inverse Gaussian distribution

Ramos Quispe, Luz Marina, 1985- 27 August 2018 (has links)
Advisor: Filidor Edilfonso Vilca Labra / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Several works have been done on the univariate Birnbaum-Saunders (BS) distribution and its extensions. The bivariate Birnbaum-Saunders (BS) distribution was presented only recently by Kundu et al. (2010), and some extensions have already been discussed by Vilca et al. (2014) and Kundu et al. (2013). They proposed a bivariate BS distribution with a dependence structure and established several attractive properties. This work provides univariate and bivariate extensions of the BS distribution. These extensions are based on the Inverse Gaussian (IG) distribution, which is used as a mixing distribution in the context of scale mixtures of normals. The resulting distributions are absolutely continuous, and many properties of the BS distribution are preserved. In the bivariate case, the marginals and conditionals are of univariate Birnbaum-Saunders type. An EM algorithm is developed to obtain the maximum likelihood estimates (MLE) of the model parameters. We illustrate the results with real and simulated datasets. / Master's in Statistics
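A minimal sketch of the classical BS stochastic representation this work extends, plus one assumed form of the IG scale-mixture idea (replacing the standard normal Z by sqrt(W)*Z with W inverse Gaussian); the dissertation's exact construction and its EM estimation are not reproduced, and all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
alpha, beta = 0.5, 2.0                         # illustrative shape and scale
n = 100_000

def bs_samples(z, alpha, beta):
    """Classical BS representation: T = beta*(alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2."""
    h = alpha * z / 2.0
    return beta * (h + np.sqrt(h * h + 1.0))**2

t_bs = bs_samples(rng.standard_normal(n), alpha, beta)

# Assumed sketch of the IG-mixture extension: Z replaced by sqrt(W)*N(0,1),
# W ~ inverse Gaussian, i.e. a scale mixture of normals with IG mixing.
w = stats.invgauss.rvs(mu=1.0, size=n, random_state=rng)
t_ext = bs_samples(np.sqrt(w) * rng.standard_normal(n), alpha, beta)

print("BS sample mean vs theory beta*(1+alpha^2/2):",
      t_bs.mean(), beta * (1 + alpha**2 / 2))
print("extension sample mean:", t_ext.mean())
```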
70

Detekce a rozpoznávání obličeje / Face Detection and Recognition

Ponzer, Martin January 2009 (has links)
This paper discusses a computer vision problem: face detection and recognition in images and video sequences in real time. All methods are designed for color images and are based on skin detection using information about human skin color. Skin detection uses a very effective method based on a Gaussian distribution model. All areas with human skin color are then classified to decide which of them are faces. Face detection uses a correlation method combined with the eigenfaces method. All areas classified as faces are subsequently recognized by the eigenfaces method. The result of the recognition phase is information about the person's identity.
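A minimal sketch of the Gaussian skin-color model described above, assuming chrominance (Cb, Cr) features and toy training data: fit a 2-D Gaussian to known skin pixels, then threshold each pixel's likelihood. The threshold and color values are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def fit_skin_model(skin_cbcr):
    """Fit a 2-D Gaussian to (Cb, Cr) values sampled from known skin regions."""
    mean = skin_cbcr.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(skin_cbcr.T))
    return mean, cov_inv

def skin_likelihood(cbcr, mean, cov_inv):
    """Unnormalized Gaussian likelihood of each pixel's chrominance."""
    d = cbcr - mean
    return np.exp(-0.5 * np.einsum('...i,ij,...j->...', d, cov_inv, d))

rng = np.random.default_rng(7)
# Toy training pixels around typical skin chrominance values (assumed).
train = rng.normal([120.0, 155.0], [8.0, 10.0], size=(5000, 2))
mean, cov_inv = fit_skin_model(train)

image_cbcr = rng.uniform(16.0, 240.0, size=(48, 64, 2))   # stand-in CbCr image
mask = skin_likelihood(image_cbcr, mean, cov_inv) > 0.5   # assumed threshold
print("pixels classified as skin:", int(mask.sum()))
```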
