61 |
Conflict detection and resolution for autonomous vehicles / Van Daalen, Corne Edwin 03 1900 (has links)
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: Autonomous vehicles have recently received much attention from researchers. The prospect of
safe and reliable autonomous vehicles for general, unregulated environments promises several
advantages over human-controlled vehicles, including increased efficiency, reliability and capability
with the associated decrease in danger to humans and reduction in operating costs. A
critical requirement for the safe operation of fully autonomous vehicles is their ability to avoid
collisions with obstacles and other vehicles. In addition, they are often required to maintain a
minimum separation from obstacles and other vehicles, which is called conflict avoidance. The
research presented in this thesis focuses on methods for effective conflict avoidance.
Existing conflict avoidance methods either make limiting assumptions or cannot execute in
real-time due to computational complexity. This thesis proposes methods for real-time conflict
avoidance in uncertain, cluttered and dynamic environments. These methods fall into the
category of non-cooperative conflict avoidance. They allow very general vehicle and environment
models, with the only notable assumption being that the position and velocity states of the
vehicle and obstacles have a jointly Gaussian probability distribution.
Conflict avoidance for fully autonomous vehicles consists of three functions, namely modelling
and identification of the environment, conflict detection and conflict resolution. We
present an architecture for such a system that ensures stable operation.
The first part of this thesis comprises the development of a novel and efficient probabilistic
conflict detection method. This method processes the predicted vehicle and environment states
to compute the probability of conflict for the prediction period. During the method derivation,
we introduce the concept of the flow of probability through the boundary of the conflict region,
which enables us to significantly reduce the complexity of the problem. The method also assumes
Gaussian distributed states and defines a tight upper bound to the conflict probability, both
of which further reduce the problem complexity, and then uses adaptive numerical integration
for efficient evaluation. We present the results of two simulation examples which show that the
proposed method can calculate in real-time the probability of conflict for complex and cluttered
environments and complex vehicle maneuvers, offering a significant improvement over existing
methods.
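The core quantity in the first part, the probability that the Gaussian-distributed relative position between vehicle and obstacle falls inside the conflict region, can be illustrated with a brute-force Monte Carlo baseline. This is only a sketch with illustrative numbers and our own function names; the thesis derives a far more efficient boundary-flow method precisely because this naive approach is too slow for real-time use.

```python
import numpy as np

def conflict_probability(mean, cov, r_min, n_samples=200_000, rng=None):
    """Monte Carlo estimate of P(||p|| < r_min) for a Gaussian relative
    position p ~ N(mean, cov) between vehicle and obstacle."""
    rng = np.random.default_rng(rng)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    return np.mean(np.linalg.norm(samples, axis=1) < r_min)

# Obstacle 3 m ahead on average, 1 m^2 isotropic position uncertainty,
# 1 m minimum separation (illustrative numbers, not from the thesis).
p = conflict_probability(mean=[3.0, 0.0], cov=np.eye(2), r_min=1.0, rng=0)
```

Repeating this estimate at every time step of every candidate trajectory is what makes naive conflict detection expensive, which motivates the probability-flow formulation.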
The second part of this thesis adapts existing kinodynamic motion planning algorithms
for conflict resolution in uncertain, dynamic and cluttered environments. We use probabilistic
roadmap methods and suggest three changes to them, namely using probabilistic conflict detection
methods, sampling the state-time space instead of the state space and batch generation of
samples. In addition, we propose a robust and adaptive way to choose the size of the sampling
space using a maximum least connection cost bound. We then put all these changes together in
a proposed motion planner for conflict resolution. We present the results of two simulation examples
which show that the proposed motion planner can only find a feasible path in real-time
for simple and uncluttered environments. However, the manner in which we handle uncertainty
and the sampling space bounds offer significant contributions to the conflict resolution field. / AFRIKAANSE OPSOMMING: Autonomous vehicles have recently received much attention from researchers. The prospect of safe and reliable autonomous vehicles for general and unregulated environments promises several advantages over human-controlled vehicles, including higher efficiency, reliability and capability, together with the associated safety for humans and lower operating costs. An important requirement for the safe operation of fully autonomous vehicles is their ability to avoid collisions with obstacles and other vehicles. They are also often required to maintain a minimum separation distance from obstacles and other vehicles; this is called conflict avoidance. The research in this thesis focuses on methods for effective conflict avoidance.
Existing conflict avoidance methods either make limiting assumptions or execute too slowly as a result of computational complexity. This thesis proposes methods for real-time conflict avoidance in uncertain and dynamic environments that also contain many obstacles. The proposed methods fall into the class of non-cooperative conflict avoidance methods. They can handle general vehicle and environment models, and their only notable assumption is that the position and velocity states of the vehicle and obstacles have Gaussian probability distributions.
Conflict avoidance for fully autonomous vehicles consists of three steps, namely modelling and identification of the environment, conflict detection and conflict resolution. We present an architecture for such a system that ensures stable operation.
The first part of the thesis describes the development of an original and efficient method for probabilistic conflict detection. The method uses the predicted states of the vehicle and environment and computes the probability of conflict for the prediction period concerned. In the derivation of the method we define the concept of probability flow across the boundary of the conflict region. This enables us to reduce the complexity of the problem significantly. The method also assumes Gaussian probability distributions for the states and defines a tight upper bound on the probability of conflict to reduce the complexity of the problem further. Finally, the method uses adaptive integration methods for fast computation of the probability of conflict. The first part of the thesis concludes with two simulations which show that the proposed conflict detection method is able to compute the probability of conflict in real time, even for complex environments and vehicle manoeuvres. The method therefore makes a significant contribution to the field of conflict detection.
The second part of the thesis adapts existing kinodynamic planning algorithms for conflict resolution in complex environments. We propose three changes, namely the use of probabilistic conflict detection methods, the addition of a time dimension to the sampling space, and the generation of multiple samples at a time. We also propose a robust and adaptive way to choose the size of the sampling space. All the preceding proposals are combined in a planner for conflict resolution. The second part of the thesis concludes with two simulations which show that the proposed planner can find a solution in real time only for simple environments. However, the way in which the planner handles uncertainty and the bounding of the sampling space make valuable contributions to the field of conflict resolution.
|
62 |
Effective and efficient estimation of distribution algorithms for permutation and scheduling problems / Ayodele, Mayowa January 2018 (has links)
Estimation of Distribution Algorithms (EDAs) are a branch of evolutionary computation that learn a probabilistic model of good solutions. Probabilistic models are used to represent relationships between solution variables, which may give useful, human-understandable insights into real-world problems. Developing an effective probabilistic model has also been shown to significantly reduce the number of function evaluations needed to reach good solutions. This is useful for real-world problems because their representations are often complex, requiring more computation to arrive at good solutions. In particular, many real-world problems are naturally represented as permutations and have expensive evaluation functions. EDAs can, however, be computationally expensive when models are too complex. There has therefore been much recent work on developing suitable EDAs for permutation representations. EDAs can now produce state-of-the-art performance on some permutation benchmark problems. However, the models are still complex and computationally expensive, making them hard to apply to real-world problems. This study investigates some limitations of EDAs in solving permutation and scheduling problems. The focus of this thesis is on addressing redundancies in the Random Key representation, preserving diversity in EDAs, simplifying the complexity attributed to the use of multiple local improvement procedures, and transferring knowledge from solving a benchmark project scheduling problem to a similar real-world problem. In this thesis, we achieve state-of-the-art performance on Permutation Flowshop Scheduling Problem benchmarks while significantly reducing both the computational effort required to build the probabilistic model and the number of function evaluations. We also achieve competitive results on project scheduling benchmarks. Methods adapted for solving a real-world project scheduling problem present significant improvements.
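As a toy illustration of the Random Key idea the thesis builds on, the sketch below implements a minimal univariate EDA: each permutation is decoded by arg-sorting a real-valued key vector, and the "probabilistic model" is an independent Gaussian per key position refit to the elite each generation. Function names, parameters and the toy objective are our own; the thesis's actual models and benchmarks are far more sophisticated.

```python
import numpy as np

def random_key_eda(fitness, n, pop_size=100, elite=20, iters=50, rng=None):
    """Minimal univariate random-key EDA (to be minimized).
    Model: independent Gaussian per key position, refit to elite keys."""
    rng = np.random.default_rng(rng)
    mu, sigma = np.zeros(n), np.ones(n)
    best_perm, best_fit = None, np.inf
    for _ in range(iters):
        keys = rng.normal(mu, sigma, size=(pop_size, n))
        perms = np.argsort(keys, axis=1)          # decode keys -> permutations
        fits = np.array([fitness(p) for p in perms])
        order = np.argsort(fits)
        if fits[order[0]] < best_fit:
            best_fit, best_perm = fits[order[0]], perms[order[0]]
        elite_keys = keys[order[:elite]]
        mu = elite_keys.mean(axis=0)              # refit the probabilistic model
        sigma = elite_keys.std(axis=0) + 1e-3     # small floor preserves diversity
    return best_perm, best_fit

# Toy objective: number of out-of-order adjacent pairs (0 for the identity).
def n_descents(p):
    return int(np.sum(p[:-1] > p[1:]))

perm, fit = random_key_eda(n_descents, n=8, rng=0)
```

The sigma floor is a crude stand-in for the diversity-preservation issue the thesis studies: without it, the model collapses prematurely. The decoding step also exhibits the Random Key redundancy the thesis addresses, since many key vectors map to the same permutation.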
|
63 |
Extensions to Gaussian copula models / Fang, Yan 01 May 2012 (has links)
A copula is a representation of a multivariate distribution. Copulas are used to model multivariate data in many fields. Recent developments include copula models for spatial data and for discrete marginals. We will present a new methodological approach for modeling discrete spatial processes and for predicting the process at unobserved locations. We employ Bayesian methodology for both estimation and prediction. Comparisons between the new method and the Generalized Additive Model (GAM) are made to test prediction performance.
Although there exists a large variety of copula functions, only a few are practically manageable, and in certain problems one would like to choose the Gaussian copula to model the dependence. Furthermore, most copulas are exchangeable, thus implying symmetric dependence. However, none of them is flexible enough to capture tailed (upper-tailed or lower-tailed) distributions as well as elliptical distributions. An elliptical copula is the copula corresponding to an elliptical distribution by Sklar's theorem, so it can be used appropriately and effectively only to fit elliptical distributions. In reality, however, data may be better described by a "fat-tailed" or "tailed" copula than by an elliptical copula. This dissertation proposes a novel pseudo-copula (the modified Gaussian pseudo-copula), based on the Gaussian copula, to model dependencies in multivariate data. Our modified Gaussian pseudo-copula differs from the standard Gaussian copula in that it can model tail dependence. The modified Gaussian pseudo-copula captures properties from both elliptical copulas and Archimedean copulas. The modified Gaussian pseudo-copula and its properties are described. We focus on issues related to the dependence of extreme values. We give our pseudo-copula characteristics in the bivariate case, which can be extended to multivariate cases easily. The proposed pseudo-copula is assessed by estimating the measure of association from two real data sets, one from finance and one from insurance. A simulation study is done to test the goodness-of-fit of this new model. / Graduation date: 2012
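For reference, the standard Gaussian copula that serves as the starting point of the dissertation can be sampled in a few lines: draw from a multivariate normal with the chosen correlation matrix, then push each coordinate through the standard normal CDF. This is a generic sketch with our own names, not the proposed pseudo-copula; the resulting uniforms have Gaussian dependence and, notably, no tail dependence, which is exactly the limitation the modified pseudo-copula targets.

```python
import numpy as np
from math import erf

def gaussian_copula_sample(corr, n, rng=None):
    """Draw n samples from a Gaussian copula with correlation matrix corr.
    Returns points in (0,1)^d with uniform marginals and Gaussian dependence."""
    rng = np.random.default_rng(rng)
    d = corr.shape[0]
    z = rng.multivariate_normal(np.zeros(d), corr, size=n)
    # Standard normal CDF applied marginal-wise maps each coordinate to U(0,1)
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
u = gaussian_copula_sample(corr, n=50_000, rng=0)
```

Arbitrary marginals are then obtained by applying their inverse CDFs to the columns of `u`, which is the usual route from a copula sample to a multivariate data model.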
|
64 |
Cornish-Fisher Expansion and Value-at-Risk method in application to risk management of large portfolios / Sjöstrand, Maria; Aktaş, Özlem January 2011 (links)
One of the major problems faced by banks is how to manage the risk exposure in large portfolios. According to the Basel II regulation, banks have to measure the risk using Value-at-Risk with a confidence level of 99%. However, this regulation does not specify how to calculate Value-at-Risk. The easiest way to calculate Value-at-Risk is to assume that portfolio returns are normally distributed. Although this is the most common way to calculate Value-at-Risk, other methods exist as well. The previous crisis showed that the regular methods are unfortunately not always enough to prevent bankruptcy. This paper compares the classical methods of estimating risk with other methods, such as the Cornish-Fisher Expansion (CFVaR) and the assumption of a generalized hyperbolic distribution. For this study, we estimate the risk in a large portfolio consisting of ten stocks. These stocks are chosen from the NASDAQ-100 list in order to have highly liquid stocks (blue chips). The stocks are chosen from different sectors to make the portfolio well-diversified. To investigate the impact of dependence between the stocks in the portfolio, we remove the two most correlated stocks and consider the resulting eight-stock portfolio as well. In both portfolios we put equal weight on the included stocks. The results show that for a well-diversified large portfolio none of the risk measures are violated. However, for a portfolio consisting of only one highly volatile stock we show that the classical methods are violated, but not the modern methods mentioned above.
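The Cornish-Fisher adjustment compared in the paper corrects the Gaussian quantile with sample skewness and excess kurtosis before scaling it back to the return distribution. A minimal sketch of the standard fourth-order expansion follows (our own implementation and names, not the authors' code):

```python
import numpy as np
from statistics import NormalDist

def cornish_fisher_var(returns, alpha=0.99):
    """One-sided Value-at-Risk at level alpha via the Cornish-Fisher
    expansion: the Gaussian quantile is adjusted for sample skewness
    and excess kurtosis. Loss is reported as a positive number."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    x = (returns - mu) / sigma
    s = np.mean(x**3)                      # sample skewness
    k = np.mean(x**4) - 3.0                # sample excess kurtosis
    z = NormalDist().inv_cdf(1.0 - alpha)  # lower-tail quantile, about -2.326
    z_cf = (z + (z**2 - 1.0) * s / 6.0
              + (z**3 - 3.0 * z) * k / 24.0
              - (2.0 * z**3 - 5.0 * z) * s**2 / 36.0)
    return -(mu + sigma * z_cf)

# For Gaussian returns the correction vanishes and CFVaR reduces
# to the classical normal VaR.
rng = np.random.default_rng(0)
normal_var = cornish_fisher_var(rng.standard_normal(200_000))
```

For skewed or fat-tailed returns the correction pushes the quantile further into the tail, which is why CFVaR is less easily violated than the plain normal approximation for a single highly volatile stock.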
|
65 |
Speech Enhancement Using Nonnegative Matrix Factorization and Hidden Markov Models / Mohammadiha, Nasser January 2013 (links)
Reducing interference noise in a noisy speech recording has been a challenging task for many years, yet it has a variety of applications, for example, in hands-free mobile communications, in speech recognition, and in hearing aids. Traditional single-channel noise reduction schemes, such as Wiener filtering, do not work satisfactorily in the presence of non-stationary background noise. Alternatively, supervised approaches, where the noise type is known in advance, lead to higher-quality enhanced speech signals. This dissertation proposes supervised and unsupervised single-channel noise reduction algorithms. We consider two classes of methods for this purpose: approaches based on nonnegative matrix factorization (NMF) and methods based on hidden Markov models (HMM). The contributions of this dissertation can be divided into three main (overlapping) parts. First, we propose NMF-based enhancement approaches that use temporal dependencies of the speech signals. In a standard NMF, the important temporal correlations between consecutive short-time frames are ignored. We propose both continuous and discrete state-space nonnegative dynamical models. These approaches are used to describe the dynamics of the NMF coefficients or activations. We derive optimal minimum mean squared error (MMSE) or linear MMSE estimates of the speech signal using the probabilistic formulations of NMF. Our experiments show that using temporal dynamics in the NMF-based denoising systems improves the performance greatly. Additionally, this dissertation proposes an approach to learn the noise basis matrix online from the noisy observations. This relaxes the assumption of an a priori specified noise type and enables us to use the NMF-based denoising method in an unsupervised manner. Our experiments show that the proposed approach with online noise basis learning considerably outperforms state-of-the-art methods in different noise conditions.
Second, this thesis proposes two methods for NMF-based separation of sources with similar dictionaries. We suggest a nonnegative HMM (NHMM) for babble noise that is derived from a speech HMM. In this approach, speech and babble signals share the same basis vectors, whereas the activations of the basis vectors are different for the two signals over time. We derive an MMSE estimator for the clean speech signal using the proposed NHMM. The objective evaluations and the subjective listening tests performed show that the proposed babble model and the final noise reduction algorithm outperform the conventional methods noticeably. Moreover, the dissertation proposes another solution to separate a desired source from a mixture with arbitrarily low artifacts. Third, an HMM-based algorithm to enhance the speech spectra using super-Gaussian priors is proposed. Our experiments show that speech discrete Fourier transform (DFT) coefficients have super-Gaussian rather than Gaussian distributions, even if we limit the speech data to come from a specific phoneme. We derive a new MMSE estimator for the speech spectra that uses super-Gaussian priors. The results of our evaluations using the developed noise reduction algorithm support the super-Gaussianity hypothesis.
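The NMF building block used throughout this line of work, factorizing a magnitude spectrogram over a fixed speech-plus-noise dictionary and forming a Wiener-like soft mask, can be sketched as follows. This is a generic supervised-NMF baseline with our own function names, not the dissertation's dynamical or NHMM models.

```python
import numpy as np

def nmf_activations(V, W, n_iter=500, eps=1e-10, rng=None):
    """Infer nonnegative activations H with V ~= W @ H for a *fixed*
    dictionary W, using multiplicative updates for the KL divergence."""
    rng = np.random.default_rng(rng)
    H = rng.random((W.shape[1], V.shape[1])) + eps
    denom = W.sum(axis=0)[:, None] + eps      # W^T 1, fixed since W is fixed
    for _ in range(n_iter):
        H *= (W.T @ (V / (W @ H + eps))) / denom
    return H

def speech_mask(V, W_speech, W_noise, **kw):
    """Wiener-like soft mask from the speech/noise parts of the fit."""
    W = np.hstack([W_speech, W_noise])
    H = nmf_activations(V, W, **kw)
    ks = W_speech.shape[1]
    S = W_speech @ H[:ks]                     # speech magnitude estimate
    N = W_noise @ H[ks:]                      # noise magnitude estimate
    return S / (S + N + 1e-10)                # per-bin gain in [0, 1]

# Synthetic check: a spectrogram built exactly from a known dictionary
rng = np.random.default_rng(0)
W_true = rng.random((20, 6)) + 0.1
H_true = rng.random((6, 30))
V = W_true @ H_true
H_est = nmf_activations(V, W_true, rng=1)
```

Because the updates treat each column of `V` independently, temporal correlations between frames are ignored, which is precisely the gap the proposed nonnegative dynamical models fill.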
|
66 |
Statistical analysis of multiuser and narrowband interference and superior system designs for impulse radio ultra-wide bandwidth wireless / Shao, Hua Unknown Date
No description available.
|
67 |
Generating Generalized Inverse Gaussian Random Variates / Hörmann, Wolfgang; Leydold, Josef January 2013 (links) (PDF)
The generalized inverse Gaussian distribution has become quite popular in financial engineering. The most popular random variate generator is due to Dagpunar (1989). It is an acceptance-rejection algorithm based on the ratio-of-uniforms method. However, it is not uniformly fast, as it has a prohibitively large rejection constant when the distribution is close to the gamma distribution. Recently, some papers have discussed universal methods that are suitable for this distribution. However, these methods require an expensive setup and are therefore not suitable for the varying-parameter case, which occurs in, e.g., Gibbs sampling. In this paper we analyze the performance of Dagpunar's algorithm and combine it with a new rejection method which ensures a uniformly fast generator. As its setup is rather short, it is particularly suitable for the varying-parameter case. (authors' abstract) / Series: Research Report Series / Department of Statistics and Mathematics
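To make the setting concrete, here is a plain ratio-of-uniforms sampler for the GIG density, the naive rectangular version whose rejection constant degrades in exactly the regime the paper addresses. This is our own sketch (assuming chi > 0 and psi > 0), not Dagpunar's algorithm or the authors' improved generator.

```python
import numpy as np

def gig_kernel(x, lam, chi, psi):
    # Unnormalized GIG density: x^(lam-1) * exp(-(chi/x + psi*x) / 2)
    return x**(lam - 1.0) * np.exp(-0.5 * (chi / x + psi * x))

def gig_mode(lam, chi, psi):
    # Mode of the GIG(lam, chi, psi) kernel, from d/dx log g(x) = 0
    return ((lam - 1.0) + np.sqrt((lam - 1.0)**2 + chi * psi)) / psi

def sample_gig(lam, chi, psi, size, rng=None):
    """Ratio-of-uniforms with the minimal bounding rectangle:
    accept x = v/u when u^2 <= g(v/u), with (u, v) uniform on the box."""
    rng = np.random.default_rng(rng)
    # u_max = sup sqrt(g(x)), attained at the mode of g; v_max = sup x*sqrt(g(x)),
    # attained at the mode of x^2 g(x), which is the kernel of GIG(lam + 2, ...).
    u_max = np.sqrt(gig_kernel(gig_mode(lam, chi, psi), lam, chi, psi))
    xv = gig_mode(lam + 2.0, chi, psi)
    v_max = xv * np.sqrt(gig_kernel(xv, lam, chi, psi))
    out = []
    while len(out) < size:
        u = rng.uniform(0.0, u_max)
        v = rng.uniform(0.0, v_max)
        if u > 0.0 and u * u <= gig_kernel(v / u, lam, chi, psi):
            out.append(v / u)
    return np.array(out)
```

The rectangle bounds depend only on two mode evaluations, so the setup is cheap, but the acceptance rate collapses for near-gamma parameter values, which is the non-uniformity the paper's combined method removes.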
|
68 |
Degradation modeling for reliability analysis with time-dependent structure based on the inverse Gaussian distribution / Morita, Lia Hanna Martins 07 April 2017
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Conventional reliability analysis techniques are focused on the occurrence of failures over
time. However, in certain situations where the occurrence of failures is rare or almost null, the
estimation of the quantities that describe the failure process is compromised. In this context the
degradation models were developed, which have as experimental data not the failure, but some
quality characteristic attached to it. Degradation analysis can provide information about the
components' lifetime distribution without actually observing failures. In this thesis we proposed
different methodologies for degradation data based on the inverse Gaussian distribution.
Initially, we introduced the inverse Gaussian deterioration rate model for degradation data and
a study of its asymptotic properties with simulated data. We then proposed an inverse Gaussian
process model with frailty as a feasible tool to explore the influence of unobserved covariates,
and a comparative study with the traditional inverse Gaussian process based on simulated data
was made. We also presented a mixture inverse Gaussian process model for burn-in tests, in which the main interest is to determine the burn-in time and the optimal cutoff point that screen out the weak units from the normal ones in a production line, and a misspecification study was
carried out with the Wiener and gamma processes. Finally, we considered a more flexible
model with a set of cutoff points, wherein the misclassification probabilities are obtained by
the exact method with the bivariate inverse Gaussian distribution or an approximate method
based on copula theory. The application of the methodology was based on three real datasets in
the literature: the degradation of LASER components, locomotive wheels and cracks in metals. / Conventional reliability analysis techniques are aimed at the occurrence of failures over time. However, in certain situations in which the occurrence of failures is small or almost null, the estimation of the quantities that describe the failure times is compromised. In this context degradation models were developed, whose experimental data are not failures but some measurable characteristic attached to them. Degradation analysis can provide information about the lifetime distribution of components without actually observing failures. Thus, in this thesis we proposed different methodologies for degradation data based on the inverse Gaussian distribution. Initially, we introduced the inverse Gaussian deterioration rate model for degradation data and a study of its asymptotic properties with simulated data. Next, we presented an inverse Gaussian process model with frailty, considering that frailty is a good tool to explore the influence of unobserved covariates, and a comparative study with the usual inverse Gaussian process based on simulated data was carried out. We also showed a mixture model of inverse Gaussian processes in burn-in tests, where the main interest is to determine the burn-in time and the optimal cutoff point to separate the good items from the bad items in a production line, and a misspecification study with the Wiener and gamma processes was carried out. Finally, we considered a more flexible model with a set of cutoff points, in which the misclassification probabilities are estimated through the exact method with the bivariate inverse Gaussian distribution or through an approximate method based on copula theory. The methodology was applied to three real datasets on the degradation of LASER components, locomotive wheels and cracks in metals.
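An inverse Gaussian degradation process of the kind used in this thesis is easy to simulate, since it has independent inverse Gaussian increments. The sketch below uses our own parameterization and names, chosen so that E[Y(t)] = mu_rate * t, with a shape parameter eta controlling the path variability; a unit "fails" when its degradation path crosses a fixed threshold.

```python
import numpy as np

def ig_degradation_paths(t, mu_rate, eta, n_paths=1000, rng=None):
    """Simulate inverse Gaussian process paths Y(t): the increment over
    [t_i, t_{i+1}] is IG with mean mu_rate*dt and shape eta*(mu_rate*dt)^2,
    so paths are nondecreasing with E[Y(t)] = mu_rate * t."""
    rng = np.random.default_rng(rng)
    dt = np.diff(t)
    d_mean = mu_rate * dt                    # mean of each increment
    d_shape = eta * d_mean**2                # shape (lambda) of each increment
    # numpy's wald(mean, scale) draws inverse Gaussian variates
    inc = rng.wald(d_mean, d_shape, size=(n_paths, len(dt)))
    y = np.cumsum(inc, axis=1)
    return np.concatenate([np.zeros((n_paths, 1)), y], axis=1)

def failure_fraction(paths, threshold):
    """Fraction of paths whose degradation crossed the threshold by the end."""
    return np.mean(paths[:, -1] >= threshold)

t = np.linspace(0.0, 10.0, 51)
paths = ig_degradation_paths(t, mu_rate=1.0, eta=4.0, rng=0)
```

Estimating the distribution of the first time a path crosses the threshold is exactly how degradation data yield lifetime information without observed failures.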
|
69 |
Efficient high-dimension Gaussian sampling based on matrix splitting: application to Bayesian inversion / Bӑrbos, Andrei-Cristian 10 January 2018 (links)
The thesis deals with the problem of high-dimensional Gaussian sampling. Such a problem arises, for example, in Bayesian inverse problems in imaging, where the number of variables easily reaches an order of 10^6 to 10^9. The complexity of the sampling problem is intrinsically linked to the structure of the covariance matrix. Different solutions have already been proposed to address this problem, among which we highlight the Hogwild algorithm, which runs local Gibbs updates in parallel with periodic global synchronisation. Our algorithm uses the connection between a class of iterative samplers and iterative solvers for linear systems. It does not target the required Gaussian distribution but an approximate distribution. However, we are able to control the discrepancy between the approximate distribution and the required one by means of a single tuning parameter. We first compare our algorithm with the Gibbs and Hogwild algorithms on moderately sized problems for different target distributions. Our algorithm manages to outperform the Gibbs and Hogwild algorithms in most cases. Note that the performance of our algorithm depends on a tuning parameter. We then compare our algorithm with the Hogwild algorithm on a real high-dimensional application, namely image deconvolution-interpolation. The proposed algorithm obtains good results, whereas the Hogwild algorithm fails to converge. Note that for small values of the tuning parameter our algorithm fails to converge as well. Nevertheless, a suitably chosen value of this parameter allows our sampler to converge and to deliver good results.
/ The thesis deals with the problem of high-dimensional Gaussian sampling. Such a problem arises, for example, in Bayesian inverse problems in imaging, where the number of variables easily reaches an order of 10^6 to 10^9. The complexity of the sampling problem is inherently linked to the structure of the covariance matrix. Different solutions to tackle this problem have already been proposed, among which we emphasize the Hogwild algorithm, which runs local Gibbs sampling updates in parallel with periodic global synchronisation. Our algorithm makes use of the connection between a class of iterative samplers and iterative solvers for systems of linear equations. It does not target the required Gaussian distribution; instead it targets an approximate distribution. However, we are able to control how far off the approximate distribution is with respect to the required one by means of a single tuning parameter. We first compare the proposed sampling algorithm with the Gibbs and Hogwild algorithms on moderately sized problems for different target distributions. Our algorithm manages to outperform the Gibbs and Hogwild algorithms in most of the cases. Let us note that the performance of our algorithm depends on the tuning parameter. We then compare the proposed algorithm with the Hogwild algorithm on a large-scale real application, namely image deconvolution-interpolation. The proposed algorithm enables us to obtain good results, whereas the Hogwild algorithm fails to converge. Let us note that for small values of the tuning parameter our algorithm fails to converge as well. Notwithstanding, a suitably chosen value for the tuning parameter enables our proposed sampler to converge and to deliver good results.
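The connection between iterative samplers and iterative linear solvers can be illustrated with the simplest member of the family: a component-wise Gibbs sweep targeting N(0, A^{-1}), which corresponds to the Gauss-Seidel splitting A = M - N with M = D + L. This is a toy sketch with an illustrative precision matrix, not the thesis's approximate or accelerated sampler.

```python
import numpy as np

def gibbs_gaussian(A, n_iter=20_000, rng=None):
    """Component-wise Gibbs sampler targeting N(0, A^{-1}) for a symmetric
    positive-definite precision matrix A. Each sweep is a stochastic
    Gauss-Seidel iteration: deterministic solver update plus injected noise."""
    rng = np.random.default_rng(rng)
    d = A.shape[0]
    x = np.zeros(d)
    samples = np.empty((n_iter, d))
    for k in range(n_iter):
        for i in range(d):
            # Full conditional of x_i given the rest: N(m, 1 / A_ii)
            m = -(A[i] @ x - A[i, i] * x[i]) / A[i, i]
            x[i] = m + rng.standard_normal() / np.sqrt(A[i, i])
        samples[k] = x
    return samples

# Toy 3x3 precision matrix (illustrative, not from the thesis)
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])
samples = gibbs_gaussian(A, rng=0)
```

In high dimension the sequential sweep becomes the bottleneck, which is what motivates relaxed, parallelizable variants such as Hogwild and the splitting-based sampler proposed here.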
|
70 |
Defective models for cure rate modeling / Rocha, Ricardo Ferreira da 01 April 2016 (links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Modeling of a cure fraction, also known as long-term survivors, is a part of survival analysis. It studies cases where supposedly there are observations not susceptible to the event of interest. Such cases require special theoretical treatment, in a way that the modeling assumes the existence of such observations. We need to use some strategy to make the survival function converge to a value p ∈ (0, 1), representing the cure rate. A way to model cure rates is to use defective distributions. These distributions are characterized by having probability density functions which integrate to values less than one when the
domain of some of their parameters is different from that usually defined. There is not much literature about these distributions. There are at least two distributions in the literature that can be used for defective modeling: the Gompertz and the inverse Gaussian distribution. Defective models have the advantage of not needing the assumption of the presence of immune individuals in the data set. In order to use the theory of defective distributions in a competitive way, we need a larger variety of these distributions. Therefore, the main objective of this work is to increase the number of defective distributions that can be used in cure rate modeling. We investigate how to extend baseline models using some families of distributions. In addition, we derive a property of the Marshall-Olkin family of distributions that allows one to generate new defective models. / Cure fraction modeling is an important part of survival analysis. This area studies cases in which, supposedly, there are observations not susceptible to the event of interest. Such cases require special theoretical treatment, so that the modeling presupposes the existence of such observations. It is necessary to use some strategy to make the survival function converge to a value p ∈ (0, 1) that represents the cure rate. One way to model such fractions is through defective distributions. These distributions are characterized by having probability density functions that integrate to values less than one when the domain of some of their parameters is different from that in which it is usually defined. There are at least two defective distributions in the literature: the Gompertz and the inverse Gaussian. Defective models have the advantage of not needing to presuppose the presence of immune individuals in the data set. To use the theory of defective distributions in a competitive way, a greater variety of these distributions is necessary. Therefore, the main objective of this work is to increase the number of defective distributions that can be used in modeling cure fractions. We investigate how to extend the baseline defective models using certain families of distributions. In addition, we derive a property of the Marshall-Olkin family of distributions that allows generating a new class of defective models.
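For the Gompertz case mentioned above, the defectiveness is easy to exhibit: with a negative shape parameter the survival function no longer decreases to zero but levels off at exp(b/a), which is interpreted as the cure fraction. The sketch below uses illustrative parameter values and our own names.

```python
import numpy as np

def gompertz_survival(t, a, b):
    """Gompertz survival function S(t) = exp(-(b/a) * (exp(a*t) - 1)).
    For a < 0 the density integrates to 1 - exp(b/a) < 1, so the model
    is defective with limiting survival (cure fraction) p = exp(b/a)."""
    return np.exp(-(b / a) * (np.exp(a * t) - 1.0))

a, b = -0.5, 0.4                    # a < 0 gives a defective model
cure_fraction = np.exp(b / a)       # limiting survival, about 0.449
t = np.linspace(0.0, 30.0, 301)
S = gompertz_survival(t, a, b)
```

Fitting such a model estimates the cure rate directly from the survival curve's plateau, with no need to declare in advance which individuals are immune.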
|