11

O modelo de regressão odd log-logística gama generalizada com aplicações em análise de sobrevivência / The odd log-logistic generalized gamma regression model with applications in survival analysis

Fábio Prataviera 11 July 2017 (has links)
Propor uma família de distribuição de probabilidade mais ampla e flexível é de grande importância em estudos estatísticos. Neste trabalho é utilizado um novo método de adicionar um parâmetro para uma distribuição contínua. A distribuição gama generalizada, que tem como casos especiais a distribuição Weibull, exponencial, gama, qui-quadrado, é usada como distribuição base. O novo modelo obtido tem quatro parâmetros e é chamado odd log-logística gama generalizada (OLLGG). Uma das características interessantes do modelo OLLGG é o fato de apresentar bimodalidade. Outra proposta deste trabalho é introduzir um modelo de regressão chamado log-odd log-logística gama generalizada (LOLLGG) com base na GG (Stacy e Mihram, 1965). Este modelo pode ser muito útil quando, por exemplo, os dados amostrados possuem uma mistura de duas populações estatísticas. Outra vantagem da distribuição OLLGG consiste na capacidade de apresentar várias formas para a função de risco: crescente, decrescente, na forma de U e bimodal, entre outras. Desta forma, são apresentadas em ambos os casos as expressões explícitas para os momentos, função geradora e desvios médios. Considerando dados não censurados e censurados de forma aleatória, as estimativas para os parâmetros de interesse foram obtidas via método da máxima verossimilhança. Estudos de simulação, considerando diferentes valores para os parâmetros, porcentagens de censura e tamanhos amostrais, foram conduzidos com o objetivo de verificar a flexibilidade da distribuição e a adequabilidade dos resíduos no modelo de regressão. Para ilustrar, são realizadas aplicações em conjuntos de dados reais. / Providing a wider and more flexible family of probability distributions is of great importance in statistical studies. In this work a new method of adding a parameter to a continuous distribution is used. The generalized gamma (GG) distribution is used as the base distribution. The GG distribution has, as special cases, the Weibull, exponential, gamma and chi-square distributions, among others. For this reason, it is considered a flexible distribution in data modeling procedures. The new model, which has four parameters, is called the odd log-logistic generalized gamma (OLLGG) distribution. One of the interesting characteristics of the OLLGG model is the fact that it presents bimodality. In addition, a regression model called log-odd log-logistic generalized gamma (LOLLGG), based on the GG distribution (Stacy and Mihram, 1965), is introduced. This model can be very useful when, for example, the sampled data contain a mixture of two statistical populations. Another advantage of the OLLGG distribution is its ability to present various shapes for the failure rate function: increasing, decreasing, U-shaped and bimodal, among others. Explicit expressions for the moments, generating function and mean deviations are obtained. Considering non-censored and randomly censored data, the estimates for the parameters of interest were obtained using the maximum likelihood method. Simulation studies, considering different values for the parameters, censoring percentages and sample sizes, were conducted in order to verify the flexibility of the distribution and the adequacy of the residuals in the regression model. To illustrate, applications to real data sets are carried out.
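The odd log-logistic generator described above maps a baseline CDF G into G^α / (G^α + (1 − G)^α). As an illustrative sketch only (not the author's code), this can be written with `scipy.stats.gengamma` as the GG baseline; the function names and the numerical derivative for the density are my own choices:

```python
import numpy as np
from scipy.stats import gengamma

def ollgg_cdf(x, alpha, a, c, scale=1.0):
    """Odd log-logistic-G CDF with a generalized gamma baseline:
    G(x)**alpha / (G(x)**alpha + (1 - G(x))**alpha).
    alpha = 1 recovers the baseline GG distribution."""
    G = gengamma.cdf(x, a, c, scale=scale)
    return G**alpha / (G**alpha + (1.0 - G)**alpha)

def ollgg_pdf(x, alpha, a, c, scale=1.0, eps=1e-6):
    # central-difference derivative of the CDF; the thesis derives a
    # closed form, which is omitted here for brevity
    return (ollgg_cdf(x + eps, alpha, a, c, scale)
            - ollgg_cdf(x - eps, alpha, a, c, scale)) / (2.0 * eps)
```

For values of α far from 1 the resulting density can become bimodal, which is the feature the abstract highlights.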
12

On specification and inference in the econometrics of public procurement

Sundström, David January 2016 (has links)
In Paper [I] we use data on Swedish public procurement auctions for internal regular cleaning service contracts to provide novel empirical evidence regarding green public procurement (GPP) and its effect on the potential suppliers' decision to submit a bid and their probability of being qualified for supplier selection. We find only a weak effect on supplier behavior, which suggests that GPP does not live up to its political expectations. However, several environmental criteria appear to be associated with increased complexity, as indicated by the reduced probability of a bid being qualified in the post-qualification process. As such, GPP appears to have limited or no potential to function as an environmental policy instrument.

In Paper [II] the observation is made that empirical evaluations of the effect of policies transmitted through public procurements on bid sizes are made using linear regressions or more involved non-linear structural models. The aspiration is typically to determine a marginal effect. Here, I compare marginal effects generated under both types of specifications. I study how a political initiative to make firms less environmentally damaging, implemented through public procurement, influences Swedish firms' behavior. The collected evidence brings about a statistically as well as economically significant effect on firms' bids and costs.

Paper [III] embarks by noting that auction theory suggests that as the number of bidders (competition) increases, the sizes of the participants' bids decrease. An issue in the empirical literature on auctions is which measurement(s) of competition to use. Utilizing a dataset on public procurements containing measurements on both the actual and potential number of bidders, I find that a workhorse model of public procurements is best fitted to data using only actual bidders as the measurement for competition. Acknowledging that all measurements of competition may be erroneous, I propose an instrumental variable estimator that (given my data) brings about a competition effect bounded by those generated by specifications using the actual and potential number of bidders, respectively. Also, some asymptotic results are provided for non-linear least squares estimators obtained from a dependent variable transformation model.

Paper [IV] introduces a novel method to measure bidders' costs (valuations) in descending (ascending) auctions. Based on two bounded rationality constraints, bidders' costs (valuations) are given an imperfect-measurements interpretation robust to behavioral deviations from traditional rationality assumptions. Theory provides no guidance as to the shape of the cost (valuation) distributions, while empirical evidence suggests them to be positively skewed. Consequently, a flexible distribution is employed in an imperfect-measurements framework. An illustration of the proposed method on Swedish public procurement data is provided along with a comparison to a traditional Bayesian Nash equilibrium approach.
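The instrumental-variable idea in Paper [III] — instrumenting one noisy competition measurement with another whose measurement error is plausibly independent — can be illustrated with a toy two-stage least squares simulation. The data-generating process and coefficient values below are invented for illustration and are not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# hypothetical data-generating process
competition = rng.poisson(5, n) + 1              # true, unobserved competition
actual = competition + rng.normal(0, 1, n)       # noisy measurement 1
potential = competition + rng.normal(0, 1, n)    # noisy measurement 2
bids = 10.0 - 0.5 * competition + rng.normal(0, 1, n)

# OLS on a noisy measurement: attenuated toward zero by measurement error
X = np.column_stack([np.ones(n), actual])
beta_ols = np.linalg.lstsq(X, bids, rcond=None)[0]

# 2SLS: instrument 'actual' with 'potential'; their errors are independent,
# so the instrument correlates with true competition but not with the
# measurement error of 'actual'
Z = np.column_stack([np.ones(n), potential])
fitted = Z @ np.linalg.lstsq(Z, actual, rcond=None)[0]
X2 = np.column_stack([np.ones(n), fitted])
beta_iv = np.linalg.lstsq(X2, bids, rcond=None)[0]
```

In this toy setup the OLS slope is biased toward zero while the 2SLS slope recovers the true effect, which is the intuition behind treating competition measurements as error-ridden.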
13

雙變量Gamma與廣義Gamma分配之探討

曾奕翔 Unknown Date (has links)
Stacy (1962)首先提出廣義伽瑪分配 (generalized gamma distribution),此分布被廣泛應用於存活分析 (survival analysis) 以及可靠度 (reliability) 中壽命時間的資料描述。事實上,像是指數分配 (exponential distribution)、韋伯分配 (Weibull distribution) 以及伽瑪分配 (gamma distribution) 都是廣義伽瑪分配的一個特例。 Bologna (1987)提出一個特殊的雙變量廣義伽瑪分配 (bivariate generalized gamma distribution) 可以經由雙變量常態分配 (bivariate normal distribution) 所推得。我們根據他的想法,提出多變量廣義伽瑪分配可以經由多變量常態分配所推得。在過去的研究中,學者們做了許多有關雙變量伽瑪分配。當我們提到雙變量常態分配,由於其分配的型式為唯一的,所以沒人任何人對其分配的型式有疑問。然而,雙變量伽瑪分配卻有很多不同的型式。 在這篇論文中的架構如下。在第二章中,我們介紹並討論雙變量廣義伽瑪分配可以經由雙變量常態分配所推得,接著推導參數估計以及介紹模擬的程序。在第三章中,我們介紹一些對稱以及非對稱的雙變量伽瑪分配,接著拓展到雙變量廣義伽瑪分配,有關參數的估計以及模擬結果也將在此章中討論。在第三章最後,我們建構參數的敏感度分析 (sensitivity analysis)。最後,在第四章中,我們陳述結論以及未來研究方向。 / The generalized gamma distribution was introduced by Stacy (1962). This distribution is useful for describing lifetime data in survival analysis and reliability. In fact, it includes the widely used exponential, Weibull, and gamma distributions as special cases. Bologna (1987) showed that a special bivariate generalized gamma distribution can be derived from a bivariate normal distribution. Following his idea, we show that a multivariate generalized gamma distribution can be derived from a multivariate normal distribution. In the past, researchers devoted much work to bivariate gamma distributions. When a bivariate normal distribution is mentioned, no one is puzzled about its form, since it has only one form. However, there are various forms of bivariate gamma distributions. The structure of this thesis is as follows. In Chapter 2, we introduce and discuss the bivariate generalized gamma distribution derived from the bivariate normal distribution, then derive the multivariate generalized gamma distribution. We also develop parameter estimation and a simulation procedure. In Chapter 3, we introduce some symmetric and asymmetric bivariate gamma distributions and extend them to bivariate generalized gamma distributions. Problems of parameter estimation and simulation results are also discussed in Chapter 3. In addition, sensitivity analyses of the parameter estimates are conducted at the end of Chapter 3. Finally, we state conclusions and directions for future work in Chapter 4.
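One simple way to obtain correlated generalized gamma marginals from a multivariate normal — in the spirit of, though not identical to, Bologna's construction — is a Gaussian-copula transform of each normal marginal. The correlation and marginal parameters below are arbitrary illustrative values:

```python
import numpy as np
from scipy.stats import norm, gengamma

rng = np.random.default_rng(1)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=2000)

# probability integral transform of each normal marginal into a
# generalized gamma marginal (a Gaussian-copula construction)
u = norm.cdf(z)
x = gengamma.ppf(u[:, 0], a=2.0, c=1.5)
y = gengamma.ppf(u[:, 1], a=3.0, c=0.8)
```

The resulting pair (x, y) has exact generalized gamma marginals, and the dependence inherited from the normal correlation survives the monotone transform.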
14

Modélisation et traitement statistique d'images de microscopie confocale : application en dermatologie / Modeling and statistical treatment of confocal microscopy images : application in dermatology

Halimi, Abdelghafour 04 December 2017 (has links)
Dans cette thèse, nous développons des modèles et des méthodes statistiques pour le traitement d’images de microscopie confocale de la peau dans le but de détecter une maladie de la peau appelée lentigo. Une première contribution consiste à proposer un modèle statistique paramétrique pour représenter la texture dans le domaine des ondelettes. Plus précisément, il s’agit d’une distribution gaussienne généralisée dont on montre que le paramètre d’échelle est caractéristique des tissus sous-jacents. La modélisation des données dans le domaine de l’image est un autre sujet traité dans cette thèse. A cette fin, une distribution gamma généralisée est proposée. Notre deuxième contribution consiste alors à développer un estimateur efficace des paramètres de cette loi à l’aide d’une descente de gradient naturel. Finalement, un modèle d’observation de bruit multiplicatif est établi pour expliquer la distribution gamma généralisée des données. Des méthodes d’inférence bayésienne paramétrique sont ensuite développées avec ce modèle pour permettre la classification d’images saines et présentant un lentigo. Les algorithmes développés sont appliqués à des images réelles obtenues d’une étude clinique dermatologique. / In this thesis, we develop statistical models and processing methods for confocal microscopy images of the skin, with the goal of detecting a skin condition called lentigo. The first contribution is a parametric statistical model to represent texture in the wavelet domain. More precisely, a generalized Gaussian distribution is proposed, whose scale parameter is shown to be characteristic of the underlying tissues. The thesis also deals with modeling data in the image domain using the generalized gamma distribution. The second contribution develops an efficient parameter estimator for this distribution based on a natural-gradient descent. The third contribution establishes a multiplicative noise observation model to explain the generalized gamma distribution of the data. Parametric Bayesian inference methods are subsequently developed based on this model to classify healthy and lentigo images. All algorithms developed in this thesis have been applied to real images from a dermatological clinical study.
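The thesis develops a natural-gradient estimator for the generalized gamma parameters; as a rough stand-in (only a baseline sketch, not the estimator proposed in the thesis), scipy's generic maximum likelihood fit can recover the parameters of simulated data:

```python
import numpy as np
from scipy.stats import gengamma

rng = np.random.default_rng(2)
true_a, true_c, true_scale = 2.0, 1.5, 3.0   # arbitrary illustrative values
data = gengamma.rvs(true_a, true_c, scale=true_scale, size=5000,
                    random_state=rng)

# generic numerical maximum likelihood fit; loc is fixed at 0 because
# the data are strictly positive
a_hat, c_hat, loc_hat, scale_hat = gengamma.fit(data, floc=0)
```

Generalized gamma likelihoods are known to have flat ridges in (a, c, scale), which is one motivation for the more careful natural-gradient approach developed in the thesis.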
15

Análise de carteiras em tempo discreto / Discrete time portfolio analysis

Kato, Fernando Hideki 14 April 2004 (has links)
Nesta dissertação, o modelo de seleção de carteiras de Markowitz será estendido com uma análise em tempo discreto e hipóteses mais realísticas. Um produto tensorial finito de densidades Erlang será usado para aproximar a densidade de probabilidade multivariada dos retornos discretos uniperiódicos de ativos dependentes. A Erlang é um caso particular da distribuição Gama. Uma mistura finita pode gerar densidades multimodais não-simétricas e o produto tensorial generaliza este conceito para dimensões maiores. Assumindo que a densidade multivariada foi independente e identicamente distribuída (i.i.d.) no passado, a aproximação pode ser calibrada com dados históricos usando o critério da máxima verossimilhança. Este é um problema de otimização em larga escala, mas com uma estrutura especial. Assumindo que esta densidade multivariada será i.i.d. no futuro, então a densidade dos retornos discretos de uma carteira de ativos com pesos não-negativos será uma mistura finita de densidades Erlang. O risco será calculado com a medida Downside Risk, que é convexa para determinados parâmetros, não é baseada em quantis, não causa a subestimação do risco e torna os problemas de otimização uni e multiperiódico convexos. O retorno discreto é uma variável aleatória multiplicativa ao longo do tempo. A distribuição multiperiódica dos retornos discretos de uma seqüência de T carteiras será uma mistura finita de distribuições Meijer G. Após uma mudança na medida de probabilidade para a composta média, é possível calcular o risco e o retorno, que levará à fronteira eficiente multiperiódica, na qual cada ponto representa uma ou mais seqüências ordenadas de T carteiras. As carteiras de cada seqüência devem ser calculadas do futuro para o presente, mantendo o retorno esperado no nível desejado, o qual pode ser função do tempo. Uma estratégia de alocação dinâmica de ativos é refazer os cálculos a cada período, usando as novas informações disponíveis. 
Se o horizonte de tempo tender a infinito, então a fronteira eficiente, na medida de probabilidade composta média, tenderá a um único ponto, dado pela carteira de Kelly, qualquer que seja a medida de risco. Para selecionar um dentre vários modelos de otimização de carteira, é necessário comparar seus desempenhos relativos. A fronteira eficiente de cada modelo deve ser traçada em seu respectivo gráfico. Como os pesos dos ativos das carteiras sobre estas curvas são conhecidos, é possível traçar todas as curvas em um mesmo gráfico. Para um dado retorno esperado, as carteiras eficientes dos modelos podem ser calculadas, e os retornos realizados e suas diferenças ao longo de um backtest podem ser comparados. / In this thesis, Markowitz’s portfolio selection model will be extended by means of a discrete time analysis and more realistic hypotheses. A finite tensor product of Erlang densities will be used to approximate the multivariate probability density function of the single-period discrete returns of dependent assets. The Erlang is a particular case of the Gamma distribution. A finite mixture can generate multimodal asymmetric densities and the tensor product generalizes this concept to higher dimensions. Assuming that the multivariate density was independent and identically distributed (i.i.d.) in the past, the approximation can be calibrated with historical data using the maximum likelihood criterion. This is a large-scale optimization problem, but with a special structure. Assuming that this multivariate density will be i.i.d. in the future, then the density of the discrete returns of a portfolio of assets with nonnegative weights will be a finite mixture of Erlang densities. The risk will be calculated with the Downside Risk measure, which is convex for certain parameters, is not based on quantiles, does not cause risk underestimation and makes the single and multiperiod optimization problems convex. 
The discrete return is a multiplicative random variable over time. The multiperiod distribution of the discrete returns of a sequence of T portfolios will be a finite mixture of Meijer G distributions. After a change of the probability measure to the average compound measure, it is possible to calculate the risk and the return, which will lead to the multiperiod efficient frontier, on which each point represents one or more ordered sequences of T portfolios. The portfolios of each sequence must be calculated from the future to the present, keeping the expected return at the desired level, which can be a function of time. A dynamic asset allocation strategy is to redo the calculations at each period, using newly available information. If the time horizon tends to infinity, then the efficient frontier, in the average compound probability measure, will tend to a single point, given by the Kelly portfolio, whatever the risk measure. To select one among several portfolio optimization models, it is necessary to compare their relative performances. The efficient frontier of each model must be plotted in its respective graph. As the weights of the assets of the portfolios on these curves are known, it is possible to plot all curves in the same graph. For a given expected return, the efficient portfolios of the models can be calculated, and the realized returns and their differences over a backtest can be compared.
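A finite mixture of Erlang densities and a lower-partial-moment form of the Downside Risk measure, as used above, can be sketched as follows. The component weights, shapes, and scales are arbitrary illustrative values, not the calibrated tensor-product construction of the thesis:

```python
import numpy as np
from scipy.stats import erlang

# two-component Erlang mixture as a stand-in for the finite mixtures in
# the thesis; all parameter values are invented for illustration
weights = np.array([0.4, 0.6])
shapes = np.array([3, 8])        # Erlang shape parameters (integers)
scales = np.array([0.2, 0.15])

def mixture_pdf(x):
    """Density of the finite Erlang mixture."""
    return sum(w * erlang.pdf(x, k, scale=s)
               for w, k, s in zip(weights, shapes, scales))

def downside_risk(target, order=2, n=200_000, seed=0):
    """Lower partial moment E[max(target - R, 0)**order], estimated by
    Monte Carlo sampling from the mixture."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(weights), size=n, p=weights)
    r = erlang.rvs(shapes[comp], scale=scales[comp], random_state=rng)
    return np.mean(np.maximum(target - r, 0.0) ** order)
```

Because only shortfalls below the target enter the expectation, this risk measure does not penalize upside dispersion, which is one reason the thesis prefers it over quantile-based measures.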
