1

Quantile-based generalized logistic distribution

Omachar, Brenda V. January 2014 (has links)
This dissertation proposes a new quantile-based generalized logistic distribution, denoted GLDQB, using the quantile function of the generalized logistic distribution (GLO) as the basic building block. This four-parameter distribution is highly flexible with respect to distributional shape in that it captures extensive levels of skewness and kurtosis through the inclusion of two shape parameters. The parameter space as well as the distributional shape properties are discussed at length. The distribution is characterized through its L-moments, and an estimation algorithm is presented for estimating the distribution's parameters with method of L-moments estimation. This new distribution is then used to fit data and to approximate probability distributions. / Dissertation (MSc)--University of Pretoria, 2014. / Statistics / MSc / Unrestricted
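For reference, the GLO quantile function used as the building block can be written, in one common parameterization (Hosking's; the dissertation's own notation may differ), as

\[ Q(p) = \xi + \frac{\alpha}{\kappa}\left[1-\left(\frac{1-p}{p}\right)^{\kappa}\right], \qquad 0<p<1,\ \kappa\neq 0, \]

with the standard logistic quantile function \(Q(p)=\xi+\alpha\log\{p/(1-p)\}\) recovered in the limit \(\kappa\to 0\); the shape parameter \(\kappa\) is the handle through which skewness enters.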
2

Využití kvantilových funkcí při konstrukci pravděpodobnostních modelů mzdových rozdělení / An Application of Quantile Functions in Probability Model Constructions of Wage Distributions

Pavelka, Roman January 2004 (has links)
Between 1995 and 2008, the Average Earnings Information System, operating under the professional supervision of the Ministry of Labour and Social Affairs of the Czech Republic, collected wage and personal data on individual employees. Because this statistical survey gathers wage and personal data for specific employed persons, it is possible to obtain a wage distribution, that is, a picture of how wages are spread among individual employees. The values that wages can take over the whole wage interval are not deterministic but result from the interaction of many random influences; owing to this randomness, the wage must be treated as a random quantity with a probability density function. The spread of wages across all labour-market segments is described by a wage distribution. Even though the high-income employee category is evidently small, its incomes markedly affect the statistically reported average wage level and, in particular, the variability of the whole wage file. Wage data sets are therefore characterized by an average wage that exceeds the wages of the majority of employees, and by high variability due to great wage heterogeneity. Under such heterogeneity, the general approach to modeling earnings distributions — fitting a single chosen distribution function or probability density function — is not adequate. This leads to the idea of applying a quantile approach to statistical modeling, i.e., modeling an earnings distribution with an appropriate inverse distribution function. Probability modeling with generalized or compound forms of quantile functions makes it possible to better characterize a wage distribution marked by high asymmetry and wage heterogeneity. The inverse distribution function used as a probability model of a wage distribution can be expressed as a distributional mixture over partial employee groups: each component distribution of the mixture corresponds to an employee group with greater earnings homogeneity, and the partial employee subsets differ in the parameters of their component densities and in the shares of those densities in the total wage distribution of the wage file.
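As a toy illustration of the mixture idea described above (hypothetical parameters, not the thesis's model of the Czech wage data): a two-component lognormal mixture can stand in for a majority employee group plus a small high-income group, and since the mixture CDF is monotone, the quantile function of the wage distribution can be evaluated by numerical inversion.

    import numpy as np
    from scipy.stats import lognorm
    from scipy.optimize import brentq

    # Hypothetical two-component wage mixture: a large "majority" group and a
    # small high-income group. Weights and parameters are purely illustrative.
    components = [
        (0.9, lognorm(s=0.4, scale=25_000)),   # 90% of employees
        (0.1, lognorm(s=0.6, scale=90_000)),   # 10% high-income group
    ]

    def mixture_cdf(x):
        return sum(w * dist.cdf(x) for w, dist in components)

    def mixture_quantile(p, lo=1.0, hi=1e7):
        # The mixture CDF is monotone, so its inverse (the quantile function)
        # can be obtained by root-finding.
        return brentq(lambda x: mixture_cdf(x) - p, lo, hi)

    # Median and 99th percentile of the mixture:
    print(mixture_quantile(0.5), mixture_quantile(0.99))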
3

Modeling of generalized families of probability distributions in the quantile statistical universe

Van Staden, Paul Jacobus January 2013 (has links)
This thesis develops a methodology for the construction of generalized families of probability distributions in the quantile statistical universe, that is, distributions specified in terms of their quantile functions. The main benefit of the proposed methodology is that it generates quantile-based distributions with skewness-invariant measures of kurtosis. The skewness and kurtosis can therefore be identified and analyzed separately. The key contribution of this thesis is the development of a new type of the generalized lambda distribution (GLD), using the quantile function of the generalized Pareto distribution as the basic building block (in the literature each different type of the GLD is incorrectly referred to as a parameterization of the GLD – in this thesis the term type is used). The parameters of this new type can, contrary to existing types, easily be estimated with method of L-moments estimation, since closed-form expressions are available for the estimators as well as for their asymptotic standard errors. The parameter space and the shape properties of the new type are discussed in detail, including its characterization through L-moments. A simple estimation algorithm is presented and utilization of the new type in terms of data fitting and approximation of probability distributions is illustrated. / Thesis (PhD)--University of Pretoria, 2013. / gm2014 / Statistics / unrestricted
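For reference, the quantile function of a GLD type built from the generalized Pareto quantile function and its reflection appears in the related literature (van Staden and Loots's parameterization — stated here as an assumption, not quoted from the thesis) as

\[ Q(p) = \alpha + \beta\left[(1-\delta)\,\frac{p^{\lambda}-1}{\lambda} \;-\; \delta\,\frac{(1-p)^{\lambda}-1}{\lambda}\right], \qquad 0<p<1, \]

with location parameter \(\alpha\), scale \(\beta>0\), weight \(\delta\in[0,1]\) and shape \(\lambda\). In this form the L-kurtosis ratio depends on \(\lambda\) alone, which is the skewness-invariant kurtosis property emphasized above, and it is this structure that yields closed-form L-moment estimators.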
4

Solving the Differential Equation for the Probit Function Using a Variant of the Carleman Embedding Technique.

Alu, Kelechukwu Iroajanma 07 May 2011 (has links) (PDF)
The probit function is the inverse of the cumulative distribution function associated with the standard normal distribution. It is of great utility in statistical modelling. The Carleman embedding technique has been shown to be effective in solving first order and, less efficiently, second order nonlinear differential equations. In this thesis, we show that solutions to the second order nonlinear differential equation for the probit function can be approximated efficiently using a variant of the Carleman embedding technique.
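For context, the differential equation in question can be obtained from the inverse-function rule (a short standard derivation, not quoted from the thesis): writing \(w(p)=\Phi^{-1}(p)\) with standard normal density \(\varphi\),

\[ w'(p)=\frac{1}{\varphi(w(p))}, \qquad w''(p)=-\frac{\varphi'(w)\,w'(p)}{\varphi(w)^{2}}=w(p)\,\bigl(w'(p)\bigr)^{2}, \]

using \(\varphi'(x)=-x\,\varphi(x)\). The second-order nonlinear equation \(w''=w\,(w')^{2}\) is the natural target for the Carleman embedding variant.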
5

Quantile Function-based Models for Resource Utilization and Power Consumption of Applications

Möbius, Christoph 02 September 2016 (has links) (PDF)
Server consolidation is widely employed to improve the energy efficiency of data centers. While it is a promising technique, server consolidation may lead to resource interference between applications and thus reduced application performance. Current approaches to accounting for possible resource interference are not well suited to respect variation in application workloads; as a consequence, they cannot prevent resource interference when workloads vary. It is assumed that models describing the resource utilization and power consumption of applications as functions of their workload can improve decision making and help prevent resource interference in scenarios with varying workload. This thesis aims to develop such models for selected applications. To produce varying workload that resembles the statistical properties of real-world workload, a workload generator is developed in a first step. Usually, the measurement data for such models originates from different sensors and equipment, each producing data at a different frequency. To account for these different frequencies, in a second step this thesis investigates the feasibility of employing quantile functions as model inputs. Since conventional goodness-of-fit tests are not appropriate for this approach, an alternative method of assessing the estimation error is also presented.
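A minimal sketch of the quantile-function idea described above, using synthetic data in place of real sensor measurements (the thesis's actual pipeline and variable names are not reproduced here): two signals sampled at different frequencies are reduced to quantiles on a common probability grid, which yields input vectors of equal length for regression.

    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic stand-ins for measurements taken at different frequencies,
    # e.g. CPU utilization at 100 Hz and power draw at 1 Hz over the same window.
    utilization = rng.beta(2, 5, size=6000)      # 60 s at 100 Hz
    power = 80 + 40 * rng.beta(2, 5, size=60)    # 60 s at 1 Hz

    # Quantile functions evaluated on a shared probability grid make the two
    # series comparable despite their different sample counts.
    probs = np.linspace(0.01, 0.99, 99)
    q_util = np.quantile(utilization, probs)
    q_power = np.quantile(power, probs)

    # A simple linear model relating the two quantile functions.
    slope, intercept = np.polyfit(q_util, q_power, 1)
    print(f"power ~ {intercept:.1f} + {slope:.1f} * utilization (quantile domain)")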
6

A distribuição Kumaraswamy normal: propriedades, modelos de regressão linear e diagnóstico / The Kumaraswamy normal distribution: properties, linear regression models and diagnostics

Machado, Elizabete Cardoso 28 May 2019 (has links)
In this work, properties of a distribution belonging to the class of generalized Kumaraswamy distributions, called the Kumaraswamy normal, are studied. The Kumaraswamy normal distribution is formulated from the Kumaraswamy distribution and the normal distribution. The properties studied include: expansion of the probability density function in power series, the moment generating function, moments, the quantile function, Shannon and Rényi entropies, and order statistics. Two location-scale linear regression models are constructed for the Kumaraswamy normal distribution, one for uncensored data and the other allowing for censored observations. The parameters of these models are estimated by the maximum likelihood method, and diagnostic measures such as global influence, local influence and residuals are developed. For each regression model, an application to a real data set is presented.
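Assuming the usual Kumaraswamy-G construction \(F(x) = 1 - \{1 - G(x)^{a}\}^{b}\) with baseline \(G = \Phi\) (a sketch; the dissertation's parameterization may differ in details), the quantile function mentioned above inverts in closed form, which also gives inverse-transform sampling for free.

    import numpy as np
    from scipy.stats import norm

    def kw_normal_ppf(u, a, b, mu=0.0, sigma=1.0):
        """Quantile function of the Kumaraswamy-normal distribution,
        assuming F(x) = 1 - (1 - Phi((x - mu)/sigma)**a)**b."""
        inner = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)
        return norm.ppf(inner, loc=mu, scale=sigma)

    # Inverse-transform sampling: uniform draws pushed through the quantile function.
    rng = np.random.default_rng(0)
    sample = kw_normal_ppf(rng.uniform(size=10_000), a=2.0, b=0.5)
    print(sample.mean(), sample.std())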
7

Extensions of the normal distribution using the odd log-logistic family: theory and applications / Extensões da distribuição normal utilizando a família odd log-logística: teoria e aplicações

Braga, Altemir da Silva 23 June 2017 (has links)
In this study we propose three new distributions and a study with longitudinal data. The first is the odd log-logistic normal distribution: theory and applications in the analysis of experiments; the second is the odd log-logistic Student-t distribution: theory and applications; the third is the odd log-logistic skew-normal distribution: a new skew-bimodal distribution with applications in the analysis of experiments; and the fourth is a regression model with random effects for the odd log-logistic skew-normal distribution: an application to longitudinal data. Several properties are derived, such as symmetry, the quantile function, some expansions, ordinary incomplete moments, mean deviations and the moment generating function. The model parameters are estimated by the method of maximum likelihood. In the applications, regression models are fitted to data from completely randomized designs (CRD) or randomized block designs (DBC). The models can thus be used in practical situations involving completely randomized or randomized block designs, mainly when there is evidence of asymmetry, kurtosis and bimodality. / The normal distribution is one of the most important in statistics. It is not, however, adequate for fitting data that exhibit asymmetry or bimodality, since only its first two moments, the mean and the standard deviation, differ from zero. For this reason, many studies seek to create new families of distributions that can model the asymmetry, the kurtosis or the bimodality of data. It is important that such new distributions have good mathematical properties and include the normal distribution as a submodel; still, few classes of distributions contain the normal as a nested model, with the skew-normal, beta-normal, Kumaraswamy-normal and gamma-normal standing out among the existing proposals. In 2013 the odd log-logistic-G family of distributions was proposed with the aim of generating new probability distributions. Using the normal and skew-normal distributions as baselines, three new distributions were therefore proposed, together with a fourth study on longitudinal data: the odd log-logistic normal distribution (theory and applications to data from designed experiments); the odd log-logistic Student-t distribution (theory and applications); the odd log-logistic skew-bimodal distribution (with applications to data from designed experiments); and a regression model with random effects for the odd log-logistic skew-bimodal distribution (an application to longitudinal data). These distributions accommodate asymmetry, kurtosis and bimodality. Several of their properties were derived, such as symmetry, the quantile function, some expansions, the ordinary incomplete moments, mean deviations and the moment generating function. The flexibility of the new distributions was compared with the skew-normal, beta-normal, Kumaraswamy-normal and gamma-normal models. Parameter estimates were obtained by the method of maximum likelihood. In the applications, regression models were fitted to data from completely randomized designs (CRD) or randomized block designs (DBC). In addition, simulation studies were carried out for the new models to verify the asymptotic properties of the parameter estimates. Quantile residuals and sensitivity analysis were proposed to check for extreme values and assess the quality of the fits. The new models are thus grounded in mathematical properties, computational simulation studies and applications to data from designed experiments, and can be used in completely randomized or randomized block experiments, especially with data showing evidence of asymmetry, kurtosis and bimodality.
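For reference, the odd log-logistic-G family mentioned above is commonly written as \(F(x) = G(x)^{a} / [G(x)^{a} + \{1-G(x)\}^{a}]\) for a baseline CDF \(G\) and shape parameter \(a > 0\). A brief sketch of the normal-baseline case follows (notation assumed, not taken from the thesis):

    import numpy as np
    from scipy.stats import norm

    def oll_normal_cdf(x, a, mu=0.0, sigma=1.0):
        # Odd log-logistic-G CDF with a normal baseline G = Phi.
        G = norm.cdf(x, loc=mu, scale=sigma)
        return G**a / (G**a + (1.0 - G) ** a)

    def oll_normal_pdf(x, a, mu=0.0, sigma=1.0):
        # Density obtained by differentiating the CDF above.
        G = norm.cdf(x, loc=mu, scale=sigma)
        g = norm.pdf(x, loc=mu, scale=sigma)
        return a * g * (G * (1.0 - G)) ** (a - 1.0) / (G**a + (1.0 - G) ** a) ** 2

    # a = 1 recovers the baseline normal distribution exactly.
    x = np.linspace(-4, 4, 9)
    print(np.allclose(oll_normal_cdf(x, a=1.0), norm.cdf(x)))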
8

Quantile Function-based Models for Resource Utilization and Power Consumption of Applications

Möbius, Christoph 14 June 2016 (has links)
Server consolidation is widely employed to improve the energy efficiency of data centers. While it is a promising technique, server consolidation may lead to resource interference between applications and thus reduced application performance. Current approaches to accounting for possible resource interference are not well suited to respect variation in application workloads; as a consequence, they cannot prevent resource interference when workloads vary. It is assumed that models describing the resource utilization and power consumption of applications as functions of their workload can improve decision making and help prevent resource interference in scenarios with varying workload. This thesis aims to develop such models for selected applications. To produce varying workload that resembles the statistical properties of real-world workload, a workload generator is developed in a first step. Usually, the measurement data for such models originates from different sensors and equipment, each producing data at a different frequency. To account for these different frequencies, in a second step this thesis investigates the feasibility of employing quantile functions as model inputs. Since conventional goodness-of-fit tests are not appropriate for this approach, an alternative method of assessing the estimation error is also presented.
Contents: 1 Introduction; 2 Thesis Overview (2.1 Testbed; 2.2 Contributions and Thesis Structure; 2.3 Scope, Assumptions, and Limitations); 3 Generation of Realistic Workload (3.1 Statistical Properties of Internet Traffic; 3.2 Statistical Properties of Video Server Traffic; 3.3 Implementation of Workload Generation; 3.4 Summary); 4 Models for Resource Utilization and for Power Consumption (4.1 Introduction; 4.2 Prior Work; 4.3 Test Cases; 4.4 Applying Regression to Samples of Different Length; 4.5 Models for Resource Utilization as Function of Request Size; 4.6 Models for Power Consumption as Function of Resource Utilization; 4.7 Summary); 5 Conclusion & Future Work (5.1 Summary; 5.2 Future Work); Appendices.
9

Value at risk et expected shortfall pour des données faiblement dépendantes : estimations non-paramétriques et théorèmes de convergences / Value at risk and expected shortfall for weakly dependent random variables: nonparametric estimation and limit theorems

Kabui, Ali 19 September 2012 (has links)
Quantifying and measuring risk in a partially or totally uncertain environment is probably one of the major challenges of applied research in financial mathematics. It concerns economics and finance, but also other fields such as health, via insurance for example. One of the fundamental difficulties of this risk-management process is to model the underlying assets and then to approximate the risk from observations or simulations. Since randomness and uncertainty play a fundamental role in the evolution of assets in this field, the use of stochastic processes and statistical methods becomes crucial. In practice, the parametric approach is widely used: it consists of choosing a model from a parametric family, quantifying the risk as a function of the parameters, and estimating the risk by replacing the parameters with their estimates. This approach carries a major risk, namely misspecifying the model and thus underestimating or overestimating the risk. Starting from this observation, and with a view to minimizing model risk, we chose to address the question of risk quantification with a nonparametric approach that applies to models as general as possible. We focus on two risk measures widely used in practice and sometimes imposed by national or international regulations: the Value at Risk (VaR), which quantifies the maximum level of loss at a high confidence level (95% or 99%), and the Expected Shortfall (ES), which informs us about the average loss beyond the VaR.
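A minimal nonparametric sketch of the two estimators on an i.i.d. toy sample (the thesis treats the harder weakly dependent case; names and data below are illustrative only):

    import numpy as np

    def empirical_var_es(losses, alpha=0.99):
        """Empirical Value at Risk and Expected Shortfall at level alpha,
        with losses recorded as positive values."""
        losses = np.asarray(losses)
        var = np.quantile(losses, alpha)      # VaR: the alpha-quantile of the losses
        es = losses[losses >= var].mean()     # ES: average loss beyond the VaR
        return var, es

    rng = np.random.default_rng(1)
    toy_losses = rng.standard_t(df=4, size=100_000)  # heavy-tailed toy loss sample
    print(empirical_var_es(toy_losses, alpha=0.99))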
