About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Non-Parametric and Parametric Estimators of the Survival Function under Dependent Censorship

Qin, Yulin 22 November 2013 (has links)
No description available.
42

Density Estimation in Kernel Exponential Families: Methods and Their Sensitivities

Zhou, Chenxi January 2022 (has links)
No description available.
43

Statistical Inference for r-out-of-n F-system Based on Birnbaum-Saunders Distribution

Zhou, Yiliang January 2017 (has links)
The r-out-of-n F-system and the load-sharing system are common in industrial engineering. Statistical inference is developed here for an equal load-sharing r-out-of-n F-system based on the Birnbaum-Saunders (BS) lifetime distribution. A simulation study is carried out with different parameter values and different censoring rates in order to examine the performance of the proposed estimation method. Moreover, to find maximum likelihood estimates numerically, three methods of finding initial values for the parameters are developed: the pseudo-complete sample method, the Type-II modified moment estimators of the BS distribution, and the stochastic approximation method. These three methods are then compared based on the number of iterations and simulation time. Two real data sets and one simulated data set are used for illustrative purposes. Finally, some concluding comments are made, including possible future directions for investigation. / Thesis / Master of Science (MSc)
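The modified moment estimators of the BS distribution that the abstract cites as one initialization method have a simple closed form for a complete (uncensored) sample. A minimal sketch of that complete-sample case, assuming an i.i.d. sample of positive lifetimes (the Type-II censored variant used in the thesis is more involved; the function name is illustrative):

```python
import math

def bs_modified_moment_estimates(lifetimes):
    """Modified moment estimators for the Birnbaum-Saunders
    distribution from a complete sample:
        s = arithmetic mean, r = harmonic mean,
        beta_hat  = sqrt(s * r),
        alpha_hat = sqrt(2 * (sqrt(s / r) - 1)).
    These serve as starting values for numerical MLE."""
    n = len(lifetimes)
    s = sum(lifetimes) / n                       # arithmetic mean
    r = n / sum(1.0 / t for t in lifetimes)      # harmonic mean
    beta_hat = math.sqrt(s * r)
    alpha_hat = math.sqrt(2.0 * (math.sqrt(s / r) - 1.0))
    return alpha_hat, beta_hat
```

Because both means are cheap to compute, these estimates make a convenient first iterate for a Newton-type maximization of the BS log-likelihood.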
44

Uncertainty, Identification, And Privacy: Experiments In Individual Decision-making

Rivenbark, David 01 January 2010 (has links)
The alleged privacy paradox states that individuals report high values for personal privacy, while at the same time they report behavior that contradicts a high privacy value. This is a misconception. Reported privacy behaviors are explained by asymmetric subjective beliefs. Beliefs may or may not be uncertain, and non-neutral attitudes towards uncertainty are not necessary to explain behavior. This research was conducted in three related parts. Part one presents an experiment in individual decision making under uncertainty. Ellsberg's canonical two-color choice problem was used to estimate attitudes towards uncertainty. Subjects believed bets on the color of the ball drawn from Ellsberg's ambiguous urn were equally likely to pay. Estimated attitudes towards uncertainty were insignificant. Subjective expected utility explained subjects' choices better than uncertainty aversion and the uncertain priors model. A second treatment tested Vernon Smith's conjecture that preferences in Ellsberg's problem would be unchanged when the ambiguous lottery is replaced by a compound objective lottery. The use of an objective compound lottery to induce uncertainty did not affect subjects' choices. The second part of this dissertation extends the concept of uncertainty to commodities where quality and accuracy of a quality report were potentially ambiguous. The uncertain priors model is naturally extended to allow for potentially different attitudes towards these two sources of uncertainty, quality and accuracy. As they relate to privacy, quality and accuracy of a quality report are seen as metaphors for online security and consumer trust in e-commerce, respectively. The results of parametric structural tests were mixed. Subjects made choices consistent with neutral attitudes towards uncertainty in both the quality and accuracy domains.
However, allowing for uncertainty aversion in the quality domain and not the accuracy domain outperformed the alternative which only allowed for uncertainty aversion in the accuracy domain. Finally, part three integrated a public-goods game and punishment opportunities with the Becker-DeGroot-Marschak mechanism to elicit privacy values, replicating previously reported privacy behaviors. The procedures developed elicited punishment (consequence) beliefs and information confidentiality beliefs in the context of individual privacy decisions. Three contributions are made to the literature. First, by using cash rewards as a mechanism to map actions to consequences, the study eliminated hypothetical bias as a confounding behavioral factor which is pervasive in the privacy literature. Econometric results support the 'privacy paradox' at levels greater than 10 percent. Second, the roles of asymmetric beliefs and attitudes towards uncertainty were identified using parametric structural likelihood methods. Subjects were, in general, uncertainty neutral and believed 'bad' events were more likely to occur when their private information was not confidential. A third contribution is a partial test to determine which uncertain process, loss of privacy or the resolution of consequences, is of primary importance to individual decision-makers. Choices were consistent with uncertainty neutral preferences in both the privacy and consequences domains.
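Under the subjective expected utility account that the results favor, a bet on the ambiguous urn is valued exactly like the risky 50/50 bet once beliefs are symmetric. A toy illustration of that point (all names and numbers are illustrative, not the experimental parameters):

```python
def expected_utility(p_win, prize, utility=lambda x: x):
    """Subjective expected utility of a bet that pays `prize`
    with subjective win probability p_win, and 0 otherwise."""
    return p_win * utility(prize) + (1.0 - p_win) * utility(0.0)

# Ellsberg two-color setup: a risky urn with a known 50/50 mix,
# and an ambiguous urn to which the subject assigns a subjective
# belief q of drawing the winning color.
prize = 10.0
risky = expected_utility(0.5, prize)
q = 0.5                       # symmetric beliefs, as estimated
ambiguous = expected_utility(q, prize)
# With symmetric beliefs and a risk-neutral utility, the two bets
# are valued identically -- no ambiguity premium is needed.
```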
45

Analysis of Agreement Between Two Long Ranked Lists

Sampath, Srinath January 2013 (has links)
No description available.
46

Reliability Assessment for Complex Systems Using Multi-level, Multi-type Reliability Data and Maximum Likelihood Method

Li, Xiangfei 24 September 2014 (has links)
No description available.
47

Updating Bridge Deck Condition Transition Probabilities as New Inspection Data are Collected: Methodology and Empirical Evaluation

Li, Zequn January 2017 (has links)
No description available.
48

Stochastic modeling of the sleep process

Gibellato, Marilisa Gail 09 March 2005 (has links)
No description available.
49

Enhancements in Markovian Dynamics

Ali Akbar Soltan, Reza 12 April 2012 (has links)
Many common statistical techniques for modeling multidimensional dynamic data sets can be seen as variants of one (or multiple) underlying linear/nonlinear model(s). These statistical techniques fall into two broad categories, supervised and unsupervised learning. The emphasis of this dissertation is on unsupervised learning under multiple generative models. For linear models, this has been achieved through collective observations and derivations made by previous authors during the last few decades. Factor analysis, polynomial chaos expansion, principal component analysis, Gaussian mixture clustering, vector quantization, and Kalman filter models can all be unified as variations of unsupervised learning under a single basic linear generative model. Hidden Markov modeling (HMM), however, is categorized as unsupervised learning under multiple linear/nonlinear generative models. This dissertation is primarily focused on hidden Markov models (HMMs). In the first half of this dissertation we study enhancements to the theory of hidden Markov modeling. These include three branches: 1) a robust, closed-form parameter estimation solution to the expectation maximization (EM) process of HMMs for the case of elliptically symmetric densities; 2) a two-step HMM, with a combined state sequence via an extended Viterbi algorithm for smoother state estimation; and 3) a duration-dependent HMM, for estimating the expected residency frequency in each state. The second half of the dissertation studies three novel applications of these methods: 1) applications of Markov switching models to bifurcation theory in nonlinear dynamics; 2) a game-theoretic application of HMMs, based on the fundamental theory of card counting, with an example on the game of Baccarat; and 3) trust modeling and the estimation of trustworthiness metrics in cyber security systems via Markov switching models.
As a result of the duration-dependent HMM, we achieved a better estimate of the expected duration of stay in each regime. The robust and closed-form solution to the EM algorithm provided robustness against outliers in the training data set as well as higher computational efficiency in the maximization step of the EM algorithm. By means of the two-step HMM we achieved smoother probability estimation with higher likelihood than the standard HMM. / Ph. D.
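The extended Viterbi algorithm itself is not spelled out in the abstract; for reference, the standard Viterbi recursion it builds on can be sketched as follows (a generic discrete-HMM decoder, not the author's two-step variant):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding: the most likely hidden-state
    path for an observation sequence under a discrete HMM."""
    # V[t][s] = max probability of any path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace back the best path from the most probable final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path)), V[-1][last]
```

The duration-dependent variant described in the abstract would additionally condition the transition probabilities on the time already spent in the current state.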
50

Modelos de regressão beta com erro nas variáveis / Beta regression model with measurement error

Carrasco, Jalmar Manuel Farfan 25 May 2012 (has links)
In this thesis, we propose a beta regression model with measurement error. Among nonlinear models with measurement error, such a model has not been studied extensively. Here, we discuss estimation methods such as approximate maximum likelihood, approximate pseudo-maximum likelihood, and regression calibration. The maximum likelihood method estimates parameters by directly maximizing the logarithm of the likelihood function. The pseudo-maximum likelihood method is used when the inference in a given model involves only some, but not all, parameters. Hence, we say that the model under study presents parameters of interest as well as nuisance parameters. When we replace the true covariate (an unobserved variable) with an estimate of the conditional expectation of the unobserved variable given the observed one, the method is known as regression calibration. We compare these estimation methods through a Monte Carlo simulation study. This simulation study shows that the maximum likelihood and pseudo-maximum likelihood methods perform better than the regression calibration method and the naïve approach. We use the programming language Ox (Doornik, 2011) as a computational tool.
We calculate the asymptotic distribution of the estimators in order to construct confidence intervals and test hypotheses, as proposed by Carroll et al. (2006, Section A.6.6), Guolo (2011), and Gong and Samaniego (1981). Moreover, we use the likelihood ratio and gradient statistics to test hypotheses, and we carry out a simulation study to evaluate the performance of these tests. We develop diagnostic techniques for the beta regression model with measurement error. We propose weighted standardized residuals, as defined by Espinheira (2008), to verify the model assumptions and to detect outliers. Measures of global influence, such as the generalized Cook's distance and the likelihood displacement, are used to detect influential points. In addition, we use the conformal local influence technique under three perturbation schemes: case weighting, perturbation of the response variable, and perturbation of the covariate with and without measurement error. We apply our results to two real data sets to illustrate the theory developed. Finally, we present our conclusions and possible future work.
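The regression calibration step described in the abstract replaces the unobserved covariate with its conditional expectation given the observed proxy. Under a classical additive error model with known (or estimated) moments, this reduces to a linear shrinkage toward the covariate mean. A minimal sketch of that step, assuming normal covariate and noise (names are illustrative; this is not the authors' Ox implementation):

```python
def regression_calibration(w, mu_x, var_x, var_u):
    """Classical measurement-error model: w = x + u, with
    x ~ (mu_x, var_x) and independent noise u ~ (0, var_u).
    The best linear predictor of the unobserved x given w is
        E[x | w] = mu_x + k * (w - mu_x),
        k = var_x / (var_x + var_u)   (the reliability ratio).
    Regression calibration fits the outcome model (here, a beta
    regression) with this imputed covariate in place of x."""
    k = var_x / (var_x + var_u)
    return [mu_x + k * (wi - mu_x) for wi in w]
```

The imputed values then enter the beta regression mean model exactly as an error-free covariate would; the shrinkage factor `k` shows why naïve estimation (using `w` directly, i.e. `k = 1`) attenuates the slope when `var_u > 0`.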
