171

Modelos lineares generalizados mistos multivariados para caracterização genética de doenças / Multivariate generalized linear mixed models for genetic characterization of diseases

Baldoni, Pedro Luiz, 1989- 24 August 2018 (has links)
Advisor: Hildete Prisco Pinheiro / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação / Abstract: Generalized Linear Mixed Models (GLMM) are a natural generalization of Linear Mixed Models (LMM) and Generalized Linear Models (GLM). The GLMM class relaxes the normality assumption of the data, allowing many other response distributions, and it accommodates both the overdispersion frequently observed in practice and the correlation among observations in longitudinal or repeated-measures studies. Likelihood theory for GLMMs is not straightforward, however, because the marginal likelihood has no closed form and involves high-dimensional integrals. Several methodologies have been proposed in the literature to address this, ranging from classical techniques such as numerical quadrature to more elaborate approaches based on the EM algorithm, MCMC methods, and penalized quasi-likelihood; each has advantages and disadvantages that must be weighed for the problem at hand. In this work, the penalized quasi-likelihood method of Breslow and Clayton (1993) is used to model disease occurrence in a population of dairy cattle, since it proved robust to the likelihood difficulties posed by this data set, whereas the other methods were not computationally tractable given the complexity typical of problems in quantitative genetics. Simulation studies are presented to verify the robustness of the methodology. The stability of the estimators and the robustness theory for this problem are not yet fully developed in the literature / Master's / Statistics / Mestre em Estatística
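Since penalized quasi-likelihood (PQL) is the workhorse of the abstract above, a minimal sketch may make it concrete. The code below fits a random-intercept logistic GLMM by iterating Breslow and Clayton's working-response linearization; it is an illustration only, not the thesis's implementation, and it treats the variance component `sigma2` as known (the full method re-estimates it, e.g. by REML on the working responses, at each step). All names are hypothetical.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def pql_logistic(y, X, groups, sigma2=1.0, n_iter=100, tol=1e-8):
    """PQL for a random-intercept logistic GLMM, variance component known."""
    n, p = X.shape
    q = int(groups.max()) + 1
    Z = np.zeros((n, q))
    Z[np.arange(n), groups] = 1.0        # random-intercept design matrix
    beta, b = np.zeros(p), np.zeros(q)
    for _ in range(n_iter):
        eta = X @ beta + Z @ b
        mu = expit(eta)
        w = mu * (1.0 - mu)              # IRLS weights
        z = eta + (y - mu) / w           # working (pseudo) response
        # Henderson's mixed-model equations for the weighted LMM on z
        XtW, ZtW = X.T * w, Z.T * w
        A = np.block([[XtW @ X, XtW @ Z],
                      [ZtW @ X, ZtW @ Z + np.eye(q) / sigma2]])
        sol = np.linalg.solve(A, np.concatenate([XtW @ z, ZtW @ z]))
        done = np.max(np.abs(sol[:p] - beta)) < tol
        beta, b = sol[:p], sol[p:]
        if done:
            break
    return beta, b
```

Each pass is one weighted linear-mixed-model fit to the working response, which is exactly why PQL stays computable on data sets where integrating the marginal likelihood does not.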
172

Locally Optimal Experimental Designs for Mixed Responses Models

January 2020 (has links)
abstract: Bivariate responses that comprise mixtures of binary and continuous variables are common in medical, engineering, and other scientific fields. Many works concern the analysis of such mixed data, but research on optimal designs for this type of experiment is still scarce. The joint mixed responses model considered here combines an ordinary linear model for the continuous response with a generalized linear model for the binary response. Using the complete class approach, tighter upper bounds on the number of support points required for finding locally optimal designs are derived for the mixed responses models studied in this work. In the first part of this dissertation, a theoretical result is developed to facilitate the search for locally symmetric optimal designs for mixed responses models with one continuous covariate. The study is then extended to mixed responses models that include group effects. Two types of such models are investigated: the first has no common parameters across subject groups, while the second allows some common parameters (e.g., a common slope) across groups. In addition to the complete class results, an efficient algorithm (PSO-FM) is proposed to search for the A- and D-optimal designs. Finally, the first-order mixed responses model is extended to a quadratic mixed responses model, whose linear-model component carries a quadratic polynomial predictor. / Dissertation/Thesis / Doctoral Dissertation Statistics 2020
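The dissertation's PSO-FM algorithm is not reproduced here, but the D-optimality criterion such a search maximizes is easy to state: the log-determinant of the design's Fisher information evaluated at a local parameter guess. Below is a rough sketch for a two-parameter logistic component only; the names and the Nelder-Mead search are illustrative choices, not the author's method.

```python
import numpy as np
from scipy.optimize import minimize

def info_matrix(x, w, theta):
    """Fisher information of a simple logistic model at support points x
    with design weights w, evaluated at the local parameter guess theta."""
    M = np.zeros((2, 2))
    for xi, wi in zip(x, w):
        f = np.array([1.0, xi])
        pr = 1.0 / (1.0 + np.exp(-theta @ f))
        M += wi * pr * (1.0 - pr) * np.outer(f, f)
    return M

def neg_log_det(params, theta, k):
    x = params[:k]
    w = np.exp(params[k:]); w /= w.sum()   # softmax keeps weights on the simplex
    sign, logdet = np.linalg.slogdet(info_matrix(x, w, theta))
    return -logdet if sign > 0 else np.inf

theta0 = np.array([0.0, 1.0])              # "locally" optimal = optimal at a guess
k = 2                                      # support points, per complete-class bounds
start = np.concatenate([[-1.0, 1.0], np.zeros(k)])
res = minimize(neg_log_det, start, args=(theta0, k), method="Nelder-Mead")
print(res.x[:k])                           # optimized support points
```

A useful sanity check: for this logistic component, the classical locally D-optimal design puts equal weight on the two points where the linear predictor equals roughly ±1.543.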
173

An investigation of bootstrap methods for estimating the standard error of equating under the common-item nonequivalent groups design

Wang, Chunxin 01 July 2011 (has links)
The purpose of this study was to investigate the performance of the parametric bootstrap method, and to compare the parametric and nonparametric bootstrap methods, for estimating the standard error of equating (SEE) under the common-item nonequivalent groups (CINEG) design with the frequency estimation (FE) equipercentile method under a variety of simulated conditions. When the performance of the parametric bootstrap method was investigated, bivariate polynomial log-linear models were employed to fit the data. Considering different polynomial degrees and two different numbers of cross-product moments, a total of eight parametric bootstrap models were examined. Two real datasets were used as the basis for defining the population distributions and the "true" SEEs. A simulation study was conducted reflecting three levels of group proficiency difference, three sample sizes, two test lengths, and two ratios of the number of common items to the total number of items. Bias of the SEE, standard errors of the SEE, root mean square errors of the SEE, and their corresponding weighted indices were calculated and used to evaluate and compare the simulation results. The main findings were as follows: (1) the parametric bootstrap models with larger polynomial degrees generally produced smaller bias but larger standard errors than those with lower polynomial degrees. (2) The parametric bootstrap models with a higher-order cross-product moment (CPM) of two generally yielded more accurate estimates of the SEE than the corresponding models with a CPM of one. (3) The nonparametric bootstrap method generally produced less accurate estimates of the SEE than the parametric bootstrap method; however, the differences between the two methods shrank as the sample size increased, and once the sample size reached 3,000 or more, the nonparametric method and the parametric model with the smallest RMSE differed very little. (4) Of all the models considered in this study, parametric bootstrap models with a polynomial degree of four performed best under most simulation conditions. (5) Aside from method effects, sample size and test length had the most impact on estimating the SEE; group proficiency differences and the ratio of common items to total items had little effect on the short test and only a slight effect on the long test.
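For readers unfamiliar with the nonparametric variant compared above, the machinery is short: resample each group with replacement, recompute the equated statistic, and take the standard deviation of the replicates. The sketch below shows only that generic skeleton; `equate_stat` stands in for an FE equipercentile equating step, which is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_see(equate_stat, group1, group2, n_boot=1000):
    """Nonparametric bootstrap SEE: SD of the equated statistic
    over resamples drawn with replacement within each group."""
    reps = np.empty(n_boot)
    for i in range(n_boot):
        g1 = rng.choice(group1, size=len(group1), replace=True)
        g2 = rng.choice(group2, size=len(group2), replace=True)
        reps[i] = equate_stat(g1, g2)
    return reps.std(ddof=1)
```

The parametric version replaces the raw resampling with draws from the fitted bivariate log-linear model, which is where the polynomial-degree and cross-product-moment choices studied here enter.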
174

Modely s Touchardovým rozdělením / Models with Touchard Distribution

Ibukun, Michael Abimbola January 2021 (has links)
In 2018, Raul Matsushita, Donald Pianto, Bernardo B. De Andrade, André Cançado and Sergio Da Silva published a paper titled "Touchard distribution", which presented a two-parameter extension of the Poisson distribution. The model's normalizing constant is related to the Touchard polynomials, hence its name. This diploma thesis is concerned with the properties of the Touchard distribution when the parameter δ is known. Two asymptotic tests based on two different statistics are carried out and compared in a two-sample Touchard model with independent samples, supported by simulations in R.
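As presented in Matsushita et al. (2018), the Touchard pmf is proportional to λ^k (k+1)^δ / k!, with a normalizing constant τ(λ, δ) given by the corresponding infinite series, which reduces to e^λ (the Poisson case) at δ = 0. A sketch of the pmf on the log scale, with the series truncated numerically; the truncation point and all names are implementation choices:

```python
import numpy as np
from math import lgamma

def touchard_logpmf(k, lam, delta, kmax=500):
    """log pmf of the Touchard distribution: p(k) ∝ lam**k * (k+1)**delta / k!,
    normalized by truncating the series for tau(lam, delta) at kmax."""
    j = np.arange(kmax + 1)
    lt = (j * np.log(lam) + delta * np.log(j + 1.0)
          - np.array([lgamma(i + 1.0) for i in j]))
    log_tau = lt.max() + np.log(np.exp(lt - lt.max()).sum())  # log-sum-exp
    return k * np.log(lam) + delta * np.log(k + 1.0) - lgamma(k + 1.0) - log_tau

# delta = 0 recovers the Poisson pmf, a quick correctness check
print(np.exp(touchard_logpmf(3, lam=2.0, delta=0.0)))  # ≈ Poisson(2) pmf at 3
```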
175

The Application of Mean-Variance Relationships to General Recognition Theory

Woodbury, George 28 September 2021 (has links)
No description available.
176

On The Jackknife Averaging of Generalized Linear Models

Zulj, Valentin January 2020 (has links)
Frequentist model averaging has started to grow in popularity and is considered a good alternative to model selection. It has recently been applied favourably to generalized linear models, mainly to aid the prediction of probabilities. The performance of averaging estimators has largely been compared with that of models selected using AIC or BIC, without much discussion of model screening. In this paper, we study the performance of model averaging in classification problems and evaluate it against a single prediction model tuned using cross-validation. We discuss the concept of model screening and suggest two methods of constructing a candidate model set: averaging over the models that make up the LASSO regularization path, and the so-called LASSO-GLM hybrid. By means of a Monte Carlo simulation study, we conclude that model averaging does not necessarily improve classification rates. In terms of risk, however, both methods of model screening are efficient, and their errors are more stable than those achieved by the cross-validated comparison model.
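A minimal version of the LASSO-path screening idea is sketched below: fit a classifier at several points on an l1 regularization path and average the predicted probabilities across the fitted models. The weights here come from cross-validated accuracy purely for illustration; jackknife averaging proper chooses weights by minimizing leave-one-out loss, which this sketch does not do.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def lasso_path_average(X, y, Cs=(0.01, 0.1, 1.0, 10.0)):
    """Average class-1 probabilities over models along a coarse l1 path."""
    models, scores = [], []
    for C in Cs:
        m = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        m.fit(X, y)
        models.append(m)
        scores.append(cross_val_score(m, X, y, cv=5).mean())
    w = np.asarray(scores)
    w /= w.sum()                          # normalize to averaging weights
    def predict_proba(X_new):
        return sum(wi * m.predict_proba(X_new)[:, 1]
                   for wi, m in zip(w, models))
    return predict_proba
```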
177

Predicting customer level risk patterns in non-life insurance / Prediktering av riskmönster på kundnivå i sakförsäkring

Villaume, Erik January 2012 (has links)
Several models for predicting future customer profitability early in the customer life-cycle in the property and casualty business are constructed and studied. The objective is to model risk at the customer level using input data available early in a private consumer's lifespan. Two retained models, one a Generalized Linear Model and the other a multilayer perceptron (a special form of Artificial Neural Network), are evaluated on actual data. Numerical results show that differentiation on estimated future risk is most effective for the customers with the highest claim frequencies.
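The abstract does not spell out the model forms, but a standard starting point for claim-frequency risk of this kind is a Poisson GLM with a log link and policy exposure as a multiplicative offset. A sketch on synthetic data, with all covariates and rates invented for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(500, 3)))   # early-lifecycle covariates
exposure = rng.uniform(0.5, 2.0, size=500)       # policy-years observed
claims = rng.poisson(0.3 * exposure)             # synthetic claim counts

# Poisson GLM with log link; exposure enters as an offset on the log scale
freq_model = sm.GLM(claims, X, family=sm.families.Poisson(),
                    exposure=exposure).fit()
print(freq_model.summary())
```

The multilayer perceptron alternative trades this interpretability for flexibility, which is the comparison the thesis evaluates.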
178

Surviving the Surge: Real-time Analytics in the Emergency Department

Rea, David J. 05 October 2021 (has links)
No description available.
179

Feature Screening for High-Dimensional Variable Selection In Generalized Linear Models

Jiang, Jinzhu 02 September 2021 (has links)
No description available.
180

Using an Experimental Mixture Design to Identify Experimental Regions with High Probability of Creating a Homogeneous Monolithic Column Capable of Flow

Willden, Charles C. 16 April 2012 (has links) (PDF)
Graduate students in the Brigham Young University Chemistry Department are working to develop a filtering device that can be used to separate substances into their constituent parts. The device consists of a monomer and water mixture that is polymerized into a monolith inside of a capillary. The ideal monolith is completely solid with interconnected pores that are small enough to cause the constituent parts to pass through the capillary at different rates, effectively separating the substance. Although the end objective is to minimize pore sizes, it is necessary to first identify an experimental region where any combination of input variables will consistently yield homogeneous monoliths capable of flow. To accomplish this task, an experimental mixture design is used to model the relationship between the variables related to the creation of the monolith and the probability of creating an acceptable polymer. The results of the mixture design suggest that, inside of the constrained experimental region, mixtures with higher proportions of monomer and surfactant, low amounts of initiator and salt, and DEGDA as the monomer have the highest probability of producing a workable monolith. Confirmatory experiments are needed before future experimentation to minimize pore sizes is performed using the refined constrained experimental region determined by the results of this analysis.
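For readers unfamiliar with mixture designs, the candidate runs live on a simplex because the component proportions must sum to one. Below is a small sketch of generating a {q, m} simplex-lattice of candidate blends; only three components are shown for brevity (the actual study has more mixture components, constraints on the region, plus a categorical monomer choice), and the component names are illustrative.

```python
from itertools import product

def simplex_lattice(q, m):
    """{q, m} simplex-lattice design: all q-component mixtures whose
    proportions are multiples of 1/m and sum to one."""
    return [tuple(c / m for c in combo)
            for combo in product(range(m + 1), repeat=q)
            if sum(combo) == m]

# 15 candidate blends of, e.g., monomer, surfactant, and water on a {3, 4} lattice
for blend in simplex_lattice(3, 4):
    print(blend)
```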
