  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Performance of Contextual Multilevel Models for Comparing Between-Person and Within-Person Effects

January 2016
Abstract: The comparison of between- versus within-person relations addresses a central issue in psychological research regarding whether group-level relations among variables generalize to individual group members. Between- and within-person effects may differ in magnitude as well as direction, and contextual multilevel models can accommodate this difference. Contextual multilevel models have been explicated mostly for cross-sectional data, but they can also be applied to longitudinal data where level-1 effects represent within-person relations and level-2 effects represent between-person relations. With longitudinal data, estimating the contextual effect allows direct evaluation of whether between-person and within-person effects differ. Furthermore, these models, unlike single-level models, permit individual differences by allowing within-person slopes to vary across individuals. This study examined the statistical performance of the contextual model with a random slope for longitudinal within-person fluctuation data. A Monte Carlo simulation was used to generate data based on the contextual multilevel model, where sample size, effect size, and intraclass correlation (ICC) of the predictor variable were varied. The effects of simulation factors on parameter bias, parameter variability, and standard error accuracy were assessed. Parameter estimates were generally unbiased. Power to detect the slope variance and contextual effect was over 80% for most conditions, except some of the smaller sample size conditions. Type I error rates for the contextual effect were also inflated for some of the smaller sample size conditions. Conclusions and future directions are discussed. / Dissertation/Thesis / Doctoral Dissertation Psychology 2016
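The between- versus within-person decomposition described in this abstract can be illustrated with a small simulation: person-mean centering separates the level-1 (within) slope from the level-2 (between) slope, and their difference is the contextual effect. This is a minimal sketch with invented generating values, not the study's actual Monte Carlo design:

```python
import random

random.seed(42)

# Invented generating values: within-person slope 0.5, between-person slope 1.5,
# so the contextual effect (between minus within) is 1.0.
beta_within, beta_between = 0.5, 1.5
n_person, n_obs = 200, 20

rows = []
for i in range(n_person):
    x_mean = random.gauss(0, 1)              # level-2 predictor (person mean)
    for _ in range(n_obs):
        dev = random.gauss(0, 1)             # level-1 within-person fluctuation
        y = beta_between * x_mean + beta_within * dev + random.gauss(0, 0.5)
        rows.append((x_mean, dev, y))

def slope(xs, ys):
    # simple least-squares slope of y on x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return num / sum((a - mx) ** 2 for a in xs)

# Person-mean centering: regress y on the centered deviation (within effect)
# and on the person mean (between effect); their difference is the contextual effect.
within_hat = slope([d for _, d, _ in rows], [y for _, _, y in rows])
between_hat = slope([m for m, _, _ in rows], [y for _, _, y in rows])
contextual_hat = between_hat - within_hat
```

In a full multilevel model both predictors enter one equation and the within slope is allowed to vary randomly across persons; the separate regressions here only show why a nonzero contextual effect signals that group-level relations do not generalize to individuals.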
2

Measuring the Mass of a Galaxy: An evaluation of the performance of Bayesian mass estimates using statistical simulation

Eadie, Gwendolyn 27 March 2013
This research uses a Bayesian approach to study the biases that may occur when kinematic data is used to estimate the mass of a galaxy. Data is simulated from the Hernquist (1990) distribution functions (DFs) for velocity dispersions of the isotropic, constant anisotropic, and anisotropic Osipkov (1979) and Merritt (1985) type, and then analysed using the isotropic Hernquist model. Biases are explored when i) the model and data come from the same DF, ii) the model and data come from the same DF but tangential velocities are unknown, iii) the model and data come from different DFs, and iv) the model and data come from different DFs and the tangential velocities are unknown. Mock observations are also created from the Gauthier (2006) simulations and analysed with the isotropic Hernquist model. No bias was found in situation (i), a slight positive bias was found in (ii), a negative bias was found in (iii), and a large positive bias was found in (iv). The mass estimate of the Gauthier system when tangential velocities were unknown was nearly correct, but the mass profile was not described well by the isotropic Hernquist model. When the Gauthier data was analysed with the tangential velocities, the mass of the system was overestimated. The code created for the research runs three parallel Markov chains for each data set, uses the Gelman-Rubin statistic to assess convergence, and combines the converged chains into a single sample of the posterior distribution for each data set. The code also includes two ways to deal with nuisance parameters: one is to marginalize over the nuisance parameter at every step in the chain, and the other is to sample the nuisance parameters using a hybrid Gibbs sampler. When tangential velocities, v(t), are unobserved in the analyses above, they are sampled as nuisance parameters in the Markov chain. The v(t) estimates from the Markov chains did a poor job of recovering the true tangential velocities. However, the posterior samples of v(t) proved useful, as the estimates of the tangential velocities helped explain the biases discovered in situations (i)-(iv) above. / Thesis (Master, Physics, Engineering Physics and Astronomy) -- Queen's University, 2013-03-26 17:23:14.643
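The Gelman-Rubin convergence check mentioned in this abstract compares between-chain and within-chain variance across parallel chains. A minimal sketch of the diagnostic (often written R-hat), applied to toy chains rather than the thesis's actual samplers:

```python
import math
import random

random.seed(1)

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m chains of equal length n."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    # between-chain variance B and mean within-chain variance W
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * W + B / n      # pooled posterior-variance estimate
    return math.sqrt(var_hat / W)

# Three "converged" chains drawn from the same distribution: R-hat near 1.
chains = [[random.gauss(0, 1) for _ in range(2000)] for _ in range(3)]
r_converged = gelman_rubin(chains)

# Chains stuck around different values: R-hat well above 1.
stuck = [[random.gauss(mu, 1) for _ in range(2000)] for mu in (0.0, 3.0, 6.0)]
r_stuck = gelman_rubin(stuck)
```

A common rule of thumb is to treat chains as converged only when R-hat is close to 1 (e.g. below about 1.1) for every parameter, which matches the thesis's use of three parallel chains per data set.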
3

Simulação de dados visando à estimação de componentes de variância e coeficientes de herdabilidade / Simulation of data aiming at the estimation of variance components and heritability

Coelho, Angela Mello 03 February 2006
The main aim of this work was to compare estimation methods for heritability coefficients under the completely randomized and randomized block designs, i.e. the 1-way classification and the 2-way crossed classification without interaction. For both cases the narrow-sense definition of heritability (h2) was used, given respectively by h2 = 4σ2t/(σ2+σ2t) and h2 = 4σ2t/(σ2+σ2t+σ2b). Estimating h2 for the 1-way classification therefore requires estimates of the variance components for the residual (σ2) and the treatment effect (σ2t); the 2-way classification without interaction additionally requires the component for the block effect (σ2b). To achieve this aim, data sets with known heritability were produced by simulation. Two estimation methods were compared: the analysis of variance method and the maximum likelihood method. In total 80 simulations were run, 40 for each classification.
For both models, the 40 simulations were divided into 4 groups of 10. Each group used a different value of h2 (0.10, 0.20, 0.30 and 0.40), and within each group 10 distinct values were fixed for σ2 (10, 20, 30, 40, 50, 60, 70, 80, 90, 100). The corresponding values of σ2t were obtained from the heritability equations, and for the 2-way crossed classification without interaction σ2b = 20 was fixed in all 40 cases. Each of the 80 simulations produced 1000 data sets, and hence 1000 estimates of each variance component and of the heritability; descriptive statistics and histograms were computed for each set of 1000 estimates. The methods were compared through these descriptive statistics and histograms, taking the parameter values used in the simulations as reference. For both models, the two methods gave similar estimates of σ2. For the 1-way classification, the maximum likelihood method gave estimates that, on average, underestimated σ2t and therefore h2, which did not happen with the analysis of variance method. For the 2-way crossed classification without interaction, the two methods were also similar for σ2t, but the maximum likelihood method tended to underestimate σ2b and therefore to overestimate h2, which again did not happen with the analysis of variance method. Hence, the analysis of variance method proved more reliable for estimating variance components and heritability coefficients for both classifications considered in this work.
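The ANOVA (method-of-moments) route to the variance components and h2 for the 1-way classification can be sketched as follows. The treatment count, replicate count and seed here are illustrative choices, not the thesis's settings:

```python
import random

random.seed(7)

# Illustrative setting: residual variance 100 and target h2 = 0.40;
# invert h2 = 4*s2t/(s2 + s2t) to get the treatment variance component.
sigma2, h2_true = 100.0, 0.40
sigma2_t = h2_true * sigma2 / (4 - h2_true)      # = 11.11...

t, r = 400, 10    # treatments and replicates (illustrative, not the thesis's sizes)
data = []
for _ in range(t):
    tau = random.gauss(0, sigma2_t ** 0.5)       # random treatment effect
    data.append([tau + random.gauss(0, sigma2 ** 0.5) for _ in range(r)])

# ANOVA estimators for the 1-way classification use the expected mean squares:
# E[MSW] = s2 and E[MSB] = s2 + r*s2t.
means = [sum(row) / r for row in data]
grand = sum(means) / t
msb = r * sum((m - grand) ** 2 for m in means) / (t - 1)
msw = sum((x - m) ** 2 for row, m in zip(data, means) for x in row) / (t * (r - 1))
s2_hat = msw
s2t_hat = (msb - msw) / r
h2_hat = 4 * s2t_hat / (s2_hat + s2t_hat)
```

Repeating this over many simulated data sets (1000 per condition in the thesis) and summarizing the resulting estimates is exactly the kind of comparison the abstract describes.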
5

Statistical Design For Yield And Variability Optimization Of Analog Integrated Circuits

Nalluri, Suresh Babu 12 1900 (PDF)
No description available.
6

Test et Fiabilité des Mémoires SRAM / Test and Reliability of SRAM Memories

Alves Fonseca, Renan 21 July 2011
Nowadays, Static Random Access Memories (SRAM) are made with the fastest technologies and are among the most important components in complex systems. SRAM bit-cell transistors are often designed using the minimal dimensions of the technology node. As a consequence, SRAMs are more sensitive to the new physical phenomena that occur in these technologies, and hence are extremely vulnerable to physical defects. In order to detect whether each component is defective or not, high-cost test procedures are employed. Several issues related to this test procedure were studied during this thesis and are compiled in this document. One of the main contributions of this thesis is a method for setting the environmental conditions during the test procedure so as to capture non-deterministic faults. Since statistical simulations are often used to study non-deterministic faults, an efficient statistical simulation method was specially devised for the six-transistor SRAM bit-cell. The thesis also addresses fault characterization, variability characterization and fault tolerance.
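Plain Monte Carlo estimation of a bit-cell failure probability — the baseline that efficient statistical simulation methods like the one above aim to beat — can be sketched as follows. The margin model, sensitivity and Vt spread are invented for illustration, not taken from the thesis:

```python
import random

random.seed(3)

# Toy stand-in for a bit-cell robustness metric: a margin that degrades with
# the worst threshold-voltage (Vt) mismatch among the six transistors.
# nominal, sensitivity and sigma_vt are invented, illustrative numbers.
def margin(vt_shifts, nominal=0.30, sensitivity=4.0):
    return nominal - sensitivity * max(abs(v) for v in vt_shifts)

sigma_vt = 0.03          # assumed Vt standard deviation (30 mV)
n = 100_000
fails = sum(1 for _ in range(n)
            if margin([random.gauss(0.0, sigma_vt) for _ in range(6)]) < 0)
p_fail = fails / n       # plain Monte Carlo estimate of the failure probability
```

Real bit-cell failure rates are many orders of magnitude smaller than in this toy (a megabit array needs per-cell failure probabilities around 1e-8 or below), which is precisely why plain Monte Carlo becomes infeasible and specialized statistical simulation methods are needed.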
7

Bayesian and classical inference for extensions of Geometric Exponential distribution with applications in survival analysis under the presence of the data covariated and randomly censored

Gianfelice, Paulo Roberto de Lima. January 2020
Orientador: Fernando Antonio Moala / Abstract: This work presents a study of probabilistic modeling, with applications to survival analysis, based on a probabilistic model called Exponential Geometric (EG), which offers great flexibility for the statistical estimation of its parameters from complete and randomly censored samples of lifetime data. The study explores estimators and lifetime data under random censoring for two extensions of the EG model: the Extended Geometric Exponential (EEG) and the Generalized Extreme Geometric Exponential (GE2). Exclusively for the EEG model, the work considers covariates indexed in the rate parameter as a second source of variation, adding even more flexibility to the model; exclusively for the GE2 model, a convergence analysis, hitherto ignored, is proposed for its moments. Statistical inference is carried out for these extensions in order to obtain (in the classical context) their maximum likelihood estimators and asymptotic confidence intervals, and (in the Bayesian context) their prior and posterior distributions, in both cases to estimate the parameters under random censoring, with covariates in the EEG case. The Bayesian estimators are developed under the assumptions that the priors are vague, follow a Gamma distribution and are independent across the unknown parameters. The results of this work are assessed through a detailed study of statistical simulation applied to... (Complete abstract: click electronic access below) / Mestre
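The exponential-geometric family named in this abstract is commonly constructed as the minimum of a geometric number of exponential lifetimes, and a quick simulation can check the survival function that construction implies. The parameters are arbitrary, and the construction is the standard one for the exponential-geometric family, assumed here rather than quoted from the thesis:

```python
import math
import random

random.seed(5)
lam, p = 2.0, 0.6    # arbitrary illustrative rate and geometric parameter

def eg_sample():
    # X = min of N i.i.d. Exp(lam) lifetimes, with N geometric (success prob 1 - p).
    n = 1
    while random.random() < p:
        n += 1
    return min(random.expovariate(lam) for _ in range(n))

def eg_survival(x):
    # Survival function implied by the construction above:
    # S(x) = (1 - p) * exp(-lam*x) / (1 - p * exp(-lam*x))
    e = math.exp(-lam * x)
    return (1 - p) * e / (1 - p * e)

xs = [eg_sample() for _ in range(100_000)]
x0 = 0.5
emp = sum(1 for x in xs if x > x0) / len(xs)   # empirical survival at x0
theo = eg_survival(x0)
```

Simulation checks of this kind underlie the statistical simulation study the abstract mentions: generate lifetimes from the assumed model (optionally with random censoring), then compare estimator output against the known truth.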
8

Multi-Stage Experimental Planning and Analysis for Forward-Inverse Regression Applied to Genetic Network Modeling

Taslim, Cenny 05 September 2008
No description available.
9

Iterated Grid Search Algorithm on Unimodal Criteria

Kim, Jinhyo 02 June 1997
The unimodality of a function seems a simple concept, but in Euclidean space R^m, m = 3, 4, ..., it is not easy to define. The iterated grid search provides an easy tool for finding the minimum point of a unimodal function. The goal of this project is to formalize and support distinctive strategies that typically guarantee convergence. Support is given both by analytic arguments and by simulation study. Application is envisioned in low-dimensional but non-trivial problems. The convergence of the proposed iterated grid search algorithm is presented along with the results of particular application studies. It has been recognized that derivative-based methods, such as Newton-type methods, are not entirely satisfactory, so a variety of other tools are being considered as alternatives. Many of these other tools have been rejected because of apparent manipulative difficulties. In this research we therefore focus on a simple algorithm with guaranteed convergence for unimodal functions, avoiding possible chaotic behavior of the function. Furthermore, in case the loss function to be optimized is not unimodal, we suggest a weaker condition: almost (noisy) unimodality, under which the iterated grid search finds an estimated optimum point. / Ph. D.
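A one-dimensional iterated grid search of the kind described above can be sketched in a few lines: lay a grid over the bracket, keep the cells around the best grid point, and shrink. The shrink rule and iteration count here are one simple choice, not necessarily the thesis's exact algorithm:

```python
def iterated_grid_search(f, lo, hi, points=11, iters=8):
    """Minimize a unimodal f on [lo, hi] by repeated grid refinement.

    Each pass evaluates f on an evenly spaced grid and keeps the interval
    one grid cell either side of the best point; for unimodal f the minimum
    stays inside that bracket, so the interval shrinks geometrically.
    """
    for _ in range(iters):
        step = (hi - lo) / (points - 1)
        grid = [lo + k * step for k in range(points)]
        best = min(range(points), key=lambda k: f(grid[k]))
        lo = grid[max(best - 1, 0)]           # clamp at the boundary
        hi = grid[min(best + 1, points - 1)]
    return (lo + hi) / 2

# Toy usage: quadratic loss with known minimizer 1.7.
x_star = iterated_grid_search(lambda x: (x - 1.7) ** 2, -10.0, 10.0)
```

With 11 points the bracket shrinks by a factor of 5 per pass, so 8 passes reduce a width-20 interval to about 5e-5; derivative-free evaluations are all the method needs, which matches the abstract's motivation for avoiding Newton-type methods.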
10

Aide au tolérancement tridimensionnel : modèle des domaines / Three-dimensional tolerancing assistance : domains model

Mansuy, Mathieu 25 June 2012
As the demands on the quality and manufacturing cost of manufactured products grow ever more exacting, the optimal qualification and quantification of acceptable defects is essential. Tolerancing is the means of communication that defines the geometric variations allowed between the different trades involved over the product's manufacturing cycle. An optimal tolerancing is the right compromise between manufacturing cost and quality of the final product. Tolerancing rests on three major issues: specification (standardization of a complete and unambiguous language), and the synthesis and analysis of tolerances. This thesis proposes new methods for the analysis and synthesis of three-dimensional tolerancing. These methods are based on a geometric model using the clearance and deviation domains developed in the laboratory. The first step is to determine the elementary topologies that compose a three-dimensional mechanism. For each of these topologies, a method for solving the tolerancing problems is defined. In the worst case, the conditions for meeting the functional requirements translate into existence and inclusion conditions on the domains; these domain equations can then be expressed as a system of scalar inequalities. The statistical analysis relies on Monte Carlo simulation. The random variables are the small-displacement components of the deviation torsors, defined inside their tolerance zones (modeled by deviation domains), and the geometric dimensions that set the extent of the clearances (the size of the associated clearance domain). From the statistical simulations it is possible to estimate the risk of non-quality and the residual clearances as a function of the defined tolerancing.
The development of a new, better-suited representation of the clearance and deviation domains simplifies the computations involved in tolerancing problems. The local treatment of each elementary mechanism topology enables the global treatment of complex three-dimensional mechanisms while taking clearances into account.
