61 |
Análise quantitativa de modelos de prototipagem rápida baseados em dados de tomografia volumétrica, por meio de inspeção de engenharia reversa tridimensional / Quantitative analysis of rapid prototyping models based on cone beam computed tomography data, by three-dimensional reverse engineering inspection
Kang, Fatima Maria de Angelis, 18 September 2009
The purpose of this study was to evaluate the quantitative reproducibility of three-dimensional models generated from cone beam computed tomography images. The images were acquired with two cone beam tomographs, the NEWTOM 9000 DVT (Quantitative Radiology, Verona, Italy) and the i-CAT (Imaging Sciences Int., Hatfield, Pennsylvania, USA), and then inspected with reverse-engineering software. A dry mandible was digitized with a VIVID 910 3D scanner (0.01 mm precision) combined with digital photometry, and the resulting model, processed in the GEOMAGIC STUDIO software, was taken as the gold standard. The same dry mandible was then scanned in both tomographs, producing two distinct virtual three-dimensional models. Possible discrepancies between the 3D models and the gold standard were analyzed through an alignment of their geometries. After evaluating the discrepancies at the different sites of the models, we concluded that the virtual model obtained from the i-CAT images showed the smaller discrepancies and was therefore more accurate, allowing rapid prototyping models of better quality.
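The align-then-measure step described above can be sketched numerically. This is not the study's software pipeline (which relied on GEOMAGIC STUDIO); it is a minimal, self-contained illustration of rigid alignment (the Kabsch least-squares solution) followed by a mean-deviation measurement, with all function names and the toy data invented for the example.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (Kabsch algorithm) of two
    corresponding point clouds; returns the aligned source points."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (source - cs) @ R.T + ct

def mean_deviation(model, gold):
    """Mean point-to-point distance after rigid alignment."""
    aligned = rigid_align(model, gold)
    return np.linalg.norm(aligned - gold, axis=1).mean()

# Toy example: a rotated and translated copy of the gold standard
# should align back with near-zero deviation.
rng = np.random.default_rng(0)
gold = rng.random((100, 3))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
model = gold @ R.T + np.array([1.0, -2.0, 0.5])
print(mean_deviation(model, gold))  # essentially zero
```

Real inspection software compares a scanned surface against a mesh with closest-point (ICP-style) correspondence rather than known point pairs, but the deviation statistic reported is of this kind.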
|
62 |
O trabalho como fator determinante da defasagem escolar dos meninos no Brasil: mito ou realidade? / Labor as a determinant factor of school result discrepancy in Brazil: myth or reality?
Artes, Amelia Cristina Abreu, 24 November 2009
According to reports published by well-known multilateral agencies, girls should be a priority for educational investment aimed at gender equality worldwide. Brazilian studies, however, indicate that in Brazil it is boys who show the worst educational indicators. This thesis starts from the observation and analysis of that incompatibility. Girls show the worst enrollment indicators in the poorest regions of the world, where access to school is still restricted for both sexes, a situation closely tied to cultural and ethnic factors. In regions where access is practically universal, as in Brazil, the progression indicators (age-grade lag, years of schooling, and so on) are worse for boys. When the literature highlights these worse outcomes for boys, the main explanation offered is the need to work to generate income, which is more common among boys. This thesis evaluates the influence of work on the school trajectories of boys and girls. Using microdata from the 2006 PNAD (National Household Sample Survey), a statistical model is developed to explain the lag between age and years of study (ages 10 to 14) from the sex variable, with work as a control variable. Other factors were included in the model: color (white or black), region of the country (high, medium, or low HDI - Human Development Index), and housework, an activity more common among girls.
The results indicate that work harms school progression more intensely for boys, and housework more subtly for girls, with worse results for black children of both sexes. The analyses support the conclusion that work cannot be considered the main cause of the greater lag between age and years of study among boys aged 10 to 14 in Brazil, since of every ten boys only one works while five are behind. To understand the different school trajectories of boys and girls, it is therefore necessary to investigate other factors, especially those internal to how schools operate.
|
63 |
Model selection criteria in the presence of missing data based on the Kullback-Leibler discrepancy
Sparks, JonDavid, 01 December 2009
An important challenge in statistical modeling involves determining an appropriate structural form for a model to be used in making inferences and predictions. Missing data are a very common occurrence in most research settings and can easily complicate the model selection problem. Many useful procedures have been developed to estimate parameters and standard errors in the presence of missing data; however, few methods exist for determining the actual structural form of a model when the data are incomplete.
In this dissertation, we propose model selection criteria based on the Kullback-Leibler discrepancy that can be used in the presence of missing data. The criteria are developed by accounting for missing data using principles related to the expectation-maximization (EM) algorithm and bootstrap methods. We formulate the criteria for three specific modeling frameworks: the normal multivariate linear regression model, the generalized linear model, and the normal longitudinal regression model. In each framework, a simulation study is presented to investigate the performance of the criteria relative to their traditional counterparts. We consider a setting where the missingness is confined to the outcome, and also a setting where the missingness may occur in the outcome and/or the covariates. The results from the simulation studies indicate that our criteria provide better protection against underfitting than their traditional analogues.
We outline the implementation of our methodology for a general discrepancy measure. An application is presented where the proposed criteria are utilized in a study that evaluates the driving performance of individuals with Parkinson's disease under low contrast (fog) conditions in a driving simulator.
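The proposed missing-data criteria are specific to the dissertation, but the underlying idea, estimating the expected Kullback-Leibler discrepancy of each candidate model and selecting the minimizer, can be sketched in the classical complete-data case. The sketch below uses AIC (a standard KL-based criterion) on fully observed data; it is illustrative only and does not implement the missing-data criteria themselves.

```python
import numpy as np

def gaussian_aic(X, y):
    """AIC for a Gaussian linear model: an asymptotically unbiased
    estimator of the expected Kullback-Leibler discrepancy, up to a
    constant shared by all candidate models."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.sum((y - X @ beta) ** 2) / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + 2 * (p + 1)   # +1 counts the variance parameter

# Data generated from a quadratic model; AIC then ranks polynomial
# candidates of degree 1 through 4.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.5, 200)
aics = {d: gaussian_aic(np.vander(x, d + 1), y) for d in range(1, 5)}
best = min(aics, key=aics.get)
print(best)  # the generating degree (2) typically minimizes AIC
```

The underfitting protection studied in the dissertation corresponds here to the degree-1 model being heavily penalized through its inflated residual variance.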
|
64 |
Anti-Aliased Low Discrepancy Samplers for Monte Carlo Estimators in Physically Based Rendering / Échantillonneurs basse discrépance anti-aliassés pour du rendu réaliste avec estimateurs de Monte Carlo
Perrier, Hélène, 07 March 2018
When a 3D object is displayed on a computer screen, the 3D scene is transformed into a 2D image, that is, a set of colored pixels. Rendering is the discipline concerned with finding the correct color for those pixels. Computing the color of a pixel amounts to integrating the light arriving from every direction that the surface reflects toward the image plane, weighted by a binary visibility function. Unfortunately, a computer cannot evaluate such an integral directly, which leaves two options: find an analytical expression that removes the integral from the equation (a statistics-based approach), or approximate the equation numerically by drawing random samples in the integration domain and estimating the integral with Monte Carlo methods. This work focuses on numerical integration and sampling theory. Sampling is at the heart of numerical integration: a good sampler should generate points that cover the domain uniformly, so that the integration is not biased, and in computer graphics the point set should exhibit no visible structural regularity, otherwise aliasing artifacts appear in the resulting image. A stochastic sampler should also minimize the variance of the integration, so as to converge to a correct approximation with as few samples as possible. The many existing samplers can be grouped roughly into two families. Blue-noise samplers achieve low integration variance with unstructured point sets, but are extremely slow to generate points.
Low-discrepancy samplers minimize the integration variance and can generate and enrich a point set very quickly, but they exhibit strong structure, producing heavy aliasing when used in rendering. Our work develops hybrid samplers that are both blue noise and low discrepancy.
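A minimal sketch of a low-discrepancy sampler used in a Monte Carlo estimator makes the trade-off concrete. The Halton sequence below is a standard member of the family discussed in the thesis; the code is illustrative and is not taken from the author's samplers.

```python
import numpy as np

def van_der_corput(n, base):
    """First n points of the van der Corput sequence in the given base:
    reflect the base-b digits of the index about the radix point."""
    pts = np.empty(n)
    for i in range(n):
        x, f, k = 0.0, 1.0 / base, i + 1
        while k > 0:
            k, d = divmod(k, base)
            x += d * f
            f /= base
        pts[i] = x
    return pts

def halton(n, dims=2, bases=(2, 3, 5, 7, 11)):
    """n points of the Halton sequence: one van der Corput sequence
    per dimension, in pairwise coprime bases."""
    return np.column_stack([van_der_corput(n, bases[d]) for d in range(dims)])

# Quasi-Monte Carlo estimate of the integral of x*y over the unit
# square (exact value 0.25).
f = lambda p: p[:, 0] * p[:, 1]
pts = halton(1024)
print(abs(f(pts).mean() - 0.25))  # typically far below the MC error ~ n^(-1/2)
```

The regular stratification that makes the estimate accurate is exactly the structure that shows up as aliasing when such points drive pixel sampling, which is the tension the hybrid samplers address.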
|
65 |
Probabilistic pairwise model comparisons based on discrepancy measures and a reconceptualization of the p-value
Riedle, Benjamin N., 01 May 2018
Discrepancy measures are often employed in problems involving the selection and assessment of statistical models. A discrepancy gauges the separation between a fitted candidate model and the underlying generating model. In this work, we consider pairwise comparisons of fitted models based on a probabilistic evaluation of the ordering of the constituent discrepancies. An estimator of the probability is derived using the bootstrap.
In the framework of hypothesis testing, nested models are often compared on the basis of the p-value. Specifically, the simpler null model is favored unless the p-value is sufficiently small, in which case the null model is rejected and the more general alternative model is retained. Using suitably defined discrepancy measures, we mathematically show that, in general settings, the Wald, likelihood ratio (LR) and score test p-values are approximated by the bootstrapped discrepancy comparison probability (BDCP). We argue that the connection between the p-value and the BDCP leads to potentially new insights regarding the utility and limitations of the p-value. The BDCP framework also facilitates discrepancy-based inferences in settings beyond the limited confines of nested model hypothesis testing.
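The comparison-probability idea can be sketched in a few lines under strong simplifying assumptions. The version below uses a per-parameter-penalized Gaussian log-likelihood as the discrepancy and the nonparametric bootstrap; it is a toy illustration of a bootstrapped discrepancy comparison probability, not the estimator derived in the dissertation.

```python
import numpy as np

def neg_loglik(X, y):
    """Gaussian negative log-likelihood of an OLS fit, used here as a
    simple discrepancy between a fitted model and the data."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    n = len(y)
    sigma2 = np.sum((y - X @ beta) ** 2) / n
    return 0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def bdcp(X_null, X_alt, y, n_boot=500, seed=0):
    """Bootstrap estimate of the probability that the null model's
    penalized discrepancy is smaller than the alternative's."""
    rng = np.random.default_rng(seed)
    n = len(y)
    wins = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        # an AIC-style penalty of one unit per parameter keeps the
        # comparison between models of different sizes fair
        d0 = neg_loglik(X_null[idx], y[idx]) + X_null.shape[1]
        d1 = neg_loglik(X_alt[idx], y[idx]) + X_alt.shape[1]
        wins += d0 < d1
    return wins / n_boot

rng = np.random.default_rng(42)
x = rng.normal(size=300)
X0 = np.ones((300, 1))                        # null: intercept only
X1 = np.column_stack([np.ones(300), x])       # alternative: adds a slope
y_null = 2.0 + rng.normal(size=300)           # slope truly zero
y_alt = 2.0 + 1.5 * x + rng.normal(size=300)  # strong slope
p_null, p_alt = bdcp(X0, X1, y_null), bdcp(X0, X1, y_alt)
print(p_null, p_alt)  # p_null is typically large, p_alt near zero
```

Read this way, a small comparison probability plays the role of a small p-value: the data consistently prefer the larger model across resamples.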
|
66 |
An Implementation of the USF/Calvo Model in Verilog-A to Enforce Charge Conservation in Applicable FET Models
Nicodemus, Joshua, 11 March 2005
The primary goal of this research is to implement in code a distinctive approach to problems with nonlinear FET models that Calvo exposed in her 1994 work. Since that time, the simulation software for which her model was written has undergone a significant update, requiring her model code to be rewritten for a few applicable FET models in Verilog-A, making it compatible with the new versions of the software and simulators.
The problems addressed are the inconsistencies between the small-signal model and the corresponding large-signal models due to a factor called transcapacitance. Several researchers have noted that a nonlinear capacitor in a circuit model mathematically implies the existence of a parallel transcapacitor whenever its capacitance is a function of two bias voltages, a local and a remote voltage. As a consequence, small-signal excursions simulated with the large-signal model will disagree with the linear model if the latter does not include the transcapacitance, which is inevitably present. The Calvo model improves the consistency of these models by modifying terms in the charge-source equations so as to minimize these transcapacitances. Thanks to the present effort, Calvo's theory is now incorporated in the Angelov model and can also be implemented in other popular existing models such as the Curtice, Statz, and Parker-Skellern models.
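The transcapacitance effect itself is easy to demonstrate numerically. The charge function below is invented for illustration (it is not the Angelov or Calvo charge equation): because the stored charge depends on a remote bias voltage as well as the local one, differentiating it with respect to the remote voltage yields a nonzero transcapacitance, exactly the term an incomplete small-signal model would omit.

```python
import numpy as np

# Hypothetical gate-charge function of two bias voltages, chosen only
# to be smooth and bias-dependent (illustrative, not a real FET model).
def q_gate(vgs, vds):
    return 1e-12 * np.tanh(vgs) * (1.0 + 0.1 * vds)

def capacitances(vgs, vds, h=1e-6):
    """Local capacitance dQ/dVgs and transcapacitance dQ/dVds at a
    bias point, via central finite differences."""
    c_local = (q_gate(vgs + h, vds) - q_gate(vgs - h, vds)) / (2 * h)
    c_trans = (q_gate(vgs, vds + h) - q_gate(vgs, vds - h)) / (2 * h)
    return c_local, c_trans

c11, c12 = capacitances(-0.5, 3.0)
print(c11, c12)  # c12 != 0: remote-bias dependence implies a transcapacitance
```

In a charge-conserving large-signal model the small-signal current at the gate is i = c11 dVgs/dt + c12 dVds/dt, so a linear model built from c11 alone cannot match it whenever c12 is nonzero.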
|
67 |
Construction of lattice rules for multiple integration based on a weighted discrepancy
Sinescu, Vasile, January 2008
High-dimensional integrals arise in a variety of areas, including quantum physics, the physics and chemistry of molecules, statistical mechanics and, more recently, financial applications. To approximate multidimensional integrals, one may use Monte Carlo methods, in which the quadrature points are generated randomly, or quasi-Monte Carlo methods, in which the points are generated deterministically. One particular class of quasi-Monte Carlo methods for multivariate integration is represented by lattice rules. The lattice rules constructed throughout this thesis allow good approximations to integrals of functions belonging to certain weighted function spaces. These function spaces were proposed as an explanation of why integrals in many variables appear to be successfully approximated even though the standard theory indicates that the number of quadrature points required for reasonable accuracy would be astronomical because of the large number of variables. The purpose of this thesis is to contribute theoretical results regarding the construction of lattice rules for multiple integration. We consider both lattice rules for integrals over the unit cube and lattice rules suitable for integrals over Euclidean space. The research reported throughout the thesis is devoted to finding the generating vector required to produce lattice rules that have what is termed a low "weighted discrepancy". In simple terms, the discrepancy is a measure of the uniformity of the distribution of the quadrature points or, in other settings, a worst-case error. One of the assumptions used in these weighted function spaces is that the variables are arranged in decreasing order of importance; the assignment of weights in this situation results in so-called "product weights". In other applications it is rather the importance of groups of variables that matters; that situation is modelled by using function spaces in which the weights are "general".
In the weighted settings mentioned above, the quality of the lattice rules is assessed by the weighted discrepancy mentioned earlier. Under appropriate conditions on the weights, the lattice rules constructed here produce a convergence rate of the error that ranges from O(n^(-1/2)) to the (believed) optimal O(n^(-1+δ)) for any δ > 0, with the involved constant independent of the dimension.
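A rank-1 lattice rule is simple to state and implement. The sketch below uses a classical two-dimensional Fibonacci generating vector rather than a vector constructed by a weighted-discrepancy search as in the thesis; the test integrand is a standard smooth periodic function with known integral.

```python
import numpy as np

def rank1_lattice(n, z):
    """n-point rank-1 lattice rule: x_k = frac(k * z / n), k = 0..n-1,
    where frac is taken componentwise."""
    k = np.arange(n)[:, None]
    return (k * np.asarray(z) / n) % 1.0

def b2(x):
    """Bernoulli polynomial B2(x) = x^2 - x + 1/6; integrates to 0
    over [0, 1], a standard building block for periodic test functions."""
    return x**2 - x + 1.0 / 6.0

# Fibonacci lattice in 2D: n and z[1] are consecutive Fibonacci
# numbers, a classical good generating vector for dimension 2.
n, z = 610, (1, 377)
pts = rank1_lattice(n, z)
vals = np.prod(1.0 + b2(pts), axis=1)   # exact integral is 1
print(abs(vals.mean() - 1.0))  # error far below the plain-MC rate n^(-1/2)
```

The construction algorithms in the thesis search over candidate components of z, scoring each by the weighted discrepancy, rather than relying on such hand-picked vectors.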
|
68 |
The perfectionism model of binge eating: idiographic and nomothetic tests of an integrative model
Sherry, Simon B., 15 June 2006
Perfectionism is implicated in the onset, course, and remission of disordered eating (Bastiani, Rao, Weltzin, & Kaye, 1995; Bruch, 1979; Cockell et al., 2002; Stice, 2002; Tozzi et al., 2005; Vohs, Bardone, Joiner, & Abramson, 1999; references are contained in Appendix F on p. 271). Building on this research tradition, this dissertation proposed and evaluated a model relating perfectionism to binge eating, termed the Perfectionism Model of Binge Eating (PMOBE). According to the PMOBE, perfectionism confers vulnerability to binge eating by generating encounters with, and by magnifying responses to, specific triggers of binge eating: namely, perceived discrepancies, low self-esteem, depressive affect, and dietary restraint.
A multi-site, 7-day, web-based structured daily diary study was conducted to test the PMOBE. Overall, 566 female university students participated, providing 3509 usable diary responses. A data-analytic strategy involving structural equation modeling and multilevel modeling generally supported the PMOBE. For example, a structural model relating socially prescribed perfectionism (i.e., perceiving that others are demanding perfection of oneself) to binge eating through the aforementioned triggers demonstrated acceptable fit. Multilevel mediation also indicated that the influence of self-oriented perfectionism (i.e., demanding perfection of oneself) and socially prescribed perfectionism on binge eating operated through the abovementioned triggers (excepting dietary restraint). Support for multilevel moderation was limited, but suggested that the relationship between self-oriented perfectionism and binge eating was conditional upon dietary restraint.
This study is, to my knowledge, the first to examine the perfectionism-disordered eating connection using a structured daily diary methodology. It thus offered a unique perspective apart from the usual cross-sectional and nomothetic research on perfectionism and eating pathology. In particular, this study suggested that, in their day-to-day lives, perfectionistic individuals (especially socially prescribed perfectionists) inhabit a world permeated with putative triggers of binge eating. Although perfectionism appeared to generate exposure to binge eating triggers, by and large it did not seem to magnify responses to these same triggers (Bolger & Zuckerman, 1995, p. 890). A somewhat qualified version of the PMOBE was thus supported, with socially prescribed perfectionism assuming greater importance than self-oriented perfectionism, and with perfectionism conferring vulnerability to binge eating by generating encounters with, but not magnifying responses to, binge triggers. Overall, this dissertation contributed new knowledge about the precipitants and correlates of binge eating and highlighted the idea that perfectionism may play an important part in binge eating.
|
69 |
Directional Control of Generating Brownian Path under Quasi Monte Carlo
Liu, Kai, January 2012
Quasi-Monte Carlo (QMC) methods are playing an increasingly important role in computational finance, owing to the increased complexity of derivative securities and the sophistication of financial models. Simple closed-form solutions for these finance applications typically do not exist, and hence numerical methods are needed to approximate their solutions. The QMC method has been proposed as an alternative to the Monte Carlo (MC) method to accomplish this objective. Unlike MC methods, the efficiency of QMC-based methods depends strongly on the dimensionality of the problem. In particular, numerous studies have documented, under the Black-Scholes model, the critical role of the generating matrix used to simulate the Brownian paths: numerical results support the notion that a generating matrix that reduces the effective dimension of the underlying problem increases the efficiency of QMC. Consequently, dimension reduction methods such as principal component analysis, the Brownian bridge, linear transformation, and orthogonal transformation have been proposed to further enhance QMC. Motivated by these results, we first propose a new measure to quantify effective dimension. We then propose a new dimension reduction method, which we refer to as the directional control (DC) method. The proposed DC method has the advantage that it depends explicitly on the given function of interest. Furthermore, by appropriately assigning the direction of importance of the given function, the proposed method optimally determines the generating matrix used to simulate the Brownian paths. Because of its flexibility, many of the existing dimension reduction methods can be shown to be special cases of the proposed DC method. Finally, numerical examples are provided to support the competitive efficiency of the proposed method.
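The role of the generating matrix can be illustrated by two classical constructions of a discrete Brownian path from the same vector of standard normals: the forward construction, which spreads variance evenly over all coordinates, and the Brownian bridge, which concentrates variance in the first coordinates and thereby reduces effective dimension for QMC. The sketch below is a generic illustration, not the proposed DC method.

```python
import numpy as np

def bm_forward(normals, T=1.0):
    """Forward construction: Brownian path as a cumulative sum of
    independent increments; every coordinate matters equally."""
    n = len(normals)
    return np.cumsum(np.sqrt(T / n) * normals)

def bm_bridge(normals, T=1.0):
    """Brownian-bridge construction: the first normal fixes the
    terminal value; later normals fill midpoints conditionally, each
    with conditional variance tau/4 for a gap of length tau.
    Assumes len(normals) is a power of two."""
    n = len(normals)
    dt = T / n
    path = np.zeros(n + 1)
    path[n] = np.sqrt(T) * normals[0]
    h, j = n, 1
    while h > 1:
        for left in range(0, n, h):
            mid, right = left + h // 2, left + h
            mean = 0.5 * (path[left] + path[right])
            std = np.sqrt(h * dt / 4.0)
            path[mid] = mean + std * normals[j]
            j += 1
        h //= 2
    return path[1:]

# With only the first normal nonzero, the bridge path is the straight
# line from 0 to its terminal value: the single most important QMC
# coordinate already determines the path's gross shape.
print(bm_bridge(np.array([1.0, 0, 0, 0, 0, 0, 0, 0])))
```

Both constructions produce paths with the same joint distribution; they differ only in how the input coordinates map to path features, which is precisely what a generating matrix controls.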
|