211

Comparação de rols classificatórios de tratamentos e de estimativas de componentes de variância em grupos de experimentos / Comparison of treatment classificatory rankings and of variance component estimates in groups of experiments

Cássio Dessotti 28 January 2010 (has links)
Analyses of groups of experiments, of great importance in genetic improvement, are indispensable when one intends to investigate the behaviour of a set of treatments across several locations of interest to the researcher. In these cases, one starts from the individual analyses of variance at each location and then combines all trials into a single joint analysis. The significance of the treatments-by-locations interaction (TL) is then verified; when it is non-significant, generalized conclusions about the behaviour of the treatments can be drawn. The main interest, however, lies in the cases of significant interaction, where two prominent paths arise to complete the analysis: the first considers the results and conclusions of the individual analyses, using the specific residual of each location, while the second recommends partitioning the degrees of freedom for treatments plus interaction, aiming at interpreting the treatments within each location and using the mean residual as the testing term. Since variance components are variances associated with the random effects of a mathematical model, quantifying the variability of those effects, the objective of this work is, in groups of real experiments with significant TL interaction, to compare the variance components obtained in the individual analyses using the residual mean squares (QMRes) of each trial against those obtained after partitioning the interaction using the mean residual mean square (QMRM). This comparison is based on the estimated variances of the estimates of these components.
Finally, in groups of real and simulated trials, the objective turns to comparing the classificatory rankings of treatments from the individual analyses against the rankings obtained after partitioning the interaction. These rankings are built with Tukey's test at the 5% significance level, computing the minimum significant differences (dms) with residuals from the individual analyses in one case and from the joint analysis in the other. All calculations in this work are performed in the R statistical software.
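The minimum significant difference (dms) from Tukey's test can be computed directly from the studentized range distribution. A minimal sketch in Python (the thesis itself uses R); the numbers of treatments, replications, and the residual mean squares below are illustrative, not values from the thesis:

```python
import math
from scipy.stats import studentized_range

def tukey_dms(k, df_res, ms_res, r, alpha=0.05):
    """Minimum significant difference for Tukey's test:
    q(1 - alpha; k, df) * sqrt(MSRes / r)."""
    q = studentized_range.ppf(1 - alpha, k, df_res)
    return q * math.sqrt(ms_res / r)

# Illustrative values: 5 treatments, 4 replications
dms_individual = tukey_dms(k=5, df_res=12, ms_res=2.5, r=4)  # per-trial residual
dms_joint = tukey_dms(k=5, df_res=60, ms_res=2.1, r=4)       # pooled mean residual
print(dms_individual, dms_joint)
```

The larger residual degrees of freedom of the joint analysis shrink the critical value q, which is one reason rankings based on individual and joint residuals can disagree.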
213

Spatio-Temporal Analysis of Point Patterns

Soale, Abdul-Nasah 01 August 2016 (has links)
In this thesis, the basic tools of spatial statistics and time series analysis are applied to a case study of earthquakes in a certain geographical region and time frame. Some of the existing methods for the joint analysis of time and space are then described and applied. Finally, additional research questions about the spatio-temporal distribution of the earthquakes are posed and explored using statistical plots and models. The last section focuses on the relationship between the number of events per year and the maximum magnitude, its effect on how clustered the spatial distribution is, and the relationship between the temporal and spatial distances between consecutive events, as well as the distribution of those distances.
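The time and space gaps between consecutive events described in the abstract reduce to simple differencing of an ordered catalogue. A minimal sketch with made-up event data (times in days, planar coordinates in km; a real catalogue would use geodesic distances):

```python
import numpy as np

# Hypothetical catalogue: event times (days) and planar coordinates (km)
times = np.array([0.0, 2.5, 3.1, 10.0, 11.2])
coords = np.array([[0.0, 0.0], [1.0, 2.0], [1.5, 2.2], [8.0, 9.0], [8.3, 9.5]])

# Distances between consecutive events in time and in space
dt = np.diff(times)
ds = np.linalg.norm(np.diff(coords, axis=0), axis=1)

for t_gap, s_gap in zip(dt, ds):
    print(f"time gap: {t_gap:5.2f} d, space gap: {s_gap:5.2f} km")
```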
214

Performance of Imputation Algorithms on Artificially Produced Missing at Random Data

Oketch, Tobias O 01 May 2017 (has links)
Missing data is one of the challenges we face today in building valid statistical models, as it reduces the representativeness of the data samples. Hence, population estimates and model parameters estimated from such data are likely to be biased. However, the missing-data problem is an active area of study, and better alternative statistical procedures have been proposed to mitigate its shortcomings. In this paper, we review causes of missing data and various methods of handling it. Our main focus is evaluating various multiple imputation (MI) methods from the multiple imputation by chained equations (MICE) package in the statistical software R. We assess how these MI methods perform with different percentages of missing data. A multiple regression model was fit on the imputed data sets and on the complete data set, and statistical comparisons of the regression coefficients are made between the models using the imputed data and the complete data.
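The workflow in the abstract (generate MAR missingness, impute, refit the regression, compare coefficients) can be sketched in Python using scikit-learn's `IterativeImputer`, a chained-equations-style analogue of R's MICE; all data and parameter values below are simulated for illustration:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(scale=0.8, size=n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, x2])

# MAR mechanism: x2 goes missing with probability depending on observed x1
X_miss = X.copy()
p_miss = 1 / (1 + np.exp(-x1))  # larger x1 -> more likely to be missing
X_miss[rng.random(n) < 0.3 * p_miss, 1] = np.nan

# Chained-equations-style imputation
X_imp = IterativeImputer(random_state=0).fit_transform(X_miss)

# Compare regression coefficients on complete vs imputed data
b_full = LinearRegression().fit(X, y).coef_
b_imp = LinearRegression().fit(X_imp, y).coef_
print(b_full, b_imp)
```

A full MI analysis would draw several imputed data sets and pool the fitted coefficients rather than impute once.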
215

The Document Similarity Network: A Novel Technique for Visualizing Relationships in Text Corpora

Baker, Dylan 01 January 2017 (has links)
With the abundance of written information available online, it is useful to be able to automatically synthesize and extract meaningful information from text corpora. We present a unique method for visualizing relationships between documents in a text corpus. By using Latent Dirichlet Allocation to extract topics from the corpus, we create a graph whose nodes represent individual documents and whose edge weights indicate the distance between topic distributions in documents. These edge lengths are then scaled using multidimensional scaling techniques, such that more similar documents are clustered together. Applying this method to several datasets, we demonstrate that these graphs are useful in visually representing high-dimensional document clustering in topic-space.
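The pipeline described above (topic extraction, inter-document distances in topic space, multidimensional scaling) can be sketched with scikit-learn and SciPy; the toy corpus and the Jensen-Shannon distance are illustrative choices, not necessarily those of the thesis:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.manifold import MDS

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets fell sharply today",
    "investors sold shares as markets dropped",
]

# Per-document topic distributions via LDA
counts = CountVectorizer().fit_transform(docs)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# Pairwise distances between topic distributions
n = len(docs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = jensenshannon(topics[i], topics[j])

# Embed documents in 2D so that similar documents cluster together
xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
print(xy)
```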
216

AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR

Hamraz, Hamid 01 January 2018 (has links)
Traditional forest management relies on a small field sample and interpretation of aerial photography that not only are costly to execute but also yield inaccurate estimates of the entire forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from the LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. 90% of overstory and 47% of understory trees were detected with false positive rates of 14% and 2%, respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, showing a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually assemble the features.
In conclusion, the methods developed are steps forward to remote, accurate quantification of large natural forests at the individual tree level.
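The idea of vertically stratifying a normalized point cloud into canopy layers can be illustrated with a toy NumPy sketch; the synthetic points and the fixed 12 m height break are purely illustrative (the thesis derives layer boundaries from the data itself):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical height-normalized point cloud: x, y (m) and height above ground (m)
n = 1000
pts = np.column_stack([
    rng.uniform(0, 50, n),
    rng.uniform(0, 50, n),
    np.concatenate([rng.uniform(15, 30, 700),   # canopy-top returns
                    rng.uniform(1, 10, 300)]),  # lower-layer returns
])

# Simple vertical stratification with a fixed height break between layers
overstory = pts[pts[:, 2] >= 12.0]
understory = pts[(pts[:, 2] >= 1.0) & (pts[:, 2] < 12.0)]
print(len(overstory), len(understory))
```

In practice the understory layers receive far fewer returns because of occlusion by the overstory, which is what the thesis's occlusion model quantifies.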
217

Improved Methods and Selecting Classification Types for Time-Dependent Covariates in the Marginal Analysis of Longitudinal Data

Chen, I-Chen 01 January 2018 (has links)
Generalized estimating equations (GEE) are popularly utilized for the marginal analysis of longitudinal data. In order to obtain consistent regression parameter estimates, these estimating equations must be unbiased. However, when certain types of time-dependent covariates are present, these equations can be biased unless an independence working correlation structure is employed. Moreover, in this case regression parameter estimation can be very inefficient because not all valid moment conditions are incorporated within the corresponding estimating equations. Therefore, approaches using the generalized method of moments or quadratic inference functions have been proposed for utilizing all valid moment conditions. However, we have found that such methods will not always provide valid inference and can also be improved upon in terms of finite-sample regression parameter estimation. Therefore, we propose a modified GEE approach and a selection method that will both ensure the validity of inference and improve regression parameter estimation. In addition, these modified approaches assume the data analyst knows the type of time-dependent covariate, although this likely is not the case in practice. Whereas hypothesis testing has been used to determine covariate type, we propose a novel strategy to select a working covariate type in order to avoid the potentially high type II error rates of these hypothesis testing procedures. Parameter estimates resulting from our proposed method are consistent and have overall improved mean squared error relative to hypothesis testing approaches. Finally, because for some real-world examples the use of mean regression models may be sensitive to skewness and outliers in the data, we extend our approaches for use with marginal quantile regression, modeling the conditional quantiles of the response variable. Existing and proposed methods are compared in simulation studies and application examples.
218

Modeling and Mapping Location-Dependent Human Appearance

Bessinger, Zachary 01 January 2018 (has links)
Human appearance is highly variable and depends on individual preferences, such as fashion, facial expression, and makeup. These preferences depend on many factors including a person's sense of style, what they are doing, and the weather. These factors, in turn, are dependent upon geographic location and time. In our work, we build computational models to learn the relationship between human appearance, geographic location, and time. The primary contributions are a framework for collecting and processing geotagged imagery of people, a large dataset collected by our framework, and several generative and discriminative models that use our dataset to learn the relationship between human appearance, location, and time. Additionally, we build interactive maps that allow for inspection and demonstration of what our models have learned.
219

Generalizing Multistage Partition Procedures for Two-parameter Exponential Populations

Wang, Rui 06 August 2018 (has links)
ANOVA is a classic tool for multiple comparisons and has been widely used in numerous disciplines due to its simplicity and convenience. The ANOVA procedure is designed to test whether a number of different populations are all different, and is followed by the usual multiple comparison tests to rank the populations. However, the ANOVA procedure does not guarantee that the probability of selecting the best population exceeds some desired prespecified level. This shortcoming of the ANOVA procedure was overcome by researchers in the early 1950s, who designed experiments with the goal of selecting the best population. In this dissertation, a single-stage procedure is introduced to partition k treatments into "good" and "bad" groups with respect to a control population, assuming some key parameters are known. Next, the proposed partition procedure is generalized to the case where the parameters are unknown, and a purely sequential procedure and a two-stage procedure are derived. Theoretical asymptotic properties, such as first-order and second-order properties, of the proposed procedures are derived to document their efficiency. These theoretical properties are also studied via Monte Carlo simulations to document the performance of the procedures for small and moderate sample sizes.
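The two-parameter exponential model underlying these procedures has a location (threshold) parameter and a scale parameter; the building blocks of any simulation study of such procedures are their standard estimators. A minimal sketch with illustrative parameter values (the sample minimum estimates the location; the bias-adjusted mean excess estimates the scale):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two-parameter exponential: location mu and scale theta
mu, theta = 3.0, 2.0
x = mu + rng.exponential(scale=theta, size=500)

# Standard estimators: sample minimum for mu,
# bias-adjusted mean excess over the minimum for theta
mu_hat = x.min()
theta_hat = (x.mean() - mu_hat) * len(x) / (len(x) - 1)
print(mu_hat, theta_hat)
```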
220

Making Models with Bayes

Olid, Pilar 01 December 2017 (has links)
Bayesian statistics is an important approach to modern statistical analyses. It allows us to use our prior knowledge of the unknown parameters to construct a model for our data set. The foundation of Bayesian analysis is Bayes' Rule, which in its proportional form indicates that the posterior is proportional to the prior times the likelihood. We will demonstrate how we can apply Bayesian statistical techniques to fit a linear regression model and a hierarchical linear regression model to a data set. We will show how to apply different distributions to Bayesian analyses and how the use of a prior affects the model. We will also make a comparison between the Bayesian approach and the traditional frequentist approach to data analyses.
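The "posterior proportional to prior times likelihood" rule described above has a closed form in the conjugate normal case: a normal prior on an unknown mean with known variance yields a normal posterior. A minimal sketch with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
# Data: normal with unknown mean and known standard deviation
sigma = 2.0
data = rng.normal(loc=5.0, scale=sigma, size=50)

# Prior on the mean: Normal(mu0, tau0^2)
mu0, tau0 = 0.0, 10.0

# Conjugate update: precisions add; the posterior mean is a
# precision-weighted average of the prior mean and the data
n = len(data)
post_prec = 1 / tau0**2 + n / sigma**2
post_var = 1 / post_prec
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)
print(post_mean, post_var ** 0.5)
```

With a diffuse prior like this one, the posterior mean sits very close to the sample mean; a tighter prior would pull it toward mu0, illustrating how the choice of prior affects the model.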
