  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.

Alternative Methods of Estimating the Degree of Uncertainty in Student Ratings of Teaching

Alsarhan, Ala'a Mohammad 01 July 2017
This study used simulated results to evaluate four alternative methods of computing confidence intervals for class means in the context of student evaluations of teaching in a university setting. Because of the skewed and bounded nature of the ratings, the goal was to identify a procedure for constructing confidence intervals that would be asymmetric and not dependent upon normal curve theory. The four methods included (a) a logit transformation, (b) a resampling procedure, (c) a nonparametric, bias-corrected and accelerated bootstrap procedure, and (d) a Bayesian bootstrap procedure. The methods were compared against four criteria: (a) coverage probability, (b) coverage error, (c) average interval width, and (d) the lower and upper error probabilities. The results of each method were also compared with a classical procedure for computing the confidence interval based on normal curve theory. In addition, student evaluations of teaching effectiveness (SET) ratings from all courses taught during one semester at Brigham Young University were analyzed using multilevel generalizability theory to estimate variance components and the reliability of the class means as a function of the number of respondents in each class. The results showed that the logit transformation procedure outperformed the alternative methods, and that the reliability of the class means exceeded .80 for classes averaging 15 or more respondents. The study demonstrates the need to routinely report a margin of error associated with the mean SET rating for each class and recommends that a confidence interval based on the logit transformation procedure be used for this purpose.
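The logit-transformation interval that performed best can be sketched as follows. This is an illustrative construction only: the 1–5 rating bounds, the delta-method standard error, and the clipping of the scaled mean away from 0 and 1 are assumptions, not the dissertation's exact recipe.

```python
import math
import statistics

def logit_ci(ratings, lo=1.0, hi=5.0, z=1.96):
    """Asymmetric 95% CI for a bounded class-mean rating via a logit transform.

    Sketch: build a symmetric interval on the logit scale, then back-transform,
    which yields an asymmetric interval that respects the rating bounds.
    """
    n = len(ratings)
    mean = statistics.fmean(ratings)
    # Rescale the mean from [lo, hi] onto (0, 1) so the logit is defined.
    p = (mean - lo) / (hi - lo)
    p = min(max(p, 1e-6), 1 - 1e-6)
    eta = math.log(p / (1 - p))                      # logit of the scaled mean
    se_p = statistics.stdev(ratings) / math.sqrt(n) / (hi - lo)
    se_eta = se_p / (p * (1 - p))                    # delta method: d logit/dp = 1/(p(1-p))
    inv = lambda e: lo + (hi - lo) / (1 + math.exp(-e))
    return inv(eta - z * se_eta), inv(eta + z * se_eta)
```

For a class mean near the top of the scale, the back-transformed interval has a shorter arm toward the upper bound, exactly the asymmetry the study was after.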

Multi-Carrier Communications Over Underwater Acoustic Channels

January 2011
Underwater acoustic communications face significant challenges unprecedented in terrestrial radio communications, including long multipath delay spreads, strong Doppler effects, and stringent bandwidth limitations. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels deteriorates significantly due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. With the motivation of developing enabling techniques for OFDM over UWA channels, the major contributions of this thesis include: (1) two effective frequency-domain equalizers that provide general means to counteract ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) the proposal of orthogonal frequency division multiple access (OFDMA) as an effective multiple-access scheme for UWA communications; and (4) a capacity evaluation of single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified through both simulations and emulations based on data collected in real-life UWA communication experiments. In particular, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead and to offer robustness against Doppler and timing estimation errors. The multiple-resampling designs, where each branch is tasked with the Doppler distortion of different paths and/or users, overcome the disadvantages of the commonly used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also demonstrated to be necessary for UWA OFDMA systems. The unique design effectively mitigates interuser interference (IUI), opening up the possibility of exploiting advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel capacity evaluation results. / Dissertation/Thesis / Ph.D. Electrical Engineering 2011
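The resampling operation at the heart of these receivers can be illustrated with a toy snippet. The linear-interpolation resampler and the single-tone example are assumptions for illustration; a real receiver would operate on passband data and also correct residual carrier frequency offset.

```python
import numpy as np

def doppler_resample(rx, a):
    """Compensate a Doppler scaling factor `a` by resampling.

    If the channel dilates time so that rx[n] = s(n / (1 + a)), evaluating
    rx at the points t * (1 + a) recovers s on the original sample grid.
    Linear interpolation keeps the sketch simple.
    """
    t = np.arange(len(rx))
    return np.interp(t * (1 + a), t, rx)

def multibranch_resample(rx, factors):
    """Multiple-resampling front end: one branch per path/user Doppler factor."""
    return [doppler_resample(rx, a) for a in factors]
```

Each branch then feeds its own demodulation chain, so users or paths with different Doppler scales are handled independently.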

Integração de redes neurais artificiais ao nariz eletrônico: avaliação aromática de café solúvel / Integration of artificial neural networks with the electronic nose: aroma evaluation of instant coffee

Bona, Evandro January 2008
No description available.

Análise da dinâmica do potássio e nitrato em colunas de solo não saturado por meio de modelos não lineares e multiresposta / Analysis of the dynamics of potassium and nitrate in soil columns unsaturated through nonlinear model and multi-response

Ana Patricia Bastos Peixoto 02 August 2013
In recent years, many computational models have been proposed to describe the movement of solutes in the soil profile; even so, modeling these phenomena so that a model can predict the displacement and retention of solutes in nature remains difficult. The aim of this study was therefore to use a statistical model to describe solute transport in the soil profile. A laboratory experiment was conducted in which potassium and nitrate levels were observed along the profile of two soils, a Red-Yellow Latosol (Haplustox) and a Red Nitosol (Hapludox). Two approaches were considered for inference on these variables. In the first, a nonlinear regression model was fitted to each variable, with model parameters that have a practical interpretation in soil science. Nonlinearity measures were computed for this model to check the asymptotic properties of the parameter estimators, and both the least squares method and the bootstrap method were considered for estimation. A diagnostic analysis was also performed to verify the adequacy of the model and to identify outliers. In the second approach, a multi-response model was used to analyze the behavior of nitrate and potassium jointly along the soil profile, with parameters estimated by maximum likelihood. In both situations the models proved adequate for describing the behavior of solutes in soils, offering an alternative for researchers who study soils. The four-parameter logistic model stood out for its better properties, such as low nonlinearity measures and good quality of fit.
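The kind of fit described above can be sketched with synthetic data: a four-parameter logistic curve fitted by least squares, followed by a residual bootstrap for a percentile interval on one parameter. The parameterization, depths, and noise level below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, upper, slope, mid, lower):
    """Four-parameter logistic, a common form for solute-vs-depth profiles."""
    return lower + (upper - lower) / (1.0 + (x / mid) ** slope)

rng = np.random.default_rng(42)
depth = np.linspace(0.5, 15.0, 40)        # hypothetical depths (cm)
true = (10.0, 3.0, 5.0, 1.0)              # upper, slope, midpoint, lower
conc = logistic4(depth, *true) + rng.normal(0.0, 0.05, depth.size)

# Least-squares fit of the solute profile.
popt, _ = curve_fit(logistic4, depth, conc, p0=(9.0, 2.0, 4.0, 2.0))

# Residual bootstrap: refit on resampled residuals, percentile CI for `mid`.
resid = conc - logistic4(depth, *popt)
mids = []
for _ in range(200):
    y_star = logistic4(depth, *popt) + rng.choice(resid, resid.size, replace=True)
    p_star, _ = curve_fit(logistic4, depth, y_star, p0=popt)
    mids.append(p_star[2])
ci_lo, ci_hi = np.percentile(mids, [2.5, 97.5])
```

The bootstrap interval requires no normality assumption on the residuals, which matches the study's motivation for comparing least squares with bootstrap estimation.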

Ácaros na cultura de soja: genótipos, danos e tamanho de amostra / Spider mites on soybean: genotypes, damage and sample size

Fiorin, Rubens Alex 29 August 2014
This study aimed to evaluate the influence of soybean genotypes on spider mite populations, to quantify the damage caused by spider mite attack, and to determine the number of leaflets to collect from different genotypes in order to estimate the spider mite population. Two experiments were carried out, in São Sepé (20 genotypes) and in Santa Maria (25 genotypes), in a randomized block design with four replications and experimental units of 4.5 and 5.0 × 25 m. Weekly samplings collected 25 leaflets from the middle stratum and 25 from the upper stratum of the soybean plants of each genotype, evaluating an area of 20 cm² per leaflet. To determine the sample size, data were used from evaluations in which at least one genotype had an average population above one spider mite per cm². The number of spider mites was taken as immatures plus adults, and genotype means were compared with a bootstrap t test. Sample size was estimated for confidence interval amplitudes of 2 and 4 spider mites per 20 cm², and the optimal sample size was calculated. To quantify spider mite damage, infested plots and plots kept uninfested by acaricide sprays were maintained for each genotype. The predominant species was Mononychellus planki. Spider mite populations vary across genotypes and concentrate in the upper stratum of the plant. The necessary sample size grows with the population: at the beginning of an infestation, 50 leaflets suffice for a maximum 95% confidence interval amplitude (1 − p = 0.95) of 2 spider mites per 20 cm², while quantifying higher populations requires 150 leaflets for a maximum amplitude of 4 spider mites per 20 cm². Yield variation in response to spider mite attack depends on the genotype, and for all genotypes there was a difference between infested and uninfested plots. Average damage was 493 kg ha⁻¹ in the Santa Maria experiment and 427 kg ha⁻¹ in São Sepé, with an average gain of 33.4% from control.
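The link between confidence interval amplitude and leaflet count can be illustrated with the standard normal-approximation sample-size formula n = (2·z·s / A)². Both this formula and the example counts are assumptions for illustration; the thesis's calculations were bootstrap-based.

```python
import math
import statistics

def leaflets_needed(counts, amplitude, z=1.96):
    """Leaflets required so the 95% CI for the mean mite count per 20 cm^2
    has total width `amplitude`: n = (2*z*s / A)^2, rounded up.
    """
    s = statistics.stdev(counts)
    return math.ceil((2.0 * z * s / amplitude) ** 2)
```

Halving the tolerated amplitude quadruples the required sample, which is why 50 leaflets suffice early in an infestation but denser populations call for far more.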

Model adaptation techniques in machine translation / Techniques d'adaptation en traduction automatique

Shah, Kashif 29 June 2012
Nowadays several indicators suggest that the statistical approach to machine translation is the most promising. It allows fast development of systems for any language pair, provided that sufficient training data is available. Statistical Machine Translation (SMT) systems use parallel texts, also called bitexts, as training material for the creation of the translation model, and monolingual corpora for target-language modeling. The performance of an SMT system depends heavily upon the quality and quantity of the available data. In order to train the translation model, parallel texts are collected from various sources and domains. These corpora are usually concatenated, word alignments are calculated, and phrases are extracted. However, parallel data is quite inhomogeneous in many practical applications with respect to factors such as data source, alignment quality, and appropriateness to the task. This means the corpora are not weighted according to their importance to the domain of the translation task, so it is the domain of the training resources that influences which translations are selected among several choices. This is in contrast to the training of the language model, for which well-known techniques are used to weight the various sources of text.

We have proposed novel methods to automatically weight heterogeneous data in order to adapt the translation model. In a first approach, this is achieved with a resampling technique: a weight is assigned to each bitext to select the proportion of data taken from that corpus, and the alignments coming from each bitext are resampled based on these weights. The corpus weights are optimized directly on the development data using a numerical method, and an alignment score of each aligned sentence pair is used as a confidence measure. In an extended work, we obtain such a weighting by resampling alignments using weights that decrease with the temporal distance of the bitexts to the test set. By these means, we can use all the available bitexts while still putting an emphasis on the most recent ones. The main idea of this approach is to use a parametric form, or meta-weights, for the weighting of the different parts of the bitexts, which ensures that only a few parameters need to be optimized. In another work, we have proposed a generic framework that takes corpus-level and sentence-level "goodness scores" into account during the calculation of the phrase table, which results in a better distribution of probability mass over the individual phrase pairs.

We have presented results in several international evaluation campaigns, including IWSLT, NIST, OpenMT and WMT, on the English/Arabic and French/Arabic language pairs, showing significant improvements in the quality of the produced translations.
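The corpus-weighting idea from the first approach can be sketched in a few lines: assign each bitext a weight, then resample aligned sentence pairs with those proportions. The corpora and weights below are invented; the real method also optimizes the weights numerically on development data.

```python
import random

def resample_bitexts(bitexts, weights, n_pairs, seed=0):
    """Draw a training set of `n_pairs` aligned sentence pairs, picking each
    pair's source corpus with probability proportional to its weight.
    `bitexts` maps a corpus name to its list of (source, target) pairs.
    """
    rng = random.Random(seed)
    names = list(bitexts)
    w = [weights[name] for name in names]
    return [rng.choice(bitexts[rng.choices(names, weights=w)[0]])
            for _ in range(n_pairs)]
```

In-domain bitexts can thus dominate phrase extraction without discarding out-of-domain data entirely.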

Novel Approach to Epipolar Resampling of HRSI and Satellite Stereo Imagery-based Georeferencing of Aerial Images

Oh, Jaehong 22 July 2011
No description available.

Frequentist-Bayesian Hybrid Tests in Semi-parametric and Non-parametric Models with Low/High-Dimensional Covariate

Xu, Yangyi 03 December 2014
This dissertation provides a frequentist-Bayesian hybrid test statistic for two testing problems: the first is to test for significant differences between nonparametric functions, and the second is to test for any departure from constancy in the predictors of a high-dimensional X. The construction of the proposed test statistics is given for both problems. For the first problem, statistical differences among massive outcomes or signals are of interest in many fields, including neurophysiology, imaging, and engineering. Such data often arise from nonlinear systems, exhibit row/column patterns, have non-normal distributions, and contain hard-to-identify internal relationships, which makes testing the significance of differences between them difficult under both unknown relationships and high dimensionality. We propose an Adaptive Bayes Sum Test capable of testing the significance of the difference between two nonlinear systems based on universal nonparametric decomposition/smoothing components. Our approach adapts the Bayes sum test statistic of Hart (2009). Internal patterns are treated through a Fourier transformation, and resampling techniques are applied to construct the empirical distribution of the test statistic, reducing the effect of non-normality. A simulation study suggests our approach performs better than the alternative method, the Adaptive Neyman Test of Fan and Lin (1998). Its usefulness is demonstrated with an application to the identification of electronic chips and an application to testing changes in precipitation patterns. For the second problem, numerous statistical methods have been developed for analyzing high-dimensional data. These methods mainly focus on variable selection, are limited for testing purposes with high-dimensional data, and often require explicit derivatives of the likelihood function. We propose a "Hybrid Omnibus Test" for high-dimensional testing with far fewer requirements. It is developed in a semiparametric framework where a likelihood function is no longer necessary: a frequentist-Bayesian hybrid score-type test for a functional generalized partial linear single-index model, whose link is a functional of the predictors through a generalized partially linear single index. We propose an efficient score based on an estimating equation to overcome the difficulty of likelihood derivation and use it to construct the Hybrid Omnibus Test. We compare our approach with an empirical likelihood ratio test and with Bayesian inference based on the Bayes factor in a simulation study, in terms of false positive rate and true positive rate. The simulation results suggest that our approach outperforms the alternatives in false positive rate, true positive rate, and computational cost in both high- and low-dimensional cases. The advantage of the approach is also demonstrated on published biological results, with an application to a genetic-pathway data set for type II diabetes. / Ph. D.
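The resampling-based empirical null described above can be illustrated generically with a permutation test between two groups of signals. The simple squared-distance statistic stands in for the dissertation's Adaptive Bayes Sum statistic and is an assumption for illustration only.

```python
import numpy as np

def perm_curve_test(y1, y2, n_perm=999, seed=0):
    """Permutation p-value for a difference between two groups of signals.

    Statistic: squared distance between the group-mean curves. Group labels
    are shuffled to build the empirical null distribution.
    """
    rng = np.random.default_rng(seed)
    pooled = np.vstack([y1, y2])
    n1 = len(y1)

    def stat(a, b):
        return np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2)

    observed = stat(y1, y2)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if stat(pooled[idx[:n1]], pooled[idx[n1:]]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one correction
```

Because the null distribution is built from the data themselves, no normality assumption is needed, mirroring the motivation for the resampling step in the dissertation.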

Modelo de regressão para dados com censura intervalar e dados de sobrevivência grupados / Regression model for interval-censored data and grouped survival data

Hashimoto, Elizabeth Mie 04 February 2009
In this study, a regression model for interval-censored data was developed using the exponentiated-Weibull distribution, whose main characteristic is a hazard function that can assume different shapes (unimodal, bathtub-shaped, increasing, decreasing). An attractive feature of this regression model is its use for discriminating among models, since it contains as particular cases the exponential, Weibull, and exponentiated-exponential regression models, among others. A regression model for grouped survival data was also studied, in which the approach is based on discrete-time models and life tables; the regression structure, represented by a probability, is modeled with different link functions: logit, complementary log-log, log-log, and probit. In both studies, validation methods for the proposed statistical models are described and grounded in sensitivity analysis. To detect influential observations, diagnostic measures based on case deletion (global influence) and on small perturbations of the data or of the model (local influence) were used. To verify goodness of fit and to detect outliers, residual analysis was performed for the proposed models. The developed results were applied to two real data sets.
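The shape flexibility of the exponentiated-Weibull hazard can be checked numerically. The cdf F(t) = [1 − exp(−(t/σ)^γ)]^θ is the standard form of the distribution; the central-difference density below is a sketch-level shortcut, not the thesis's formulation.

```python
import math

def ew_hazard(t, sigma, gamma, theta, eps=1e-5):
    """Hazard h(t) = f(t) / (1 - F(t)) for the exponentiated-Weibull
    distribution, with the density f obtained by central differences.
    """
    def cdf(x):
        return (1.0 - math.exp(-((x / sigma) ** gamma))) ** theta
    f = (cdf(t + eps) - cdf(t - eps)) / (2.0 * eps)
    return f / (1.0 - cdf(t))

# Known shape regions: gamma >= 1 with gamma*theta >= 1 gives an increasing
# hazard; gamma <= 1 with gamma*theta <= 1 gives a decreasing one.
```

Evaluating the hazard over a grid for different (γ, θ) pairs reproduces the increasing, decreasing, unimodal, and bathtub shapes that make the model useful for discriminating among its special cases.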
