91

5G Positioning using Machine Learning

Malmström, Magnus January 2018
Positioning is recognized as an important feature of fifth-generation (5G) cellular networks due to the massive number of commercial use cases that would benefit from access to position information. Radio-based positioning has always been a challenging task in urban canyons, where buildings block and reflect the radio signal, causing multipath propagation and non-line-of-sight (NLOS) signal conditions. One approach to handling NLOS is to apply data-driven methods such as machine learning algorithms to beam-based data, where a training data set of positioned measurements is used to train a model that transforms measurements into position estimates. The work is based on position and radio measurement data from a 5G testbed. The transmission point (TP) in the testbed has an antenna with beams in both horizontal and vertical layers. The measurements are the beam reference signal received power (BRSRP) from the beams and the direction of departure (DOD) from the set of beams with the highest received signal strength (RSS). To model the relation between measurements and positions, two non-linear models have been considered: neural networks and random forests. These non-linear models are referred to as machine learning algorithms. The machine learning algorithms are able to position the user equipment (UE) in NLOS regions with a horizontal positioning error of less than 10 meters in 80 percent of the test cases. The results also show that it is essential to combine information from beams in the different vertical antenna layers to perform positioning with high accuracy under NLOS conditions. Further, the tests show that the data must be separated into line-of-sight (LOS) and NLOS data before training the machine learning algorithms to achieve good positioning performance under both LOS and NLOS conditions.
Therefore, a generalized likelihood ratio test (GLRT) to classify data as originating from LOS or NLOS conditions has been developed. The probability of detection of the algorithms is about 90% when the probability of false alarm is only 5%. To boost the position accuracy of the machine learning algorithms, a Kalman filter has been developed that takes the output of the machine learning algorithms as input. Results show that this can improve the position accuracy in NLOS scenarios significantly. / Radio-based positioning of user equipment is an important application in fifth-generation (5G) radio networks, on which much time and money is spent for development and improvement. One example application area is the positioning of emergency calls, where the user equipment should be positioned with an accuracy of some tens of meters. Radio-based positioning has always been challenging in urban environments, where tall buildings obstruct and reflect the signal between the user equipment and the base station. One idea for positioning in these challenging urban environments is to use data-driven models trained on positioned test data, so-called machine learning algorithms. In this work, two non-linear models, neural networks and random forests, have been implemented and evaluated for positioning of user equipment where the signal from the base station is obstructed. The evaluation was performed on data collected by Ericsson from a 5G prototype network located in Kista, Stockholm. The antenna of the base station used has 48 beams in five different vertical layers. The inputs and targets of the machine learning algorithms are the signal strength of each beam (BRSRP) and given GPS positions of the user equipment, respectively. The results show that these machine learning algorithms position the user equipment with an uncertainty of less than ten meters in 80 percent of the test cases.
To achieve these results, it is important to detect whether the signal between the user equipment and the base station is obstructed or not. For this purpose, a statistical test has been implemented. The probability of detection for the test is above 90 percent, while the probability of false alarm is only a few percent. To reduce the positioning uncertainty, the output of the machine learning algorithms has also been filtered with a Kalman filter. Results from these investigations show that the Kalman filter can improve the positioning precision noticeably.
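The Kalman-filter post-processing described in this abstract can be illustrated with a minimal per-axis sketch. This is an illustrative example under an assumed random-walk motion model; the function name and noise variances are not from the thesis.

```python
# Illustrative sketch (assumed model, not the thesis's actual filter):
# a 1-D random-walk Kalman filter that smooths noisy per-axis position
# estimates, e.g. the output of a machine-learning positioning algorithm.

def kalman_smooth(measurements, q=1.0, r=25.0, x0=0.0, p0=100.0):
    """Filter a sequence of scalar position measurements.

    q  -- process-noise variance (how fast the UE is assumed to move)
    r  -- measurement-noise variance (uncertainty of the ML estimates)
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: random-walk motion model x_k = x_{k-1} + w_k
        p = p + q
        # Update with the new measurement z
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Applied independently to the east and north coordinates, the filter pulls each raw estimate toward the recent history, which is where the reduction of NLOS position scatter comes from.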
92

Testes de hipóteses em eleições majoritárias / Test of hypothesis in majoritarian election

Victor Fossaluza 16 June 2008
The problem of inference about a proportion, widely explored in the statistical literature, plays a key role in the development of several theories of statistical inference and, invariably, is the object of investigation and discussion in comparative studies among different schools of inference. In addition, the estimation of proportions, as well as tests of hypotheses for proportions, is very important in many areas of knowledge, as it constitutes a simple and universal quantitative method. In this work, a comparative study between the Classical and Bayesian approaches to the problem of testing the hypothesis of occurrence (or not) of a second round in a typical scenario of a majoritarian (absolute majority) two-round election in Brazil is developed.
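The second-round question above amounts to testing a proportion against one half. A minimal sketch of the two approaches might look as follows, using a normal approximation for the classical one-sided test and a uniform (Beta) prior with a normal approximation to the posterior for the Bayesian one; the function name and vote counts are hypothetical, and the thesis's own comparison is more careful than this.

```python
import math

def second_round_tests(n, x):
    """x votes for the leading candidate out of n valid votes.
    Returns (classical one-sided p-value for H0: p <= 0.5,
             Bayesian posterior P(p > 0.5) under a uniform prior,
             via a normal approximation to the Beta posterior)."""
    phat = x / n
    se = math.sqrt(0.25 / n)                 # null standard error
    z = (phat - 0.5) / se
    p_value = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    # Beta(x + 1, n - x + 1) posterior under a uniform prior
    a, b = x + 1, n - x + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    z2 = (0.5 - mean) / math.sqrt(var)
    posterior = 1.0 - 0.5 * (1.0 + math.erf(z2 / math.sqrt(2.0)))
    return p_value, posterior
```

With, say, 530 of 1000 votes for the leader, both numbers point the same way (small p-value, high posterior probability of no second round), but the two quantities answer different questions, which is exactly the kind of contrast such comparative studies examine.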
93

Modelos lineares mistos para dados longitudinais em ensaio fatorial com tratamento adicional / Mixed linear models for longitudinal data in a factorial experiment with additional treatment

Gilson Silvério da Rocha 09 October 2015
Assays aimed at studying crops through multiple measurements performed on the same sample unit over time, space, depth, etc. are frequently adopted in agronomic experiments. This type of measurement gives rise to a dataset known as longitudinal data, for which the use of statistical procedures capable of identifying possible patterns of variation and correlation among measurements is of great importance. The possibility of including random effects and modeling covariance structures makes the methodology of mixed linear models one of the most appropriate tools for this type of analysis.
However, despite all the theoretical and computational development, the use of this methodology in more complex designs involving longitudinal data and additional treatments, such as those used in forage crops, still needs to be studied. The present work covers the use of the Hasse diagram and the top-down strategy in building mixed linear models for the study of successive cuts from an experiment involving boron fertilization in alfalfa (Medicago sativa L.) carried out in the field area of Embrapa Southeast Livestock. First, we considered a qualitative approach for all study factors and chose to build the Hasse diagram due to the complexity of the experimental design. The inclusion of random effects and the selection of covariance structures for the residuals were performed based on the likelihood ratio test, calculated from parameters estimated by restricted maximum likelihood, and on the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc), and the Bayesian Information Criterion (BIC). The fixed effects were tested using the Wald F-test and, due to the significant effects of the variation sources associated with the longitudinal factor, a regression study was developed. Building the Hasse diagram was essential for understanding and symbolically displaying the relationships among all factors present in the study, allowing the variation sources and their degrees of freedom to be decomposed and ensuring that all tests were performed correctly. The inclusion of a random effect associated with the experimental unit was essential for modeling the behavior of each unit. Furthermore, the structure of variance components with heterogeneity, incorporated into the residuals, was capable of efficiently modeling the heterogeneity of variances present in the different cuts of the alfalfa plants. The fit was checked by residual diagnostic plots.
The regression study allowed us to evaluate the productivity of shoot dry matter (kg ha-1) over successive cuts of the alfalfa plants, comparing fertilization with different boron sources and doses. The best productivity results were observed for the combination of the source ulexite with doses of 3, 6, and 9 kg ha-1 of boron.
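The model-selection machinery used in this abstract (a likelihood ratio test together with AIC, AICc, and BIC) can be sketched generically from two fitted log-likelihoods. The helper below is illustrative only, not code from the thesis; the sample size n and parameter counts are whatever the fitted models report.

```python
import math

def lrt_and_criteria(ll_null, ll_alt, k_null, k_alt, n):
    """Likelihood-ratio statistic and AIC/AICc/BIC for two nested models.

    ll_* -- maximized log-likelihoods, k_* -- parameter counts,
    n    -- number of observations.  The LR statistic is compared to a
    chi-square with k_alt - k_null degrees of freedom."""
    lr = 2.0 * (ll_alt - ll_null)
    def criteria(ll, k):
        aic = 2 * k - 2 * ll
        aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
        bic = k * math.log(n) - 2 * ll
        return {"AIC": aic, "AICc": aicc, "BIC": bic}
    return lr, criteria(ll_null, k_null), criteria(ll_alt, k_alt)
```

In covariance-structure selection the "null" model is typically the simpler structure (e.g. homogeneous variance components) and the alternative the richer one (e.g. heterogeneous components), with smaller criterion values preferred.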
94

Techniques statistiques de détection de cibles dans des images infrarouges inhomogènes en milieu maritime. / Statistical techniques for target detection in inhomogenous infrared images in maritime environment

Vasquez, Emilie 11 January 2011
Statistical techniques for the detection of point targets in the sky or resolved targets in the sea, in infrared panoramic surveillance images, are developed. These techniques are adapted to the inhomogeneities present in this kind of image. They are based solely on the analysis of spatial information and aim to control the false alarm rate in each image. For sky areas, a joint segmentation and detection technique adapted to spatial variations of the mean luminosity is developed, and the performance improvement it yields is analyzed. For sea areas, an edge detector with a constant false alarm rate in the presence of inhomogeneities and spatial correlations of the grey levels is developed and characterized. In each case, taking the inhomogeneities into account in these statistical algorithms proves essential to control the false alarm rate and improve the detection performance.
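The constant-false-alarm-rate idea for the sea areas can be illustrated with a classic cell-averaging CFAR detector on a one-dimensional signal. This is a textbook sketch, not the detector developed in the thesis, and the guard/training window sizes and threshold factor are assumed values.

```python
def ca_cfar(signal, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR: flag cells whose value exceeds `scale` times
    the mean of the surrounding training cells (guard cells excluded).
    The threshold adapts to the local noise level, which is what keeps
    the false alarm rate roughly constant across inhomogeneous regions."""
    detections = []
    half = guard + train
    for i in range(half, len(signal) - half):
        left = signal[i - half : i - guard]          # training cells, left side
        right = signal[i + guard + 1 : i + half + 1] # training cells, right side
        noise = sum(left + right) / (2 * train)      # local noise estimate
        if signal[i] > scale * noise:
            detections.append(i)
    return detections
```

When grey levels are spatially correlated, as in the sea clutter considered here, the independence assumption behind this simple average breaks down, which is precisely why the thesis develops a detector adapted to inhomogeneities and correlations.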
95

Extending covariance structure analysis for multivariate and functional data

Sheppard, Therese January 2010
For multivariate data, when testing homogeneity of covariance matrices arising from two or more groups, Bartlett's (1937) modified likelihood ratio test statistic is appropriate under the null hypothesis of equal covariance matrices, where the null distribution of the test statistic rests on the restrictive assumption of normality. Zhang and Boos (1992) provide a pooled bootstrap approach when the data cannot be assumed to be normally distributed. We give three alternative bootstrap techniques for testing homogeneity of covariance matrices when it is both inappropriate to pool the data into one single population, as in the pooled bootstrap procedure, and when the data are not normally distributed. We further show that our alternative bootstrap methodology can be extended to testing Flury's (1988) hierarchy of covariance structure models. Where deviations from normality exist, we show by simulation that the normal-theory log-likelihood ratio test statistic is less viable than our bootstrap methodology. For functional data, Ramsay and Silverman (2005) and Lee et al. (2002) together provide four computational techniques for functional principal component analysis (PCA) followed by covariance structure estimation. When individual profiles are smoothed using least squares cubic B-splines or regression splines, we find that the ensuing covariance matrix estimate suffers from loss of dimensionality. We show that ridge regression can resolve this problem, but only for the discretisation and numerical quadrature approaches to estimation, and that the choice of a suitable ridge parameter is not arbitrary. We further show the unsuitability of regression splines when deciding on the optimal degree of smoothing to apply to individual profiles. To gain insight into smoothing parameter choice for functional data, we compare kernel and spline approaches to smoothing individual profiles in a nonparametric regression context.
Our simulation results justify a kernel approach using a new criterion based on predicted squared error. We also show by simulation that, when taking account of correlation, a kernel approach using a generalized cross-validatory criterion performs well. These data-based methods for selecting the smoothing parameter are illustrated prior to a functional PCA on a real data set.
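The multivariate homogeneity test discussed above has a well-known univariate analogue. As an illustration of the normal-theory approach that the bootstrap is meant to replace, here is Bartlett's classical statistic for equality of variances across groups; this is a univariate sketch only, not the thesis's multivariate procedure.

```python
import math

def bartlett_statistic(groups):
    """Bartlett's test statistic for homogeneity of variances across
    k groups; compare to a chi-square with k - 1 degrees of freedom.
    Like its multivariate cousin, it relies on normality, which is the
    assumption the bootstrap alternatives avoid."""
    k = len(groups)
    n = [len(g) for g in groups]
    N = sum(n)
    def var(g):
        m = sum(g) / len(g)
        return sum((x - m) ** 2 for x in g) / (len(g) - 1)
    s2 = [var(g) for g in groups]
    # Pooled variance and the (corrected) log-likelihood-ratio statistic
    sp2 = sum((ni - 1) * si for ni, si in zip(n, s2)) / (N - k)
    num = (N - k) * math.log(sp2) - sum(
        (ni - 1) * math.log(si) for ni, si in zip(n, s2))
    c = 1 + (sum(1 / (ni - 1) for ni in n) - 1 / (N - k)) / (3 * (k - 1))
    return num / c
```

Identical group variances drive the statistic to zero; heavy-tailed data inflate it even under the null, which is the failure mode that motivates bootstrap calibration.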
96

Modely s kategoriální odezvou / Models with categorical response

Faltýnková, Anežka January 2015
This thesis concentrates on regression models with a categorical response. It focuses on the model of logistic regression with a binary response and on its generalization, in which two models are distinguished: multinomial regression with a nominal response and multinomial regression with an ordinal response. For each of the three models, the Wald test and the likelihood ratio test are derived. These theoretical derivations are then used to calculate the test statistics for specific examples in the statistical software R. The theory described in the thesis is illustrated by examples with small and large numbers of explanatory variables.
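For the simplest member of this family, an intercept-only logistic model for a binary response, both tests have closed forms. The sketch below (hypothetical function name and data, written in Python rather than the thesis's R) computes the Wald and likelihood-ratio statistics for H0: beta0 = 0, i.e. p = 0.5.

```python
import math

def intercept_only_tests(y):
    """Wald and likelihood-ratio statistics for H0: beta0 = 0 in an
    intercept-only logistic model for a binary response y (0/1 values,
    assumed not all equal).  Both are chi-square_1 under H0."""
    n, s = len(y), sum(y)
    p_hat = s / n
    b_hat = math.log(p_hat / (1 - p_hat))            # MLE of the intercept
    se = math.sqrt(1.0 / (n * p_hat * (1 - p_hat)))  # from Fisher information
    wald = (b_hat / se) ** 2
    def loglik(b):
        # log-likelihood of the intercept-only model at intercept b
        return s * b - n * math.log(1 + math.exp(b))
    lrt = 2.0 * (loglik(b_hat) - loglik(0.0))
    return wald, lrt
```

The two statistics agree asymptotically but can differ noticeably in small samples, which is part of what makes the comparison across the three models interesting.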
97

Fitting extreme value distributions to the Zambezi River flood water levels recorded at Katima Mulilo in Namibia (1965-2003)

Kamwi, Innocent Silibelo January 2005
Magister Scientiae - MSc / This study sought to identify and fit an appropriate extreme value distribution to flood data using the method of maximum likelihood, to examine the uncertainty of the estimated parameters, and to evaluate the goodness of fit of the identified model. The study revealed that the three-parameter Weibull and the generalised extreme value (GEV) distributions fit the data very well. Standard errors for the estimated parameters were calculated from the empirical information matrix. An upper limit to the flood levels followed from the fitted distribution.
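The upper limit mentioned at the end of the abstract comes from the GEV shape parameter: when the shape xi is negative, the distribution has a finite upper endpoint at mu - sigma/xi. A minimal quantile (return-level) sketch, with xi = 0 handled as the Gumbel limit; the parameter values used below are assumptions for illustration, not the fitted Zambezi values.

```python
import math

def gev_quantile(p, mu, sigma, xi):
    """Return level x_p with F(x_p) = p for the GEV distribution.
    mu, sigma, xi are location, scale, and shape; xi = 0 is treated as
    the Gumbel limit, and xi < 0 implies a finite upper endpoint at
    mu - sigma / xi (an upper limit to the flood levels)."""
    if abs(xi) < 1e-12:
        return mu - sigma * math.log(-math.log(p))   # Gumbel case
    return mu + sigma * ((-math.log(p)) ** (-xi) - 1.0) / xi
```

A sanity check: in the Gumbel case the quantile at p = exp(-1) equals mu, and for xi &lt; 0 even extreme quantiles stay below the endpoint mu - sigma/xi.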
98

GLR Control Charts for Monitoring Correlated Binary Processes

Wang, Ning 27 December 2013
When monitoring a binary process proportion p, it is usually assumed that the binary observations are independent. However, it is very common that the observations are correlated, with ρ being the correlation between two successive observations. The first part of this research investigates the problem of monitoring p when the binary observations follow a first-order two-state Markov chain model with ρ remaining unchanged. A Markov Binary GLR (MBGLR) chart with an upper bound on the estimate of p is proposed to monitor a continuous stream of autocorrelated binary observations, treating each observation as a sample of size n=1. The MBGLR chart with a large upper bound has good overall performance over a wide range of shifts. The MBGLR chart is optimized using the extra number of defectives (END) over a range of upper bounds for the MLE of p. The numerical results show that the optimized MBGLR chart has a smaller END than the optimized Markov binary CUSUM. The second part of this research develops a CUSUM-pρ chart and a GLR-pρ chart to monitor p and ρ simultaneously. The CUSUM-pρ chart with two tuning parameters is designed to detect shifts in p and ρ when the shifted values are known. We apply two CUSUM-pρ charts as a chart combination to detect increases in p and increases or decreases in ρ. The GLR-pρ chart, with an upper bound on the estimate of p and an upper and a lower bound on the estimate of ρ, works well when the shifts are unknown. We find that the GLR-pρ chart has better overall performance. The last part of this research investigates the problem of monitoring p, with ρ remaining at the target value, when the correlated binary observations are aggregated into samples with n>1. We assume that the samples are independent and that there is correlation between the observations within a sample. We propose several GLR and CUSUM charts to monitor p, and the performance of the charts is compared. The simulation results show that the MBNGLR chart has overall better performance than the other charts.
/ Ph. D.
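A simple point of comparison for the GLR charts studied here is the one-sided Bernoulli CUSUM for a shift in a proportion from an in-control p0 to a specified p1. The sketch below assumes independent observations; the Markov-chain versions in the dissertation modify the log-likelihood-ratio increments to account for the correlation ρ.

```python
import math

def bernoulli_cusum(obs, p0, p1, h):
    """One-sided Bernoulli CUSUM for detecting an increase in a
    proportion from p0 to p1 (p1 > p0), assuming independent 0/1
    observations.  Returns the index (1-based) of the first signal,
    or None if the statistic never crosses the threshold h."""
    llr1 = math.log(p1 / p0)                   # increment when x = 1
    llr0 = math.log((1 - p1) / (1 - p0))       # increment when x = 0
    c = 0.0
    for t, x in enumerate(obs, start=1):
        c = max(0.0, c + (llr1 if x else llr0))
        if c > h:
            return t
    return None
```

Unlike the CUSUM, which is tuned to a single shift size, a GLR chart estimates the shift from the data, which is why it performs well over a wide range of shifts.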
99

Robust Deep Learning Under Application Induced Data Distortions

Rajeev Sahay (10526555) 21 November 2022
Deep learning has been increasingly adopted in a multitude of settings. Yet, its strong performance relies on processing data during inference that is in-distribution with its training data. Deep learning input data during deployment, however, is not guaranteed to be in-distribution with the model's training data and can often be distorted, either intentionally (e.g., by an adversary) or unintentionally (e.g., by a sensor defect), leading to significant performance degradation. In this dissertation, we develop algorithms for a variety of applications to improve the performance of deep learning models in the presence of distorted data. We begin by designing feature engineering methodologies to increase classification performance in noisy environments. Here, we demonstrate the efficacy of our proposed algorithms on two target detection tasks and show that our framework outperforms a variety of state-of-the-art baselines. Next, we develop mitigation algorithms to improve the performance of deep learning in the presence of adversarial attacks and nonlinear signal distortions. In this context, we demonstrate the effectiveness of our methods on a variety of wireless communications tasks, including automatic modulation classification, power allocation in massive MIMO networks, and signal detection. Finally, we develop an uncertainty quantification framework, which produces distributional estimates, as opposed to point predictions, from deep learning models in order to characterize samples with uncertain predictions as well as samples that are out-of-distribution from the model's training data. Our uncertainty quantification framework is evaluated on a hyperspectral image target detection task as well as on a counter unmanned aircraft systems (cUAS) model. Ultimately, our proposed algorithms improve the performance of deep learning in several environments in which the data during inference has been distorted out-of-distribution from the training data.
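As a toy stand-in for the out-of-distribution characterization described above, a distance-based score against the training set illustrates the basic idea; the dissertation's actual framework derives distributional estimates from the model itself, which this sketch does not attempt, and all names below are hypothetical.

```python
import math

def ood_score(x, train, k=3):
    """Simple distance-based out-of-distribution score: the mean
    Euclidean distance from point x to its k nearest training points.
    Larger values suggest x is farther from the training distribution,
    so its prediction should be treated with more caution."""
    dists = sorted(math.dist(x, t) for t in train)
    return sum(dists[:k]) / k
```

Thresholding such a score is one crude way to flag inputs whose predictions a deployed model should not be trusted on.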
100

Item Discrimination, Model-Data Fit, and Type I Error Rates in DIF Detection using Lord's χ², the Likelihood Ratio Test, and the Mantel-Haenszel Procedure

Price, Emily A. 11 June 2014
No description available.
