About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Explicit Estimators for a Banded Covariance Matrix in a Multivariate Normal Distribution

Karlsson, Emil January 2014 (has links)
The problem of estimating the mean and covariance of a multivariate normally distributed random vector has been studied in many forms. This thesis focuses on the estimators proposed in [15] for a banded covariance structure with m-dependence. It presents the previous results for the estimator and rewrites the estimator for the case m = 1, making it easier to analyze. This leads to an adjustment, and a proposition for an unbiased estimator is presented, followed by a new and simpler proof of consistency. This theory is then generalized to a general linear model, where the corresponding theorems and propositions are established for unbiasedness and consistency. In the last chapter, simulations with the previous and new estimators verify that the theoretical results indeed make an impact.
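The banding idea underlying this kind of estimator can be sketched generically. The following is a minimal illustration, assuming a simple "band the sample covariance" rule for m = 1 (a tridiagonal structure); it is not the explicit estimator of [15], whose exact form the abstract does not give.

```python
import numpy as np

rng = np.random.default_rng(0)

# True tridiagonal (m = 1 banded) covariance for a 5-dimensional normal vector.
p = 5
true_cov = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))

# Sample data and form the usual sample covariance matrix.
X = rng.multivariate_normal(np.zeros(p), true_cov, size=500)
S = np.cov(X, rowvar=False)

def band(S, m):
    """Keep entries within m of the diagonal, zero the rest."""
    p = S.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= m
    return np.where(mask, S, 0.0)

# The banded estimate imposes the assumed m-dependence structure exactly.
S_banded = band(S, m=1)
```

Entries more than one step off the diagonal are forced to zero, which is precisely the m = 1 dependence assumption; the retained entries are unchanged.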
52

High resolution localization of small and slow-moving targets with highly ambiguous ground-based radars

Hadded Aouchiche, Linda 14 March 2018 (has links)
The aim of this thesis is to enhance the detection of slow-moving targets with low reflectivity in the case of ground-based pulse Doppler radars operating at intermediate pulse repetition frequency. These radars are highly ambiguous in range and Doppler. To resolve the ambiguities, they successively transmit short pulse trains with different pulse repetition intervals. The transmission of short pulse trains results in poor Doppler resolution; as a consequence, slow-moving targets with low reflectivity, such as unmanned aerial vehicles, are buried in clutter returns. One of the main drawbacks of the classical Doppler processing of intermediate pulse repetition frequency radars is the low detection performance for small, slowly moving targets after ground clutter rejection. To address this problem, a two-dimensional range/Doppler processing chain including new techniques is proposed in this thesis. First, an iterative algorithm exploits the temporal diversity of the transmitted pulse trains to resolve range and Doppler ambiguities and to detect fast (exo-clutter) targets. The detection of slow (endo-clutter) targets is then performed by an adaptive detection scheme. It uses a new covariance matrix estimation approach that associates pulse trains with different characteristics in order to enhance detection performance. The tests performed on simulated and real data to evaluate the proposed techniques and the new processing chain have shown their effectiveness.
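The adaptive detection step described above can be illustrated with a standard adaptive matched filter (AMF), which plugs a sample covariance matrix estimated from target-free secondary data into a matched-filter statistic. This is a textbook sketch, not the thesis's detector or its pulse-train-association scheme; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                      # pulses in a train
f_d = 0.2                  # hypothesized normalized Doppler frequency
s = np.exp(2j * np.pi * f_d * np.arange(n))   # Doppler steering vector

# Secondary (target-free) snapshots used to estimate the clutter+noise covariance.
K = 64
sec = (rng.standard_normal((K, n)) + 1j * rng.standard_normal((K, n))) / np.sqrt(2)
R_hat = sec.conj().T @ sec / K                # sample covariance matrix

# Primary snapshot: target of amplitude a plus noise.
a = 3.0
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
x = a * s + noise

Ri = np.linalg.inv(R_hat)
denom = np.real(s.conj() @ Ri @ s)
amf = np.abs(s.conj() @ Ri @ x) ** 2 / denom  # AMF test statistic

# Same statistic on a target-free snapshot for comparison.
x0 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
amf0 = np.abs(s.conj() @ Ri @ x0) ** 2 / denom
```

Comparing `amf` against a threshold set for a desired false-alarm rate yields the detection decision; the quality of `R_hat` is what the covariance-association approach in the thesis is designed to improve.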
53

A study of covariance structure selection for split-plot designs analyzed using mixed models

Qiu, Chen January 1900 (has links)
Master of Science / Department of Statistics / Christopher I. Vahl / In the classic split-plot design, where whole plots follow a completely randomized design, the conventional analysis approach assumes a compound symmetry (CS) covariance structure for the observation errors. However, this assumption often may not hold. In this report, we examine the use of different covariance models available in PROC MIXED in the SAS system, widely used in repeated measures analysis, to model the covariance structure of split-plot data for which the simple compound symmetry assumption does not hold. The comparison between the covariance structure models in PROC MIXED and the conventional split-plot model is illustrated through a simulation study. In the example analyzed, the heterogeneous compound symmetry (CSH) covariance model has the smallest values of the Akaike and Schwarz Bayesian information criterion fit statistics and is therefore the best model for the example data.
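The model-selection logic (fit several covariance structures, pick the one with the smallest information criterion) can be sketched outside SAS. The following uses simple moment-based fits of CS and CSH structures to simulated repeated measures and compares Gaussian AICs; it is an illustration of the selection criterion, not the REML machinery PROC MIXED actually uses, and all data are simulated.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)

# 200 subjects, 4 repeated measures with unequal variances, so CS is misspecified.
sd = np.array([1.0, 1.5, 2.0, 2.5])
rho = 0.5
true_cov = rho * np.outer(sd, sd)
np.fill_diagonal(true_cov, sd ** 2)
Y = rng.multivariate_normal(np.zeros(4), true_cov, size=200)

S = np.cov(Y, rowvar=False)
mu = Y.mean(axis=0)

def aic(cov, k_params):
    # Mean parameters are common to both fits, so they are omitted from k_params.
    ll = multivariate_normal(mean=mu, cov=cov).logpdf(Y).sum()
    return 2 * k_params - 2 * ll

# CS: one common variance and one common covariance (2 covariance parameters).
v = S.diagonal().mean()
c = S[np.triu_indices(4, 1)].mean()
cs = np.full((4, 4), c)
np.fill_diagonal(cs, v)

# CSH: per-time variances plus one common correlation (5 covariance parameters).
sd_hat = np.sqrt(S.diagonal())
r_hat = (S / np.outer(sd_hat, sd_hat))[np.triu_indices(4, 1)].mean()
csh = r_hat * np.outer(sd_hat, sd_hat)
np.fill_diagonal(csh, sd_hat ** 2)

print(aic(cs, 2), aic(csh, 5))
```

With heterogeneous true variances, the CSH fit pays 3 extra parameters but gains far more log-likelihood, so its AIC is smaller, mirroring the selection outcome reported in the abstract.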
54

Econometric computing with HC and HAC covariance matrix estimators

Zeileis, Achim January 2004 (has links) (PDF)
Data described by econometric models typically contain autocorrelation and/or heteroskedasticity of unknown form, and for inference in such models it is essential to use covariance matrix estimators that can consistently estimate the covariance of the model parameters. Hence, suitable heteroskedasticity-consistent (HC) and heteroskedasticity and autocorrelation consistent (HAC) estimators have been receiving attention in the econometric literature over the last 20 years. To apply these estimators in practice, an implementation is needed that preferably translates the conceptual properties of the underlying theoretical frameworks into computational tools. In this paper, such an implementation in the package sandwich in the R system for statistical computing is described, and it is shown how the suggested functions provide reusable components that build on existing functionality and how they can be integrated easily into new inferential procedures or applications. The toolbox contained in sandwich is extremely flexible and comprehensive, including specific functions for the most important HC and HAC estimators from the econometric literature. Several real-world data sets are used to illustrate how the functionality can be integrated into applications. / Series: Research Report Series / Department of Statistics and Mathematics
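The estimators themselves are compact formulas. As a sketch (hand-rolled in Python rather than via the R sandwich package the paper describes), the following computes HC0 (White) and HAC (Newey-West, Bartlett kernel) standard errors for an OLS fit on simulated data with heteroskedastic AR(1) errors:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated regression with heteroskedastic, autocorrelated errors.
n = 500
x = rng.standard_normal(n)
e = np.empty(n)
e[0] = rng.standard_normal()
for t in range(1, n):                      # AR(1) errors, scale tied to x
    e[t] = 0.5 * e[t - 1] + (1 + np.abs(x[t])) * rng.standard_normal()
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ beta
bread = np.linalg.inv(X.T @ X)

# HC0 (White): meat = sum_i u_i^2 x_i x_i'
meat_hc = (X * u[:, None] ** 2).T @ X
V_hc = bread @ meat_hc @ bread

# HAC (Newey-West with Bartlett kernel, lag truncation L).
L = 5
g = X * u[:, None]                         # empirical estimating functions
meat_hac = g.T @ g
for l in range(1, L + 1):
    w = 1 - l / (L + 1)                    # Bartlett weight
    G = g[l:].T @ g[:-l]
    meat_hac += w * (G + G.T)
V_hac = bread @ meat_hac @ bread

se_hc = np.sqrt(np.diag(V_hc))
se_hac = np.sqrt(np.diag(V_hac))
```

With positively autocorrelated errors, the HAC standard error for the intercept exceeds its HC counterpart, which is exactly why ignoring autocorrelation understates uncertainty.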
55

Object-oriented Computation of Sandwich Estimators

Zeileis, Achim January 2006 (has links) (PDF)
Sandwich covariance matrix estimators are a popular tool in applied regression modeling for performing inference that is robust to certain types of model misspecification. Suitable implementations are available in the R system for statistical computing for certain model fitting functions only (in particular lm()), but not for other standard regression functions, such as glm(), nls(), or survreg(). Therefore, conceptual tools and their translation to computational tools in the package sandwich are discussed, enabling the computation of sandwich estimators in general parametric models. Object orientation can be achieved by providing a few extractor functions (most importantly for the empirical estimating functions) from which various types of sandwich estimators can be computed. / Series: Research Report Series / Department of Statistics and Mathematics
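The object-oriented idea, a generic `sandwich()` that only needs a "bread" and the empirical estimating functions from any fitted model, can be mimicked in a few lines. This is a Python sketch of the design pattern, not the R package's API; the function and variable names are illustrative. For OLS, the result coincides with the HC0 estimator.

```python
import numpy as np

# Generic sandwich: needs only two extractor outputs from a fitted model,
# mirroring the bread()/estfun() design described above (names illustrative).
def sandwich(bread, estfun):
    n = estfun.shape[0]
    meat = estfun.T @ estfun / n           # outer products of per-obs scores
    return bread @ meat @ bread / n

# "Extractors" for a least-squares fit; any model supplying these two works.
rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, -0.5]) + rng.standard_normal(n) * (1 + X[:, 1] ** 2)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ beta
bread_ols = np.linalg.inv(X.T @ X / n)     # bread: inverse average Hessian
estfun_ols = X * u[:, None]                # estfun: per-observation scores

V = sandwich(bread_ols, estfun_ols)        # equals the HC0 covariance estimate
```

Because `sandwich()` never looks inside the model, adding robust covariances to a new model class only requires writing its two extractors, which is the point of the paper's design.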
56

Probe Level Analysis of Affymetrix Microarray Data

Kennedy, Richard Ellis 01 January 2008 (has links)
The analysis of Affymetrix GeneChip® data is a complex, multistep process. Most often, methods condense the multiple probe-level intensities into single probeset-level measures (such as Robust Multi-chip Average (RMA), dChip and Microarray Suite version 5.0 (MAS5)), which are then followed by application of statistical tests to determine which genes are differentially expressed. An alternative approach is a probe-level analysis, which tests for differential expression directly using the probe-level data. Probe-level models offer the potential advantage of more accurately capturing sources of variation in microarray experiments. However, this has not been thoroughly investigated, since current research efforts have largely focused on the development of improved expression summary methods. This research project will review current approaches to the analysis of probe-level data and discuss extensions of two examples, the S-Score and the Random Variance Model (RVM). The S-Score is a probe-level algorithm based on an error model in which the detected signal is proportional to the probe pair signal for highly expressed genes, but approaches a background level (rather than 0) for genes with low levels of expression. Initial results with the S-Score have been promising, but the method has been limited to two-chip comparisons. This project presents extensions to the S-Score that permit comparisons of multiple chips and "borrowing" of information across probes to increase statistical power. The RVM is a probeset-level algorithm that models the variance of the probeset intensities as a random sample from a common distribution to "borrow" information across genes. This project presents extensions to the RVM for probe-level data, using multivariate statistical theory to model the covariance among probes in a probeset. Both of these methods show the advantages of probe-level, rather than probeset-level, analysis in detecting differential gene expression for Affymetrix GeneChip data.
Future research will focus on refining the probe-level models of both the S-Score and RVM algorithms to increase the sensitivity and specificity of microarray experiments.
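The "borrowing information across genes" idea behind the RVM can be sketched with the standard moderated-variance form: each gene's noisy sample variance (from few replicates) is pulled toward a common prior. This is only in the spirit of the RVM; the prior degrees of freedom d0 and scale s0² below are illustrative constants, not values fitted from the data as the RVM would fit them.

```python
import numpy as np

rng = np.random.default_rng(5)

# Per-gene sample variances from few replicates are noisy; shrinking them
# toward a common prior (d0, s0^2) stabilizes downstream t-type tests.
n_genes, n_reps = 1000, 3
data = rng.standard_normal((n_genes, n_reps))
s2 = data.var(axis=1, ddof=1)            # per-gene sample variances, d = n_reps - 1 df

d = n_reps - 1
d0, s0_sq = 4.0, 1.0                     # assumed (hypothetical) prior df and scale
s2_shrunk = (d0 * s0_sq + d * s2) / (d0 + d)
```

Each shrunk variance is a convex combination of the gene's own variance and the prior scale, so extreme variance estimates, the main source of false positives with 2-3 replicates, are damped.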
57

Modelling and comparative analysis of volatility spillover between US, Czech Republic and Serbian stock markets

Marković, Jelena January 2015 (has links)
This paper estimates Serbian, Czech and US stock market volatility. Few studies have analyzed stock market linkages among these three markets. The mean equation is estimated using a vector autoregression model. The second moments are then estimated using different multivariate GARCH models. We find that the current conditional volatility of each stock index is highly affected by past innovations. Cross-market correlations are significant as well. However, the conditional correlation between the Czech and US stock market indices is higher than the conditional correlation between the Serbian and US stock indices.
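The first stage, the VAR mean equation whose residuals feed the multivariate GARCH step, is plain multivariate least squares. A minimal sketch on simulated three-market "returns" (not the thesis's data, and stopping before the GARCH stage):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate three linked "return" series from a known, stable VAR(1).
A = np.array([[0.30, 0.10, 0.00],
              [0.00, 0.20, 0.10],
              [0.10, 0.00, 0.25]])
T, k = 2000, 3
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t - 1] @ A.T + rng.standard_normal(k) * 0.5

# VAR(1) mean equation by least squares: y_t = c + A y_{t-1} + e_t.
Z = np.column_stack([np.ones(T - 1), y[:-1]])   # regressors: const + lag
B = np.linalg.lstsq(Z, y[1:], rcond=None)[0]    # (k+1) x k coefficient block
A_hat = B[1:].T                                 # estimated lag matrix
resid = y[1:] - Z @ B                           # input to the GARCH stage
```

The residual matrix `resid` is what a multivariate GARCH model (BEKK, DCC, etc.) would then use to model time-varying conditional covariances.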
58

Carbon dynamics of longleaf pine ecosystems

Wright, Jennifer Kathryn January 2013 (has links)
The interactions between vegetation and climate are complex and critical to our ability to predict and mitigate climate change. Savanna ecosystems, unique in their structure and composition, are particularly dynamic, and their carbon cycling has been identified as highly significant to the global carbon budget. Understanding the responses of these dynamic ecosystems to environmental conditions is therefore central to both ecosystem management and scientific knowledge. Longleaf pine ecosystems are highly biodiverse and unique savanna ecosystems located in the south-eastern USA, an important current carbon sink and a key area identified for future carbon sequestration. These ecosystems depend on fire to maintain their structure and function, and the longleaf pine tree itself (Pinus palustris Mill.) has been noted for its resilience to drought, fire, pests and storms, and is thus becoming increasingly attractive as both a commercial forestry species and a provider of other ecosystem services. Previous process-based models tested in the south-eastern USA have been shown to fail in conditions of drought or rapid disturbance. Consequently, in order to inform management and better understand the physiology of these ecosystems, there is a need for a process-based model capable of upscaling leaf-level processes to the stand scale to predict the GPP of longleaf pine savannas. P. palustris exists across a wide range of soil moisture conditions, from dry, sandy, well-drained soils (xeric) to claypan areas with higher moisture content (mesic). Previous work has demonstrated that this species adjusts many aspects of its physiology in response to these differing soil conditions, even under identical climate. The research in this thesis supports these previous findings, but additionally explores, with the assistance of the Soil Plant Atmosphere model (SPA), the productivity response of P. palustris across the soil moisture gradient.
Contrary to expectations, measurements, field observations and modelling suggest that P. palustris trees growing in already water-limited conditions cope better with exceptional drought than their mesic counterparts. At the leaf-level, xeric P. palustris trees were found to have higher measured net photosynthesis, but the lower stand density and leaf area at this site meant that in non-drought conditions mesic P. palustris annual gross primary productivity (GPP) was 23% greater than xeric annual GPP. Initial upscaling of leaf-level processes to the canopy scale using the SPA model found that, during the growing season when other components of longleaf pine ecosystems are active, the longleaf pine may only be responsible for around 65% of the total productivity. Other important components of longleaf pine savannas are oaks and grasses which, with pine, constitute 95% of longleaf pine ecosystem biomass. Each of these groups, however, responds differently to fire and water availability. Despite this, the other components of longleaf pine savannas have received limited research attention and have never been modelled using a process-based model such as SPA. As integral components of longleaf pine carbon budgets, it is essential that the physiology and productivity of oaks and grasses in this system are better understood. The research in this thesis studied the productivity response of these groups during drought across a soil moisture gradient, and found that oak and pines at each site appear to fill separate ecohydrological niches depending on whether or not they are growing in a xeric or mesic habitat. As expected, the highest drought tolerance was found in the C4 grass, wiregrass (Aristida stricta), at both xeric and mesic sites. 
In order to further explore the contributions of the different functional groups in longleaf pine savannas, the SPA model was adapted to run with concurrent functional groups and to represent the different photosynthetic pathways of the understorey grasses (C4) and the canopy trees (C3). The aim of this part of the thesis was to better represent a savanna ecosystem in a process-based model and to explore and quantify the contributions of each functional group diurnally, seasonally, annually and interannually. Modelling results suggest that accurately representing the phenology not only of trees but also of grasses is critical to capturing ecosystem GPP and its variability. This phenology may not only be seasonally controlled, but also dictated by fire. Overall, this research highlights the importance of continued research into savanna and savanna-like ecosystems. Additionally, it provides an insight into the responses of multiple ecosystem components to an extreme drought, and how these responses differ at leaf, stand and landscape scales. The thesis also employs a little-used method of combining eddy-covariance data with a process-based model to separate out different ecosystem components, a method becoming more common but not yet widely tested.
59

Correlation and variance stabilization in the two group comparison case in high dimensional data under dependencies

Paranagama, Dilan C. January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Gary L. Gadbury / Multiple testing research has undergone renewed focus in recent years as advances in high-throughput technologies have produced data on unprecedented scales. Much of the focus has been on false discovery rates (FDR) and related quantities that are estimated (or controlled for) in large-scale multiple testing situations. Recent papers by Efron have directly addressed this issue and incorporated measures to account for high-dimensional correlation structure when estimating false discovery rates and when estimating a density. Other authors have also proposed methods to control or estimate FDR under dependencies, subject to certain assumptions. However, little attention has been given in the literature to the stability of the results obtained under dependencies. This work begins by demonstrating the effect of dependence structure on the variance of the number of discoveries and the false discovery proportion (FDP). An expression for the variance of the number of discoveries is derived, as is the density of a test statistic conditioned on the status (reject or fail to reject) of a different, correlated test. A closed-form solution for the correlation between test statistics is also derived. This correlation is a combination of the correlations and variances of the data within the groups being compared. It is shown that these correlations among the test statistics affect the conditional density and alter the threshold for significance of a correlated test, causing instability in the results. The concept of performing tests within networks, Conditional Network Testing (CNT), is introduced. This method is based on the conditional density mentioned above and uses the correlation between test statistics to construct networks. A method to simulate realistic data with preserved dependence structures is also presented. CNT is evaluated using simple simulations and the proposed simulation method.
In addition, existing methods that control false discovery rates are applied to both t-tests and CNT to compare performance. It is shown that the false discovery proportion and type I error proportion are smaller when using CNT than when using t-tests and that, in general, results are more stable with CNT. Finally, applications and steps to further improve CNT are discussed.
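The opening claim, that dependence inflates the variance of the number of discoveries even when the mean is unchanged, is easy to demonstrate by simulation. A minimal sketch under the complete null with equicorrelated z-statistics (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Under the complete null, count rejections of m correlated z-tests and
# compare the variance of that count for independent vs equicorrelated tests.
m, reps, z_crit = 200, 2000, 1.96

def discovery_counts(rho):
    # Equicorrelated normals: one shared factor plus independent noise.
    shared = rng.standard_normal((reps, 1))
    z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal((reps, m))
    return (np.abs(z) > z_crit).sum(axis=1)

var_indep = discovery_counts(0.0).var()
var_corr = discovery_counts(0.8).var()
```

Both settings reject about 5% of the m tests on average, but under strong dependence the count swings between near 0 and large bursts, so its variance is far larger; that instability is what the thesis addresses.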
60

Precise measurements of gamma-transition energies in coincidence: spectroscopy of the 232U and 233U decay series

Guimarães Filho, Zwinglio de Oliveira 03 December 1998 (has links)
This work presents an experimental method for determining gamma-ray energies using HPGe detectors in coincidence experiments, with precision similar to that obtained in singles measurements. The method was applied to the measurement of gamma-ray energies from the decay chains of 233U and 232U. Covariance matrices between experimental data are as significant as variances in the evaluation of experimental uncertainties, and indispensable for updating and combining experimental results. The procedure developed in this work uses the complete covariance matrix of the gamma-transition energies, as well as the covariance matrices of all parameters, at every stage of the analysis.
The experimental apparatus used the Camac-standard multidetector module developed at the Laboratório do Acelerador Linear. The data were analyzed using the software Bidim, developed during this work. The fitting procedure, based on the least squares method, was applied directly to the two-dimensional spectra. The final precision of the gamma-ray energies was about 5 eV or better. Chi-squared tests of the fits, always better than 15%, did not indicate the presence of inconsistencies.
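The core computation, least squares with a full covariance matrix propagated into the parameter uncertainties and a chi-squared consistency test, is generic. A minimal generalized-least-squares sketch on simulated data (not the Bidim implementation or the two-dimensional spectra):

```python
import numpy as np

rng = np.random.default_rng(8)

# Fit y = X b where the observations carry a full covariance matrix Sigma,
# propagating Sigma into the parameter covariance matrix.
n = 50
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])
Sigma = 0.04 * np.eye(n) + 0.02 * np.ones((n, n))   # correlated uncertainties
L = np.linalg.cholesky(Sigma)
y = X @ np.array([2.0, 1.0]) + L @ rng.standard_normal(n)

Si = np.linalg.inv(Sigma)
cov_beta = np.linalg.inv(X.T @ Si @ X)     # parameter covariance matrix
beta = cov_beta @ X.T @ Si @ y             # GLS estimate

# Chi-squared of the fit, using the full covariance matrix.
r = y - X @ beta
chi2 = r @ Si @ r                          # ~ chi^2 with n - 2 degrees of freedom
```

The off-diagonal terms of Sigma change both the best-fit values and the reported parameter covariance, which is why the abstract stresses carrying full covariance matrices through every stage rather than variances alone.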
