171

An Improved Meta-analysis for Analyzing Cylindrical-type Time Series Data with Applications to Forecasting Problem in Environmental Study

Wang, Shuo 27 April 2015
This thesis provides a case study of how wind direction influences the amount of rainfall in the village of Somió. The primary goal is to illustrate how a meta-analysis, together with circular data analytic methods, helps in analyzing certain environmental issues. The existing GLS meta-analysis combines the merits of the usual meta-analysis, yielding better precision, and accounts for the covariance among coefficients within each study; it is limited, however, in that information about the correlation between studies is not utilized. Hence, in my proposed meta-analysis, I take the correlations between adjacent studies into account when employing the GLS meta-analysis. I also fit a time series linear-circular regression as a comparable model. By comparing the confidence intervals of the parameter estimates, covariance matrices, AIC, BIC, and p-values, I discuss an improvement on the GLS meta-analysis model in its application to a forecasting problem in environmental study.
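As an illustration of the kind of model the thesis extends, the sketch below computes a GLS pooled estimate in which adjacent studies are allowed a nonzero covariance. The effect estimates, within-study variances, and the correlation value rho are illustrative assumptions, not values from the thesis.

```python
# A minimal sketch of GLS meta-analysis with correlation between adjacent
# studies; all numbers are toy values, not data from the thesis.
import numpy as np

y = np.array([0.42, 0.51, 0.39, 0.47])      # per-study effect estimates (assumed)
v = np.array([0.04, 0.03, 0.05, 0.04])      # within-study variances (assumed)
rho = 0.3                                    # assumed adjacent-study correlation

k = len(y)
V = np.diag(v)
for i in range(k - 1):                       # covariance only between neighbors
    V[i, i + 1] = V[i + 1, i] = rho * np.sqrt(v[i] * v[i + 1])

X = np.ones((k, 1))                          # intercept-only design: common effect
Vinv = np.linalg.inv(V)
theta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)   # GLS pooled estimate
se = np.sqrt(np.linalg.inv(X.T @ Vinv @ X))[0, 0]
print(f"pooled effect: {theta[0]:.3f} +/- {1.96 * se:.3f}")
```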
172

Portfolio Construction using Clustering Methods

Ren, Zhiwei 26 April 2005
One major criticism of traditional mean-variance portfolio optimization is that it tends to magnify estimation error: a small estimation error can distort the whole portfolio. Two popular remedies are resampling methods and the Black-Litterman (Bayesian) method. The clustering method is a newer way to address the problem: we first group highly correlated stocks and treat each group as a single stock, then run the traditional mean-variance portfolio optimization on these clusters. The clustering method can improve the stability of the portfolio and reduce the impact of estimation error. In this project, we explain why it works and perform tests to determine whether clustering methods do improve the stability and performance of the portfolio.
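A minimal sketch of the cluster-then-optimize idea described above: group correlated stocks by hierarchical clustering, equal-weight each cluster into a synthetic asset, and run unconstrained mean-variance optimization on the clusters. The two-factor return model, the cluster count, and the equal weighting within clusters are illustrative assumptions.

```python
# Toy cluster-then-optimize portfolio construction; not the project's code.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
factors = rng.normal(0.0, 0.01, size=(500, 2))        # two common return factors
loadings = np.repeat(np.eye(2), 4, axis=0)            # stocks 0-3 / 4-7 per factor
R = 0.0005 + factors @ loadings.T + rng.normal(0.0, 0.005, size=(500, 8))

corr = np.corrcoef(R, rowvar=False)
dist = np.sqrt(0.5 * (1.0 - corr))                    # correlation -> distance
Z = linkage(dist[np.triu_indices(8, 1)], method="average")
labels = fcluster(Z, t=2, criterion="maxclust")       # two clusters (assumed)

cluster_returns = np.column_stack(                    # equal-weight each cluster
    [R[:, labels == c].mean(axis=1) for c in np.unique(labels)]
)
mu = cluster_returns.mean(axis=0)
Sigma = np.cov(cluster_returns, rowvar=False)
w = np.linalg.solve(Sigma, mu)                        # unconstrained mean-variance
w /= w.sum()                                          # normalize to full investment
print("cluster labels:", labels, "\ncluster weights:", np.round(w, 3))
```

Because the optimization runs on a handful of clusters instead of many individual stocks, the covariance matrix being inverted is small and better conditioned, which is the source of the stability gain the abstract describes.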
173

Exploiting Data Sparsity In Covariance Matrix Computations on Heterogeneous Systems

Charara, Ali 24 May 2018
Covariance matrices are ubiquitous in computational sciences, typically describing the correlation of elements of large multivariate spatial data sets. For example, covariance matrices are employed in climate/weather modeling for maximum likelihood estimation to improve prediction, as well as in computational ground-based astronomy to enhance observed image quality by filtering out noise produced by the adaptive optics instruments and atmospheric turbulence. The structure of these covariance matrices is dense, symmetric, positive-definite, and often data-sparse, therefore hierarchically of low rank. This thesis investigates the performance limit of dense matrix computations (e.g., Cholesky factorization) on covariance matrix problems as the number of unknowns grows, in the context of the aforementioned applications. We employ recursive formulations of some of the basic linear algebra subroutines (BLAS) to accelerate the covariance matrix computation further, while reducing data traffic across the memory subsystem layers. However, dealing with large data sets (i.e., covariance matrices of billions in size) can rapidly become prohibitive in memory footprint and algorithmic complexity. Most importantly, this thesis investigates the tile low-rank (TLR) data format, a new compressed data structure and layout, which is valuable in exploiting data sparsity by approximating the operator. The TLR compressed data structure allows approximating the original problem up to user-defined numerical accuracy. This comes at the expense of dealing with tasks of much lower arithmetic intensity than traditional dense computations. In fact, this thesis consolidates the two trends of dense and data-sparse linear algebra for HPC. Not only does the thesis leverage recursive formulations for dense Cholesky-based matrix algorithms, but it also implements a novel TLR-Cholesky factorization using batched linear algebra operations to increase hardware occupancy and reduce API overhead. Reported performance of the dense and TLR-Cholesky factorizations shows many-fold speedups against state-of-the-art implementations on various systems equipped with GPUs. Additionally, the TLR implementation gives the user flexibility to select the desired accuracy. This trade-off between performance and accuracy is currently a well-established trend in the convergence of the third and fourth paradigms, i.e., HPC and Big Data, on the path to exascale software.
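A minimal sketch of the TLR compression step on a toy covariance matrix: diagonal tiles stay dense while off-diagonal tiles are replaced by truncated-SVD factors up to a user-defined tolerance. The kernel, tile size, and tolerance are illustrative assumptions; the thesis's batched TLR-Cholesky factorization itself is not reproduced here.

```python
# Toy tile low-rank (TLR) compression of a data-sparse covariance matrix.
import numpy as np

def compress_tile(T, tol):
    """Truncated-SVD factors (U, V) with T ~= U @ V.T to relative tolerance tol."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))           # rank after truncation
    return U[:, :k] * s[:k], Vt[:k].T

n, nb, tol = 512, 128, 1e-6                           # size, tile size, accuracy
x = np.linspace(0.0, 1.0, n)
A = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)    # SPD exponential kernel (toy)

tiles = {}
for i in range(0, n, nb):
    for j in range(0, i + 1, nb):                     # symmetric: lower triangle
        T = A[i:i + nb, j:j + nb]
        tiles[(i, j)] = T if i == j else compress_tile(T, tol)

stored = sum(t.size if isinstance(t, np.ndarray)
             else t[0].size + t[1].size for t in tiles.values())
print(f"TLR entries: {stored}  vs dense lower triangle: {n * (n + 1) // 2}")
```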
174

Spatio-temporal patterns of the energy balance components in a humid subtropical climate

Schirmbeck, Juliano January 2017
Given the importance of understanding the spatial and temporal dynamics of the energy balance (EB) components at the regional scale for water resources management and agricultural planning, the main objective of this thesis was to construct and analyze a time series of the EB components suited to the humid subtropical climate of the State of Rio Grande do Sul, Brazil. First, the adequacy of EB estimation models for the State was evaluated using MODIS products and reference data measured at a micrometeorological tower installed in Cruz Alta, RS, with instantaneous values over a study period from 2009 to 2011. Next, the ability of the models to represent the spatial variability of the EB components was assessed using MODIS products, ERA-Interim reanalysis data, the tower reference data, and data from INMET meteorological stations for the same period. In the final stage, a 14-year time series (2002 to 2016) of the EB components was constructed with the METRIC model. The results showed that the three models analyzed were consistent with the reference measurements; the greatest limitations were presented by the SEBAL model and are attributed mainly to the State's eco-climatic conditions and to the low spatial resolution of the images. In the spatial-variability analysis, the METRIC model gave the most consistent results and provided the largest number of days with valid retrievals, and was therefore selected for the remainder of the study. The constructed time series made it possible to understand the spatial and temporal distribution patterns of the EB components in Rio Grande do Sul. The EB components show marked seasonality, with higher values in summer and lower values in winter. G (soil heat flux) is the component of smallest magnitude, and its spatial and temporal distribution is determined by the distribution of Rn (net radiation). The components LE (latent heat flux) and H (sensible heat flux) have the largest magnitudes and show spatial and temporal distribution patterns consistent with the climatic conditions and with the land use and land cover of the study area. An inverse pattern is observed, with a gradient of LE from northwest to southeast and of H from southeast to northwest. This information is of great importance for regional-scale water resources management and for agricultural zoning studies.
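For context, residual-based models such as METRIC and SEBAL close the surface energy balance per pixel and recover the latent heat flux as LE = Rn - G - H. The sketch below shows only that bookkeeping, with illustrative flux values rather than values from the thesis.

```python
# Surface energy balance closed as a residual; toy flux values in W/m^2.
def latent_heat_flux(rn, g, h):
    """Latent heat flux (W/m^2) as the energy balance residual LE = Rn - G - H."""
    return rn - g - h

rn, g, h = 600.0, 90.0, 210.0         # net radiation, soil and sensible heat (assumed)
le = latent_heat_flux(rn, g, h)
ef = le / (rn - g)                    # evaporative fraction, a common diagnostic
print(f"LE = {le:.0f} W/m^2, evaporative fraction = {ef:.2f}")
```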
175

Application of Distance Covariance to Extremes and Time Series and Inference for Linear Preferential Attachment Networks

Wan, Phyllis January 2018
This thesis covers four topics: i) measuring dependence in time series through distance covariance; ii) testing goodness-of-fit of time series models; iii) threshold selection for multivariate heavy-tailed data; and iv) inference for linear preferential attachment networks. Topic i) studies a dependence measure based on characteristic functions, called distance covariance, in time series settings. Distance covariance has recently gathered popularity for its ability to detect nonlinear dependence. In particular, we characterize a general family of such dependence measures and use them to measure lagged serial and cross dependence in stationary time series. Assuming strong mixing, we establish the relevant asymptotic theory for the sample auto- and cross-distance correlation functions. Topic ii) proposes a goodness-of-fit test for general classes of time series models by applying the auto-distance covariance function (ADCV) to the fitted residuals. Under the correct model assumption, the limit distribution for the ADCV of the residuals differs from that of an i.i.d. sequence by a correction term. This adjustment has essentially the same form regardless of the model specification. Topic iii) considers data in the multivariate regular variation setting, where the radial part $R$ is asymptotically independent of the angular part $\Theta$ as $R$ goes to infinity. The goal is to estimate the limiting distribution of $\Theta$ given $R\to\infty$, which characterizes the tail dependence of the data. A typical strategy is to look at the angular components of the data for which the radial parts exceed some threshold. We propose an algorithm to select the threshold based on distance covariance statistics and a subsampling scheme. Topic iv) investigates inference questions related to the linear preferential attachment model for network data. Preferential attachment is an appealing mechanism based on the intuition "the rich get richer" and produces the well-observed power-law behavior in networks. We provide methods for fitting such a model under two data scenarios: when the full network formation history is given, and when only a single-time snapshot of the network is observed.
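A minimal sketch of the sample distance covariance (in the sense of Székely, Rizzo, and Bakirov) underlying topics i) and ii): double-center the pairwise distance matrices of the two samples and average their elementwise product. Lagged serial dependence can be measured by pairing a series with a shifted copy of itself; the AR(1) data below are an illustrative assumption.

```python
# Sample distance covariance (V-statistic form) applied to lagged dependence.
import numpy as np

def distance_covariance(x, y):
    """Sample distance covariance between equal-length 1-D samples x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.abs(x[:, None] - x[None, :])                 # pairwise distances of x
    B = np.abs(y[:, None] - y[None, :])                 # pairwise distances of y
    A = A - A.mean(0) - A.mean(1)[:, None] + A.mean()   # double centering
    B = B - B.mean(0) - B.mean(1)[:, None] + B.mean()
    return np.sqrt(np.mean(A * B))

rng = np.random.default_rng(1)
n, phi = 500, 0.6
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):                                   # AR(1): dependence at lag 1
    z[t] = phi * z[t - 1] + rng.normal()

lag = 1                                                 # pair series with its shift
print("sample ADCV at lag 1:", round(distance_covariance(z[:-lag], z[lag:]), 4))
```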
176

Estimation of the scale matrix and its eigenvalues in the Wishart and the multivariate F distributions.

January 1996
by Wai-Yin Chan. Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 42-45).
Contents:
  Chapter 1. Introduction
    1.1 Main Problems (p.1)
    1.2 Class of Regularized Estimator (p.4)
    1.3 Preliminaries (p.6)
    1.4 Related Works (p.9)
    1.5 Brief Summary (p.10)
  Chapter 2. Estimation of the Covariance Matrix and its Eigenvalues in a Wishart Distribution
    2.1 Significance of the Problem (p.12)
    2.2 Review of the Previous Work (p.13)
    2.3 Properties of the Wishart Distribution (p.18)
    2.4 Main Results (p.20)
    2.5 Simulation Study (p.23)
  Chapter 3. Estimation of the Scale Matrix and its Eigenvalues in a Multivariate F Distribution
    3.1 Formulation and Significance of the Problem (p.26)
    3.2 Review of the Previous Works (p.28)
    3.3 Properties of Multivariate F Distribution (p.30)
    3.4 Main Results (p.33)
    3.5 Simulation Study (p.38)
  Chapter 4. Further Research (p.40)
  References (p.42)
  Appendix (p.46)
177

Control charts for monitoring multivariate processes

Machado, Marcela Aparecida Guerreiro. January 2009
This thesis offers some contributions to the field of multivariate process monitoring. Regarding the monitoring of the mean vector, we investigated the performance of T² charts based on principal components and also the performance of mean charts used simultaneously, where each chart monitors the mean of one quality characteristic. Regarding the monitoring of the covariance matrix, we propose a new statistic based on the sample variances (the VMAX statistic). The VMAX chart is more efficient than the generalized sample variance |S| chart, which is the usual chart for monitoring the covariance matrix. An additional advantage of the new statistic is that users are already familiar with the calculation of sample variances; the same cannot be said of the generalized sample variance |S|. We also studied the performance of the VMAX chart with double sampling, with variable chart parameters, with the EWMA procedure, and with special run rules, and we investigated the performance of control charts designed to monitor the mean vector and the covariance matrix simultaneously.
Advisor: Antonio Fernando Branco Costa. Co-advisor: Fernando Augusto Silva Marins. Examining committee: Messias Borges Silva, Ubirajara Rocha Ferreira, Linda Lee Ho, Roberto da Costa Quinino. Degree: Doctorate.
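A minimal sketch of the VMAX idea as described above: signal when the largest sample variance, scaled by its in-control value, exceeds a control limit. The in-control variances, subgroup size, and control limit are illustrative assumptions, not the thesis's chart designs.

```python
# Toy VMAX monitoring statistic for the covariance matrix of a bivariate process.
import numpy as np

def vmax(sample, sigma0_sq):
    """VMAX statistic: largest sample variance scaled by in-control variances."""
    s_sq = sample.var(axis=0, ddof=1)         # per-variable sample variances
    return np.max(s_sq / sigma0_sq)

rng = np.random.default_rng(2)
sigma0_sq = np.array([1.0, 2.0])              # in-control variances (assumed)
n, CL = 5, 4.0                                # subgroup size and control limit (assumed)

sample = rng.normal(0.0, np.sqrt(sigma0_sq), size=(n, 2))   # one rational subgroup
stat = vmax(sample, sigma0_sq)
print(f"VMAX = {stat:.2f} -> {'signal' if stat > CL else 'in control'}")
```

Unlike the determinant behind |S|, each term here is an ordinary sample variance, which is the familiarity advantage the abstract emphasizes.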
178

Rethinking meta-analysis: an alternative model for random-effects meta-analysis assuming unknown within-study variance-covariance

Toro Rodriguez, Roberto C 01 August 2019
One single primary study is only a small piece of a bigger puzzle. Meta-analysis is the statistical combination of results from primary studies that address a similar question. The most general case is the random-effects model, in which it is assumed that for each study the vector of outcomes is $T_i \sim N(\theta_i, \Sigma_i)$ and the vector of true effects is $\theta_i \sim N(\theta, \Psi)$. Since each $\theta_i$ is a nuisance parameter, inferences are based on the marginal model $T_i \sim N(\theta, \Sigma_i + \Psi)$. The main goals of a meta-analysis are to obtain an estimate of $\theta$, the sampling error of this estimate, and an estimate of $\Psi$. Standard meta-analysis techniques assume that $\Sigma_i$ is known and fixed, allowing the explicit modeling of its elements and the use of generalized least squares (GLS) as the method of estimation. Furthermore, one can construct the variance-covariance matrix of the standard errors and build confidence intervals or ellipses for the vector of pooled estimates. In practice, each $\Sigma_i$ is estimated from the data using a matrix function that depends on the unknown vector $\theta_i$. Some alternative methods have been proposed in which explicit modeling of the elements of $\Sigma_i$ is not needed. However, estimation of the between-study variability $\Psi$ depends on the within-study variance $\Sigma_i$, as well as other factors, so not modeling the elements of $\Sigma_i$ explicitly and departing from a hierarchical structure has implications for the estimation of $\Psi$. In this dissertation, I develop an alternative model for random-effects meta-analysis based on the theory of hierarchical models. Motivated primarily by Hoaglin's article "We know less than we should about methods of meta-analysis", I take into consideration that each $\Sigma_i$ is unknown and estimated by a matrix function of the corresponding unknown vector $\theta_i$. I propose an estimation method based on the Minimum Covariance Estimator and derive formulas for the expected marginal variance for two effect sizes, namely Pearson's moment correlation and the standardized mean difference. I show through simulation studies that the proposed model and estimation method give accurate results for both univariate and bivariate meta-analyses of these effect sizes, and I compare the new approach to the standard meta-analysis method.
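For reference, the sketch below computes the standard GLS pooled estimate under the marginal model $T_i \sim N(\theta, \Sigma_i + \Psi)$ that the dissertation takes as its starting point, treating each $\Sigma_i$ and $\Psi$ as known. The toy study vectors and the value of $\Psi$ are illustrative assumptions.

```python
# Standard bivariate random-effects GLS pooling with known Sigma_i and Psi.
import numpy as np

T = [np.array([0.30, 0.45]), np.array([0.25, 0.52]), np.array([0.41, 0.39])]
Sigmas = [np.diag([0.020, 0.030]), np.diag([0.015, 0.025]), np.diag([0.030, 0.020])]
Psi = np.array([[0.010, 0.004],
                [0.004, 0.012]])              # between-study covariance (assumed)

W = [np.linalg.inv(S + Psi) for S in Sigmas]  # marginal precision of each study
A = sum(W)
b = sum(w @ t for w, t in zip(W, T))
theta = np.linalg.solve(A, b)                 # pooled effect vector
cov_theta = np.linalg.inv(A)                  # its sampling covariance
print("pooled theta:", np.round(theta, 3))
print("standard errors:", np.round(np.sqrt(np.diag(cov_theta)), 3))
```

The dissertation's point is that in practice each Sigma_i above is itself a function of the unknown theta_i, so plugging in estimated matrices as if known understates the uncertainty this sketch reports.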
179

Revolution in Autonomous Orbital Navigation (RAON)

Bhatia, Rachit 01 December 2019
Spacecraft navigation is a critical component of any space mission. Space navigation uses on-board sensors and other techniques to determine the spacecraft's current position and velocity with permissible accuracy, and provides the information required to navigate to a desired position while following a desired trajectory. Developments in technology have produced new techniques of space navigation, but inertial navigation systems have consistently been the bedrock of space navigation. Recently, the successful GOCE mission used an on-board gravity gradiometer to map Earth's gravitational field. This has motivated the development of new techniques, such as cold-atom accelerometers, to create ultra-sensitive gravity gradiometers specifically suited for space applications, including autonomous orbital navigation. This research highlights existing developments in the field of gravity gradiometry and its potential space navigation applications, and uses Linear Covariance Theory to determine the sensor requirements that would enable autonomous space navigation in different flight regimes.
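A minimal sketch of the core linear covariance operation such a study relies on: propagating a state error covariance through linearized dynamics and updating it with a measurement of a given accuracy, which lets sensor requirements be traded against navigation error without simulating actual measurements. All matrices below are illustrative assumptions, not the thesis's flight-regime models.

```python
# Toy linear covariance analysis: propagate P through linear dynamics and
# apply measurement updates of a stated sensor accuracy.
import numpy as np

dt = 10.0                                     # propagation step (s)
F = np.array([[1.0, dt],                      # linearized position/velocity dynamics
              [0.0, 1.0]])
Q = np.diag([1e-4, 1e-6])                     # process noise (assumed)
P = np.diag([100.0**2, 0.1**2])               # initial position/velocity errors

H = np.array([[1.0, 0.0]])                    # sensor observes position only
R = np.array([[10.0**2]])                     # sensor noise, 10 m 1-sigma (assumed)

for _ in range(6):                            # propagate, then update, repeatedly
    P = F @ P @ F.T + Q                       # covariance propagation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    P = (np.eye(2) - K @ H) @ P               # covariance after measurement update

print("position error after updates (1-sigma, m):", round(float(np.sqrt(P[0, 0])), 2))
```

Rerunning the loop with a different R is the basic mechanism for asking how accurate a sensor must be to meet a navigation requirement.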
180

Forecast Combination with Multiple Models and Expert Correlations

Soule, David P 01 January 2019
Combining multiple forecasts to generate a single, more accurate one is a well-known approach. A simple average of forecasts has been found to be robust despite theoretically better approaches, an increasing number of available expert forecasts, and improved computational capabilities. The dominance of the simple average is related to small sample sizes and to the estimation errors associated with more complex methods. We study the role that expert correlation, the number of experts, and their relative forecasting accuracy have on the weight estimation error distribution. The distributions we find are used to identify the conditions under which a decision maker can confidently estimate weights rather than using a simple average. We also propose an improved expert weighting approach that is less sensitive to covariance estimation error while providing much of the benefit of a covariance-optimal weight. These two improvements create a new heuristic for better forecast aggregation that is simple to use. This heuristic appears new to the literature and is shown to perform better than a simple average in a simulation study and in an application to economic forecast data.
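A minimal sketch contrasting covariance-optimal (Bates-Granger style) combination weights with the simple average; shrinking the optimal weights toward equal weights is shown as one generic way to temper covariance estimation error. The error covariance and shrinkage intensity are illustrative assumptions, and the dissertation's specific heuristic is not reproduced here.

```python
# Toy forecast combination: optimal, equal, and shrunk weights compared by
# the variance of the combined forecast error.
import numpy as np

Sigma = np.array([[1.00, 0.60, 0.30],
                  [0.60, 1.20, 0.40],
                  [0.30, 0.40, 0.90]])        # forecast-error covariance (assumed)
k = Sigma.shape[0]
ones = np.ones(k)

w_opt = np.linalg.solve(Sigma, ones)
w_opt /= w_opt.sum()                          # minimum-variance combination weights
w_avg = ones / k                              # simple average

lam = 0.5                                     # shrinkage intensity (assumed)
w_shrunk = lam * w_avg + (1 - lam) * w_opt    # temper estimation error

for name, w in [("optimal", w_opt), ("average", w_avg), ("shrunk", w_shrunk)]:
    var = w @ Sigma @ w                       # combined forecast-error variance
    print(f"{name:8s} weights={np.round(w, 3)} variance={var:.3f}")
```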
