371

Estimação não-paramétrica para função de covariância de processos gaussianos espaciais / Nonparametric estimation for covariance function of spatial Gaussian processes

Gomes, José Clelto Barros (2009)
Advisor: Ronaldo Dias / Master's thesis (mestrado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica

The challenge in modeling spatial processes lies in describing the covariance structure of the phenomenon under study. A nonparametric estimator of the covariance function was constructed as a linear combination of B-spline basis functions. These bases are used frequently in the literature thanks to their compact support and fast computation, as well as their ability to produce smooth and appropriate approximations. Positive definiteness of the estimated covariance function was verified by means of Bochner's theorem. For the estimation, an algorithm was implemented that provides a fully automatic procedure based on the number of basis functions. Numerical studies showed that the procedure is asymptotically consistent, while for small samples the constraints on covariance functions must be taken into account, so the optimization was nonlinear optimization with constraints. The covariance functions used in the estimation were the powered exponential, Gaussian, cubic, spherical, rational quadratic, wave, and Matérn family; nested covariances were also estimated. Simulations were performed to verify the behavior of the distribution of the affinity, which measures how close the estimated function is to the true one. The estimates proved satisfactory. Master's in Statistics.
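To make the basis-expansion idea concrete, here is a minimal Python sketch of fitting a covariance function as a nonnegative linear combination of cubic B-splines to a binned empirical covariance. It illustrates the general technique only, not the author's algorithm: the synthetic data, bin grid, basis size, and nonnegativity constraint are all assumptions, and nonnegative coefficients alone do not guarantee positive definiteness, which the thesis checks via Bochner's theorem.

```python
# Sketch: B-spline fit to a binned empirical covariance (illustrative only).
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Synthetic zero-mean spatial data: n sites, exponential covariance.
n = 300
sites = rng.uniform(0, 10, size=(n, 2))
d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
true_cov = np.exp(-d / 2.0)
z = np.linalg.cholesky(true_cov + 1e-8 * np.eye(n)) @ rng.standard_normal(n)

# Empirical covariance cloud: products z_i z_j (zero mean assumed), binned
# by pairwise distance.
iu = np.triu_indices(n, k=1)
prods, dists = z[iu[0]] * z[iu[1]], d[iu]
bins = np.linspace(0, 8, 25)
mids = 0.5 * (bins[:-1] + bins[1:])
emp = np.array([prods[(dists >= lo) & (dists < hi)].mean()
                for lo, hi in zip(bins[:-1], bins[1:])])

# Cubic B-spline basis on the distance axis.
k, n_basis = 3, 8
knots = np.concatenate([[0] * k, np.linspace(0, 8, n_basis - k + 1), [8] * k])
B = np.column_stack(
    [BSpline.basis_element(knots[i:i + k + 2], extrapolate=False)(mids)
     for i in range(n_basis)])
B = np.nan_to_num(B)  # basis elements are zero outside their support

coef, _ = nnls(B, emp)  # nonnegative least squares on the coefficients
fitted = B @ coef
print(np.round(np.c_[mids, emp, fitted], 3)[:10])
```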
372

Campo quântico de Dirac localizado tipo-string / String-localized Dirac quantum field

Oliveira, Erichardson Tarocco de (14 September 2010)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

As is well known, the quantum fields studied in QFT satisfy the principle of locality with respect to points of space-time; one refers to them as point-like localized fields. In this dissertation, free Dirac quantum fields with string-like localization are constructed. In contrast to the usual fields, which live at a point of space-time, these live on a half-line that starts at a point of Minkowski space and extends to infinity in a space-like direction. Such localization is permitted by the principles of relativistic quantum physics, since the fields admit the construction of local observables. The interest in string-like localization is that it is a weaker localization requirement, implying less singular behavior at high energies than that of point-like localized quantum fields, and hence better UV behavior. With this weaker localization, more interacting models can be built. Free fields with string-like localization have already been obtained for several particles [1, 2], from which interacting models can be constructed. Building interacting models from the free field requires an analysis of the two-point function of the corresponding free field; that analysis is not carried out in this work, since it lies outside the scope of the study. In this work the free string-localized Dirac quantum field is constructed, and the Dirac equation and the covariance relation are verified. Its current density is defined and shown to be conserved. Finally, the two-point function for string-like localization is defined, from which the locality of the string-like field can be verified.
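For orientation, the covariance relation verified in the dissertation has, in the string-localization literature, the schematic form below. The notation is the standard one for a field labeled by a point x and a space-like direction e, and is given here as background rather than as the dissertation's exact statement:

```latex
% Standard covariance and locality conditions for a string-localized field
% \varphi(x,e) living on the ray x + \mathbb{R}_+ e (background sketch,
% not the dissertation's own notation).
U(a,\Lambda)\,\varphi(x,e)\,U(a,\Lambda)^{-1}
  = D\!\left(\Lambda^{-1}\right)\varphi(\Lambda x + a,\ \Lambda e),
\qquad
[\varphi(x,e),\,\varphi(x',e')]_{\pm} = 0
\ \text{ whenever } x+\mathbb{R}_+ e \text{ and } x'+\mathbb{R}_+ e'
\text{ are space-like separated,}
```

where D is the finite-dimensional representation carried by the field (the spinor representation in the Dirac case, with the anticommutator in the locality condition).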
373

Multivariate GARCH and portfolio optimisation : a comparative study of the impact of applying alternative covariance methodologies

Niklewski, Jacek (January 2014)
This thesis investigates the impact of applying different covariance modelling techniques on the efficiency of asset portfolio performance. The scope of the thesis is limited to exploring theoretical aspects of portfolio optimisation rather than developing a practical tool for portfolio managers; future work may take these results further and produce a more practical tool from a fund management perspective. The thesis contributes to the literature by applying a number of different covariance models to a unique dataset centred on the 2007 global financial crisis, and by using a methodology that distinguishes between developed and emerging/frontier regional markets. This has resulted in the following findings. First, the thesis identifies the impact of the 2007–2009 financial crisis on time-varying correlations and volatilities as measured by the dynamic conditional correlation model (Engle 2002), examined from the perspective of a United States (US) investor given that the crisis originated in the US market. Prima facie evidence is found that economic structural adjustment has resulted in long-term increases in the correlation between the US and other markets, and the magnitude of the increase is found to be greater for emerging/frontier markets than for developed markets. Second, the long-term impact of the 2007–2009 financial crisis on time-varying correlations and volatilities is further examined by comparing estimates produced by different covariance models. The selected time-varying models (DCC, copula DCC, GO-GARCH: MM, ICA, NLS, ML; EWMA and SMA) produce statistically significantly different correlation and volatility estimates, a finding with potential implications for the estimation of efficient portfolios. Third, the different estimates derived using the selected covariance models are found to have a significant impact on the calculated weights and turnovers of efficient portfolios. Interestingly, however, there was no significant difference between their respective returns. This is the main finding of the thesis, with potentially very important implications for portfolio management.
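As a concrete illustration of the simplest of the covariance models compared here, the sketch below implements the RiskMetrics-style EWMA covariance recursion and plugs it into a global minimum-variance portfolio. It is not the thesis's code: the decay factor 0.94 (the usual daily RiskMetrics convention), the synthetic returns, and the warm-start window are illustrative assumptions.

```python
# Sketch: EWMA covariance feeding minimum-variance portfolio weights.
import numpy as np

rng = np.random.default_rng(1)
T, k, lam = 500, 4, 0.94
returns = rng.multivariate_normal(np.zeros(k),
                                  0.0001 * (np.eye(k) + 0.4), size=T)

S = np.cov(returns[:50].T)          # warm-start with a sample covariance
for r in returns[50:]:
    # EWMA recursion: S_t = lam * S_{t-1} + (1 - lam) * r_t r_t'
    S = lam * S + (1 - lam) * np.outer(r, r)

# Global minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1).
ones = np.ones(k)
w = np.linalg.solve(S, ones)
w /= w.sum()
print("EWMA min-variance weights:", np.round(w, 3))
```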
374

Diferentes técnicas de condicionamento em séries temporais turbulentas / Different conditioning techniques for turbulent time series

Zimermann, Hans Rogério (9 December 2005)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

The dynamics of the atmosphere near the Earth's surface is controlled by two main forcings, one mechanical and one thermal. These processes are responsible for the variability of the flow in the lower atmosphere, and it is this variability that characterizes atmospheric turbulence. The presence of turbulence distinguishes a layer from the rest of the atmosphere, commonly called the Atmospheric Boundary Layer (ABL). The importance of studying the ABL lies in the fact that turbulence represents an effective transport process near the surface. Adequate treatment of the experimental data allows more reliable qualitative and quantitative interpretation and understanding of this transport; that is, it is necessary for a suitable characterization of the turbulent fluxes. This dissertation surveys some of the main basic techniques for treating and conditioning turbulent time series. The data, collected experimentally and separated into 27-minute samples, were subjected to simple averaging, running means via recursive digital filters, and linear detrending. The focus is on the implications of applying these techniques and on how each of them acts on turbulent time series of temperature and vertical wind velocity, presenting and discussing the results for estimates of the sensible heat flux by the eddy covariance method and for the spectral densities of temperature and vertical wind velocity. A main finding of this dissertation is that correcting the phase lag, which was not taken into account in earlier recursive digital filters (the FDR proposed by McMillen 1988) but is present in the FFDR model (Franceschi and Zardi 2003), leads to satisfactory estimates, especially for turbulent temperature spectra, for which nonstationarity effects are the hardest to minimize. The graphical results show that this removal of low frequencies left the temperature spectra with the classically expected shape, better than the other techniques did.
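The three conditioning choices compared above can be illustrated with a short sketch: compute the sensible heat flux H = ρ c_p ⟨w'T'⟩ after extracting fluctuations by block mean removal, linear detrending, and a first-order recursive (running-mean) filter. The sampling rate, filter time constant, and synthetic series below are illustrative assumptions, not the dissertation's data or exact filter.

```python
# Sketch: eddy-covariance sensible heat flux under three detrending choices.
import numpy as np

rng = np.random.default_rng(2)
fs, minutes = 10, 27                       # 10 Hz, 27-minute block
n = fs * 60 * minutes
t = np.arange(n) / fs

# Synthetic w (vertical wind) and T (temperature): a correlated turbulent
# part plus a slow temperature drift the conditioning should remove.
common = rng.standard_normal(n)
w = 0.3 * common + 0.2 * rng.standard_normal(n)
T = 0.1 * common + 0.05 * rng.standard_normal(n) + 0.001 * t

def fluct_mean(x):
    return x - x.mean()

def fluct_detrend(x):
    a, b = np.polyfit(t, x, 1)
    return x - (a * t + b)

def fluct_recursive(x, tau=200.0):
    alpha = 1.0 / (fs * tau)               # filter constant for time scale tau
    run = np.empty_like(x)
    run[0] = x[0]
    for i in range(1, len(x)):             # m_i = (1 - a) m_{i-1} + a x_i
        run[i] = (1 - alpha) * run[i - 1] + alpha * x[i]
    return x - run

rho, cp = 1.2, 1004.0                      # air density (kg/m3), heat capacity
for name, f in [("mean", fluct_mean), ("detrend", fluct_detrend),
                ("recursive", fluct_recursive)]:
    H = rho * cp * np.mean(f(w) * f(T))    # H = rho * cp * <w'T'>
    print(f"{name:>9}: H = {H:7.2f} W/m^2")
```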
375

Efektivní implementace metod pro redukci dimenze v mnohorozměrné statistice / Efficient implementation of dimension reduction methods for high-dimensional statistics

Pekař, Vojtěch (January 2015)
The main goal of this thesis is to make the implementation of the classification method called linear discriminant analysis more efficient. It is a model of multivariate statistics which, given samples and their membership in given groups, attempts to determine the group of a new sample. We focus especially on the high-dimensional case, in which the number of variables is higher than the number of samples and the problem leads to a singular covariance matrix. If the number of variables is too high, the common methods can be practically unusable because of their computational cost. We therefore look at the topic from the perspective of numerical linear algebra and rearrange the tasks into equivalent formulations of much lower dimension. We offer new solution approaches, provide examples of particular algorithms, and discuss their efficiency.
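The kind of reduction the thesis studies can be sketched briefly: when p >> n the pooled covariance is singular, but the centered data lie in an at most (n-1)-dimensional subspace, so one can project onto that subspace via a thin SVD and run classical LDA there at much lower cost. The two-class setup, sizes, and the small ridge term below are illustrative assumptions, not the thesis's algorithms.

```python
# Sketch: high-dimensional two-class LDA after SVD dimension reduction.
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 500                                  # far more variables than samples
X = rng.standard_normal((n, p))
y = np.repeat([0, 1], n // 2)
X[y == 1, :5] += 1.5                            # class shift in a few variables

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
r = int(np.sum(s > 1e-10))                      # numerical rank <= n - 1
Z = Xc @ Vt[:r].T                               # n x r reduced coordinates

# Classical two-class LDA in the reduced space.
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)  # pooled within-class scatter
# Sw can still be rank-deficient, hence a tiny ridge for the solve.
wdir = np.linalg.solve(Sw + 1e-8 * np.eye(r), m1 - m0)

scores = Z @ wdir
threshold = 0.5 * (scores[y == 0].mean() + scores[y == 1].mean())
pred = (scores > threshold).astype(int)
print("training accuracy:", (pred == y).mean())
```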
376

Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking

Huang, Huang (16 July 2017)
This thesis focuses on two topics, computational methods for large spatial datasets and functional data ranking; both tackle the challenges of big and high-dimensional data. The first topic is motivated by the prohibitive computational burden of fitting Gaussian process models to large and irregularly spaced spatial datasets. Various approximation methods have been introduced to reduce the computational cost, but many rely on unrealistic assumptions about the process, and retaining statistical efficiency remains an issue. We propose a new scheme to approximate the maximum likelihood estimator and the kriging predictor when exact computation is infeasible. The proposed method provides different types of hierarchical low-rank approximations that are both computationally and statistically efficient. We explore the improvement of the approximation theoretically and investigate the performance by simulations. For real applications, we analyze a soil moisture dataset of 2 million measurements using the hierarchical low-rank approximation and apply the proposed fast kriging to fill gaps in satellite images. The second topic is motivated by rank-based outlier detection methods for functional data. Compared to magnitude outliers, shape outliers are more challenging to detect because they are often masked among the samples. We develop a new notion of functional data depth by integrating a univariate depth function. As a form of integrated depth, it shares many desirable features, and the novel formulation leads to a useful decomposition for detecting both shape and magnitude outliers. Our simulation studies show that the proposed outlier detection procedure outperforms competitors under various outlier models. We also illustrate the methodology using real datasets of curves, images, and video frames. Finally, we introduce the functional data ranking technique to spatio-temporal statistics for visualizing and assessing covariance properties, such as separability and full symmetry. We formulate test functions as functions of temporal lags for each pair of spatial locations and develop a rank-based testing procedure, induced by functional data depth, for assessing these properties. The method is illustrated using simulated data from widely used spatio-temporal covariance models, as well as real datasets from weather stations and climate model outputs.
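The integrated-depth construction can be illustrated with a short sketch: evaluate a rank-based univariate depth at each design point and average over the domain, then flag the lowest-depth curves. The particular univariate depth and the outlier rule below are illustrative stand-ins, not the thesis's exact definitions.

```python
# Sketch: integrated functional depth for outlier screening (illustrative).
import numpy as np

rng = np.random.default_rng(4)
n, m = 50, 100
t = np.linspace(0, 1, m)
curves = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal((n, m))
curves[0] += 1.5                             # a magnitude outlier
curves[1] = np.sin(2 * np.pi * t + np.pi)    # a shape outlier

def pointwise_depth(col):
    # Rank-based univariate depth: largest at the median, small at extremes.
    ranks = np.argsort(np.argsort(col)) + 1
    fn = ranks / len(col)
    return 1.0 - np.abs(0.5 - fn)

# Integrated depth: average the pointwise depths over the design points.
depth = np.mean([pointwise_depth(curves[:, j]) for j in range(m)], axis=0)

flagged = np.argsort(depth)[:3]
print("lowest-depth curves (candidate outliers):", flagged)
```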
377

Portfolio Value at Risk and Expected Shortfall using High-frequency data / Portfólio Value at Risk a Expected Shortfall s použitím vysoko frekvenčních dat

Zváč, Marek (2015)
The main objective of this thesis is to investigate whether multivariate models using high-frequency data provide significantly more accurate forecasts of Value at Risk and Expected Shortfall than multivariate models using only daily data. The question is very topical, since the Basel Committee announced in 2013 that it is going to change the risk measure used for the calculation of capital requirements from Value at Risk to Expected Shortfall. Further improvement in the accuracy of both risk measures can also be achieved by incorporating high-frequency data, which are increasingly available thanks to significant technological progress. We therefore employ the parsimonious Heterogeneous Autoregression and its asymmetric version, which use high-frequency data to model the realized covariance matrix. The chosen benchmark models are the well-established DCC-GARCH and EWMA. The computation of Value at Risk (VaR) and Expected Shortfall (ES) is done through parametric and semi-parametric methods and Monte Carlo simulations. The loss distributions are represented by the multivariate Gaussian and Student t distributions, multivariate distributions simulated by copula functions, and multivariate filtered historical simulations; the univariate loss distributions used are the Generalized Pareto Distribution from EVT, and empirical and standard parametric distributions. The main finding is that the Heterogeneous Autoregression model using high-frequency data delivered VaR forecasts of superior, or at least equal, accuracy relative to the benchmark models based on daily data. Finally, the backtesting of ES remains very challenging, and the applied Tests I and II did not provide credible validation of the forecasts.
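The HAR structure underlying the realized covariance models can be sketched in its simplest univariate form (Corsi's HAR): realized variance is regressed on daily, weekly, and monthly averages of itself. The synthetic series and plain OLS fit below are illustrative assumptions; the thesis works with full realized covariance matrices and an asymmetric extension.

```python
# Sketch: univariate HAR regression for realized variance (illustrative).
import numpy as np

rng = np.random.default_rng(5)
T = 1000
rv = np.empty(T)
rv[0] = 1.0
for i in range(1, T):                      # a simple persistent positive series
    rv[i] = 0.1 + 0.9 * rv[i - 1] + 0.05 * rng.standard_normal()**2

def har_design(rv, h_week=5, h_month=22):
    rows, target = [], []
    for i in range(h_month, len(rv) - 1):
        rows.append([1.0,
                     rv[i],                              # daily lag
                     rv[i - h_week + 1:i + 1].mean(),    # weekly average
                     rv[i - h_month + 1:i + 1].mean()])  # monthly average
        target.append(rv[i + 1])
    return np.array(rows), np.array(target)

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("HAR coefficients (const, daily, weekly, monthly):", np.round(beta, 3))
print("one-step-ahead forecast:", X[-1] @ beta)
```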
378

Rizika použití VAR modelů při řízení portfolia / Risks of using VaR models for portfolio management

Antonenko, Zhanna (2014)
The diploma thesis Risks of using VaR models for portfolio management focuses on estimating portfolio VaR using basic and modified methods. The goal of the thesis is to point out some weaknesses of the basic methods and to demonstrate the estimation of VaR using improved methods that overcome these problems. The analysis is performed both theoretically and in practice, with market risk as the sole subject of study. Several simulation-based and parametric methods are introduced.
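Two of the basic estimators such a thesis starts from can be sketched directly: historical simulation (an empirical quantile of portfolio losses) and the variance-covariance method (a normal quantile with the sample covariance). The portfolio weights, confidence level, and synthetic returns below are illustrative assumptions.

```python
# Sketch: historical-simulation vs. parametric (variance-covariance) VaR.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
T, k, alpha = 1000, 3, 0.99
cov = 0.0001 * np.array([[1.0, 0.3, 0.2],
                         [0.3, 1.0, 0.4],
                         [0.2, 0.4, 1.0]])
returns = rng.multivariate_normal(np.zeros(k), cov, size=T)
w = np.array([0.5, 0.3, 0.2])
port = returns @ w

# Historical simulation: empirical (1 - alpha) quantile of portfolio returns.
var_hist = -np.quantile(port, 1 - alpha)

# Variance-covariance method: normal quantile with the sample covariance.
sigma = np.sqrt(w @ np.cov(returns.T) @ w)
var_param = norm.ppf(alpha) * sigma - port.mean()

print(f"99% VaR  historical: {var_hist:.5f}  parametric: {var_param:.5f}")
```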
379

Statistical modeling and design in forestry: The case of single tree models

Berhe, Leakemariam (2008)
Forest quantification methods have evolved from a simple graphical approach to complex regression models with stochastic structural components. Currently, mixed effects modeling methodology is receiving attention in the forestry literature. However, the review work (Paper I) indicates a tendency to overlook appropriate covariance structures in the NLME modeling process. A nonlinear mixed effects modeling process is demonstrated in Paper II using Cupressus lusitanica tree merchantable volume data, comparing several models with and without covariance structures. For simplicity and clarity of the nonlinear mixed effects modeling, four phases of modeling were introduced. The nonlinear mixed effects model for C. lusitanica tree merchantable volume with covariance structures for both the random effects and the within-group errors showed a significant improvement over the model with a simplified covariance matrix; however, this statistical significance explains little of the model's prediction performance. In Paper III, using several performance indicator statistics, tree taper models were compared in an effort to propose the best model for forest management and planning of the C. lusitanica plantations; Kozak's (1988) tree taper model was found to be the best for estimating the C. lusitanica taper profile. Based on the Kozak (1988) tree taper model, a Ds-optimal experimental design study is carried out in Paper IV, in which a Ds-optimal (sub)replication-free design is suggested for the Kozak (1988) model.
380

Détection automatique de cibles dans des fonds complexes, pour des images ou séquences d'images / Automatic target detection in complex backgrounds, for images or image sequences

Thivin, Solenne (16 December 2015)
The main objective of this thesis work was the development of a detection algorithm for sub-resolved targets in infrared sky images. To this end, we first sought to model the real images at our disposal. After a study of these images, we proposed several Gaussian models taking spatial covariance into account. In these models, we assumed that the images could be segmented into stationary zones, and within each zone we assumed a strong structure on the covariance matrix (such as two-dimensional autoregressive models, for example). We then had to choose among these models. For this, we applied a model selection method based on a penalized likelihood criterion introduced by Birgé and Massart. As a theoretical result we obtained an oracle inequality, which allowed us to establish the statistical properties of the chosen model. Once the model was selected, we could build a detection test, inspired by Neyman-Pearson theory and the generalized likelihood ratio test. Our main constraint was controlling the false alarm rate per image; to guarantee it, we learned the behavior of the test on real images in order to derive the threshold to apply. We then noticed that the behavior of this test varied strongly with the texture of the image: uniform blue sky, highly textured cloud, and so on. After characterizing the different textures encountered with Stéphane Mallat's scattering coefficients, we decided to classify these textures, and the detection threshold was then adapted to the local texture of the background. Finally, we measured the performance of this algorithm on simulated and real images and compared it with other detection methods. Keywords: detection, spatial covariance, model selection, learning, unsupervised classification.
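The family of covariance-aware detectors the thesis builds on can be sketched with a generalized-likelihood-ratio / matched-filter test for a known target signature s in Gaussian clutter with covariance Sigma, with the threshold learned empirically to meet a false-alarm rate, as in the thesis. The signature, the separable AR(1)-style covariance, and the patch sizes below are illustrative assumptions, not the thesis's models.

```python
# Sketch: GLR matched-filter detection with an empirically learned threshold.
import numpy as np

rng = np.random.default_rng(7)
p = 25                                     # 5x5 patch, flattened

# Separable AR(1)-style spatial covariance for the background patch.
idx = np.arange(5)
dist = np.abs(idx[:, None] - idx[None, :])
c1 = 0.7 ** dist
Sigma = np.kron(c1, c1)                    # row/column correlation product

s = np.zeros(p)
s[12] = 1.0                                # sub-resolved target: central pixel
L = np.linalg.cholesky(Sigma)

def glr_stat(x):
    # T(x) = (s' Sigma^{-1} x)^2 / (s' Sigma^{-1} s)
    si = np.linalg.solve(Sigma, s)
    return (si @ x) ** 2 / (si @ s)

# Learn the threshold on target-free patches for a 1e-3 false-alarm rate.
noise = (L @ rng.standard_normal((p, 20000))).T
t0 = np.quantile([glr_stat(x) for x in noise], 1 - 1e-3)

# Check the detection rate on patches containing a faint target.
hits = sum(glr_stat(L @ rng.standard_normal(p) + 3.0 * s) > t0
           for _ in range(1000))
print(f"threshold={t0:.2f}, detection rate at amplitude 3: {hits / 1000:.2f}")
```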
