51

Efficient Algorithms for Data Mining with Federated Databases

Young, Barrington R. St. A. 03 July 2007 (has links)
No description available.
52

市場風險因子情境產生方法之研究 / Methodology for Risk Factors Scenario Generation

陳育偉, Chen, Yu-Wei Unknown Date (has links)
As financial crises occur one after another, risk management has become a key concern for banks, securities firms, insurers, and the financial industry at large. Among risk measures, the Value-at-Risk (VaR) model is the one most commonly used by banks and securities firms to quantify market risk. In the Monte Carlo simulation approach to VaR, the portfolio's positions are expressed in terms of appropriate market risk factors; scenarios for these risk factors are generated and combined with pricing formulas to obtain the portfolio's minimum value over a given holding period at a given confidence level; subtracting this minimum value from the current portfolio value then gives the maximum potential loss (Jorion, 2007). / Generating risk factor scenarios by Monte Carlo simulation requires first estimating the covariance matrix of the market risk factors and then simulating thousands of scenarios from it. This study incorporates a time-varying covariance matrix into the Monte Carlo simulation and reduces the number of market risk factors: the simulation is combined with the Constant, UWMA, EWMA, Orthogonal EWMA, Orthogonal GARCH, PCA EWMA, and PCA GARCH models to generate future scenarios for the market risk factors, and the methods are compared for both short- and long-horizon risk measurement. The results show that the PCA EWMA model performs best, so financial institutions are advised to adopt it to control the short- and long-horizon market risk of their portfolios.
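
The workflow this abstract describes — estimate a (time-varying) covariance matrix of the risk factors, simulate scenarios from it, and read the loss quantile off the simulated P&L — can be sketched compactly. The following is a minimal illustration rather than the thesis's implementation: a RiskMetrics-style EWMA recursion with λ = 0.94 stands in for the family of models compared (Constant, UWMA, EWMA, orthogonal and PCA variants), and the return data and portfolio weights are hypothetical.

```python
import numpy as np

def ewma_cov(returns, lam=0.94):
    """RiskMetrics-style EWMA covariance: cov_t = lam*cov_{t-1} + (1-lam)*r_t r_t'."""
    cov = np.cov(returns.T)                  # seed with the sample covariance
    for r in returns:
        r = r[:, None]
        cov = lam * cov + (1 - lam) * (r @ r.T)
    return cov

def mc_var(weights, cov, n_sims=10_000, alpha=0.99, seed=0):
    """Monte Carlo VaR: simulate factor returns, take the loss quantile of P&L."""
    rng = np.random.default_rng(seed)
    sims = rng.multivariate_normal(np.zeros(len(weights)), cov, size=n_sims)
    pnl = sims @ weights
    return -np.quantile(pnl, 1 - alpha)      # loss at the (1 - alpha) tail

# hypothetical data: 500 days of returns on 3 risk factors
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=(500, 3))
weights = np.array([0.5, 0.3, 0.2])
print("1-day 99% VaR:", mc_var(weights, ewma_cov(returns)))
```

A PCA EWMA variant, the model the abstract favours, would apply the same recursion to the leading principal components of the returns rather than to the raw factors.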
53

Etude de représentations parcimonieuses des statistiques d'erreur d'observation pour différentes métriques. Application à l'assimilation de données images / Study of sparse representations of statistical observation error for different metrics. Application to image data assimilation

Chabot, Vincent 11 July 2014 (has links)
Recent decades have seen satellite observations grow in both quantity and quality, and over the years they have become increasingly important in numerical weather prediction. These data are now crucial for optimally determining the state of the system under study, in particular because they provide dense, high-quality information in areas poorly covered by conventional observation networks. The potential of satellite image sequences nevertheless remains largely under-exploited in data assimilation: the images are severely sub-sampled, partly to avoid having to account for observation error correlations. This thesis addresses the problem of extracting information on the system dynamics from sequences of satellite images during the variational data assimilation process. The study is carried out in an idealised setting in order to quantify the impact of observation noise and of occlusions (clouds) on the resulting analysis. When the noise is spatially correlated, accounting for the correlations while analysing images at the pixel level is not straightforward: one must either invert the observation error covariance matrix (which is very large) or construct easily invertible approximations of it. Changing the analysis space can make part of the correlations easier to handle. In this work, we propose to carry out the analysis in wavelet bases or curvelet frames: spatially correlated noise does not affect the different elements of these families in the same way, so working in these spaces makes it easier to account for part of the correlations present in the error field; a suitable approximation of the covariance matrix is obtained by considering only how each kind of element is affected by correlated noise. The relevance of the proposed approach is demonstrated on several test cases. When the data are partially occluded, however, the representation of the correlations must be adapted, which is not easy: working with an observation space that changes over time makes it difficult to use easily invertible approximations of the observation error covariance matrix. This work proposes a method for cheaply adapting the representation of the correlations (in wavelet bases) to the data actually present in each image; the interest of the approach is shown in an idealised case.
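
The central trick described here — replacing a large, hard-to-invert pixel-space error covariance with a diagonal approximation in a wavelet basis — can be illustrated with a toy sketch. This is an idealised illustration, not the thesis's code: it assumes the PyWavelets and SciPy packages, manufactures spatially correlated noise by smoothing white noise, and estimates one variance per wavelet coefficient from samples; whitening then costs one transform plus an elementwise division.

```python
import numpy as np
import pywt
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
n, n_samples = 64, 500
kernel = np.ones((5, 5)) / 25.0           # smoothing induces spatial correlation

# accumulate per-coefficient variances of the noise in wavelet space
var = None
for _ in range(n_samples):
    noise = fftconvolve(rng.normal(size=(n, n)), kernel, mode="same")
    coeffs = pywt.wavedec2(noise, "db2", level=3)
    flat, slices = pywt.coeffs_to_array(coeffs)
    var = flat**2 if var is None else var + flat**2
var /= n_samples                          # diagonal of R in the wavelet basis

# the diagonal is trivial to invert: whitening an observation departure d
# is one wavelet transform plus an elementwise division
d = fftconvolve(rng.normal(size=(n, n)), kernel, mode="same")
d_wav, _ = pywt.coeffs_to_array(pywt.wavedec2(d, "db2", level=3))
whitened = d_wav / np.sqrt(var)
```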
54

Automatic parameter tuning in localization algorithms / Automatisk parameterjustering av lokaliseringsalgoritmer

Lundberg, Martin January 2019 (has links)
Many algorithms require a number of parameters to be set in order to perform well in a given application. Tuning these parameters manually is often difficult and tedious, especially when there are many of them, and a human is unlikely to find the best possible solution for hard problems. Automatically finding good parameter sets could therefore both improve results and save considerable time. In this work, the prominent methods Bayesian optimization and Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are evaluated for automatic parameter tuning of localization algorithms. Both methods are run on a localization algorithm over different datasets and compared in terms of computational time and the precision and recall of the final solutions. The study shows that automatically tuning the parameters of localization algorithms with these methods is feasible. In all experiments, Bayesian optimization made the largest improvements early in the optimization, but CMA-ES always overtook it and went on to reach the best final solutions. The study also shows that automatic parameter tuning remains feasible on noisy real-world data collected from 3D cameras.
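
As a concrete illustration of the CMA-ES side of such a comparison, the sketch below tunes two parameters of a stand-in objective with the `cma` package (an assumption; the thesis's localization algorithm and datasets are not reproduced here). In practice `localization_error` would run the localization pipeline with the candidate parameters and return a scalar cost, for example one derived from precision and recall.

```python
import cma  # pip install cma -- Hansen's reference CMA-ES implementation

def localization_error(params):
    """Hypothetical stand-in: run the localization algorithm with `params`
    and return a scalar cost to minimize (e.g. 1 - F1 score)."""
    x, y = params
    return (x - 0.3) ** 2 + (y + 1.2) ** 2

x0 = [0.0, 0.0]   # initial guess for the two parameters
sigma0 = 0.5      # initial step size (search spread)
best, es = cma.fmin2(localization_error, x0, sigma0, {"maxfevals": 200})
print("tuned parameters:", best)
```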
55

Estudo de técnicas estatísticas aplicadas à determinação de parâmetros no método k0 de análise por ativação neutrônica / Improvement in statistical techniques applied to the determination of parameters in the k0 method of neutron activation analysis

Ribeiro, Rafael Vanhoz 21 September 2017 (has links)
This work improves COVAR, a code for calculating the k0 and Q0 parameters, by adding an alternative method for determining the k0 factor and by refining the existing covariance analysis, resulting in a new version, COVAR v4.1. It also develops a new method for determining the α parameter and several k0 factors in a single least-squares fit, using a novel methodology based on covariance matrices that incorporates all the partial uncertainties involved. To apply this method, a second code, AKFIT v2.1, was developed; it performs both linear and non-linear fits for determining α and k0 over several irradiations. The data set comprised irradiations carried out in 2008 and 2010 by the Nuclear Metrology Laboratory (LMN) at the IEAR-1 nuclear reactor of IPEN-CNEN/SP, covering the radionuclides 95Zr, 65Zn, 69mZn, 46Sc, 140La, and 60Co and yielding 21 data sets, which were analysed to verify the performance of the two codes. For COVAR v4.1, the results of the alternative k0 calculation were close to those of the program's original method and consistent with the literature. For AKFIT v2.1, fits were performed with the two irradiation campaigns both combined and separately; the fitted models agreed with the literature, and the α value of 0.0025(83) agrees with earlier LMN results. The correlations between the k0 parameters behaved as expected: smaller between different elements and larger between lines of the same element at different energies measured against the same comparator. The proposed methods were thus able to determine k0 values, with AKFIT v2.1 providing a new technique that determines the two parameters α and k0 simultaneously, quickly, and accurately. The AKFIT code is expected to be extended with further parameters, such as Q0 and f, making it a complete fitting tool for determining all the essential parameters of the k0 NAA method.
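
The numerical core described here — a least-squares fit in which all partial uncertainties enter through a full covariance matrix — is generalized least squares. The sketch below is a minimal linear example on toy data, not AKFIT itself: a shared systematic term correlates the measurements, and the parameter covariance matrix falls out of the same inversion. A non-linear fit such as the α determination would typically iterate this step with a linearised design matrix.

```python
import numpy as np

def gls_fit(X, y, V):
    """Generalized least squares: minimize (y - Xb)' V^{-1} (y - Xb).
    Returns the estimates and their covariance matrix."""
    Vinv = np.linalg.inv(V)
    cov_b = np.linalg.inv(X.T @ Vinv @ X)   # parameter covariance matrix
    b = cov_b @ X.T @ Vinv @ y
    return b, cov_b

# toy model y = b0 + b1*x; a shared systematic error correlates all points
x = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([2.1, 3.9, 6.2, 8.0])
V = 0.04 * np.eye(4) + 0.01                 # diagonal noise + common term
b, cov_b = gls_fit(X, y, V)
print("estimates:", b, "uncertainties:", np.sqrt(np.diag(cov_b)))
```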
57

Carteiras de baixa volatilidade : menor risco e maior retorno no mercado de ações brasileiro / Low-volatility portfolios: lower risk and higher return in the Brazilian stock market

Samsonescu, Jorge Augusto Dias 20 February 2015 (has links)
This study analyses the out-of-sample performance of minimum-variance and low-volatility portfolios in the Brazilian stock market from 2003 to 2013, compared with the IBOVESPA index and an equally weighted portfolio. The minimum-variance portfolios were optimised with a short-selling restriction and weight limits on the assets, with the covariance matrix estimated both by the sample method and by the shrinkage method of Ledoit and Wolf (2003). The low-volatility portfolio was constructed similarly to the S&P 500 Low Volatility index methodology. Portfolios were rebalanced every four months, and the assets eligible in each period were the IBOVESPA constituents at that time. Performance was compared using annualised return, standard deviation, and Sharpe ratio, together with MVaR and maximum drawdown. The results point to the importance of the choice of asset weight limits in minimum-variance portfolios: the lower-risk portfolios delivered the best results on all indicators tested.
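
A minimal sketch of the optimisation this abstract describes — long-only minimum variance with a per-asset weight cap, on a Ledoit-Wolf shrinkage estimate of the covariance matrix — is shown below, using scikit-learn and SciPy on simulated returns. The cap value and the data are hypothetical, not those of the study.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.covariance import LedoitWolf

def min_variance_weights(returns, max_weight=0.10):
    """Long-only minimum-variance weights with a per-asset cap,
    using the Ledoit-Wolf shrinkage covariance estimate."""
    cov = LedoitWolf().fit(returns).covariance_
    n = cov.shape[0]
    res = minimize(
        lambda w: w @ cov @ w,                 # portfolio variance
        x0=np.full(n, 1.0 / n),
        bounds=[(0.0, max_weight)] * n,        # no short selling, weight cap
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x

rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.02, size=(250, 20))  # 250 days, 20 hypothetical stocks
w = min_variance_weights(rets)
print("largest weight:", round(w.max(), 3))
```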
58

Improved Methods and Selecting Classification Types for Time-Dependent Covariates in the Marginal Analysis of Longitudinal Data

Chen, I-Chen 01 January 2018 (has links)
Generalized estimating equations (GEE) are popularly utilized for the marginal analysis of longitudinal data. In order to obtain consistent regression parameter estimates, these estimating equations must be unbiased. However, when certain types of time-dependent covariates are present, the equations can be biased unless an independence working correlation structure is employed; moreover, in that case regression parameter estimation can be very inefficient, because not all valid moment conditions are incorporated within the corresponding estimating equations. Approaches using the generalized method of moments or quadratic inference functions have therefore been proposed to utilize all valid moment conditions. We have found, however, that such methods do not always provide valid inference and can be improved upon in terms of finite-sample regression parameter estimation. We therefore propose a modified GEE approach and a selection method that both ensure the validity of inference and improve regression parameter estimation. These modified approaches assume the data analyst knows the type of time-dependent covariate, which is unlikely in practice. Whereas hypothesis testing has been used to determine covariate type, we propose a novel strategy to select a working covariate type, avoiding the potentially high type II error rates of those testing procedures. Parameter estimates from the proposed method are consistent and have overall improved mean squared error relative to hypothesis-testing approaches. Finally, because mean regression models can be sensitive to skewness and outliers in real-world data, we extend our approaches to marginal quantile regression, modeling the conditional quantiles of the response variable. Existing and proposed methods are compared in simulation studies and application examples.
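
For reference, the baseline this abstract starts from — a GEE fit with an independence working correlation structure — is a few lines in statsmodels; the modified-GEE and selection methods proposed in the dissertation are not part of that library. The data below are simulated and hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# simulated longitudinal data: 100 subjects, 4 visits each
rng = np.random.default_rng(0)
n_sub, n_vis = 100, 4
ids = np.repeat(np.arange(n_sub), n_vis)
x = rng.normal(size=n_sub * n_vis)             # a time-dependent covariate
y = 1.0 + 0.5 * x + rng.normal(size=n_sub * n_vis)
X = sm.add_constant(x)

# the independence working structure keeps the estimating equations
# unbiased for certain time-dependent covariate types, per the abstract
model = sm.GEE(y, X, groups=ids, cov_struct=sm.cov_struct.Independence())
print(model.fit().params)
```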
59

Representation Of Covariance Matrices In Track Fusion Problems

Gunay, Melih 01 November 2007 (has links)
The covariance matrix plays a critical role in target tracking algorithms within multi-sensor track fusion systems: it expresses the uncertainty of the state estimates obtained from different sensors, and many subproblems of track fusion use it to obtain more accurate results. The matrix must therefore be interchanged between the nodes of the multi-sensor tracking system. This thesis mainly deals with the analysis of approximations of the covariance matrix that can best represent it, so that it can be transmitted effectively to the demanding site. The Kullback-Leibler (KL) distance is exploited to derive some of the representations for the Gaussian case. A further objective of this work is the comparison of these representations, based on their fusion performance as measured on a two-radar track fusion system.
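
The Kullback-Leibler distance used here to rank covariance representations has a closed form between Gaussians, which makes such comparisons cheap. The sketch below, on assumed toy numbers, scores how much a diagonal approximation of a 2x2 track covariance distorts the full matrix.

```python
import numpy as np

def gauss_kl(mu0, cov0, mu1, cov1):
    """KL(N(mu0,cov0) || N(mu1,cov1)) = 0.5*[tr(S1^-1 S0)
       + (mu1-mu0)' S1^-1 (mu1-mu0) - k + ln(det S1 / det S0)]"""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff
                  - k + logdet1 - logdet0)

# how much does a cheap-to-transmit diagonal representation distort
# a full 2x2 track covariance?
full = np.array([[4.0, 1.5], [1.5, 2.0]])
diag = np.diag(np.diag(full))
mu = np.zeros(2)
print("KL(full || diagonal):", gauss_kl(mu, full, mu, diag))
```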
60

Variable Selection and Function Estimation Using Penalized Methods

Xu, Ganggang December 2011 (has links)
Penalized methods are becoming more and more popular in statistical research. This dissertation covers two major applications of penalized methods: variable selection and nonparametric function estimation. Infinite-variance autoregressive models are important for modeling heavy-tailed time series. We use a penalty method to conduct model selection for autoregressive models with innovations in the domain of attraction of a stable law indexed by α ∈ (0, 2). We show that by combining the least absolute deviation loss function with the adaptive lasso penalty, we can consistently identify the true model; at the same time, the resulting coefficient estimator converges at a rate of n^(-1/α). The proposed approach gives a unified variable selection procedure for both finite- and infinite-variance autoregressive models. While automatic smoothing parameter selection for nonparametric function estimation has been extensively researched for independent data, it is much less developed for clustered and longitudinal data. Although leave-subject-out cross-validation (CV) has been widely used, its theoretical properties are unknown and its minimization is computationally expensive, especially when there are multiple smoothing parameters. Focusing on penalized modeling methods, we show that leave-subject-out CV is optimal in that its minimization is asymptotically equivalent to the minimization of the true loss function. We develop an efficient Newton-type algorithm to compute the smoothing parameters that minimize the CV criterion. Furthermore, we derive a simplification of the leave-subject-out CV, which leads to a more efficient algorithm for selecting the smoothing parameters; we show that the simplified CV criterion is asymptotically equivalent to the unsimplified one and thus enjoys the same optimality property. This CV criterion also provides a completely data-driven approach to selecting the working covariance structure when using generalized estimating equations in longitudinal data analysis. Our results are applicable to additive models, linear varying-coefficient models, and nonlinear models with data from exponential families.
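
The adaptive-lasso device in the first topic can be reduced to an ordinary lasso by rescaling columns, as in the sketch below. For brevity it uses squared-error loss via scikit-learn rather than the least absolute deviation loss the dissertation pairs with the penalty, and the data are simulated.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, alpha=0.05, gamma=1.0):
    """Adaptive lasso via reweighting: penalise coefficient j by
    1/|b_init_j|^gamma, i.e. solve a plain lasso on rescaled columns."""
    b_init = LinearRegression().fit(X, y).coef_
    scale = np.abs(b_init) ** gamma + 1e-8    # avoid division by zero
    fit = Lasso(alpha=alpha).fit(X * scale, y)
    return fit.coef_ * scale                  # map back to the original scale

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.0, 0.8, 0.0, 0.0]) + rng.normal(size=200)
print(np.round(adaptive_lasso(X, y), 2))      # zeros on the irrelevant columns
```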
