  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Nelineární neparametrické modely pro finanční časové řady / Nonlinear nonparametric models for financial time series

Klačanská, Júlia January 2012 (has links)
The thesis studies nonlinear nonparametric models used in time series analysis. It gives a basic introduction to time series and presents several nonlinear nonparametric models, including their estimators. Special attention is paid to three of them: the CHARN, FAR and AFAR models. Their properties and estimation techniques are presented. We also show techniques that select the values of the parameters used further in the estimation methods. The properties of the time series models are investigated in simulation and real-data studies.
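Models of the CHARN type, X_t = m(X_{t-1}) + σ(X_{t-1})·ε_t, are typically estimated with kernel smoothers. As a minimal sketch (the function names, the tanh conditional mean, and all tuning values below are invented for illustration, not taken from the thesis), a Nadaraya-Watson estimate of the conditional mean function looks like:

```python
import numpy as np

def nw_estimate(x, y, grid, h):
    """Nadaraya-Watson kernel regression estimate of E[y | x] on `grid`,
    using a Gaussian kernel with bandwidth h."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        w = np.exp(-0.5 * ((x - g) / h) ** 2)  # Gaussian kernel weights
        out[i] = np.sum(w * y) / np.sum(w)
    return out

# Simulate a CHARN series X_t = m(X_{t-1}) + sigma * eps_t
# with an invented conditional mean m(x) = 0.5 * tanh(x).
rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * np.tanh(x[t - 1]) + 0.2 * rng.standard_normal()

grid = np.array([-0.5, 0.0, 0.5])
m_hat = nw_estimate(x[:-1], x[1:], grid, h=0.1)
```

The same weighted-average construction applied to squared residuals gives a nonparametric estimate of the volatility function σ²(x).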
212

A Generalization of The Partition Problem in Statistics

Zhou, Jie 20 December 2013 (has links)
In this dissertation, the problem of partitioning a set of treatments with respect to a control treatment is considered. Starting in the 1950s, a number of researchers have worked on this problem and proposed alternative solutions. Tong (1979) proposed a formulation for this problem that hundreds of researchers and practitioners have since used. However, Tong's formulation is somewhat rigid and can mislead practitioners when the distance between the ``good'' and the ``bad'' populations is large: the indifference zone then becomes quite large, and the undesirable feature of partitioning the populations in the indifference zone without any penalty can produce misleading partitions. In this dissertation, a generalization of Tong's formulation is proposed under which the treatments in the indifference zone are not forced into the ``good'' or ``bad'' sets but are instead reported as an identifiable set of their own. For this generalized partition, a fully sequential procedure and a two-stage procedure are proposed and their theoretical properties are derived. The proposed procedures are also studied via Monte Carlo simulation. The thesis concludes with some nonparametric partition procedures and a study of the robustness of the various procedures available in the statistical literature.
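To make the three-way partition concrete, here is a minimal single-stage sketch with hypothetical thresholds and data (the dissertation's actual procedures are sequential and two-stage): treatments whose sample mean falls in the indifference zone are reported as their own identifiable set rather than forced into ``good'' or ``bad''.

```python
import numpy as np

def partition_treatments(control, treatments, delta1, delta2):
    """Three-way partition of treatments against a control mean mu0:
    'good' if a treatment's sample mean is >= mu0 + delta2, 'bad' if it
    is <= mu0 + delta1, and otherwise a member of the explicitly
    identified indifference set (requires delta1 < delta2)."""
    mu0 = np.mean(control)
    good, bad, indiff = [], [], []
    for i, sample in enumerate(treatments):
        m = np.mean(sample)
        if m >= mu0 + delta2:
            good.append(i)
        elif m <= mu0 + delta1:
            bad.append(i)
        else:
            indiff.append(i)
    return good, bad, indiff

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, 200)
# Three treatments: clearly good, inside the indifference zone, clearly bad
treatments = [rng.normal(mu, 1.0, 200) for mu in (2.0, 0.5, -1.0)]
good, bad, indiff = partition_treatments(control, treatments, 0.2, 1.0)
```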
213

Multiscale Change-point Segmentation: Beyond Step Functions

Guo, Qinghai 03 February 2017 (has links)
No description available.
214

Understanding Deep Neural Networks and other Nonparametric Methods in Machine Learning

Yixi Xu (6668192) 02 August 2019 (has links)
<div>It is a central problem in both statistics and computer science to understand the theoretical foundation of machine learning, especially deep learning. During the past decade, deep learning has achieved remarkable successes in solving many complex artificial intelligence tasks. The aim of this dissertation is to understand deep neural networks (DNNs) and other nonparametric methods in machine learning. In particular, three machine learning models have been studied: weight normalized DNNs, sparse DNNs, and the compositional nonparametric model.</div><div></div><div><br></div><div>The first chapter presents a general framework for norm-based capacity control for <i>L<sub>p,q</sub></i> weight normalized DNNs. We establish the upper bound on the Rademacher complexities of this family. In particular, with an <i>L<sub>1,infty</sub></i> normalization, we discuss properties of a width-independent capacity control, which depends on the depth only through a square-root term. Furthermore, if the activation functions are anti-symmetric, the bound on the Rademacher complexity is independent of both the width and the depth up to a log factor. In addition, we study weight normalized deep neural networks with rectified linear units (ReLU) in terms of functional characterization and approximation properties. In particular, for an <i>L<sub>1,infty</sub></i> weight normalized network with ReLU, the approximation error can be controlled by the <i>L<sub>1</sub></i> norm of the output layer.</div><div></div><div><br></div><div>In the second chapter, we study <i>L<sub>1,infty</sub></i>-weight normalization for deep neural networks with bias neurons to achieve a sparse architecture. We theoretically establish generalization error bounds for both regression and classification under the <i>L<sub>1,infty</sub></i>-weight normalization. It is shown that the upper bounds are independent of the network width and depend on the network depth <i>k</i> only through a <i>k<sup>1/2</sup></i> factor. These results provide theoretical justification for using such weight normalization to reduce the generalization error. We also develop an easily implemented gradient projection descent algorithm to practically obtain a sparse neural network. We perform various experiments to validate our theory and demonstrate the effectiveness of the resulting approach.</div><div></div><div><br></div><div>In the third chapter, we propose a compositional nonparametric method in which a model is expressed as a labeled binary tree of <i>2k+1</i> nodes, where each node is either a summation, a multiplication, or the application of one of the <i>q</i> basis functions to one of the <i>m<sub>1</sub></i> covariates. We show that in order to recover a labeled binary tree from a given dataset, a sufficient number of samples is <i>O(k </i>log<i>(m<sub>1</sub>q)+</i>log<i>(k!))</i>, and a necessary number of samples is <i>Omega(k </i>log<i>(m<sub>1</sub>q)-</i>log<i>(k!))</i>. We further propose a greedy algorithm for regression and validate our theoretical findings through synthetic experiments.</div>
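As a small illustration of the weight-normalization constraint studied in the first two chapters, one can enforce an L_{1,infty} bound by scaling each row of a weight matrix so that its L1 norm does not exceed a radius c. This is a rescaling sketch only, not the dissertation's gradient projection descent algorithm:

```python
import numpy as np

def project_l1_inf(W, c=1.0):
    """Rescale each row of W so its L1 norm is at most c, enforcing the
    L_{1,inf} constraint max_i ||W[i, :]||_1 <= c.  A simple rescaling
    sketch; a Euclidean projection onto the L1 ball would use
    soft-thresholding instead and can produce exact zeros (sparsity)."""
    W = np.asarray(W, float).copy()
    norms = np.abs(W).sum(axis=1)
    scale = np.where(norms > c, c / np.maximum(norms, 1e-12), 1.0)
    return W * scale[:, None]

W = np.array([[0.5, -0.25],   # L1 norm 0.75: left unchanged
              [3.0, -1.0]])   # L1 norm 4.0: rescaled down to norm 1.0
W_proj = project_l1_inf(W, c=1.0)
```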
215

Fisher's Randomization Test versus Neyman's Average Treatment Test

Georgii Hellberg, Kajsa-Lotta, Estmark, Andreas January 2019 (has links)
The following essay describes and compares Fisher's randomization test and Neyman's average treatment effect test, with the intention of producing an easily understood blueprint for how the tests are carried out in practice and under what conditions. Focus is also directed towards the tests' different implications for statistical inference and how the design of a study, in relation to its assumptions, affects the external validity of the results. The essay first presents and evaluates the tests, then weighs their respective advantages and limitations against each other, and finally applies them to a data set as a practical example; the results obtained from the data set are compared in the Discussion section. The example used in this paper, which compares cigarette consumption after treating one group with nicotine patches and another with placebo patches, shows a decrease in cigarette consumption under both tests. The tests differ, however, in that the result of the Neyman test can be generalized to the population of interest, whereas Fisher's test only identifies the effect within the sample and consequently cannot draw conclusions about the population of heavy smokers. In short, the findings of this paper suggest that a combined use of the two tests would be the most appropriate way to test for a treatment effect: one could first use the Fisher test to check whether any effect at all exists in the experiment, and then use the Neyman test to complement the Fisher test's findings, for example by estimating an average treatment effect.
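Fisher's randomization test can be sketched in a few lines. The cigarettes-per-day numbers below are invented for illustration, not the essay's data set:

```python
import numpy as np

def fisher_randomization_test(treated, control, n_perm=5000, seed=0):
    """Fisher randomization test of the sharp null of no treatment effect
    for any unit: re-randomize the treatment labels and compare the
    observed difference in means with its randomization distribution.
    Returns a two-sided Monte Carlo p-value."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([treated, control])
    n_t = len(treated)
    obs = np.mean(treated) - np.mean(control)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = np.mean(perm[:n_t]) - np.mean(perm[n_t:])
        if abs(diff) >= abs(obs):
            count += 1
    return (count + 1) / (n_perm + 1)

# Invented cigarettes-per-day data after treatment
nicotine = np.array([8, 5, 6, 7, 4, 6, 5, 7])
placebo = np.array([12, 10, 11, 9, 13, 10, 12, 11])
p_value = fisher_randomization_test(nicotine, placebo)
```

Neyman's test would instead divide the same difference in means by its estimated standard error and appeal to a normal approximation, giving an average treatment effect estimate whose validity extends to the population the sample represents.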
216

Multiscale Total Variation Estimators for Regression and Inverse Problems

Álamo, Miguel del 24 May 2019 (has links)
No description available.
217

Parametrização de Sistemas de Equações Diferenciais Ordinárias no crescimento de bovinos de corte e produção de gases / Parameterization of Ordinary Differential Equations Systems in the growth of beef cattle and production of gases

Biase, Adriele Giaretta 05 February 2016 (has links)
Model parameterization and parameter correlation structures in agriculture are important because they characterize a system's behaviour in response to variations across multiple scenarios (climate, genotypes, nutritional diets, among other factors) that exist at global scales. The aim was to contribute statistical inference on the production of CO2 [a potent greenhouse gas (GHG)] in in vitro fermentations of alfalfa hay, comparing frequentist methods with newer methodologies from the scientific literature, such as the combined Delayed Rejection and Adaptive Metropolis method (DRAM), not previously tested for predicting in vitro fermentation gases. In addition, time series models were used to predict CO2 production in the in vitro fermentations of alfalfa hay. Within the context of beef cattle growth, an approach was carried out for the first time for individual-animal predictions of weight gain rate and maintenance energy requirements, based on the growth dynamics and body chemical composition of the Davis Growth Model (DGM), with multivariate analysis of covariance comparing different scenarios (sexes, production systems and crossbred genotypes) in a field experiment in Brazil. Additionally, parameter calibrations based on the sample for each scenario, fitting the DGM using frequentist analysis, nonparametric bootstrap and Monte Carlo simulations, were performed with the national (crossbred) data and compared with the original model estimates obtained with British breeds (Bos taurus). The main criteria adopted to evaluate the model fits were the Mean Square Error of Prediction (MSEP), the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). The results not only advance the existing literature but also help the beef industry and farmers meet meat market specifications, both nationally and internationally. It was concluded that: i) for gas production, the ARIMA(1, 1, 2) model fitted the cumulative CO2 production, reaching a maximum value of 1.1066 mL at 47.5 h, and the corresponding equation is recommended for estimating gas production; ii) for beef cattle growth using the individual DGM estimates, the effect vectors for maintenance energy and protein accretion show pronounced system-by-sex interactions; iii) for beef cattle growth using the total-sample DGM estimates, crossbred genotypes had higher maintenance energy expenditure and matured faster than both British-genotype animals (Bos taurus) and Nellore bulls. The nonparametric bootstrap with the downhill simplex optimization method successfully estimated the parameter distributions (which were approximately normal in most scenarios). A negative correlation between the DNA (protein) accretion parameter and the maintenance energy requirement was found for uncastrated males in the extensive system, indicating that animals with faster lean tissue deposition were also more efficient in energy use. Generalizing this relationship demands broader and deeper studies.
218

Análise da série do índice de Depósito Interfinanceiro: modelagem da volatilidade e apreçamento de suas opções. / Analysis of Brazilian Interbank Deposit Index series: volatility modeling and option pricing

Mauad, Roberto Baltieri 05 December 2013 (has links)
Many models currently used for pricing interest-rate derivatives make overly restrictive assumptions about the volatility of the underlying series. The Black-Scholes and Vasicek models, for instance, treat the variance of the series as constant across time and maturities, an assumption that may not be appropriate in all cases. Among the alternative volatility-modeling techniques that have been studied, kernel regressions stand out. We discuss nonparametric modeling using this technique and subsequent option pricing in a Gaussian HJM model. We analyze different specifications for the nonparametric estimation of the volatility function through Monte Carlo simulations for the pricing of options on zero-coupon bonds, and conduct an empirical study using the proposed methodology for the pricing of options on the Interbank Deposit Index (IDI) in the Brazilian market. One of the main results is the good fit of the proposed methodology in pricing options on zero-coupon bonds.
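A kernel (local-constant) volatility estimate of the kind discussed here can be sketched as follows. The simulated state variable, bandwidth, and linear volatility function are invented for the example and are not the IDI specification:

```python
import numpy as np

def kernel_volatility(x, r, grid, h):
    """Local-constant (Nadaraya-Watson) estimate of the volatility
    function sigma(x) = sqrt(E[r^2 | x]) from returns r, using a
    Gaussian kernel with bandwidth h."""
    x = np.asarray(x, float)
    r2 = np.asarray(r, float) ** 2
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        w = np.exp(-0.5 * ((x - g) / h) ** 2)  # kernel weights
        out[i] = np.sqrt(np.sum(w * r2) / np.sum(w))
    return out

# Simulated returns whose volatility rises linearly in the state x
rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 5000)
true_sigma = 0.1 + 0.4 * x
r = true_sigma * rng.standard_normal(5000)
sigma_hat = kernel_volatility(x, r, np.array([0.25, 0.75]), h=0.05)
```

An estimate of this form can then be plugged into a Gaussian HJM framework in place of a constant-volatility assumption.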
219

Análise de variância multivariada com a utilização de testes não-paramétricos e componentes principais baseados em matrizes de postos. / Multivariate analysis of variance using nonparametric tests and principal components based on rank matrices.

Pontes, Antonio Carlos Fonseca 19 July 2005 (has links)
Nonparametric methods have especially broad applications in data analysis, since they are not bound by restrictions on the population distribution. The multivariate character of behavioural, ecological, agricultural and many other types of data, together with the continued improvement in computer technology, has led to a sharp rise in interest in nonparametric multivariate methods. The application of nonparametric multivariate analysis of variance has nevertheless remained largely inaccessible to applied researchers, except through approximate methods based on asymptotic values of the test statistic. This work therefore presents a routine in the C language that runs tests based on a multivariate extension of the univariate Kruskal-Wallis test, using the permutation technique. For small samples, all possible treatment configurations are used to obtain the p-value. For large samples, a fixed number of random configurations is used, yielding approximate significance values. In addition, an alternative test is presented using principal components based on rank matrices.
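A Python sketch of the permutation approach for large samples (the thesis's routine is in C; the combination rule below, summing per-variable Kruskal-Wallis statistics, is one illustrative choice among several possible multivariate extensions):

```python
import numpy as np

def kw_stat(ranks, groups):
    """Univariate Kruskal-Wallis statistic from ranks (no tie correction)."""
    n = len(ranks)
    h = 0.0
    for g in np.unique(groups):
        rg = ranks[groups == g]
        h += len(rg) * (rg.mean() - (n + 1) / 2.0) ** 2
    return 12.0 * h / (n * (n + 1))

def multivariate_kw_perm(X, groups, n_perm=2000, seed=0):
    """Permutation test for a multivariate Kruskal-Wallis-type statistic:
    each variable is ranked separately, the per-variable KW statistics are
    summed, and the group labels are permuted to build the null
    distribution of the summed statistic."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    # Ranks 1..n per column (data assumed continuous, so no ties)
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1.0
    obs = sum(kw_stat(ranks[:, j], groups) for j in range(X.shape[1]))
    count = 0
    for _ in range(n_perm):
        gp = rng.permutation(groups)
        stat = sum(kw_stat(ranks[:, j], gp) for j in range(X.shape[1]))
        if stat >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (15, 2)),   # group 0
               rng.normal(2.0, 1.0, (15, 2))])  # group 1, shifted
groups = np.repeat([0, 1], 15)
p_value = multivariate_kw_perm(X, groups)
```

For small samples, the loop over random permutations would instead enumerate every treatment configuration, making the p-value exact.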
220

Three essays on financial econometrics. / CUHK electronic theses & dissertations collection

January 2013 (has links)
This thesis consists of three essays on financial econometrics. The first two essays concern multivariate density forecast evaluation; the third proposes a nonparametric Bayesian change-point VAR model. Density forecast evaluation is based on checking the uniformity and independence of the probability integral transformation (PIT) of the observed series. In the first essay, we propose a location-adjusted version of the method of Clements and Smith (2002) that corrects an asymmetry problem and increases testing power. In the second essay, we develop a data-driven smooth test for multivariate density forecast evaluation and present evidence on its finite-sample performance using Monte Carlo simulations. Most previous work stops at the bivariate case, which is already difficult to evaluate with existing methods; we propose an efficient dimension-reduction approach that reduces a multivariate density evaluation to a univariate one. Various Monte Carlo simulations and two applications to financial asset returns show that our test performs well. The last essay proposes a nonparametric extension of existing Bayesian change-point models in a multivariate setting. The change-point model of Chib (1998) requires the number of change points to be specified a priori, so a posterior comparison across different change-point models is needed. We instead place a stick-breaking prior on the change-point process, which endogenizes the number of change points into the estimation procedure: the number of change points is determined simultaneously with the other unknown parameters, making the model robust to this aspect of specification. In a Monte Carlo experiment with a bivariate VAR(2) process subject to four structural breaks, the model estimates the break locations with high accuracy, and the posterior estimates of the 65 parameters are close to the true values. Applied to various hedge fund return series, the detected change points coincide with market crashes. / Ko, Iat Meng. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 176-194).
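The PIT-based evaluation underlying the first two essays can be sketched as follows. This is a univariate illustration with an invented misspecification; the essays' contributions are the multivariate dimension-reduction and smooth tests:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(v, mu=0.0, sd=1.0):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + erf((v - mu) / (sd * sqrt(2.0))))

def pit_ks_statistic(y, forecast_cdf):
    """Apply the probability integral transform z_t = F_t(y_t) and return
    the Kolmogorov-Smirnov distance between the PIT sample and the U(0,1)
    CDF.  Under a correct forecast density the PITs are i.i.d. U(0,1);
    distances above roughly 1.358 / sqrt(n) reject uniformity at the 5%
    level.  (Independence of the PITs would be checked separately.)"""
    z = np.sort(np.array([forecast_cdf(v) for v in y]))
    n = len(z)
    d_plus = np.max(np.arange(1, n + 1) / n - z)
    d_minus = np.max(z - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(4)
y = rng.standard_normal(1000)
d_good = pit_ks_statistic(y, norm_cdf)                      # correct forecast
d_bad = pit_ks_statistic(y, lambda v: norm_cdf(v, mu=1.0))  # shifted forecast
crit = 1.358 / np.sqrt(len(y))
```

A correctly specified forecast density keeps the KS distance below the critical value, while the shifted forecast is rejected decisively.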
