11 |
配合價量關係技術型態在臺灣股票市場的應用 / The Applications of Price-Volume Matching Technical Algorithms on the Taiwan Stock Market 鍾淳豐, Chung, Chun-Feng Unknown Date (has links)
Technical analysis has long been an important reference for domestic investors when trading stocks. Academic research on technical analysis, however, often stops at the profitability of indicators; the technical patterns on which technical analysts rely have rarely been studied, because quantifying them is difficult. Lo, Mamaysky, and Wang (2000) used kernel regression to smooth stock price fluctuations and thereby quantify technical patterns, opening the door to academic research on this topic.
This study follows the price-smoothing model of Lo, Mamaysky, and Wang (2000) and incorporates the price-volume relationship to examine the effectiveness of technical patterns in the Taiwan stock market. Its purpose is to investigate whether the return distribution when a technical pattern appears differs from the return distribution when no pattern appears, and whether different investment strategies can then be formulated according to these different return distributions. In addition, the study examines whether trading volume affects stock price movements.
The empirical results are as follows:
1. The return distribution when a technical pattern appears does differ significantly from the distribution when no pattern appears.
2. Trading volume affects stock price movements: after a volume factor is added to the empirical model, the difference between the two return distributions becomes even more significant.
3. Technical-pattern trading strategies simulated over a strategy-formation period earn stable returns over the following five years and beat a buy-and-hold strategy.
4. The effectiveness of technical patterns does not vary across sectors or individual stocks, because technical analysis reflects investor psychology and the outcome of information processing rather than the characteristics of particular stocks.
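To make the smoothing step behind this line of work concrete, the following is a minimal sketch of Gaussian-kernel (Nadaraya-Watson) smoothing of a price series and extraction of the local extrema from which technical patterns such as head-and-shoulders are typically defined. The bandwidth, the synthetic price path, and the extrema rule are illustrative assumptions, not the calibrated choices of Lo, Mamaysky, and Wang (2000) or of this thesis.

```python
# Minimal sketch: Gaussian Nadaraya-Watson kernel regression of closing
# prices on time, followed by extraction of local extrema from the
# smoothed curve. Bandwidth and the extrema rule are illustrative.
import numpy as np

def kernel_smooth(prices, bandwidth):
    """Nadaraya-Watson estimate of the price curve on a time index."""
    t = np.arange(len(prices), dtype=float)
    # Pairwise kernel weights K((t_i - t_j) / h)
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    return (w @ prices) / w.sum(axis=1)

def local_extrema(smoothed):
    """Indices where the smoothed curve switches direction."""
    d = np.sign(np.diff(smoothed))
    return np.where(d[:-1] != d[1:])[0] + 1

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 250)) + 100   # synthetic price path
smoothed = kernel_smooth(prices, bandwidth=5.0)
print(local_extrema(smoothed)[:10])
```

In the approach this abstract builds on, each pattern is then identified from an ordered sequence of these extrema satisfying specific inequality conditions.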
|
12 |
Mathematical approaches to digital color image denoising Deng, Hao 14 September 2009 (has links)
Many mathematical models have been designed to remove noise from images. Most of them focus on grey-value images with additive artificial noise; only very few specifically target natural color photos taken by a digital camera, which contain real noise. Noise in natural color photos has special characteristics that are substantially different from those of artificially added noise.
In this thesis, previous denoising models are reviewed. We analyze the strengths and weaknesses of existing denoising models by showing where they perform well and where they do not, with special focus on two models: the steering kernel regression model and the non-local model. For the kernel regression model, an adaptive bilateral filter is introduced as a complement to enhance it, and a non-local bilateral filter is proposed as an application of the idea behind the non-local means filter. The idea of cross-channel denoising is then proposed: it is effective in denoising monochromatic channels by exploiting an understanding of the characteristics of digital noise in natural color images. A non-traditional color space is also introduced specifically for this purpose. The cross-channel paradigm can be applied to most existing models to greatly improve their performance in denoising natural color images.
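As a point of reference for the filters discussed above, here is a plain bilateral filter on a single channel; it only illustrates the combination of spatial and range (intensity) weights and is not the adaptive or non-local bilateral variants proposed in the thesis. The kernel radius and the two sigma parameters are illustrative assumptions.

```python
# Plain bilateral filter: weights decay with spatial distance and with
# intensity difference to the center pixel (edge-preserving smoothing).
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

noisy = np.clip(np.random.default_rng(1).normal(0.5, 0.1, (64, 64)), 0, 1)
print(bilateral_filter(noisy).shape)
```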
|
13 |
Linear Subspace and Manifold Learning via Extrinsic Geometry St. Thomas, Brian Stephen January 2015 (has links)
In the last few decades, data analysis techniques have had to expand to handle large sets of data with complicated structure. This includes identifying low-dimensional structure in high-dimensional data, analyzing shape and image data, and learning from or classifying large corpora of text documents. Common Bayesian and machine learning techniques rely on the unique geometry of these data types; however, departing from Euclidean geometry can result in both theoretical and practical complications. Bayesian nonparametric approaches can be particularly challenging in these areas.

This dissertation proposes a novel approach to these challenges by working with convenient embeddings of the manifold-valued parameters of interest, commonly making use of an extrinsic distance or measure on the manifold. Carefully selected extrinsic distances are shown to reduce the computational cost and to increase the accuracy of inference. The embeddings are also used to yield straightforward derivations for nonparametric techniques. The methods developed are applied to subspace learning in dimension reduction problems, planar shapes, shape-constrained regression, and text analysis. / Dissertation
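As a toy illustration of the extrinsic idea, the sketch below compares the extrinsic (chordal) distance between two points on the unit sphere, measured in the ambient Euclidean space, with the intrinsic (geodesic) distance along the sphere itself; the dissertation applies the same principle to more elaborate parameter manifolds such as subspaces and planar shapes.

```python
# Extrinsic (chordal) vs intrinsic (geodesic) distance on the unit
# sphere embedded in R^3.
import numpy as np

def chordal_distance(p, q):
    return np.linalg.norm(p - q)                          # straight line in the ambient space

def geodesic_distance(p, q):
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))    # arc length along the great circle

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
print(chordal_distance(p, q), geodesic_distance(p, q))    # sqrt(2) vs pi/2
```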
|
14 |
Modely s proměnlivými koeficienty / Varying coefficient models Sekera, Michal January 2017 (has links)
The aim of this thesis is to provide an overview of varying coefficient models - a class of regression models that allow the coefficients to vary as functions of random variables. This concept is described for independent samples, longitudinal data, and time series. Estimation methods include polynomial spline, smoothing spline, and local polynomial methods for models of a linear form, and the local maximum likelihood method for models of a generalized linear form. The statistical properties focus on the consistency and asymptotic distribution of the estimators. The numerical study compares the finite-sample performance of the estimators of the coefficient functions.
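A minimal sketch of the local-constant (kernel-weighted least squares) estimator of a linear varying coefficient model is given below; the spline-based and local maximum likelihood estimators surveyed in the thesis are not reproduced here, and the bandwidth and simulated data are illustrative assumptions.

```python
# Local-constant sketch of y_i = x_i' beta(u_i) + eps_i: re-fit a
# kernel-weighted least squares regression at each evaluation point u0.
import numpy as np

def varying_coef_fit(u, X, y, u_grid, bandwidth):
    """Estimate beta(u0) at each u0 in u_grid with Gaussian kernel weights."""
    betas = []
    for u0 in u_grid:
        w = np.exp(-0.5 * ((u - u0) / bandwidth) ** 2)
        W = np.diag(w)
        betas.append(np.linalg.solve(X.T @ W @ X, X.T @ W @ y))
    return np.array(betas)

rng = np.random.default_rng(2)
n = 400
u = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.column_stack([np.sin(2 * np.pi * u), 1 + u])   # coefficients vary with u
y = (X * beta_true).sum(axis=1) + rng.normal(0, 0.2, n)
u_grid = np.linspace(0.1, 0.9, 5)
print(varying_coef_fit(u, X, y, u_grid, bandwidth=0.1))
```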
|
15 |
Trajectory Similarity Based Prediction for Remaining Useful Life Estimation Wang, Tianyi 06 December 2010 (has links)
No description available.
|
16 |
比較使用Kernel和Spline法的傘型迴歸估計 / Compare the Estimation on Umbrella Function by Using Kernel and Spline Regression Method 賴品霖, Lai, Pin Lin Unknown Date (has links)
This study examines two commonly used nonparametric regression methods, kernel regression and spline regression, for estimating an umbrella-shaped function, comparing estimation under an umbrella-order constraint with unconstrained estimation; it also examines how different error variances affect the results and how the two constrained methods compare with each other. Two measures are used to evaluate the estimates: the difference between the estimated and the true peak location, and the sum of squared errors. Bandwidths and knots are selected by leave-one-out cross validation. The simulation results show that the constrained kernel estimator locates the peak better when the error variance is large and worse when the error variance shrinks; the constrained B-spline estimator behaves similarly. Comparing the two methods, for small error variance the kernel estimator locates the peak less accurately than the spline estimator but is not at much of a disadvantage in overall sum of squared errors; when the error variance is large, the kernel estimator's peak-location accuracy improves while its overall sum of squared errors remains reasonably good. / In this study, we impose an umbrella-order constraint on kernel and spline regression models and compare their estimates using two measures: the difference between the estimated and the true peak location, and the sum of squared differences between predicted and true values. We use leave-one-out cross validation to select the bandwidth for the kernel estimator and the number of knots for the spline estimator. The effect of different error sizes is also considered, and several R packages are used in the simulations. The results show that when the error size is larger, the prediction of the peak location improves for both constrained kernel and constrained spline estimation. The constrained spline regression tends to provide better peak-location estimates than the constrained kernel regression.
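The sketch below illustrates the leave-one-out cross validation step for the (unconstrained) Nadaraya-Watson kernel estimator on an umbrella-shaped target; imposing the umbrella-order constraint studied in the thesis would require an additional restricted-fitting step that is not shown. The bandwidth grid and the simulated data are illustrative assumptions, and the sketch is in Python rather than the R packages used in the study.

```python
# LOOCV bandwidth selection for a Nadaraya-Watson fit of an
# umbrella-shaped (single-peaked) regression function.
import numpy as np

def nw_fit(x, y, x0, h):
    w = np.exp(-0.5 * ((x[None, :] - np.atleast_1d(x0)[:, None]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def loocv_score(x, y, h):
    err = 0.0
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        err += (y[i] - nw_fit(x[mask], y[mask], x[i], h)[0]) ** 2
    return err / len(x)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 100))
y = 1 - np.abs(x - 0.6) + rng.normal(0, 0.2, 100)     # umbrella shape, peak at 0.6
bandwidths = np.linspace(0.02, 0.3, 15)
h_best = bandwidths[np.argmin([loocv_score(x, y, h) for h in bandwidths])]
fit = nw_fit(x, y, x, h_best)
print(h_best, x[np.argmax(fit)])                       # chosen h and estimated peak location
```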
|
17 |
Graph Theory and Dynamic Programming Framework for Automated Segmentation of Ophthalmic Imaging Biomarkers Chiu, Stephanie Ja-Yi January 2014 (has links)
Accurate quantification of anatomical and pathological structures in the eye is crucial for the study and diagnosis of potentially blinding diseases. Earlier and faster detection of ophthalmic imaging biomarkers also leads to optimal treatment and improved vision recovery. While modern optical imaging technologies such as optical coherence tomography (OCT) and adaptive optics (AO) have facilitated in vivo visualization of the eye at the cellular scale, the massive influx of data generated by these systems is often too large to be fully analyzed by ophthalmic experts without extensive time or resources. Furthermore, manual evaluation of images is inherently subjective and prone to human error.

This dissertation describes the development and validation of a framework called graph theory and dynamic programming (GTDP) to automatically detect and quantify ophthalmic imaging biomarkers. The GTDP framework was validated as an accurate technique for segmenting retinal layers on OCT images. The framework was then extended through the development of the quasi-polar transform to segment closed-contour structures including photoreceptors on AO scanning laser ophthalmoscopy images and retinal pigment epithelial cells on confocal microscopy images.

The GTDP framework was next applied in a clinical setting with pathologic images that are often lower in quality. Algorithms were developed to delineate morphological structures on OCT indicative of diseases such as age-related macular degeneration (AMD) and diabetic macular edema (DME). The AMD algorithm was shown to be robust to poor image quality and was capable of segmenting both drusen and geographic atrophy. To account for the complex manifestations of DME, a novel kernel regression-based classification framework was developed to identify retinal layers and fluid-filled regions as a guide for GTDP segmentation.

The development of fast and accurate segmentation algorithms based on the GTDP framework has significantly reduced the time and resources necessary to conduct large-scale, multi-center clinical trials. This is one step closer towards the long-term goal of improving vision outcomes for ocular disease patients through personalized therapy. / Dissertation
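To convey the core of the GTDP idea, the following is a generic dynamic-programming sketch that traces a minimum-cost left-to-right path across an image, which is how a retinal layer boundary can be delineated once a suitable per-pixel cost is defined. The cost used here (raw intensity of a synthetic image) and the three-neighbor connectivity are illustrative assumptions; the published framework additionally uses gradient-based graph weights and automatic endpoint initialization.

```python
# Dynamic-programming skeleton: minimum-cost path from the left edge to
# the right edge of a cost image, moving to one of three neighboring
# rows in the next column; backpointers recover the boundary.
import numpy as np

def min_cost_path(cost):
    h, w = cost.shape
    acc = cost.copy()
    back = np.zeros((h, w), dtype=int)
    for j in range(1, w):
        for i in range(h):
            lo, hi = max(i - 1, 0), min(i + 2, h)
            k = lo + np.argmin(acc[lo:hi, j - 1])
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = [int(np.argmin(acc[:, -1]))]
    for j in range(w - 1, 0, -1):
        path.append(back[path[-1], j])
    return path[::-1]                      # row index of the boundary in each column

# Dark horizontal boundary in a noisy synthetic "B-scan"
img = np.random.default_rng(4).normal(1.0, 0.1, (60, 120))
img[30, :] -= 0.8
print(min_cost_path(img)[:10])
```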
|
18 |
Extending covariance structure analysis for multivariate and functional data Sheppard, Therese January 2010 (has links)
For multivariate data, when testing homogeneity of covariance matrices arising from two or more groups, Bartlett's (1937) modified likelihood ratio test statistic is appropriate to use under the null hypothesis of equal covariance matrices, where the null distribution of the test statistic is based on the restrictive assumption of normality. Zhang and Boos (1992) provide a pooled bootstrap approach when the data cannot be assumed to be normally distributed. We give three alternative bootstrap techniques for testing homogeneity of covariance matrices when it is both inappropriate to pool the data into one single population, as in the pooled bootstrap procedure, and when the data are not normally distributed. We further show that our alternative bootstrap methodology can be extended to testing Flury's (1988) hierarchy of covariance structure models. Where deviations from normality exist, we show, by simulation, that the normal theory log-likelihood ratio test statistic is less viable than our bootstrap methodology.

For functional data, Ramsay and Silverman (2005) and Lee et al. (2002) together provide four computational techniques for functional principal component analysis (PCA) followed by covariance structure estimation. When the smoothing method for smoothing individual profiles is based on using least squares cubic B-splines or regression splines, we find that the ensuing covariance matrix estimate suffers from loss of dimensionality. We show that ridge regression can be used to resolve this problem, but only for the discretisation and numerical quadrature approaches to estimation, and that the choice of a suitable ridge parameter is not arbitrary. We further show the unsuitability of regression splines when deciding on the optimal degree of smoothing to apply to individual profiles. To gain insight into smoothing parameter choice for functional data, we compare kernel and spline approaches to smoothing individual profiles in a nonparametric regression context. Our simulation results justify a kernel approach using a new criterion based on predicted squared error. We also show by simulation that, when taking account of correlation, a kernel approach using a generalized cross-validatory type criterion performs well. These data-based methods for selecting the smoothing parameter are illustrated prior to a functional PCA on a real data set.
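For orientation, the following sketches the pooled-bootstrap baseline (in the spirit of Zhang and Boos, 1992) for testing equality of covariance matrices with a Box/Bartlett-type statistic; the alternative, non-pooled bootstrap techniques proposed in the thesis are not reproduced, and the statistic, group sizes, and number of bootstrap replicates are illustrative assumptions.

```python
# Pooled-bootstrap sketch: resample groups from the pooled, centered
# data under the null of equal covariances and compare the observed
# Box's M statistic against its bootstrap null distribution.
import numpy as np

def box_m(groups):
    """Box's M: (N - g) ln|S_pooled| - sum (n_i - 1) ln|S_i|."""
    ns = np.array([len(g) for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * c for n, c in zip(ns, covs)) / (ns.sum() - len(groups))
    logdet = lambda a: np.linalg.slogdet(a)[1]
    return (ns.sum() - len(groups)) * logdet(pooled) - sum(
        (n - 1) * logdet(c) for n, c in zip(ns, covs))

def pooled_bootstrap_pvalue(groups, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    observed = box_m(groups)
    centered = np.vstack([g - g.mean(axis=0) for g in groups])   # pool centered data
    ns = [len(g) for g in groups]
    count = 0
    for _ in range(n_boot):
        resampled = [centered[rng.integers(0, len(centered), n)] for n in ns]
        count += box_m(resampled) >= observed
    return count / n_boot

rng = np.random.default_rng(5)
g1 = rng.normal(size=(80, 3))
g2 = rng.normal(size=(100, 3)) @ np.diag([1.0, 1.5, 2.0])   # different covariance
print(pooled_bootstrap_pvalue([g1, g2]))
```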
|
19 |
Bandwidth Selection in Nonparametric Kernel Estimation / Bandweitenwahl bei nichtparametrischer Kernschätzung Schindler, Anja 29 September 2011 (has links)
No description available.
|
20 |
Optimum Savitzky-Golay Filtering for Signal Estimation Krishnan, Sunder Ram January 2013 (has links) (PDF)
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks are observed to depend on the unknown parameter of interest, we circumvent the roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein's lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein's unbiased risk estimator (SURE).
The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely, the order and bandwidth/smoothing parameters. This is a classic problem in statistics, and certain algorithms relying on the derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum-MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially adaptive regression. We observe that the parameters are so chosen as to trade off the squared bias and variance quantities that constitute the MSE. We also indicate the advantages accruing from incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the applications of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) becomes a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data as well. The denoising algorithms are compared with other standard, performant methods available in the literature both in terms of estimation error and computational complexity.
A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of optimum parameters of a linear, shift-invariant filter. This interpretation is motivated by the hallmark paper of Savitzky and Golay and by Schafer's recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay had shown in their original Analytical Chemistry journal article that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed. They had provided tables of impulse response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing the filter impulse response length/3 dB bandwidth, which are inversely related. We observe that the MMSE solution is such that the S-G filter chosen has a longer impulse response (equivalently, a smaller cutoff frequency) at relatively flat portions of the noisy signal so as to smooth noise, and vice versa at locally fast-varying portions of the signal so as to capture the signal patterns. We also provide a generalized S-G filtering viewpoint in the case of kernel regression.
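A simplified, global version of this parameter-selection idea can be sketched as follows: for a fixed polynomial order, the S-G window length is chosen by minimizing a SURE-type unbiased estimate of the MSE, using the fact that for a convolution filter the divergence term reduces to n times the center tap. The known noise variance, the test signal, and the fixed order are illustrative assumptions; the thesis's spatially adaptive and regularized criteria are not reproduced.

```python
# Choose the S-G window length by minimizing SURE for a linear
# smoothing filter with known noise variance sigma^2:
#   SURE/n = mean((y - y_hat)^2) + 2*sigma^2*h_center - sigma^2
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

def sure_sg(y, window, order, sigma):
    smoothed = savgol_filter(y, window, order)
    center_tap = savgol_coeffs(window, order)[window // 2]
    return np.mean((y - smoothed) ** 2) + 2 * sigma**2 * center_tap - sigma**2

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 512)
clean = np.sin(6 * np.pi * t) * np.exp(-2 * t)
sigma = 0.2
noisy = clean + rng.normal(0, sigma, t.size)

windows = range(5, 101, 2)                     # odd window lengths
best = min(windows, key=lambda w: sure_sg(noisy, w, order=3, sigma=sigma))
mse = np.mean((savgol_filter(noisy, best, 3) - clean) ** 2)
print(best, mse)
```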
Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, we note that the filter coefficients can be pre-computed, and that the whole problem of delta feature computation becomes efficient. Indeed, we observe an advantage by a factor of 10^4 on making use of S-G filtering over actual LS polynomial fitting and evaluation. Thereafter, we study the properties of first- and second-order derivative S-G filters of certain orders and lengths experimentally. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters and the conventional approach followed in HTK, where Furui's regression formula is made use of. The recognition accuracies for both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement. The accuracies are also observed to improve with longer filter lengths, for a particular order. In terms of computation latency, we note that S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas they need to be obtained sequentially in the case of the standard regression formulas used in the literature.
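The observation that delta features are derivative S-G filtering can be illustrated directly: with a first-order fit over a window of length 2*Theta+1, the S-G derivative taps coincide with the standard Furui/HTK regression formula, so a whole trajectory can be processed with one pre-computed filter. The window length and the random cepstral matrix below are illustrative assumptions.

```python
# Delta features as first-derivative Savitzky-Golay filtering, compared
# with the standard HTK-style regression formula.
import numpy as np
from scipy.signal import savgol_coeffs

def delta_sg(cepstra, window=5, order=1):
    """Apply a first-derivative S-G filter along time to each coefficient."""
    taps = savgol_coeffs(window, order, deriv=1, use='dot')   # dot-product (time) ordering
    half = window // 2
    padded = np.pad(cepstra, ((half, half), (0, 0)), mode="edge")
    return np.stack([taps @ padded[t:t + window] for t in range(len(cepstra))])

def delta_htk(cepstra, theta=2):
    """Furui/HTK formula: sum_k k*(c[t+k]-c[t-k]) / (2*sum k^2)."""
    half = theta
    padded = np.pad(cepstra, ((half, half), (0, 0)), mode="edge")
    denom = 2 * sum(k * k for k in range(1, theta + 1))
    return np.stack([
        sum(k * (padded[t + half + k] - padded[t + half - k]) for k in range(1, theta + 1)) / denom
        for t in range(len(cepstra))])

c = np.random.default_rng(7).normal(size=(50, 13))     # 50 frames x 13 cepstral coefficients
print(np.allclose(delta_sg(c), delta_htk(c)))          # True: same linear fit, same edge padding
```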
Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually motivated loss functions such as the Itakura-Saito (IS) distortion. We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise ratio assumption. The exposition is general, since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
|