311 |
Akcijų kainų kintamumo analizė / Stock price volatility analysis
Šimkutė, Jovita 16 August 2007 (has links)
The thesis "Stock price volatility analysis" examines and compares data from the Baltic states (Lithuania, Latvia, Estonia) and Latin American countries (Mexico, Venezuela). Returns of selected stocks are analysed using three years of daily price data. The first part introduces the general theory of forecasting and describes models frequently encountered in the literature and in practice. In the second part these forecasting methods are applied to the real data: stock returns are forecast, the forecasts are compared with the realised values, and the forecast errors of each method are computed, which makes it possible to assess the suitability of each model. The main goal of the work is a comparative analysis of the forecasting models on the selected stocks, identifying the methods that give the best results. The analysis was carried out with the SAS statistical package, using its econometrics and time series subsystem SAS/ETS (Time Series Forecasting System).
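The comparative exercise the abstract describes — one-step-ahead forecasts scored against realised values — can be sketched in a few lines. Everything here is illustrative: the returns are synthetic, and the two methods (a naive previous-value forecast and simple exponential smoothing with an assumed α = 0.1) merely stand in for the richer SAS/ETS model set used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic daily log-returns standing in for three years of price data
r = rng.normal(0.0, 0.02, 750)

def ses_forecasts(x, alpha=0.1):
    # One-step-ahead simple exponential smoothing: each forecast is the
    # level computed before the corresponding observation is seen
    f, level = np.empty_like(x), x[0]
    for i, v in enumerate(x):
        f[i] = level
        level = alpha * v + (1 - alpha) * level
    return f

naive = np.concatenate([[r[0]], r[:-1]])   # previous value as forecast
ses = ses_forecasts(r)

rmse = lambda f: np.sqrt(np.mean((r - f) ** 2))
errors = {"naive": rmse(naive), "ses": rmse(ses)}
```

On noisy, weakly autocorrelated returns such as these, the smoothed forecast typically beats the naive one, which is exactly the kind of comparison the thesis performs across its model set.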
|
312 |
Variable Selection and Function Estimation Using Penalized Methods
Xu, Ganggang 2011 December 1900 (has links)
Penalized methods are becoming more and more popular in statistical research. This dissertation research covers two major aspects of applications of penalized methods:
variable selection and nonparametric function estimation. The following two paragraphs give brief introductions to each of the two topics.
Infinite variance autoregressive models are important for modeling heavy-tailed time series. We use a penalty method to conduct model selection for autoregressive models with innovations in the domain of attraction of a stable law indexed by α ∈ (0, 2). We show that by combining the least absolute deviation loss function and the adaptive lasso penalty, we can consistently identify the true model. At the same time, the resulting coefficient estimator converges at a rate of n^(−1/α). The proposed approach gives a unified variable selection procedure for both the finite and infinite variance autoregressive models.
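The LAD-plus-adaptive-lasso estimator described above can be sketched briefly. This is not the dissertation's implementation: the data are simulated with Student-t innovations as a convenient heavy-tailed stand-in for a stable law, the tuning constant `lam` is an arbitrary assumption, and a generic derivative-free optimizer handles the non-smooth objective.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# AR(4) with only lags 1 and 2 active; Student-t innovations stand in
# for innovations in the domain of attraction of a stable law
true_phi = np.array([0.6, -0.3, 0.0, 0.0])
p, n = len(true_phi), 500
x = np.zeros(n + p)
for t in range(p, n + p):
    x[t] = true_phi @ x[t - p:t][::-1] + rng.standard_t(df=3)
x = x[p:]

# Lagged design matrix: column j holds lag j+1
X = np.column_stack([x[p - j - 1:n - j - 1] for j in range(p)])
y = x[p:]

def lad_adaptive_lasso(X, y, lam):
    # Unpenalized LAD fit supplies the adaptive weights w_j = 1/|phi0_j|
    lad = lambda b: np.abs(y - X @ b).sum()
    b0 = minimize(lad, np.zeros(X.shape[1]), method="Nelder-Mead").x
    w = 1.0 / (np.abs(b0) + 1e-8)
    pen = lambda b: lad(b) + lam * (w * np.abs(b)).sum()
    return minimize(pen, b0, method="Nelder-Mead").x

phi_hat = lad_adaptive_lasso(X, y, lam=5.0)
```

The adaptive weights penalize coefficients with small initial estimates more heavily, which is what drives the consistent identification of inactive lags in the theory.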
While automatic smoothing parameter selection for nonparametric function estimation has been extensively researched for independent data, it is much less so for clustered and longitudinal data. Although leave-subject-out cross-validation (CV) has been widely used, its theoretical properties are unknown and its minimization is computationally expensive, especially when there are multiple smoothing parameters. By focusing on penalized modeling methods, we show that leave-subject-out CV is optimal in that its minimization is asymptotically equivalent to the minimization of the true loss function. We develop an efficient Newton-type algorithm to compute the smoothing parameters that minimize the CV criterion. Furthermore, we derive a simplification of the leave-subject-out CV, which leads to a more efficient algorithm for selecting the smoothing parameters. We show that the simplified CV criterion is asymptotically equivalent to the unsimplified one and thus enjoys the same optimality property. This CV criterion also provides a completely data-driven approach to selecting the working covariance structure using generalized estimating equations in longitudinal data analysis. Our results are applicable to additive models, linear varying-coefficient models, and nonlinear models with data from exponential families.
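A minimal illustration of leave-subject-out CV for a penalized fit, under simplifying assumptions: a toy truncated-power-basis spline, a ridge-type penalty matrix, and a small grid of candidate smoothing parameters. The dissertation's Newton-type algorithm and its simplified criterion are not reproduced here; this brute-force version only shows what the criterion computes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy clustered data: m subjects with ni observations each, a smooth mean
# curve, a subject-level random intercept, and measurement noise
m, ni = 30, 10
subj = np.repeat(np.arange(m), ni)
t = rng.uniform(0, 1, m * ni)
b = rng.normal(0, 0.3, m)
y = np.sin(2 * np.pi * t) + b[subj] + rng.normal(0, 0.2, m * ni)

# Truncated-power-basis spline with a ridge penalty on the knot coefficients
knots = np.linspace(0.1, 0.9, 8)
X = np.column_stack([np.ones_like(t), t] +
                    [np.clip(t - k, 0.0, None) for k in knots])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))

def fit(Xtr, ytr, lam):
    return np.linalg.solve(Xtr.T @ Xtr + lam * D, Xtr.T @ ytr)

def lso_cv(lam):
    # Leave-subject-out: hold out all observations of one subject at a time
    err = 0.0
    for s in range(m):
        tr = subj != s
        beta = fit(X[tr], y[tr], lam)
        err += ((y[~tr] - X[~tr] @ beta) ** 2).sum()
    return err / len(y)

lams = 10.0 ** np.arange(-4, 3)
best = min(lams, key=lso_cv)
```

Holding out whole subjects, rather than single observations, is what keeps the criterion honest in the presence of within-subject correlation.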
|
313 |
Nonlinear Image Restoration
Ungan, Cahit Ugur 01 November 2005 (has links) (PDF)
This thesis analyzes the process of deblurring degraded images generated by space-variant nonlinear imaging systems with Gaussian observation noise. The restoration of blurred images is performed using two methods: a modified version of the Optimum Decoding Based Smoothing Algorithm, and the Bootstrap Filter Algorithm, a variant of particle filtering. The simulations of image estimation are performed in MATLAB. The results of simulations for various observation and image models are presented.
|
314 |
Incremental smoothing and mapping
Kaess, Michael 17 November 2008 (has links)
Incremental smoothing and mapping (iSAM) is presented, a novel approach to the simultaneous localization and mapping (SLAM) problem. SLAM is the problem of estimating an observer's position from local measurements only, while creating a consistent map of the environment. The problem is difficult because even very small errors in the local measurements accumulate over time and lead to large global errors. iSAM provides an exact and efficient solution to the SLAM estimation problem while also addressing data association. For the estimation problem, iSAM provides an exact solution by performing smoothing, which keeps all previous poses as part of the estimation problem and therefore avoids linearization errors. iSAM uses methods from sparse linear algebra to provide an efficient incremental solution. In particular, iSAM employs a direct equation solver based on QR factorization of the naturally sparse smoothing information matrix. Instead of refactoring the matrix whenever new measurements arrive, only the entries of the factor matrix that actually change are recalculated. iSAM is efficient even for robot trajectories with many loops, as it performs periodic variable reordering to avoid unnecessary fill-in in the factor matrix. For the data association problem, I present state-of-the-art data association techniques in the context of iSAM, together with an efficient algorithm for obtaining the necessary estimation uncertainties in real time from the factored information matrix. I systematically evaluate the components of iSAM as well as the overall algorithm using various simulated and real-world data sets.
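The core linear-algebra step — updating a QR factor when a new measurement row arrives instead of refactoring — can be sketched on a dense toy problem. The sketch ignores iSAM's sparsity exploitation and variable reordering; it only demonstrates that Givens-rotation updates of R reproduce the batch least-squares solution.

```python
import numpy as np

def add_row(R, d, a, b):
    """Fold one new measurement row (a, b) into an existing QR factor R and
    rotated right-hand side d via Givens rotations, touching only the
    entries that change -- a dense analogue of iSAM's incremental update."""
    n = R.shape[0]
    a, b = a.astype(float), float(b)
    for k in range(n):
        if a[k] == 0.0:
            continue
        r = np.hypot(R[k, k], a[k])
        c, s = R[k, k] / r, a[k] / r
        Rk = R[k, k:].copy()
        R[k, k:] = c * Rk + s * a[k:]       # rotate row k of R with the new row
        a[k:] = -s * Rk + c * a[k:]         # zero out entry k of the new row
        d[k], b = c * d[k] + s * b, -s * d[k] + c * b
    return R, d

rng = np.random.default_rng(2)
A, y = rng.normal(size=(20, 4)), rng.normal(size=20)
Q, R = np.linalg.qr(A)        # batch factorization of the initial problem
d = Q.T @ y

a_new, b_new = rng.normal(size=4), rng.normal()
R, d = add_row(R, d, a_new, b_new)
x_inc = np.linalg.solve(R, d)

# Reference: refactor the augmented system from scratch
A2, y2 = np.vstack([A, a_new]), np.append(y, b_new)
x_batch, *_ = np.linalg.lstsq(A2, y2, rcond=None)
```

In the sparse setting the savings come from the fact that each new row touches only a few columns of R, so most of the factor is left untouched.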
|
315 |
Ανάπτυξη και αξιολόγηση μεθοδολογίας για τη δημιουργία πλεγματικών (gridded) ισοτοπικών δεδομένων / Development and evaluation of a methodology for the generation of gridded isotopic data
Σαλαμαλίκης, Βασίλειος 20 April 2011 (has links)
Several climatic, hydrological and environmental studies require accurate knowledge of the spatial distribution of the stable isotopes of hydrogen and oxygen in precipitation. Since the number of precipitation sampling stations for isotopic analysis is small and unevenly distributed around the globe, the global distribution of stable isotopes can be estimated by producing gridded isotopic data sets, for which several methods have been proposed. Some use empirical equations and geostatistical methods in order to minimize interpolation errors. In this work a methodology is developed for producing 10′ × 10′ gridded isotopic data of precipitation for the Central and Eastern Mediterranean. Statistical models are built taking geographical and meteorological parameters as independent variables. The initial methodology uses only the altitude and latitude of a location. Since the isotopic composition of precipitation also depends on longitude, the existing models were extended by adding meteorological parameters as further independent variables. A series of models is proposed, each using some or a combination of the above variables. The models are validated using the Thin Plate Smoothing Splines (TPSS) and Ordinary Kriging (OK) methods.
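A rough sketch of the gridding idea using SciPy's thin-plate-spline interpolator. The station data are entirely synthetic, the altitude covariate is handled by a crude regress-out step rather than proper partial splines, and the smoothing parameter is an arbitrary assumption.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

# Hypothetical stations: longitude, latitude, altitude and a delta-18O-like
# value trending with latitude and altitude (all numbers are made up)
n = 80
lon = rng.uniform(10, 30, n)
lat = rng.uniform(30, 45, n)
alt = rng.uniform(0, 2000, n)
d18o = -0.3 * (lat - 30) - 0.002 * alt + rng.normal(0, 0.3, n)

# Regress out the altitude effect, smooth the residual over (lon, lat) with
# a thin plate spline; the altitude term can be added back using a DEM
slope = np.polyfit(alt, d18o, 1)[0]
resid = d18o - slope * alt
tps = RBFInterpolator(np.column_stack([lon, lat]), resid,
                      kernel="thin_plate_spline", smoothing=1.0)

# Evaluate on a coarse grid (a 10' x 10' grid would simply be finer)
glon, glat = np.meshgrid(np.linspace(10, 30, 25), np.linspace(30, 45, 25))
grid = tps(np.column_stack([glon.ravel(), glat.ravel()])).reshape(glon.shape)
```

The nonzero `smoothing` value makes this a smoothing spline rather than an exact interpolant, which is the same trade-off the TPSS method in the thesis controls.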
|
316 |
High-resolution climate variable generation for the Western Cape
Joubert, Sarah Joan 03 1900 (has links)
Thesis (MSc (Geography and Environmental Studies))--University of Stellenbosch, 2007. / Due to the relative scarcity of weather stations, the climate conditions of large areas are not
adequately represented by any single station. This is especially true for regions with complex
topographies or low population densities. Various interpolation techniques and software packages
are available with which the climate of such areas can be calculated from surrounding weather
stations’ data. This study investigates the possibility of using the software package ANUSPLIN to
create accurate climate maps for the Western Cape, South Africa.
ANUSPLIN makes use of thin plate smoothing splines and a digital elevation model to convert
point data into grid format to represent an area’s climatic conditions. This software has been used
successfully throughout the world, therefore a large body of literature is available on the topic,
highlighting the limitations and successes of this interpolation method.
Various factors have an effect on a region’s climate, the most influential being location (distance
from the poles or equator), topography (height above sea level), distance from large water bodies,
and other topographical factors such as slope and aspect. Until now latitude, longitude and the
elevation of a weather station have most often been used as input variables to create climate grids,
but the new version of ANUSPLIN (4.3) makes provision for additional variables. This study
investigates the possibility of incorporating the effect of the surrounding oceans and topography
(slope and aspect) in the interpolation process in order to create climate grids with a resolution of
90m x 90m. This is done for monthly mean daily maximum and minimum temperature and the
mean monthly rainfall for the study area for each month of the year.
Few projects in which additional variables are incorporated into the ANUSPLIN interpolation
process are reported in the literature, so the correct transformations and units for these
variables had to be investigated before they could be successfully incorporated. It was found
that distance to the oceans influences a region's maximum and minimum temperatures, and to a
lesser extent its rainfall, while aspect and slope have an influence on a region's rainfall.
In order to assess the accuracy of the interpolation process, two methods were employed, namely
statistical values produced during the spline function calculations by ANUSPLIN, and the removal
of a selected number of stations in order to compare the interpolated values with the actual measured values. The analysis showed that more accurate maps were obtained when additional
variables were incorporated into the interpolation process.
Once the best transformations and units were identified for the additional variables, climate maps
were produced and compared with the existing climate grids available for the study area. In
general the interpolated temperatures were higher than those of the existing grids. For
rainfall, ANUSPLIN produced higher values throughout the study region than the existing grids,
except in the Southwestern Cape, where values were lower on north-facing slopes and in
high-lying areas.
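The station-withholding validation described above can be sketched as follows, on synthetic stations and with a thin-plate-spline interpolator standing in for ANUSPLIN; the number of withheld stations and the smoothing value are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)

# Synthetic "weather stations": coordinates in km and a temperature field
n = 120
xy = rng.uniform(0, 100, (n, 2))
temp = 25.0 - 0.08 * xy[:, 1] + rng.normal(0, 0.5, n)

# Withhold 24 stations, fit on the rest, score on the held-out set
test = np.zeros(n, dtype=bool)
test[rng.choice(n, 24, replace=False)] = True
interp = RBFInterpolator(xy[~test], temp[~test],
                         kernel="thin_plate_spline", smoothing=0.1)
pred = interp(xy[test])
rmse = float(np.sqrt(np.mean((pred - temp[test]) ** 2)))
```

Repeating this for interpolations with and without the additional covariates gives the kind of accuracy comparison the study reports.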
|
317 |
Contribution à l'amélioration de la qualité des états de surfaces des prothèses orthopédiques / Contribution to the surface quality improvement of orthopedic prostheses
Azzam, Noureddine 19 October 2015 (has links)
Commonly, knee prostheses are composed of two parts fixed respectively on the femur and the tibia, and a third, intermediate component. During the manufacturing of these components, distortions appear in the roughcast geometry. Prosthesis manufacturers therefore ensure the nominal thickness of the prosthesis by removing a constant thickness from the roughcast workpiece, an operation generally carried out manually. The aim of this thesis is to contribute to the automation of these manual operations by providing a method for adapting machining toolpaths to the geometrical variations of the target surface: a toolpath computed on a nominal model is adapted so as to remove a constant thickness from a measured roughcast surface. The proposed method starts by aligning the measured surface with the nominal toolpath using an ICP algorithm. The nominal toolpath is then deformed to remove the desired thickness from the measured rough surface, which is defined here by an STL model. Naturally, the discontinuities of this type of model imprint the STL facet pattern onto the adapted toolpath, and thus onto the machined workpiece. To limit this problem and improve the quality of the machined surface, a toolpath smoothing method is proposed. To validate the theoretical developments of this work, tests were carried out on a five-axis machine for the roughing of femoral components of a unicompartmental knee prosthesis.
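The final smoothing step — attenuating the STL facet pattern imprinted on the adapted toolpath — can be illustrated with a deliberately simple filter. The profile below is synthetic and the moving-average window is an assumption; the thesis's actual smoothing method is not reproduced here.

```python
import numpy as np

# Synthetic toolpath height profile: a smooth target curve plus a small
# piecewise-linear imprint standing in for STL faceting
t = np.linspace(0.0, 1.0, 400)
z = 0.5 * np.sin(2 * np.pi * t) + 0.02 * np.abs((t * 40) % 1 - 0.5)

def smooth(path, window=15):
    # Centered moving average with edge padding; crude but enough to
    # attenuate the facet pattern while preserving the overall shape
    kernel = np.ones(window) / window
    pad = np.pad(path, window // 2, mode="edge")
    return np.convolve(pad, kernel, mode="valid")

z_smooth = smooth(z)
```

Any such filter trades facet suppression against deviation from the nominal offset, which is why the thesis treats smoothing as its own design problem rather than an afterthought.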
|
318 |
[en] ON ADDRESSING IRREGULARITIES IN ELECTRICITY LOAD TIME-SERIES AND SHORT TERM LOAD FORECASTING / [es] UN SISTEMA INTEGRADO DE MONITORAMIENTO Y PREVISIÓN DE CARGA ELÉCTRICA A CORTO PLAZO / [pt] UM SISTEMA INTEGRADO DE MONITORAÇÃO E PREVISÃO DE CARGA ELÉTRICA DE CURTO PRAZO
HELIO FRANCISCO DA SILVA 19 July 2001 (has links)
As a result of the continuing privatization process within the energy sector, electricity load forecasting is a critical tool for decision-making in the industry. Reliable forecasts are now needed not only for developing strategies for business planning and short-term operational scheduling, but also to define the spot-market electricity price. The forecasting process is data-intensive, and interest has been driven to shorter and shorter intervals. Large investments are being made in modernizing and improving metering systems, so as to make more data available to the forecaster. However, the forecaster is still faced with irregular time series. Gaps, missing values, spurious information or repeated values in the series can result from transmission errors or small failures in the recording process. These so-called irregularities have led to research focused either on iterative processes, such as the Kalman filter and the EM algorithm, or on applications of the statistical literature on the treatment of missing values and outliers. Nevertheless, these methods often produce large forecast errors when confronted with consecutive failures in the data. Moreover, minute-by-minute series contain a very large number of points, so the one-day-ahead forecast horizon becomes too long to handle with conventional methods. In this context, we propose an alternative for detecting and replacing values, and present a methodology that carries out the forecasting process using other information in the time series related to variability and seasonality, which are commonly encountered in electricity load-forecasting data. We illustrate the method and address the problem as part of a wider project aiming at the development of an automatic on-line system for tracking the operation of the Brazilian interlinked electric network and performing short-term load forecasting. The data were collected by ONS / ELETROBRAS - Brazil; we concentrate on 10-minute data for the years 1997-1999 from Light Serviços de Eletricidade S.A. (Rio de Janeiro and its surroundings).
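The detect-and-replace idea — exploiting seasonality rather than interpolating across a long gap — can be sketched with pandas. The series, the gap position, and the weekday/time-of-day profile used for filling are all illustrative assumptions, not the thesis's actual procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Two weeks of synthetic 10-minute load data with a daily cycle
idx = pd.date_range("1998-01-01", periods=6 * 24 * 14, freq="10min")
load = 1000 + 200 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 20, len(idx))
s = pd.Series(load, index=idx)
s.iloc[500:560] = np.nan          # a ten-hour telemetry failure

# A long consecutive gap defeats plain interpolation; filling each missing
# slot from the mean of the same weekday/time-of-day slot uses seasonality
profile = s.groupby([s.index.dayofweek, s.index.hour, s.index.minute]).transform("mean")
filled = s.fillna(profile)
```

Because the gap spans less than a day, every weekday/time-of-day slot still has at least one valid observation elsewhere in the series, so the seasonal profile covers the whole failure.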
|
319 |
Coping with rural risk : assets, labour allocation, migration, and community networks
Malaeb, Bilal January 2016 (has links)
Given the importance of agricultural income for rural households, erratic weather conditions pose a severe threat to these households' livelihoods. This thesis explores ways through which households in agrarian economies smooth their consumption, engage in community networks, and readjust their labour allocation in response to shocks. In a setting of inherent risk, absence of institutional insurance, and labour market inefficiencies, poor households are often left to their own devices to cope with risk. The aim of this study is to examine the different risk-coping strategies adopted by households in rural India, assess their effectiveness, and derive implications for public policy. The results suggest that, in an environment characterised by agro-climatic risk, households are able to self-insure and smooth their consumption in the face of income shocks. Their coping mechanisms, however, may reduce their resilience to future shocks. In fact, small landholders tend to rely more heavily on their productive asset stock, while medium landholders find it optimal to preserve and accumulate their productive assets when exposed to exogenous income shocks. Households also change their labour allocation and reduce their self-employment in agriculture. Furthermore, households in rural areas can migrate to urban areas or engage in societal risk-sharing arrangements to mitigate the risk. The results of this thesis suggest that being part of a community network discourages individuals' migration and increases the likelihood of undertaking riskier activities. The findings also confirm the importance of portfolio adjustments and the diversification of household assets in buffering consumption. These conclusions form the basis of several policy implications, the most important of which is providing formal insurance schemes to encourage the accumulation of assets, technology, and skills.
|
320 |
Metody pro periodické a nepravidelné časové řady / Methods for periodic and irregular time series
Hanzák, Tomáš January 2014 (has links)
Title: Methods for periodic and irregular time series Author: Mgr. Tomáš Hanzák Department: Department of Probability and Mathematical Statistics Supervisor: Prof. RNDr. Tomáš Cipra, DrSc. Abstract: The thesis primarily deals with modifications of exponential-smoothing-type methods for univariate time series with periodicity and/or certain types of irregularities. A modified Holt method for irregular time series, robust to the problem of "time-close" observations, is suggested. The general concept of seasonality modeling is introduced into the Holt-Winters method, including linear interpolation of seasonal indices and the use of trigonometric functions as special cases (both methods are applicable to irregular observations). The DLS estimation of a linear trend with seasonal dummies is investigated and compared with the additive Holt-Winters method. An autocorrelated term is introduced as an additional component in the time series decomposition. The suggested methods are compared with the classical ones using real data examples and/or simulation studies. Keywords: Discounted Least Squares, Exponential smoothing, Holt-Winters method, Irregular observations, Time series periodicity
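The key device behind exponential smoothing for irregular observations — letting the smoothing weight depend on the time elapsed since the last observation — can be sketched as follows. This is a bare simple-exponential-smoothing analogue in the spirit of such modifications; the per-unit-time constant and the toy data are assumptions, and the thesis's Holt and Holt-Winters extensions are not reproduced.

```python
import numpy as np

def irregular_ses(times, values, alpha_unit=0.3):
    """Simple exponential smoothing for irregularly spaced observations:
    the effective weight on a new observation grows with the time gap,
    so stale levels are discounted more after long silences."""
    level = values[0]
    beta = 1.0 - alpha_unit          # retention factor per unit time
    for i in range(1, len(values)):
        dt = times[i] - times[i - 1]
        a_t = 1.0 - beta ** dt       # larger gap -> larger weight on new obs
        level = a_t * values[i] + (1.0 - a_t) * level
    return level

times = np.array([0.0, 1.0, 1.2, 4.0, 4.5, 9.0])
vals = np.array([10.0, 12.0, 11.0, 15.0, 14.0, 13.0])
forecast = irregular_ses(times, vals)
```

With equally spaced observations (dt = 1) this reduces exactly to ordinary simple exponential smoothing with constant α = `alpha_unit`, which is the sanity check to keep in mind for any such generalization.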
|