51

Předpovídání realizované volatility: Záleží na skocích v cenách? / Forecasting realized volatility: Do jumps in prices matter?

Lipták, Štefan January 2012 (has links)
This thesis applies Heterogeneous Autoregressive (HAR) models of Realized Volatility (RV) to five-minute data on three of the most liquid financial assets: S&P 500 futures, Euro FX and Light Crude NYMEX. The main contribution lies in the length of the datasets, which span 25 years (13 years in the case of Euro FX). Our aim is to show that decomposing realized variance into continuous and jump components improves the predictability of RV even on extremely long high-frequency datasets. The main goal is to investigate the dynamics of the HAR model parameters over time, and we also examine whether the volatilities of different assets behave differently. The results reveal that decomposing RV into its components indeed improves the modeling and forecasting of volatility on all datasets. However, we found that forecasts are best when based on short pre-forecast periods of one to two years, owing to the strong time variation of the HAR model's parameters. This variation is also revealed by year-by-year estimation on all datasets. Consequently, we consider HAR models inappropriate for modeling RV on such long datasets, as they are not able to capture the dynamics of RV. This was indicated on all three datasets; thus, we conclude that volatility behaves similarly for different types of assets with similar liquidity.
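To make the decomposition concrete, the sketch below illustrates one common way to build a HAR model with separate continuous and jump components (HAR-RV-CJ style) from daily realized variance and bipower variation. The array names, the placeholder data and the simple jump rule are assumptions for illustration, not the author's exact specification.

```python
# A minimal HAR-RV-CJ sketch (assumed setup): daily realized variance (rv)
# and bipower variation (bv) have already been computed from 5-minute returns.
import numpy as np
import statsmodels.api as sm

def har_cj_design(rv, bv):
    """Build HAR regressors from daily RV, with a simple jump/continuous split."""
    jump = np.maximum(rv - bv, 0.0)          # crude jump component (no significance test)
    cont = rv - jump                          # continuous component
    def avg(x, h):                            # trailing h-day average
        return np.convolve(x, np.ones(h) / h, mode="valid")
    n = len(rv)
    # Align daily, weekly (5-day) and monthly (22-day) averages measured at day t-1.
    c_d, c_w, c_m = cont[21:n - 1], avg(cont, 5)[17:n - 5], avg(cont, 22)[:n - 22]
    j_d = jump[21:n - 1]
    y = rv[22:]                               # one-day-ahead target
    X = sm.add_constant(np.column_stack([c_d, c_w, c_m, j_d]))
    return y, X

rng = np.random.default_rng(0)
rv = np.abs(rng.normal(1.0, 0.3, 1000))       # placeholder series for illustration only
bv = rv * rng.uniform(0.8, 1.0, 1000)
y, X = har_cj_design(rv, bv)
print(sm.OLS(y, X).fit().params)              # [const, C_d, C_w, C_m, J_d]
```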
52

Variação da ordem ótima de modelo autorregressivo com a força de contração muscular e a duração do eletromiograma. / Variation of optimal autoregressive order with electromyogram length and contraction force

Cecília Romaro 02 April 2015 (has links)
Os sinais de eletromiografia de agulha podem ser modelados por um sistema linear invariante no tempo (SLIT). A pergunta é: Quantos coeficientes são necessários para tal? O presente mestrado estuda, para sinais de eletromiografia de agulha gravados sob as mesmas condições experimentais, como varia o número ótimo de coeficientes autorregressivos com o comprimento das épocas e com a força de contração muscular concomitantemente. O estudo foi realizado tendo como base sinais de 10%, 25%, 50% e 80% da máxima contração voluntária (MCV) e tendo épocas de 500ms, 250ms, 100ms, 50ms e 25ms de seis indivíduos normais. Desta forma, uma função densidade de probabilidade é sugerida para a ordem do modelo autorregressivo que melhor descreva o sinal de eletromiografia obtido a uma força de contração específica e que tenha uma duração de época definida. / Needle electromyography (EMG) signals can be modeled by a linear time-invariant (LTI) system. The question posed is: how many coefficients are needed for adequate modeling? This Master's dissertation studies how the optimal number of autoregressive coefficients changes concomitantly with the epoch length and the muscle contraction force, for needle electromyography signals recorded under the same experimental conditions. The study was conducted on signals from six normal individuals at 10%, 25%, 50% and 80% of the maximum voluntary contraction, with epoch lengths of 500 ms, 250 ms, 100 ms, 50 ms and 25 ms. A probability density function is then suggested for the autoregressive model order that best describes the electromyographic signal obtained at a specific contraction force and with a defined epoch length.
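As an illustration of selecting an autoregressive order for a signal epoch, the sketch below fits AR(p) models of increasing order and picks the order minimizing AIC. The use of statsmodels' AutoReg, the synthetic epoch and the sampling rate are assumptions for the example, not the dissertation's actual estimation procedure.

```python
# Minimal sketch: choose the AR order that minimizes AIC for one signal epoch.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def best_ar_order(epoch, max_order=30):
    """Return the AR order with the smallest AIC for a 1-D signal epoch."""
    aics = {}
    for p in range(1, max_order + 1):
        res = AutoReg(epoch, lags=p).fit()
        aics[p] = res.aic
    return min(aics, key=aics.get)

rng = np.random.default_rng(1)
fs = 10_000                                   # assumed sampling rate (Hz)
epoch = rng.standard_normal(int(0.1 * fs))    # a 100 ms placeholder epoch
print("selected order:", best_ar_order(epoch))
```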
53

Feature Based Modulation Recognition For Intrapulse Modulations

Cevik, Gozde 01 September 2006 (has links) (PDF)
In this thesis study, a new method for automatic recognition of intrapulse modulations is proposed. The method addresses the problem of modulation recognition with a feature-based approach. The features used to recognize the modulation type are Instantaneous Frequency, Instantaneous Bandwidth, Amplitude Modulation Depth, Box Dimension and Information Dimension. The Instantaneous Bandwidth and Instantaneous Frequency features are extracted via Autoregressive Spectrum Modeling. Amplitude Modulation Depth expresses the depth of amplitude change in the signal. The other features, Box Dimension and Information Dimension, are extracted using Fractal Theory in order to classify the modulations on signals according to their shapes. A modulation database is used in association with Fractal Theory to decide on the modulation type of the analyzed signal, by means of a distance metric among fractal dimensions. Utilizing these features in a hierarchical flow yields the new modulation recognition method. The proposed method has been tested on various intrapulse modulation types and has been observed to have acceptably good performance even for low-SNR cases and for signals with small pulse width (PW).
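The fractal features mentioned above can be approximated in a few lines. The sketch below estimates a box-counting (box) dimension of a sampled waveform by normalizing it into the unit square and counting occupied grid cells at several scales; the normalization and the set of scales are illustrative choices, not the thesis' exact procedure.

```python
# Minimal box-counting dimension estimate for a 1-D sampled waveform.
import numpy as np

def box_dimension(signal, scales=(4, 8, 16, 32, 64, 128)):
    """Estimate the box-counting dimension of the curve (t, x(t)) in the unit square."""
    t = np.linspace(0.0, 1.0, len(signal))
    x = (signal - signal.min()) / (np.ptp(signal) + 1e-12)   # normalize amplitude to [0, 1]
    counts = []
    for s in scales:
        # Assign each sample to a grid cell of side 1/s and count distinct occupied cells.
        cells = set(zip((t * s).astype(int).clip(max=s - 1),
                        (x * s).astype(int).clip(max=s - 1)))
        counts.append(len(cells))
    # Slope of log(count) versus log(scale) approximates the box dimension.
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

rng = np.random.default_rng(2)
sig = np.cumsum(rng.standard_normal(4096))     # placeholder signal for illustration
print("box dimension ~", round(box_dimension(sig), 2))
```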
54

Structural Analysis And Forecasting Of Annual Rainfall Series In India

Sreenivasan, K R 01 1900 (has links)
The objective of the present study is to forecast annual rainfall taking into account the periodicities and the structure of the stochastic component. The study has six chapters: Chapter 1 introduces the problem and the objectives, Chapter 2 reviews the literature, Chapter 3 deals with model formulation and development, Chapter 4 gives an account of the application of the model, Chapter 5 presents results and discussion, and Chapter 6 gives the conclusions drawn from the study. The following model formulations are presented in order to achieve the objective. A Fourier analysis model is used to identify the periodicities present in the rainfall series. These periodic components are used to obtain discretized parameter ranges, an essential input for the Fourier series model. An auto power regression model is developed to estimate rainfall and hence to compute the first-order residuals err1_t; its parameters are estimated using a genetic algorithm. The auto power regression model is of the form given in the PDF file, where α_i and β_i are parameters and M indicates the modular value. A Fourier series model is then formulated and solved through the genetic algorithm to estimate the amplitudes R_j, phases Φ_j and periodic frequencies w_j for the residual series err1_t, with the ranges for R_j, Φ_j and w_j obtained from the Fourier analysis model:

err1'_t = μ_err1 + Σ_j R_j cos(w_j t + Φ_j)

Further, an integrated auto power regression and Fourier series model is developed (with the parameters known from the above analysis) to estimate a new rainfall series,

Zest_t = Z_μ Σ_i α_i (Z_{M_i−t})^{β_i} + μ_err1 + Σ_j R_j cos(w_j t + Φ_j),

and the second-order residuals err2_t are computed as err2_t = Z_t − Zest_t. Thus the periodicities are removed from the err1_t series, and the second-order residuals err2_t represent the stochastic component of the actual rainfall series. An autoregressive model is formulated to study the structure of the stochastic component err2_t; a model of order two, AR(2), is found to fit well, with its parameters estimated by the method of least squares. An exponential weighting function is developed to compute a weight expressed as a function of the AR(2) parameters, and the product of this weight and Gaussian white noise N(0, σ_err2) is termed the weighted stochastic component. Drought analysis is also performed on annual (January to December) and summer monsoon (June to September) rainfall totals to determine the average drought interval (idrt), which is used in assigning signs to the random component of the forecasting model. The final form of the forecasting model is

Zest'_t = Z_μ Σ_i α_i (Z_{M_i−t})^{β_i} + μ_err1 + Σ_j R_j cos(w_j t + Φ_j) ± W_T(Φ_1, Φ_2) N(0, σ_err2).

The weighted stochastic component is added or subtracted according to two criteria: Criterion I is used for all rainfall series except the all-India series, for which Criterion II is used, and both criteria also consider the average drought interval. A ± sign is introduced to add or subtract the weighted stochastic component, although the stochastic component itself can be either positive or negative; applying the ± sign to the already signed value (instead of the absolute value) is found to improve the forecast, in the sense of yielding more point rainfall estimates within 20 percent error.
Incorporating significant periodicities and the weighted stochastic component, along with the average drought interval, into the forecasting formulation is the main feature of the model. In the process of rainfall prediction, the genetic algorithm is thus used as an efficient tool for estimating optimal parameters of the auto power regression and Fourier series models, without the use of an expensive nonlinear least-squares algorithm. The model application is demonstrated on different annual rainfall series relating to IMD Regions (R1...R5), all-India (AI), IMD Subdivisions (S1...S29), Zones (Z1...Z10) and all-Karnataka (AK). The results of the proposed model are encouraging in providing improved forecasts. The model considers periodicity, average critical drought frequency and the weighted stochastic component in forecasting the rainfall series. The model performed well, achieving a success rate of 70 percent with percentage error less than 20 percent in 4 out of 5 IMD Regions (R2 to R5), all-India, 17 out of 29 IMD Subdivisions (S1 to S5, S7 to S9, S18, S19, S21, S24 to S29) and the all-Karnataka rainfall series. The performance for Zones was not as satisfactory, as only 2 out of 10 Zones (Z1 and Z2) met the criterion. In a separate study, an effort was made to forecast annual rainfall using the IMSL subroutine SPWF, which estimates Wiener forecast parameters. Monthly data were used; the Wiener parameters obtained were used to estimate monthly rainfall, and the annual estimates obtained by simple aggregation of the monthly estimates compared extremely well with the actual annual rainfall values. A success rate of more than 80 percent with percentage error less than 10 percent was achieved in 4 out of 5 IMD Regions (R2 to R5), all-India, 18 out of 29 IMD Subdivisions (S1 to S8, S14, S18, S19, S22 to S24, S26 to S29) and the all-Karnataka rainfall series, whereas a success rate of 80 percent within 20 percent error was achieved in 4 out of 5 IMD Regions (except R1), all-India, 25 out of 29 IMD Subdivisions (except S10, S11, S12 and S17), all-Karnataka and 8 out of 10 Zones (except Z6 and Z8). (Please refer to the PDF file for formulas.)
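The overall pipeline (identify periodicities, remove them, then model the residual with AR(2)) can be sketched compactly. The example below uses an FFT periodogram to pick dominant frequencies and ordinary least squares for both the harmonic and AR(2) fits, which are deliberate simplifications of the genetic-algorithm-based estimation and the auto power regression step described in the thesis; the toy rainfall series is an assumption.

```python
# Simplified sketch: detect dominant periodicities with an FFT periodogram,
# remove a fitted harmonic component, then fit AR(2) to the residual.
import numpy as np

def fit_harmonics(z, n_freqs=2):
    """Fit z_t ~ mean + sum_j R_j*cos(w_j*t + phi_j) using the strongest FFT peaks."""
    t = np.arange(len(z))
    zc = z - z.mean()
    spec = np.abs(np.fft.rfft(zc))
    freqs = np.fft.rfftfreq(len(z))
    top = freqs[np.argsort(spec[1:])[::-1][:n_freqs] + 1]    # skip the zero frequency
    # Least squares on cos/sin pairs is equivalent to estimating R_j and phi_j.
    X = np.column_stack([f(2 * np.pi * w * t) for w in top for f in (np.cos, np.sin)])
    coef, *_ = np.linalg.lstsq(X, zc, rcond=None)
    return z.mean() + X @ coef                                # periodic estimate of z

def fit_ar2(resid):
    """Least-squares AR(2): resid_t = phi1*resid_{t-1} + phi2*resid_{t-2} + noise."""
    Y = resid[2:]
    X = np.column_stack([resid[1:-1], resid[:-2]])
    phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma = np.std(Y - X @ phi)
    return phi, sigma

rng = np.random.default_rng(3)
t = np.arange(120)                                            # 120 years of toy annual rainfall
rain = 800 + 60 * np.cos(2 * np.pi * t / 11) + rng.normal(0, 40, 120)
periodic = fit_harmonics(rain)
phi, sigma = fit_ar2(rain - periodic)
print("AR(2) parameters:", phi, "residual sd:", round(sigma, 1))
```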
55

The US Dollar, Oil Prices and the US Current Account

Abdel Razek, Noha Unknown Date
No description available.
56

Variable Selection and Function Estimation Using Penalized Methods

Xu, Ganggang December 2011 (has links)
Penalized methods are becoming more and more popular in statistical research. This dissertation covers two major applications of penalized methods: variable selection and nonparametric function estimation; the following two paragraphs give brief introductions to each topic. Infinite-variance autoregressive models are important for modeling heavy-tailed time series. We use a penalty method to conduct model selection for autoregressive models with innovations in the domain of attraction of a stable law indexed by α ∈ (0, 2). We show that by combining the least absolute deviation (LAD) loss function and the adaptive lasso penalty, we can consistently identify the true model; at the same time, the resulting coefficient estimator converges at a rate of n^(−1/α). The proposed approach gives a unified variable selection procedure for both finite- and infinite-variance autoregressive models. While automatic smoothing parameter selection for nonparametric function estimation has been extensively researched for independent data, it is much less so for clustered and longitudinal data. Although leave-subject-out cross-validation (CV) has been widely used, its theoretical properties are unknown and its minimization is computationally expensive, especially when there are multiple smoothing parameters. Focusing on penalized modeling methods, we show that leave-subject-out CV is optimal in the sense that its minimization is asymptotically equivalent to the minimization of the true loss function. We develop an efficient Newton-type algorithm to compute the smoothing parameters that minimize the CV criterion. Furthermore, we derive a simplification of the leave-subject-out CV, which leads to a more efficient algorithm for selecting the smoothing parameters. We show that the simplified version of the CV criterion is asymptotically equivalent to the unsimplified one and thus enjoys the same optimality property. This CV criterion also provides a completely data-driven approach to selecting the working covariance structure in generalized estimating equations for longitudinal data analysis. Our results are applicable to additive models, linear varying-coefficient models and nonlinear models with data from exponential families.
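A rough illustration of penalized LAD for autoregressive lag selection is sketched below using scikit-learn's QuantileRegressor at the median (the pinball loss at quantile 0.5 is the LAD loss, and its alpha parameter is an L1 penalty). The fixed penalty level, the simulated series and the absence of adaptive weights are simplifications of the adaptive-lasso procedure analyzed in the dissertation.

```python
# Sketch: LAD loss + L1 penalty for selecting lags of an AR(p) model.
import numpy as np
from sklearn.linear_model import QuantileRegressor

def lagged_design(y, p):
    """Stack lags 1..p as columns; rows are aligned so row t predicts y[t]."""
    X = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    return X, y[p:]

rng = np.random.default_rng(4)
n, p = 2000, 6
# Simulate an AR(2) series with heavy-tailed (Student-t, df=1.5) innovations.
eps = rng.standard_t(df=1.5, size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + eps[t]

X, target = lagged_design(y, p)
# quantile=0.5 gives median (LAD) regression; alpha controls the L1 shrinkage.
fit = QuantileRegressor(quantile=0.5, alpha=0.01, fit_intercept=True).fit(X, target)
print("estimated lag coefficients:", np.round(fit.coef_, 3))   # lags 3..6 should shrink toward 0
```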
57

Forecasting tourism demand for South Africa / Louw R.

Louw, Riëtte. January 2011 (has links)
Tourism is currently the third largest industry within South Africa. Many African countries, including South Africa, have the potential to achieve increased economic growth and development with the aid of the tourism sector. As tourism is a great earner of foreign exchange and also creates employment opportunities, especially low-skilled employment, it is identified as a sector that can aid developing countries to increase economic growth and development. Accurate forecasting of tourism demand is important due to the perishable nature of tourism products and services, yet little research on forecasting tourism demand in South Africa can be found. The aim of this study is to forecast tourism demand (international tourist arrivals) to South Africa by making use of different causal models and to compare the forecasting accuracy of these models. Accurate forecasts of tourism demand may assist policy-makers and business concerns with decisions regarding future investment and employment. An overview of South African tourism trends indicates that although domestic arrivals surpass foreign arrivals in terms of volume, foreign arrivals spend more in South Africa than domestic tourists. It was also established that tourist arrivals from Africa (including the Middle East) form the largest market of international tourist arrivals to South Africa. Africa is, however, not included in the empirical analysis, mainly due to data limitations. All the other markets, namely Asia, Australasia, Europe, North America, South America and the United Kingdom, are included as origin markets for the empirical analysis, and this study therefore focuses on intercontinental tourism demand for South Africa. A review of the literature identified several determinants of tourist arrivals, including income, relative prices, transport cost, climate, supply-side factors, health risks, political stability as well as terrorism and crime. Most researchers used tourist arrivals/departures or tourist spending/receipts as dependent variables in empirical tourism demand studies. The first approach used to forecast tourism demand is a single-equation approach, more specifically an Autoregressive Distributed Lag Model. The estimated relationship between the explanatory variables and the dependent variable was then used to produce ex post forecasts of tourism demand for South Africa from the six markets identified earlier. Secondly, under a system-of-equations approach, a Vector Autoregressive Model and a Vector Error Correction Model were estimated for each of the six markets. An impulse response analysis was undertaken with the Vector Error Correction Model to determine the effect of shocks in the explanatory variables on tourism demand; it was established that it takes on average three years for the effect on tourism demand to disappear. A variance decomposition analysis was also done using the Vector Error Correction Model to determine how each variable affects the percentage forecast variance of a given variable, and it was found that income plays an important role in explaining the percentage forecast variance of almost every variable. The Vector Autoregressive Model was used to estimate the short-run relationship between the variables and to produce ex post forecasts of tourism demand to South Africa from the six identified markets. 
The results showed that enhanced marketing can be undertaken in origin markets with a growing GDP in order to attract more arrivals from those areas, given the high long-run elasticity of real GDP per capita and its positive impact on tourist arrivals; it is mainly up to the origin countries to increase their income per capita. Focusing on infrastructure development and maintenance could contribute to an increase in future tourist arrivals. It is evident that arrivals from Europe might have a negative relationship with the number of hotel rooms available, since tourists from this region might prefer accommodation with a safari atmosphere such as bush lodges; investment in such accommodation facilities and their marketing to Europeans may contribute to an increase in arrivals from Europe. The real exchange rate also plays a role in the price competitiveness of the destination country; therefore, controlling inflation, rather than fixing the exchange rate, can be a way for South Africa to become more price competitive. Forecasting accuracy was tested by estimating the Mean Absolute Percentage Error, Root Mean Square Error and Theil's U of each model. A Seasonal Autoregressive Integrated Moving Average (SARIMA) model was estimated for each origin market as a benchmark to assess forecasting accuracy against this univariate time series approach. The results showed that the Seasonal Autoregressive Integrated Moving Average model achieved the most accurate predictions, whereas the Vector Autoregressive model forecasts were more accurate than the Autoregressive Distributed Lag Model forecasts. Policy-makers can use both the SARIMA and VAR models, which may generate more accurate forecast results and so provide better policy recommendations. / Thesis (M.Com. (Economics))--North-West University, Potchefstroom Campus, 2011.
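A compact sketch of the kind of model comparison described above is shown below: a VAR is fitted on a training window, an ex post forecast is produced for a hold-out window, and its MAPE is compared with a SARIMA benchmark. The variable names, lag and seasonal orders, toy data and the use of statsmodels are illustrative assumptions, not the study's exact specification.

```python
# Sketch: ex post forecasts of tourist arrivals from a VAR versus a SARIMA benchmark.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.statespace.sarimax import SARIMAX

def mape(actual, forecast):
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

rng = np.random.default_rng(5)
idx = pd.period_range("1995Q1", periods=80, freq="Q")
# Toy quarterly data: log arrivals, log real GDP per capita, log relative prices.
data = pd.DataFrame(
    np.cumsum(rng.normal(0, 0.05, size=(80, 3)), axis=0) + [10.0, 9.0, 0.0],
    index=idx, columns=["arrivals", "gdp", "rel_price"],
)
train, test = data.iloc[:-8], data.iloc[-8:]

var_res = VAR(train).fit(4)                                  # fixed lag order for the sketch
var_fc = var_res.forecast(train.values[-var_res.k_ar:], steps=8)[:, 0]

sarima = SARIMAX(train["arrivals"], order=(1, 1, 1), seasonal_order=(1, 0, 0, 4)).fit(disp=False)
sarima_fc = sarima.forecast(steps=8)

print("VAR MAPE:   ", round(mape(test["arrivals"].values, var_fc), 2))
print("SARIMA MAPE:", round(mape(test["arrivals"].values, sarima_fc.values), 2))
```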
59

Ανάπτυξη μεθόδων ανάλυσης ηλεκτροεγκεφαλογραφήματος με χρήση μοντέλων συνδεσιμότητας και μεγεθών εντροπίας / Development of electroencephalogram analysis methods using connectivity models and entropy measures

Γιαννακάκης, Γιώργος Α. 20 April 2011 (has links)
Σκοπός της παρούσας διδακτορικής διατριβής είναι η ανάπτυξη και η εφαρμογή εξελιγμένων αλγορίθμων ανάλυσης ηλεκτροεγκεφαλογραφήματος ηρεμίας (rest EEG) και προκλητών δυναμικών (ERP) για την εξαγωγή νευροφυσιολογικών συμπερασμάτων σχετικά με νευρολογικές/ψυχιατρικές ασθένειες. Οι τεχνικές που αναπτύσσονται εφαρμόζονται τόσο σε συνθετικά σήματα όσο και σε πραγματικά σήματα μαρτύρων και ατόμων με δυσλεξία, που υποβάλλονται στην ακουστική δοκιμασία Wechsler. Αρχικά μελετούνται τα συμβατικά χαρακτηριστικά προκλητών δυναμικών που αποτελούνται από τα πλάτη των κορυφώσεων και τους λανθάνοντες χρόνους πραγματοποίησής τους μετά το ερέθισμα. Μέσω στατιστικών αναλύσεων αναδεικνύεται ότι τα άτομα με δυσλεξία παρουσιάζουν σημαντικά μικρότερο πλάτος κορύφωσης N100 το οποίο μάλιστα συσχετίζεται με την απόδοση μνήμης. Επίσης, ο προσυνειδητός χρόνος απόκρισης στα ηχητικά ερεθίσματα παρουσιάζεται σε συγκεκριμένα ηλεκτρόδια σημαντικά παρατεταμένος σε άτομα με δυσλεξία. Οι ενεργειακές διαφοροποιήσεις στα φάσματα EEG/ERP προσφέρουν σημαντική πληροφορία σχετικά με το βαθμό ενεργοποίησης των διαφόρων περιοχών του εγκεφάλου. Η ανάλυση ενεργειακών διαφοροποιήσεων πραγματοποιείται στο πεδίο χρόνου-συχνότητας, αναδεικνύοντας χρονικές μεταβολές του φασματικού περιεχομένου. Στο πλαίσιο αυτό, αξιολογούνται συγκριτικά τεχνικές αναπαράστασης χρόνου-συχνότητας τόσο δεύτερης τάξης όσο και προσαρμοστικές. Ο αλγόριθμος matching pursuit αποδεικνύεται ιδιαίτερα αποτελεσματικός στη μείωση των διαγώνιων όρων και στην ανάδειξη ενεργειακών κορυφών. Για τη στατιστική αποτίμηση των ενεργειακών διαφορών στις ζώνες συχνοτήτων δ (0-4 Hz), θ (5-7 Hz), α (8-13 Hz), β (14-30 Hz), προτείνεται μεθοδολογία βασισμένη στο συνδυασμό μεθόδων κανονικοποίησης και διόρθωσης πολλαπλών συγκρίσεων. Η ύπαρξη σημαντικών ενεργειακών διαφοροποιήσεων ενδεχόμενα είναι απόρροια του διαφορετικού τρόπου λειτουργικής συνδεσιμότητας μεταξύ των δύο μελετούμενων ομάδων (μαρτύρων, ατόμων με δυσλεξία). Για το σκοπό αυτό, υπολογίζονται μεγέθη συνδεσιμότητας και αιτιότητας μεταξύ ηλεκτροεγκεφαλογραφικών καταγραφών, με χρήση του μοντέλου πολλαπλής παλινδρόμησης σε συνδυασμό με τις μεθόδους εκτίμησης Yule-Walker, Burg και Least Squares, καταδεικνύοντας την ανωτερότητα των δύο τελευταίων όσον αφορά στην ακρίβεια πρόβλεψης. Μετά από εκτεταμένη συγκριτική αξιολόγηση των μεγεθών αιτιότητας, προτείνεται ένα νέο μέγεθος ανάδειξης των άμεσων ροών δραστηριότητας, το οποίο βασίζεται στο συνδυασμό της μη κανονικοποιημένης κατευθυνόμενης συνάρτησης μεταφοράς και της μερικής κατευθυνόμενης συμφωνίας. Το μέγεθος αυτό αποδεικνύεται ιδιαίτερα αποδοτικό στη μείωση ψευδών ή μη άμεσων ροών και παρουσιάζει φασματικές ιδιότητες παρόμοιες με αυτές των εμπλεκόμενων κυματομορφών. Η εφαρμογή του σε ηλεκτροεγκεφαλογράφημα ηρεμίας, όπου ικανοποιείται η συνθήκη στασιμότητας, οδηγεί στην ανάδειξη διαφοροποιήσεων σε συγκεκριμένες ροές δραστηριότητας. Στην περίπτωση μη στάσιμων χρονοσειρών, όπως είναι τα προκλητά δυναμικά, χρησιμοποιείται δυναμικό μοντέλο πολλαπλής παλινδρόμησης για την εκτίμηση των μεγεθών σύζευξης. Μελετάται η ικανότητας αναπαράστασης γρήγορα μεταβαλλόμενων αιτιακών σχέσεων, με χρήση τόσο της προσέγγισης μικρού χρονικού παραθύρου όσο και προσαρμοζόμενων φίλτρων Kalman. Η μελέτη περιλαμβάνει την επίδραση του επιπέδου θορύβου, του συντελεστή προσαρμογής και της χρονικής μεταβολής των συνδέσεων του προτύπου συνδεσιμότητας. 
Το φίλτρο Kalman αποδεικνύεται ιδιαίτερα ακριβές στην εκτίμηση της χρονικής εξέλιξης των συντελεστών του μοντέλου τόσο σε συνθετικά όσο και σε πραγματικά ηλεκτροεγκεφαλογραφικά δεδομένα. Επιπλέον, μελετήθηκε η προβλεψιμότητα/πολυπλοκότητα των χρονοσειρών, με χρήση μεγεθών φασματικής και προσεγγιστικής εντροπίας. Η φασματική εντροπία και οι παραλλαγές της αποτελούν μεγέθη που αναδεικνύουν τη φασματική πολυπλοκότητα μιας χρονοσειράς και σχετίζονται με φαινόμενα συγχρονισμού και επικράτησης συγκεκριμένων ζωνών συχνοτήτων. Επειδή χρειάζεται να μελετηθεί η χρονική εξέλιξη της πολυπλοκότητας αυτής, τα μεγέθη αυτά υπολογίζονται τόσο με χρήση μετασχηματισμού κυματιδίου όσο και με χρήση βέλτιστου πυρήνα, καταδεικνύοντας την ανωτερότητα του τελευταίου στο διαχωρισμό μεταξύ των δύο ομάδων (μαρτύρων, ατόμων με δυσλεξία). Η αναπαράσταση με χρήση βέλτιστου πυρήνα επιτρέπει την προσαρμογή του πυρήνα σε κάθε υπό ανάλυση σήμα, κάτι το οποίο έχει ιδιαίτερη σημασία σε περιπτώσεις που παρατηρείται έντονη διακύμανση μεταξύ των καταγραφών. Τέλος, μέσω της προσεγγιστικής εντροπίας μελετάται η ύπαρξη όμοιων προτύπων παρατηρήσεων κατά μήκος των χρονοσειρών τόσο σε συγκεκριμένα ηλεκτρόδια όσο και μεταξύ ηλεκτροδίων. Οι μέθοδοι που παρουσιάζονται στο πλαίσιο της παρούσας διδακτορικής διατριβής συμβάλλουν στην πιο αντικειμενική και αξιόπιστη μελέτη συγχρονισμού, αιτιακών σχέσεων και πολυπλοκότητας κατά την ανάλυση ηλεκτροεγκεφαλογραφικών καταγραφών. / The purpose of the present Ph.D. thesis is to develop and apply advanced algorithms for EEG/ERP signal analysis in order to study neurophysiological alterations associated with dyslexia. The used methods aim at a reliable analysis of synchronization, causal connectivity and complexity of EEG/ERP signals and are evaluated on both synthetic and real EEG/ERP signals of dyslexics and controls, acquired during Wechsler auditory test. First, the conventional components of ERP waveforms (peak amplitudes, latencies) are studied. Statistical analysis points out that dyslexics’ signals present significantly lower N100 amplitudes which are known to be associated with memory performance. An important parameter in dyslexia is the pre-attentive reaction time to auditory stimuli which is reflected through P50 latency and is found to be significantly prolonged at specific electrodes. Energy differentiations in time-frequency between the two groups (dyslexics and controls) are examined, enabling study of the temporal changes of ERP content. Various second order and adaptive time-frequency methods are comparatively assessed in terms of their accuracy in representing temporally changing spectra. Matching pursuit is proved to be quite effective in cross terms suppression and representation of energy peaks. Significant energy differentiations at delta (0-4 Hz), theta (5-7 Hz), alpha (8-13 Hz) and beta (14-30 Hz) frequency bands are detected, through a methodology of statistical evaluation based on normalization and multiple comparisons correction methods. The presence of significant energy differentiations may be the result of differing functional connectivity patterns between the two groups (controls, dyslexics). In order to study causal connectivity patterns, the multivariate autoregressive model is estimated using the Yule-Walker, Burg and Least Squares methods, with Burg and Least Squares proved to provide superior performance in terms of prediction error. 
A new measure for the estimation of direct causal interactions is proposed, based on the combination of the full-frequency directed transfer function and the partial directed coherence; it exhibits spectral properties similar to those of the involved signals and increased efficiency in suppressing false and non-direct flows. Study of rest EEG connectivity patterns by means of the new connectivity measure revealed differentiations in specific activity flows between the two groups under study (controls and dyslexics). In order to calculate coupling measures of non-stationary signals, like ERPs, the dynamic autoregressive model is used, and its ability to accurately represent rapid changes of causal interactions is assessed using short-window and adaptive Kalman filter approaches. The superiority of the Kalman filter approach, in terms of the accuracy of the estimated autoregressive parameters, is demonstrated on both synthetic and real EEG/ERP signals. Furthermore, the predictability/complexity of EEG/ERP time series of dyslexics versus controls was studied using measures of spectral and approximate entropy. Spectral entropy and its modifications quantify the spectral complexity of a time series and are related to synchronization and to the dominance of specific frequency bands. In order to study the temporal evolution of the signals' spectral complexity, wavelet transform and optimal-kernel approaches were used, and the latter proved superior in discriminating the two groups. The optimal-kernel representation adapts the kernel to each analyzed signal, a property that is quite important when analyzing data characterized by intense variability. Finally, through approximate entropy, the presence of differentiations in the predictability of EEG time series for single electrodes or pairs of electrodes is studied, demonstrating that dyslexics' signals are characterized by more predictable patterns.
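For readers unfamiliar with these spectral connectivity measures, the sketch below computes an ordinary partial directed coherence (PDC) from a multivariate autoregressive model fitted by least squares with statsmodels' VAR; it implements the standard PDC definition, not the combined measure proposed in the thesis, and the two-channel toy system is an assumption for illustration.

```python
# Sketch: ordinary partial directed coherence (PDC) from a least-squares MVAR fit.
import numpy as np
from statsmodels.tsa.api import VAR

def pdc(coefs, freqs):
    """PDC[f, i, j]: normalized influence of channel j on channel i at frequency f.

    coefs has shape (p, k, k) with coefs[r] = A_{r+1} from x_t = sum_r A_r x_{t-r} + e_t.
    """
    p, k, _ = coefs.shape
    out = np.empty((len(freqs), k, k))
    for fi, f in enumerate(freqs):
        # A_bar(f) = I - sum_r A_r exp(-i 2 pi f r)
        A_bar = np.eye(k, dtype=complex)
        for r in range(p):
            A_bar -= coefs[r] * np.exp(-2j * np.pi * f * (r + 1))
        denom = np.sqrt((np.abs(A_bar) ** 2).sum(axis=0))    # column-wise normalization
        out[fi] = np.abs(A_bar) / denom
    return out

rng = np.random.default_rng(6)
# Two coupled toy channels: channel 1 drives channel 0 with a one-sample delay.
n = 5000
x = np.zeros((n, 2))
for t in range(1, n):
    x[t, 1] = 0.7 * x[t - 1, 1] + rng.normal()
    x[t, 0] = 0.5 * x[t - 1, 0] + 0.4 * x[t - 1, 1] + rng.normal()

res = VAR(x).fit(2)
freqs = np.linspace(0.01, 0.5, 50)                           # normalized frequencies
P = pdc(res.coefs, freqs)
print("mean PDC 1->0:", round(float(P[:, 0, 1].mean()), 2),
      " mean PDC 0->1:", round(float(P[:, 1, 0].mean()), 2))
```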
60

Abordagem Bayesiana do modelo AR(1) para dados em painel: uma aplicação em dados temporais de microarray / Bayesian approach of AR(1) panel data model: application in microarray time series data

Morais, Telma Suely da Silva 05 December 2008 (has links)
We consider a Bayesian analysis of the first-order autoregressive, AR(1), panel data model using the exact likelihood function, a comparative analysis of prior distributions, and predictive distributions of future observations. The efficiency of the methodology was evaluated in a simulation study using three priors, represented by different Generalized Beta distributions: symmetric, asymmetric and flat. We applied the proposed methodology to real microarray time-series data from HeLa cells. The forecast of gene expression at one future time point showed high efficiency. / Considerou-se uma análise Bayesiana do modelo auto-regressivo de primeira ordem, AR(1), para dados em painel, de forma a utilizar a função de verossimilhança exata, a análise de comparação de distribuições a priori e a obtenção de distribuições preditivas de dados futuros. A eficiência da metodologia proposta foi avaliada mediante um estudo de simulação, no qual a distribuição Beta Generalizada foi usada para representar 3 diferentes prioris: simétrica, assimétrica e constante. Realizou-se uma aplicação em dados reais de expressão gênica temporal de células HeLa gerados por microarray. Os resultados mostraram alta eficiência na previsão da expressão gênica para um instante futuro.
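As a rough illustration of a Bayesian AR(1) model for panel data, the sketch below pools several short series (e.g., genes) that share the autoregressive coefficient and noise scale, using PyMC. The uniform prior stands in for the Generalized Beta priors compared in the dissertation, and the likelihood conditions on each unit's first observation rather than using the exact likelihood; both are assumptions of this example.

```python
# Sketch: Bayesian AR(1) for panel data with a common autoregressive coefficient,
# conditioning on the first observation of each unit (not the exact likelihood
# used in the dissertation) and a Uniform prior instead of a Generalized Beta prior.
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
n_units, n_times, true_phi = 20, 12, 0.6
panel = np.zeros((n_units, n_times))
for i in range(n_units):
    for t in range(1, n_times):
        panel[i, t] = true_phi * panel[i, t - 1] + rng.normal(0, 0.5)

with pm.Model() as model:
    phi = pm.Uniform("phi", lower=-1.0, upper=1.0)      # stationarity-constrained AR coefficient
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    mu = phi * panel[:, :-1]                             # conditional mean for t = 1..T-1
    pm.Normal("obs", mu=mu, sigma=sigma, observed=panel[:, 1:])
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False, random_seed=7)

print(idata.posterior["phi"].mean().item())              # posterior mean of phi (~0.6 expected)
```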
