61

Artificial Neural Networks for Financial Time Series Prediction

Malas, Dana January 2023 (has links)
Financial market forecasting is a challenging and complex task due to the market's sensitivity to political, economic, and social factors. However, recent advances in machine learning and computation technology have led to increased interest in using deep learning to forecast financial data. On the one hand, the famous efficient market hypothesis states that the market is so efficient that no one can consistently benefit from it, and the random walk theory suggests that asset prices are unpredictable from historical data. On the other hand, previous research has shown that financial time series can be forecasted to some extent using artificial neural networks (ANNs). Despite being a relatively new addition to financial research, and less studied than traditional models such as moving averages and linear regression, ANNs have been shown to outperform those traditional models to some extent. Hence, considering the efficient market hypothesis and the random walk theory, there is a knowledge gap as to whether neural networks can be used for financial time series prediction. This paper explores the use of ANNs, specifically recurrent neural networks, to predict financial time series data using a long short-term memory (LSTM) network model. The study employs an experimental research strategy to construct and test an LSTM model for predicting financial time series data, with the aim of examining its performance and evaluating it relative to other models and methods. To evaluate its performance, evaluation metrics are computed and the model is compared with a constructed simple moving average (SMA) model as well as with models in existing studies. The paper also explores the application and processing of transformed financial data, where it was found that achieving stationarity by data transformation was not necessary for the LSTM model to perform better. The study also found that the LSTM model outperformed the SMA model when hyperparameters were set to capture long-term dependencies. However, in the short term, the SMA model outperformed the LSTM model.
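The comparison described above, an LSTM forecaster judged against a simple moving average baseline, can be illustrated with a minimal sketch. The synthetic price series, window length, network size, and training settings below are illustrative assumptions, not the configuration used in the thesis.

```python
# Minimal sketch: one-step-ahead price prediction with an LSTM vs. an SMA baseline.
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=20):
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)        # (samples, timesteps, 1)

prices = np.cumsum(np.random.randn(500)) + 100.0      # placeholder for real closing prices
X, y = make_windows(prices)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=20, verbose=0)

lstm_pred = model.predict(X[split:], verbose=0).ravel()
sma_pred = X[split:, :, 0].mean(axis=1)               # 20-day simple moving average forecast

rmse = lambda p: np.sqrt(np.mean((p - y[split:]) ** 2))
print(f"LSTM RMSE: {rmse(lstm_pred):.3f}  SMA RMSE: {rmse(sma_pred):.3f}")
```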
62

Efficient Sampling Plans for Control Charts When Monitoring an Autocorrelated Process

Zhong, Xin 15 March 2006 (has links)
This dissertation investigates the effects of autocorrelation on the performance of various sampling plans for control charts in detecting special causes that may produce sustained or transient shifts in the process mean and/or variance. Observations from the process are modeled as a first-order autoregressive process plus a random error. Combinations of two Shewhart control charts and combinations of two exponentially weighted moving average (EWMA) control charts, based on both the original observations and the process residuals, are considered. Three types of sampling plans are investigated: samples of n = 1, samples of n > 1 observations taken together at one sampling point, and samples of n > 1 observations taken at different times. In comparing these sampling plans it is assumed that the sampling rate, in terms of the number of observations per unit time, is fixed, so taking samples of n = 1 allows more frequent plotting. The best overall performance of sampling plans for control charts in detecting both sustained and transient shifts in the process is obtained by taking samples of n = 1 and using an EWMA chart combination with an observations chart for the mean and a residuals chart for the variance. The Shewhart chart combination with the best overall performance, though inferior to the EWMA chart combination, is based on samples of n > 1 taken at different times, with an observations chart for the mean and a residuals chart for the variance. / Ph. D.
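As a rough illustration of the kind of monitoring studied here, the sketch below applies an EWMA chart to observations from an AR(1)-plus-error process with a sustained mean shift. The smoothing constant, control-limit width, and process parameters are illustrative assumptions rather than the dissertation's settings.

```python
# Minimal sketch: EWMA control chart on an autocorrelated (AR(1) + error) process.
import numpy as np

rng = np.random.default_rng(1)
phi, sigma_ar, sigma_err, n = 0.7, 1.0, 0.5, 300
x = np.zeros(n)
for t in range(1, n):                          # first-order autoregressive process
    x[t] = phi * x[t - 1] + rng.normal(0, sigma_ar)
obs = x + rng.normal(0, sigma_err, n)          # plus random (measurement) error
obs[200:] += 1.5                               # sustained mean shift as a "special cause"

lambda_, L = 0.2, 3.0
mu0, sigma0 = obs[:100].mean(), obs[:100].std(ddof=1)   # in-control estimates
z = mu0
for t, xt in enumerate(obs):
    z = lambda_ * xt + (1 - lambda_) * z
    width = L * sigma0 * np.sqrt(lambda_ / (2 - lambda_) *
                                 (1 - (1 - lambda_) ** (2 * (t + 1))))
    if abs(z - mu0) > width:
        print(f"EWMA signal at sample {t}")
        break
```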
63

Individual Response to Botulinum Toxin Therapy in Movement Disorders: A Time Series Analysis Approach

Leplow, Bernd, Pohl, Johannes, Wöllner, Julia, Weise, David 27 October 2023 (has links)
On a group level, satisfaction with botulinum neurotoxin (BoNT) treatment in neurological indications is high. However, it is well known that a relevant proportion of patients may not respond as expected. The aim of this study is to evaluate the BoNT treatment outcome on an individual level using a statistical single-case analysis as an adjunct to traditional group statistics. The course of the daily perceived severity of symptoms across a BoNT cycle was analyzed in 20 cervical dystonia (CD) and 15 hemifacial spasm (HFS) patients. A parametric single-case autoregressive integrated moving average (ARIMA) time series analysis was used to detect individual responsiveness to BoNT treatment. Overall, both CD and HFS patients responded significantly to BoNT treatment, with a gradual worsening of symptom intensities towards BoNT reinjection. However, only 8/20 CD patients (40%) and 5/15 HFS patients (33.3%) displayed the expected U-shaped curve of BoNT efficacy across a single treatment cycle. CD (but not HFS) patients who followed the expected outcome course had longer BoNT injection intervals, showed a better match to objective symptom assessments, and were characterized by a stronger belief in their ability to control their somatic symptoms (i.e., an internal medical locus of control). In addition to standard evaluation procedures, patients who do not follow the mean course-of-treatment effect should be identified. Thus, ARIMA single-case time series analysis seems to be an appropriate addition to clinical treatment studies for detecting individual courses of subjective symptom intensities.
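A single-case analysis of this kind can be sketched as follows: fit an ARIMA model to one patient's daily symptom ratings over an injection cycle and inspect whether the fitted trend indicates re-worsening towards reinjection. The ARIMA order, the synthetic U-shaped series, and the 90-day cycle length are illustrative assumptions, not the study's specification.

```python
# Minimal sketch: a single-case ARIMA fit to one patient's daily symptom severity
# across a BoNT injection cycle.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

days = np.arange(90)                                         # one injection cycle
severity = 5 - 3 * np.exp(-((days - 20) / 15) ** 2)          # dip after injection, then return
severity += np.random.default_rng(0).normal(0, 0.3, days.size)

res = ARIMA(severity, order=(1, 0, 0), trend="ct").fit()     # AR(1) plus constant and trend
print(res.params)             # a positive trend term suggests worsening towards reinjection
print(res.forecast(steps=7))  # short-horizon forecast of perceived severity
```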
64

Profitability of Technical Trading Strategies in the Swedish Equity Market / Lönsamhet för tekniska handelsstrategier på den svenska aktiemarknaden

Alam, Azmain, Norrström, Gustav January 2021 (has links)
This study aims to see whether it is possible to generate abnormal returns in the Swedish stock market through the use of three different trading strategies based on technical indicators. As the indicators are based on historical price data only, the study assumes weak-form market efficiency according to the efficient market hypothesis. The study is conducted using daily prices for OMX Stockholm PI and STOXX 600 Europe over the period 1 January 2010 to 31 December 2019. Trading positions have been taken in the OMX Stockholm PI index, while STOXX 600 Europe has been used to represent the market portfolio. Abnormal returns are defined as Jensen's α in a Fama-French three-factor model with the Carhart extension. The period was characterised by rising prices (a bull market), which may have had an impact on the results. Furthermore, a higher frequency of rebalancing for the Fama-French and Carhart model could also increase the quality of the results. The results indicate that all three strategies generated abnormal returns during the period.
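Jensen's α in a Fama-French three-factor model with the Carhart momentum extension is simply the intercept of an OLS regression of the strategy's excess returns on the four factor returns. The sketch below shows this; the CSV file and column names are hypothetical.

```python
# Minimal sketch: Jensen's alpha from a Fama-French + Carhart four-factor regression.
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("strategy_and_factors.csv", parse_dates=["date"], index_col="date")
excess_ret = data["strategy_ret"] - data["rf"]           # strategy excess return
factors = data[["mkt_rf", "smb", "hml", "mom"]]           # market, size, value, momentum

model = sm.OLS(excess_ret, sm.add_constant(factors)).fit()
alpha = model.params["const"]                             # Jensen's alpha (per day)
print(model.summary())
print(f"Annualised alpha ~ {alpha * 252:.4f}, p-value = {model.pvalues['const']:.3f}")
```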
65

An online-integrated condition monitoring and prognostics framework for rotating equipment

Alrabady, Linda Antoun Yousef January 2014 (has links)
Detecting abnormal operating conditions, which will lead to faults developing later, has important economic implications for industries trying to meet their performance and production goals. It is unacceptable to wait for failures that have potential safety, environmental, and financial consequences. Moving from a "reactive" strategy to a "proactive" strategy can improve critical equipment reliability and availability while constraining maintenance costs, reducing production deferrals, and decreasing the need for spare parts. Once a fault initiates, predicting its progression and deterioration can enable timely interventions without risk to personnel safety or to equipment integrity. This work presents an online-integrated condition monitoring and prognostics framework that addresses the above issues holistically. The proposed framework aligns fully with ISO 17359:2011 and derives from the I-P and P-F curves. Depending upon the running state of the machine with respect to its I-P and P-F curves, an algorithm will do one of the following: (1) predict the ideal behaviour and any departure from the normal operating envelope using a combination of the Evolving Clustering Method (ECM), a normalised fuzzy weighted distance, and a tracking signal method; (2) identify the cause of the departure through an automated diagnostics system using a modified version of ECM for classification; (3) predict the short-term progression of the fault using a modified version of the Dynamic Evolving Neuro-Fuzzy Inference System (DENFIS), called here MDENFIS, and a tracking signal method; (4) predict the long-term progression of the fault (prognostics) using a combination of the Autoregressive Integrated Moving Average (ARIMA) and Empirical Mode Decomposition (EMD) for predicting future input values, and MDENFIS for predicting the long-term progression of the fault (output). The proposed model was tested and compared against other models in the literature using benchmarks and field data. This work demonstrates four noticeable improvements over previous methods: (1) enhanced prediction accuracy in testing, (2) comparable if not better processing time, (3) the ability to detect sudden changes in the process, and finally (4) the ability to identify and isolate the problem source with high accuracy.
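One building block mentioned above, the tracking signal used to flag departure from the normal operating envelope, can be sketched as a running ratio of cumulative forecast error to mean absolute deviation. The threshold of 4 and the synthetic residuals below are illustrative assumptions, not the framework's actual parameters.

```python
# Minimal sketch: a tracking-signal check of forecast residuals for sustained bias.
import numpy as np

def tracking_signal(actual, predicted, threshold=4.0):
    errors = np.asarray(actual) - np.asarray(predicted)
    cum_err, mad = 0.0, 1e-9
    for t, e in enumerate(errors, start=1):
        cum_err += e
        mad += (abs(e) - mad) / t            # running mean absolute deviation
        if abs(cum_err / mad) > threshold:   # sustained bias implies abnormal behaviour
            return t
    return None

rng = np.random.default_rng(2)
healthy = rng.normal(0, 1, 200)              # residuals of the "ideal behaviour" model
faulty = np.concatenate([healthy, rng.normal(1.0, 1, 100)])   # drift after sample 200
print("Departure flagged at sample:", tracking_signal(faulty, np.zeros_like(faulty)))
```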
66

探討技術分析在臺灣股票市場的獲利性:以臺灣中型100成分股為例 / The profitability of technical analysis: evidence from TWSE mid-cap 100 Index constituents

吳晉敏 Unknown Date (has links)
Technical analysis has been widely studied by researchers and widely used by market participants. The most common and popular technical trading rule is the moving average, since it is mathematically well defined and used by many analysts. This study examines the profitability of technical analysis for FTSE TWSE Mid-Cap Taiwan 100 Index constituents under the assumption of no transaction costs. It uses three strategies (a price strategy, a price-and-volume strategy, and a volume-weighted price (PV) strategy) and fifteen moving average rules, each combining a five-day short-term average with a long-term average of 10, 50, 100, 150, or 200 days, to generate buy and sell signals, and then computes the average return per trade, the average holding days per trade, and the hit ratio (the proportion of trades with positive returns) to assess profitability. The volume-weighted variant is motivated by the belief that prices accompanied by high volumes are more meaningful than those with low volumes. Although transaction costs are not considered, the reported average return per trade allows readers to compare the results against actual transaction costs in the Taiwan stock market. All strategies and trading rules are applied both to all constituents of the index, without regard to industry classification, and to each major industry group of the constituents based on the ICB classification, in order to see whether specific trading rules perform better in specific industries. The results are not optimistic. Overall, the price-and-volume strategy achieves the best hit ratio, but its highest value is barely 50%, meaning that only about half of its trades earn positive returns. The PV strategy, which trades on a volume-weighted price moving average, performs no better than the simple price moving average rule. It can therefore be said that technical analysis of this kind can hardly deliver good performance on these stocks.
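The core mechanics, a short/long moving-average crossover with an optional volume-weighted long average, can be sketched as follows. The CSV file, column names, and the 5/50-day pair are assumptions for illustration; the study tests fifteen such rules and ignores transaction costs.

```python
# Minimal sketch: a 5-day / 50-day moving-average crossover with a volume-weighted option.
import pandas as pd

df = pd.read_csv("stock_daily.csv", parse_dates=["date"], index_col="date")  # close, volume

short = df["close"].rolling(5).mean()
long_ = df["close"].rolling(50).mean()
# Volume-weighted alternative: prices with higher volume carry more weight.
# vw_long can replace long_ below to form the volume-weighted rule.
vw_long = (df["close"] * df["volume"]).rolling(50).sum() / df["volume"].rolling(50).sum()

signal = (short > long_).astype(int)          # 1 = hold a long position, 0 = stay out
daily_ret = df["close"].pct_change()
strategy_ret = daily_ret * signal.shift(1)    # act on the previous day's signal

print(f"Crossover return: {(1 + strategy_ret.fillna(0)).prod() - 1:.2%}")
print(f"Buy-and-hold:     {(1 + daily_ret.fillna(0)).prod() - 1:.2%}")
```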
67

以狀態轉換模型模擬最適移動平均線組合 / Simulation of optimal moving average combination based on regime switching model

黃致穎, Huang, Chih Ying Unknown Date (has links)
Academic finance does not accept methods such as technical analysis, holding that stock prices already fully reflect available information and that past price history cannot be used to predict the future. Practitioners and ordinary investors, however, often use technical analysis as the basis for their trading decisions. In fact, simulated trading on historical data shows that many technical analysis rules can earn better returns than buy-and-hold in certain markets, stocks, and periods. Interestingly, no single trading rule or particular set of parameters fully reproduces the same results out of sample. Whether the excess returns obtained from technical analysis reflect an effective mechanism or pure luck is therefore a question on which much of the technical analysis literature says very little. This thesis uses a regime switching model to capture the dynamics of the Taiwan Capitalization Weighted Stock Index (TAIEX), dividing price movements into two states, rising and falling, and estimating the market parameters: the speeds of rising and falling, the standard deviations of those speeds, and the transition probabilities. The estimated market parameters are then used as the basis for simulation, which shows that, even in a purely random environment, there are combinations of market parameters for which moving average trading strategies clearly outperform a buy-and-hold strategy. A sensitivity analysis is used to show how a change in each individual market parameter affects trading performance. Finally, market parameters are estimated for the TAIEX over 2001 to 2010 to find the moving average combination that is optimal at the time, with parameters re-estimated each quarter, and trading is simulated using actual closing prices. The results show that the moving average strategy does provide better returns than buy-and-hold and reduces the variability of annual returns.
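A two-state regime-switching fit of the kind described can be sketched with statsmodels, which estimates per-regime means, variances, and transition probabilities from index returns. The data file and column name are assumptions; the thesis's estimation details may differ.

```python
# Minimal sketch: two-state Markov regime-switching model on index returns.
import pandas as pd
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

prices = pd.read_csv("taiex_daily.csv", parse_dates=["date"], index_col="date")["close"]
returns = prices.pct_change().dropna()

model = MarkovRegression(returns, k_regimes=2, trend="c", switching_variance=True)
res = model.fit()

print(res.summary())                                # per-regime mean, variance, transitions
bull_prob = res.smoothed_marginal_probabilities[0]  # smoothed P(regime 0) for each day
print(bull_prob.tail())
```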
68

Statistical signal processing in sensor networks with applications to fault detection in helicopter transmissions

Galati, F. Antonio Unknown Date (has links) (PDF)
In this thesis two different problems in distributed sensor networks are considered. Part I involves optimal quantiser design for decentralised estimation of a two-state hidden Markov model with dual sensors. The notion of optimality for quantiser design is based on minimising the probability of error in estimating the hidden Markov state. Equations for the filter error are derived for the continuous (unquantised) sensor outputs (signals), which are used to benchmark the performance of the quantisers. Minimising the probability of filter error to obtain the quantiser breakpoints is a difficult problem; therefore, an alternative method is employed. The quantiser breakpoints are obtained by maximising the mutual information between the quantised signals and the hidden Markov state. This method is known to work well in the single-sensor case. Cases with independent and correlated noise across the signals are considered. The method is then applied to Markov processes with Gaussian signal noise and further investigated through simulation studies. Simulations involving both independent and correlated noise across the sensors are performed, and a number of interesting new theoretical results are obtained, particularly in the case of correlated noise. In Part II, the focus shifts to the detection of faults in helicopter transmission systems. The aim of the investigation is to determine whether the acoustic signature can be used for fault detection and diagnosis. To investigate this, statistical change detection algorithms are applied to acoustic vibration data obtained from the main rotor gearbox of a Bell 206 helicopter, which is run at high load under test conditions.
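The breakpoint-selection idea in Part I can be illustrated for a single one-bit quantiser: sweep the breakpoint and keep the value that maximises the mutual information between the quantised output and the binary hidden state. The Gaussian means, noise level, and state probabilities below are illustrative assumptions.

```python
# Minimal sketch: choose a one-bit quantiser breakpoint by maximising mutual information
# between the quantised observation and a binary hidden state, under Gaussian noise.
import numpy as np
from scipy.stats import norm

mu = np.array([0.0, 2.0])        # state-conditional signal means
sigma = 1.0                      # Gaussian sensor noise
p_state = np.array([0.5, 0.5])   # probabilities of the two hidden states

def mutual_information(breakpoint):
    p1_given_s = 1.0 - norm.cdf(breakpoint, loc=mu, scale=sigma)   # P(output=1 | state)
    p_q = np.array([1 - p1_given_s, p1_given_s])                   # rows: output, cols: state
    joint = p_q * p_state                                          # P(output, state)
    marg_q = joint.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (marg_q * p_state))
    return np.nansum(terms)

grid = np.linspace(-2, 4, 601)
best = grid[np.argmax([mutual_information(b) for b in grid])]
print(f"MI-optimal breakpoint: {best:.3f}")   # close to the midpoint of the two means here
```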
69

Multi Look-Up Table Digital Predistortion for RF Power Amplifier Linearization

Gilabert Pinal, Pere Lluís 12 February 2008 (has links)
This Ph.D. thesis addresses the design of a new Digital Predistortion (DPD) linearizer capable of compensating the unwanted nonlinear and dynamic behavior of power amplifiers (PAs). The distinctive characteristics of this new adaptive DPD are its deduction from a Nonlinear Auto-Regressive Moving Average (NARMA) PA behavioral model and its particular multi look-up table (LUT) architecture, which allows its implementation in a Field Programmable Gate Array (FPGA) device. The DPD linearizer presented in this thesis operates at baseband, thus becoming independent of the final RF frequency band and making it suitable for multiband or reconfigurable scenarios. Moreover, the proposed DPD takes PA memory effects into account, which represents a step forward in overcoming the classical limitations of memoryless predistorters. Compared to more computationally complex DPDs with dynamic compensation, such as Time-Delayed Neural Networks (TDNN), this new DPD takes advantage of the recursive nature of the NARMA structure to reduce the number of LUTs required to compensate memory effects in PAs. Furthermore, its parallel multi-LUT architecture is scalable; that is, it permits enabling or disabling the contribution of specific LUTs depending on the dynamics presented by a particular PA. In a first approach, it is necessary to identify a NARMA PA behavioral model. The extraction of PA behavioral models for DPD linearization purposes is carried out by means of input and output complex envelope signal observations. One of the major advantages of the NARMA structure is its capacity to deal with the trade-off between computational complexity and accuracy in PA behavioral modeling. To reinforce this compromise, heuristic search algorithms such as Simulated Annealing or Genetic Algorithms are used to find the best sparse delays that permit accurately reproducing the PA's nonlinear dynamic behavior. However, due to the recursive nature of the NARMA model, a stability test becomes a prerequisite before advancing towards DPD linearization. Once the PA model is identified and its stability verified, the DPD function is extracted by applying a predictive predistortion method. This identification method relies only on the PA NARMA model and consists in adaptively forcing the PA to behave as a linear device. Focusing on the DPD implementation, it is possible to map the predistortion function onto an FPGA, but to fulfil this objective it is first necessary to express the predistortion function as a combined set of LUTs. In order to store the DPD function in an FPGA, it has to be stated in terms of parallel and cascaded Basic Predistortion Cells (BPCs), which are the fundamental building blocks of the NARMA-based DPD. A BPC is formed by a complex multiplier, a dual-port RAM memory block acting as a LUT, and an address calculator. The LUT contents are filled following a uniform spacing procedure, and their indexing is performed with the amplitude (modulus) of the signal's envelope. Finally, the DPD adaptation consists in monitoring the input-output data and performing frequent updates of the LUT contents that make up the BPCs. This adaptation process can be carried out in the same FPGA in charge of performing the DPD function, or alternatively can be performed by an external device (e.g., a DSP) on a different time scale than real-time operation. To support the theoretical design and to prove the linearization performance achieved by this new DPD, simulation and experimental results are provided. Moreover, some issues derived from practical experimentation, such as power consumption and efficiency, are also reported and discussed within this thesis.
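The LUT-and-complex-multiplier structure of a basic predistortion cell can be illustrated with a simplified baseband sketch. The toy memoryless PA model, table size, and gain-extraction loop below are illustrative assumptions and do not reproduce the NARMA-derived, multi-LUT DPD developed in the thesis.

```python
# Minimal sketch: one LUT-based predistortion cell on a complex baseband signal.
# The envelope magnitude addresses a uniformly spaced table of gains that multiply
# the input sample; the table is built to invert a toy AM/AM compression curve.
import numpy as np

A, N_ENTRIES = 0.05, 64                  # toy compression coefficient, LUT size
rng = np.random.default_rng(3)
x = (rng.normal(size=2000) + 1j * rng.normal(size=2000)) * 0.3   # baseband samples

def pa(v):                               # toy memoryless PA: gain compression (AM/AM only)
    return v * (1.0 - A * np.abs(v) ** 2)

max_env = np.abs(x).max()
centers = (np.arange(N_ENTRIES) + 0.5) * max_env / N_ENTRIES
gain = np.ones(N_ENTRIES)
for _ in range(50):                      # fixed-point inversion of the AM/AM curve
    gain = 1.0 / (1.0 - A * (centers * gain) ** 2)

def predistort(v):
    addr = np.minimum((np.abs(v) / max_env * N_ENTRIES).astype(int), N_ENTRIES - 1)
    return v * gain[addr]                # complex multiply by the table entry

mse_raw = np.mean(np.abs(pa(x) - x) ** 2)
mse_dpd = np.mean(np.abs(pa(predistort(x)) - x) ** 2)
print(f"MSE vs. ideal linear PA  raw: {mse_raw:.2e}   with DPD: {mse_dpd:.2e}")
```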
70

Value at Risk: A Standard Tool in Measuring Risk: A Quantitative Study on Stock Portfolio

Ofe, Hosea, Okah, Peter January 2011 (has links)
The role of risk management has gained momentum in recent years, most notably after the recent financial crisis. This thesis uses a quantitative approach to evaluate the theory of value at risk (VaR), which is considered a benchmark for measuring financial risk. The thesis makes use of both parametric and non-parametric approaches to evaluate the effectiveness of VaR as a standard tool for measuring the risk of a stock portfolio. This study applies the normal distribution, Student's t-distribution, historical simulation, and the exponentially weighted moving average at the 95% and 99% confidence levels to the stock returns of Sony Ericsson, the three-month Swedish Treasury bill (STB3M), and Nordea Bank. The evaluations of the VaR models are based on the Kupiec (1995) test. From a general perspective, the results of the study indicate that VaR as a proxy for risk measurement has some imprecision in its estimates. However, this imprecision is not the same across approaches. The results indicate that models which assume a normal return distribution perform worse at both confidence levels than models which allow for fatter tails or leptokurtic characteristics. Another noteworthy finding is that during periods of high volatility, such as the financial crisis of 2008, the imprecision of VaR estimates increases. For the parametric approaches, the t-distribution VaR estimates were accurate at the 95% confidence level, while the normal distribution approach produced inaccurate estimates at that level. However, neither approach was able to provide accurate estimates at the 99% confidence level. For the non-parametric approaches, the exponentially weighted moving average outperformed the historical simulation approach at the 95% confidence level, while at the 99% confidence level both approaches performed about equally. The results of this study thus question the reliability of VaR as a standard tool for measuring the risk of a stock portfolio. They also suggest that more research should be done to improve the accuracy of VaR approaches, given that the role of risk management in today's business environment is greater than ever. The study suggests that VaR should be complemented with other risk measures, such as extreme value theory and stress testing, and that more than one backtesting technique should be used to test the accuracy of VaR.
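The approaches compared in the thesis can be sketched compactly: compute a one-day VaR under the normal, historical simulation, and EWMA-volatility methods, then backtest each with the Kupiec proportion-of-failures test. The synthetic return series, the EWMA decay factor of 0.94, and the 95% level below are illustrative assumptions.

```python
# Minimal sketch: one-day 95% VaR under three approaches, plus a Kupiec POF backtest.
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(4)
returns = rng.standard_t(df=5, size=1000) * 0.01      # placeholder for daily returns
p = 0.05                                              # 1 - confidence level

var_normal = -(returns.mean() + norm.ppf(p) * returns.std(ddof=1))
var_hist = -np.quantile(returns, p)                   # historical simulation

lam, sig2 = 0.94, returns.var()                       # RiskMetrics-style EWMA variance
for r in returns:
    sig2 = lam * sig2 + (1 - lam) * r ** 2
var_ewma = -norm.ppf(p) * np.sqrt(sig2)

def kupiec_pof(returns, var_level, p):
    """Kupiec (1995) proportion-of-failures likelihood-ratio test."""
    x, T = int((returns < -var_level).sum()), len(returns)
    lr = -2 * ((T - x) * np.log(1 - p) + x * np.log(p)
               - (T - x) * np.log(1 - x / T) - x * np.log(x / T))
    return lr, 1 - chi2.cdf(lr, df=1)                 # reject the model if p-value < 0.05

for name, v in [("normal", var_normal), ("historical", var_hist), ("EWMA", var_ewma)]:
    lr, pval = kupiec_pof(returns, v, p)
    print(f"{name:10s} VaR {v:.4f}  Kupiec LR {lr:.2f}  p-value {pval:.3f}")
```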
