211

LSTM Neural Network Models for Market Movement Prediction

Li, Edwin January 2018 (has links)
Interpreting time-varying phenomena is a key challenge in the capital markets. Time series analysis using autoregressive methods has been carried out over the last couple of decades, often with good results. However, such methods sometimes fail to explain trends and cyclical fluctuations, which may be characterized by long-range dependencies or even dependencies between the input features. The purpose of this thesis is to investigate whether recurrent neural networks with LSTM cells can be used to capture these dependencies and ultimately serve as a complement for index trading decisions. Experiments are made on different setups of the S&P 500 stock index, and two distinct models are built, each an improvement on the previous one. The first model is a multivariate regression model, and the second is a multivariate binary classifier. The output of each model is used to reason about the future behavior of the index. The experiments show that, for the configuration provided, LSTM RNNs are unsuitable for predicting exact values of daily returns but give satisfactory results when used to predict the direction of the movement.
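A minimal sketch (not the thesis code) of the second model type described above: an LSTM binary classifier that maps a window of daily index features to the probability of an upward move the next day. The window length, feature count and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DirectionClassifier(nn.Module):
    """Binary classifier: does the index move up tomorrow?"""
    def __init__(self, n_features=5, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                         # x: (batch, window, n_features)
        _, (h_n, _) = self.lstm(x)                # h_n: (1, batch, hidden_size)
        return torch.sigmoid(self.head(h_n[-1]))  # P(next-day return > 0)

model = DirectionClassifier()
windows = torch.randn(32, 30, 5)                  # 32 windows of 30 trading days
prob_up = model(windows)                          # shape: (32, 1)
loss = nn.BCELoss()(prob_up, torch.ones(32, 1))   # toy targets: all "up" days
```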
212

Time series Forecast of Call volume in Call Centre using Statistical and Machine Learning Methods

Baldon, Nicoló January 2019 (has links)
A time series is a collection of data points gathered at regular intervals. Time series analysis explores the correlations over time and tries to model them in terms of trend and seasonality. One of the most relevant tasks in time series analysis is forecasting future values, which is fundamental in many real-world scenarios. Nowadays, many companies forecast using hand-written models or naive statistical models. Call centers are the front end of an organization, managing the relationship with its customers. A key challenge for call centers remains forecasting the call load and optimizing the schedule. The call load indicates the number of calls a call center receives. The call load forecast is mostly used to schedule the staff: call centers are interested in short-term forecasts to handle the unforeseen and optimize the staff schedule, and in long-term forecasts to hire or assign staff to other tasks. Machine learning has been applied to several fields with excellent results, and time series forecasting problems have recently gained high interest thanks to a new recurrent network, the Long Short-Term Memory. This thesis explores the capabilities of machine learning in modeling and forecasting call load time series characterized by a strong seasonality at both daily and hourly scale. We compare a Seasonal Artificial Neural Network (SANN) and a Long Short-Term Memory (LSTM) model with a Seasonal Autoregressive Integrated Moving Average (SARIMA) model, one of the most common statistical methods used by call centers. The primary metric used to evaluate the results is the Normalized Mean Squared Error (NMSE); the secondary is the Symmetric Mean Absolute Percentage Error (SMAPE), used to calculate the accuracy of the models. We carried out our experiments on three different datasets provided by Teleopti. The experimental results show SARIMA to be more accurate when forecasting at daily scale across the three datasets: it performs better than the Seasonal ANN and the LSTM with a limited number of data points. At hourly scale, the Seasonal ANN and the LSTM outperform SARIMA, showing robustness across a forecasting horizon of 160 points. Finally, SARIMA shows no correlation between the quality of the model and the number of data points, while both the SANN and the LSTM improve as the number of samples grows.
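For reference, a small sketch of the two evaluation metrics named in the abstract; the exact normalisation used in the thesis is not specified, so NMSE is assumed here to be the MSE divided by the variance of the observations, and the call volumes are made up.

```python
import numpy as np

def nmse(y_true, y_pred):
    """Mean squared error normalised by the variance of the observations."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_true - y_pred) / denom)

calls_true = np.array([120, 95, 130, 160])      # made-up hourly call volumes
calls_pred = np.array([110, 100, 125, 150])
print(nmse(calls_true, calls_pred), smape(calls_true, calls_pred))
```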
213

Polyphonic Music Instrument Detection on Weakly Labelled Data using Sequence Learning Models / Polyfonisk musikinstrumentdetektion på svagt märkta data med hjälp av sekvensinlärningsmodeller

Mukhedkar, Dhananjay January 2020 (has links)
Polyphonic or multiple music instrument detection is a difficult problem compared to detecting single or solo instruments in an audio recording. As music is time series data, it can be modelled using sequence learning methods within deep learning. Recently, temporal convolutional networks (TCN) have been shown to outperform conventional recurrent neural networks (RNN) on various sequence modelling tasks. Though there have been significant improvements in deep learning methods, data scarcity becomes a problem when training large-scale models. Weakly labelled data is an alternative, where a clip is annotated for the presence or absence of instruments without specifying the times at which an instrument is sounding. This study investigates how a TCN model compares to a Long Short-Term Memory (LSTM) model when trained on a weakly labelled dataset. The results showed successful training of both models along with generalisation on a separate dataset. The comparison showed that the TCN performed better than the LSTM, but only marginally. Therefore, from the experiments carried out it could not be conclusively established that the TCN is a better choice than the LSTM in the context of instrument detection, but it is certainly a strong alternative.
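A minimal sketch, not the thesis implementation, of how weak (clip-level) labels can supervise a sequence model: an LSTM produces per-frame instrument probabilities that are pooled into one clip-level prediction, so only clip-level annotations are needed for the loss. Dimensions and the max-pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class WeakInstrumentTagger(nn.Module):
    def __init__(self, n_mels=64, n_instruments=10, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
        self.frame_head = nn.Linear(2 * hidden, n_instruments)

    def forward(self, spec):                              # spec: (batch, frames, n_mels)
        h, _ = self.lstm(spec)
        frame_probs = torch.sigmoid(self.frame_head(h))   # per-frame instrument activity
        return frame_probs.max(dim=1).values              # pool to clip-level probabilities

model = WeakInstrumentTagger()
clips = torch.randn(8, 200, 64)                    # 8 clips of 200 spectrogram frames
weak_labels = torch.randint(0, 2, (8, 10)).float() # clip-level presence/absence
loss = nn.BCELoss()(model(clips), weak_labels)
```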
214

Long Document Understanding using Hierarchical Self Attention Networks

Kekuda, Akshay January 2022 (has links)
No description available.
215

Ambient-vibration-based Long-term SHM of Bridges Using Two-stage Output-only System Identification / 二段階出力のみのシステム同定による常時振動に基づく橋梁の長期モニタリング

Jiang, Wenjie 25 September 2023 (has links)
Kyoto University / New system, doctorate by coursework / Doctor of Engineering / Degree No. 24895 (Kō) / Doctor of Engineering No. 5175 / 新制||工||1988 (University Library) / Department of Civil and Earth Resources Engineering, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor KIM Chul-Woo, Professor Kunitomo Sugiura, Professor Tomomi Yagi / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
216

Application of Machine Learning and AI for Prediction in Ungauged Basins

Pin-Ching Li (16734693) 03 August 2023 (has links)
Streamflow prediction in ungauged basins (PUB) is the process of generating streamflow time series at ungauged reaches in a river network. PUB is essential for facilitating various engineering tasks such as managing stormwater, water resources, and water-related environmental impacts. Machine Learning (ML) has emerged as a powerful tool for PUB, using its generalization process to capture streamflow generation processes from hydrological datasets (observations). ML's generalization process is affected by two major components: the data splitting process applied to the observations and the architecture design. To unveil the potential limitations of ML's generalization process, this dissertation explores its robustness and associated uncertainty. More precisely, this dissertation has three objectives: (1) analyzing the potential uncertainty caused by the data splitting process for ML modeling, (2) investigating the improvement of ML models' performance by incorporating hydrological processes within their architectures, and (3) identifying the potential biases in ML's generalization process regarding the trend and periodicity of streamflow simulations.

The first objective is to assess the sensitivity and uncertainty caused by the regular data splitting process in ML modeling. The regular data splitting process in ML was initially designed for homogeneous and stationary datasets, but it may not be suitable for hydrological datasets in the context of PUB studies. Hydrological datasets usually consist of data collected from diverse watersheds with distinct streamflow generation regimes influenced by varying meteorological forcing and watershed characteristics. To address the potential inconsistency in the data splitting process, multiple data splitting scenarios are generated using the Monte Carlo method (see the sketch following this abstract). Scenarios with random data splitting frequently exhibit covariate shift, which tends to add uncertainty and bias to ML's generalization process. The findings for this objective suggest the importance of avoiding covariate shift during the data splitting process when developing ML models for PUB, in order to enhance the robustness and reliability of ML's performance.

The second objective is to investigate the improvement in ML models' performance brought by a Physics-Guided Architecture (PGA), which incorporates the rainfall abstraction process into ML. PGA is a theory-guided machine learning framework that integrates conceptual tutors (CTs) with ML models. In this study, the CTs correspond to rainfall abstractions estimated by the Green-Ampt (GA) and SCS-CN models. Integrating the GA model's CTs, which carry information on dynamic soil properties, into PGA models leads to better performance than a regular ML model. In contrast, PGA models integrating the SCS-CN model's CTs yield no significant improvement in performance. The results for this objective demonstrate that ML's generalization process can be improved by incorporating CTs that involve dynamic soil properties.

The third objective is to explore the limitations of ML's generalization process in capturing trend and periodicity in streamflow simulations. Trend and periodicity are essential components of streamflow time series, representing long-term correlations and periodic patterns, respectively. When ML models generate streamflow simulations, the simulations tend to have relatively strong long-term periodic components, such as yearly and multiyear patterns. In addition, compared to the observed streamflow data, the ML models display relatively weak short-term periodic components, such as daily and weekly patterns. As a result, ML's generalization process may struggle to capture short-term periodic patterns in the streamflow simulations. These biases emphasize the need for external knowledge to improve the representation of short-term periodic components when simulating streamflow.
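A minimal sketch of the Monte Carlo splitting idea referenced above, under assumptions not taken from the dissertation: gauged basins are repeatedly split at random into training and test sets, and a simple covariate-shift indicator (the gap between the mean watershed attribute of the two sets) is recorded across scenarios.

```python
import numpy as np

rng = np.random.default_rng(42)
n_basins = 100
attrs = rng.gamma(shape=2.0, scale=50.0, size=n_basins)   # e.g. drainage area (synthetic)

gaps = []
for _ in range(500):                              # Monte Carlo splitting scenarios
    idx = rng.permutation(n_basins)
    train, test = attrs[idx[:80]], attrs[idx[80:]]
    gaps.append(abs(train.mean() - test.mean()))  # crude covariate-shift proxy

print(f"median attribute gap: {np.median(gaps):.1f}, worst case: {np.max(gaps):.1f}")
```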
217

Predicting Electricity Consumption with ARIMA and Recurrent Neural Networks

Enerud, Klara January 2024 (has links)
Due to the growing share of renewable energy in countries' power systems, the need for precise forecasting of electricity consumption will increase. This paper considers two different approaches to time series forecasting: autoregressive moving average (ARMA) models and recurrent neural networks (RNNs). These are applied to Swedish electricity consumption data, with the aim of deriving simple yet efficient predictors. An additional aim is to analyse the impact of day of week and temperature on forecast accuracy. The models are evaluated on both long- and mid-term forecasting horizons, ranging from one day to one month. The results show that neural networks are superior for this task, although stochastic seasonal ARMA models also perform quite well. Including external variables only marginally improved the ARMA predictions and had somewhat unclear effects on the RNN forecasting accuracy. Depending on the network model used, adding external variables had either a slightly positive or slightly negative impact on prediction accuracy.
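As an illustration of the statistical branch of this comparison, a small sketch of a seasonal ARMA model with day-of-week and temperature as exogenous regressors, using statsmodels' SARIMAX; the model orders and the synthetic data are assumptions, not the configurations used in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

idx = pd.date_range("2023-01-01", periods=365, freq="D")
temp = 5 + 10 * np.sin(2 * np.pi * idx.dayofyear.to_numpy() / 365)   # synthetic temperature
weekday = (idx.dayofweek.to_numpy() < 5).astype(int)                 # 1 on weekdays
load = 200 - 3 * temp + 15 * weekday + np.random.normal(0, 5, 365)   # synthetic consumption

exog = pd.DataFrame({"temp": temp, "weekday": weekday}, index=idx)
model = SARIMAX(pd.Series(load, index=idx), exog=exog,
                order=(1, 0, 1), seasonal_order=(1, 0, 1, 7))         # weekly seasonality
fit = model.fit(disp=False)
future_exog = exog.iloc[:7].to_numpy()     # stand-in exog values for the coming week
forecast = fit.forecast(steps=7, exog=future_exog)
```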
218

Analysis of Transactional Data with Long Short-Term Memory Recurrent Neural Networks

Nawaz, Sabeen January 2020 (has links)
An issue authorities and banks face is fraud related to payments and transactions, where huge monetary losses occur to a party or where money laundering schemes are carried out. Previous work in the field of machine learning for fraud detection has addressed the issue as a supervised learning problem. In this thesis, we propose a model which can be used in a fraud detection system with transactions and payments that are unlabeled. The proposed model is a Long Short-Term Memory auto-encoder decoder network (LSTM-AED), which is trained and tested on transformed data. The data is transformed by reducing it to principal components and clustering it with K-means. The model is trained to reconstruct the sequence with high accuracy. Our results indicate that the LSTM-AED performs better than a random sequence-generating process in learning and reconstructing a sequence of payments. We also found that a large loss of information occurs in the pre-processing stages.
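A minimal sketch of the pipeline described above, with assumed dimensions: transactions are reduced with PCA, clustered with K-means, and sequences of the transformed features are reconstructed by an LSTM encoder-decoder, whose reconstruction error could then serve as an anomaly signal.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X = np.random.rand(1000, 20)                     # 1000 transactions, 20 raw features
X_pc = PCA(n_components=5).fit_transform(X)      # dimensionality reduction
clusters = KMeans(n_clusters=8, n_init=10).fit_predict(X_pc)  # cluster ids (unused in this toy)

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=5, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                        # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)
        rep = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # repeat latent state over time
        dec, _ = self.decoder(rep)
        return self.out(dec)

seqs = torch.tensor(X_pc[:990].reshape(99, 10, 5), dtype=torch.float32)  # sequences of 10
model = LSTMAutoencoder()
recon_error = nn.MSELoss()(model(seqs), seqs)    # reconstruction error as anomaly signal
```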
219

Blood Glucose Level Prediction via Seamless Incorporation of Raw Features Using RNNs

Mirshekarianbabaki, Sadegh 03 July 2018 (has links)
No description available.
220

Need for Wheel Speed : Generating synthetic wheel speeds using LSTM and GANs

Berglund, Erik January 2022 (has links)
Research on time series data in machine learning has been dominated by prediction and forecasting techniques. Ever since the inception of generative models, interest in generating time series has increased. Time series data appears in many different fields, with the financial and medical domains attracting much of the interest. This thesis instead focuses on the automotive field, with a heavy emphasis on wheel speed data. The issue with wheel speed data, or any other vehicle signal, is that it takes a long time to gather, since a person has to drive around in order to collect it. This thesis investigates the possibility of generating vehicle signals, with a strong focus on wheel speed signals, and of better understanding the differences between car models and which vehicle signals are most useful. The classification of vehicle time series was done with a stacked LSTM network. A thorough analysis of the network parameters was made, and an accuracy of over 80% was achieved when classifying 6 different vehicle models. For time series generation, a GAN with LSTM networks was used, based on CRNNGAN. The generated samples were evaluated by people experienced with the data and by using both PCA and t-SNE. The results are poor and too noisy: only one of the vehicle signals could be generated in a satisfactory manner, and that signal was significantly less complex since it is a binary signal taking only the values 0 and 1.
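A minimal sketch of a stacked LSTM classifier of the kind described above, mapping a window of wheel-speed samples to one of six vehicle models; layer sizes, window length and channel count are illustrative assumptions, not the thesis configuration.

```python
import torch
import torch.nn as nn

class VehicleClassifier(nn.Module):
    def __init__(self, n_signals=4, hidden=64, n_models=6):
        super().__init__()
        self.lstm = nn.LSTM(n_signals, hidden, num_layers=2, batch_first=True)  # stacked LSTM
        self.head = nn.Linear(hidden, n_models)

    def forward(self, x):                  # x: (batch, time, n_signals)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # unnormalised scores per vehicle model

model = VehicleClassifier()
wheel_speeds = torch.randn(16, 500, 4)     # 16 drives, 500 samples, 4 wheel-speed channels
logits = model(wheel_speeds)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 6, (16,)))
```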
