  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Recurrent neural networks in electricity load forecasting / Rekurrenta neurala nätverk i prognostisering av elkonsumtion

Alam, Samiul January 2018 (has links)
In this thesis, two main studies are conducted to compare the predictive capabilities of feed-forward neural networks (FFNNs) and long short-term memory networks (LSTMs) in electricity load forecasting. The first study compares univariate networks using past electricity load, as well as multivariate networks using past electricity load and air temperature, in day-ahead load forecasting with varying lookback periods and sparsity of past observations. The second study compares FFNNs and LSTMs of different complexities (i.e. network sizes) when restrictions imposed by real-world limitations are taken into account. No significant differences are found between the predictive performances of the two approaches. However, adding air temperature as an extra input to the LSTM is found to significantly decrease its performance. Furthermore, the predictive performance of the FFNN decreases significantly as network complexity grows, while that of the LSTM increases. All findings considered, we do not find sufficient evidence in favour of the LSTM in electricity load forecasting.
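The lookback and sparsity setup of the first study can be sketched as a sliding-window transformation of the load series. A minimal sketch, assuming hourly data; the function name, defaults, and synthetic series are illustrative, not taken from the thesis:

```python
import numpy as np

def make_supervised(load, lookback=48, stride=1, horizon=24):
    """Turn an hourly load series into (X, y) pairs for day-ahead forecasting.

    lookback: number of past observations fed to the network
    stride:   sparsity of past observations (1 = every hour, 2 = every other hour)
    horizon:  how far ahead the target lies (24 = day-ahead for hourly data)
    """
    X, y = [], []
    span = lookback * stride
    for t in range(span, len(load) - horizon + 1):
        X.append(load[t - span:t:stride])   # sparse window of past load
        y.append(load[t + horizon - 1])     # load one day ahead
    return np.array(X), np.array(y)

# A year of synthetic hourly load with a daily cycle.
hours = np.arange(24 * 365)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24)

X, y = make_supervised(load, lookback=48, stride=2, horizon=24)
print(X.shape)  # -> (8641, 48)
```

The same (X, y) pairs can then be fed to either an FFNN directly or to an LSTM after reshaping to (samples, timesteps, features).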
202

Model comparison of patient volume prediction in digital health care / Jämförelse av modeller för förutsägelse av patientvolym inom digital vård

Hellstenius, Sasha January 2018 (has links)
Accurate predictions of patient volume are an essential tool for improving resource allocation and doctor utilization, in the traditional as well as the digital health care domain. Varying methods for patient volume prediction in traditional health care have been studied in contemporary research, while the concept remains underexplored in digital health care. This paper evaluates how two non-linear state-of-the-art time series prediction models compare when predicting patient volume in the digital health care domain. The models compared are the feed-forward Multi-layer Perceptron (MLP) and the recurrent Long Short-Term Memory (LSTM) network. The results imply that the prediction problem itself is straightforward, while also indicating significant differences in prediction accuracy between the evaluated models. The conclusion is that the LSTM model offers substantial prediction advantages that outweigh its complexity overhead for the given problem.
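Comparisons like this are typically scored against a naive seasonal baseline using an error metric such as MAPE. A hedged sketch, assuming daily volumes with weekly seasonality; the metric and baseline are common practice, not details from the thesis:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, a common accuracy measure for volume forecasts."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

# Synthetic daily patient volumes with a weekly cycle plus noise.
rng = np.random.default_rng(0)
days = np.arange(140)
volume = 200 + 50 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 5, days.size)

# Naive seasonal baseline: predict the same weekday one week earlier.
actual = volume[7:]
naive = volume[:-7]
print(round(mape(actual, naive), 2))
```

An MLP or LSTM forecast would be substituted for `naive` above; a model is only worth its complexity if it beats this baseline by a clear margin.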
203

Predict Next Location of Users using Deep Learning

Guan, Xing January 2019 (has links)
Predicting the next location of a user is of interest to both academia and industry. Location-based advertising, traffic planning, intelligent resource allocation, and recommendation services are some of the problems many are interested in solving. Along with technological advancement and the widespread use of electronic devices, many location-based records are created. Today, deep learning frameworks have surpassed many conventional methods in a range of learning tasks, most notably image and voice recognition. One neural network architecture that has shown promising results on sequential data is the Recurrent Neural Network (RNN). Since the creation of the RNN, many alternative architectures have been proposed; Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU) are among the most popular [5]. This thesis uses the GRU architecture with features that incorporate time and location into the network to forecast a person's next location. A spatial-temporal neural network (ST-GRU) is proposed, consisting of two parts: ST and GRU. The first part is a feature extraction algorithm that turns a raw trajectory into a location sequence, transforming it into a format suitable for feeding into the model. The second part, the GRU, predicts the next location given a user's trajectory. The study shows that the proposed ST-GRU model achieves the best results compared with the baseline models.
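The ST feature-extraction step, turning a raw trajectory into a location sequence, can be sketched as grid discretization followed by windowing. The cell size, sequence length, and function names below are assumptions for illustration, not the thesis's actual algorithm:

```python
def to_cell_sequence(trajectory, cell_size=1.0):
    """Discretize (lat, lon) points into grid-cell ids, collapsing consecutive repeats."""
    cells = []
    for lat, lon in trajectory:
        cell = (int(lat // cell_size), int(lon // cell_size))
        if not cells or cells[-1] != cell:
            cells.append(cell)
    return cells

def make_examples(cells, seq_len=2):
    """Pair each window of seq_len visited cells with the next cell as the target."""
    return [(cells[i:i + seq_len], cells[i + seq_len])
            for i in range(len(cells) - seq_len)]

traj = [(0.2, 0.7), (0.4, 0.9), (1.5, 1.1), (2.7, 2.2)]
cells = to_cell_sequence(traj)
print(cells)  # -> [(0, 0), (1, 1), (2, 2)]
```

Each (window, target) pair produced by `make_examples` is one training example for the GRU, with cell ids typically mapped to embedding indices first.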
204

Predicting Customer Churn Using Recurrent Neural Networks / Prediktera kundbeteende genom användning av återkommande neurala nätverk

Ljungehed, Jesper January 2017 (has links)
Churn prediction is used to identify customers that are becoming less loyal and is an important tool for companies that want to stay competitive in a rapidly growing market. In retail, a dynamic definition of churn is needed to identify churners correctly. Customer Lifetime Value (CLV) is the monetary value of a customer relationship; no change in CLV for a given customer indicates a decrease in loyalty. This thesis proposes a novel approach to churn prediction in which a Recurrent Neural Network identifies churners based on Customer Lifetime Value time series regression. The results show that the model performs better than random. The thesis also investigates the use of the K-means algorithm as a replacement for a rule-extraction algorithm; K-means contributed to a more comprehensive analytical context for the churn predictions of the proposed model.
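The CLV-based churn signal described above (no change in CLV indicating decreasing loyalty) can be sketched as a simple labeling rule. The window length and threshold below are illustrative assumptions, not the thesis's actual procedure:

```python
def label_churner(clv_series, window=3, eps=1e-9):
    """Flag a customer as a likely churner when their CLV has stopped growing.

    clv_series: one customer's CLV values over consecutive periods.
    A flat or shrinking CLV over the last `window` periods indicates
    decreasing loyalty; `window` and `eps` are illustrative choices.
    """
    recent = clv_series[-window:]
    flat = max(recent) - min(recent) < eps
    shrinking = recent[-1] < recent[0]
    return flat or shrinking

print(label_churner([10, 14, 18, 18, 18]))  # flat CLV -> True
print(label_churner([10, 14, 18, 22, 27]))  # growing CLV -> False
```

Labels produced this way give the regression model's CLV forecasts a concrete churn interpretation.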
205

Renal Artery Stenosis As Etiology of Recurrent Flash Pulmonary Edema and Role of Imaging in Timely Diagnosis and Management

Bhattad, Pradnya B., Jain, Vinay 09 April 2020 (has links)
Renal hypoperfusion from renal artery stenosis (RAS) activates the renin-angiotensin system, which in turn causes volume overload and hypertension. Atherosclerosis and fibromuscular dysplasia are the most common causes of renal artery stenosis. Recurrent flash pulmonary edema, also known as Pickering syndrome, is commonly associated with bilateral renal artery stenosis. There should be a high index of clinical suspicion for renal artery stenosis in the setting of recurrent flash pulmonary edema and severe hypertension in patients with atherosclerotic disease. Duplex ultrasonography is commonly recommended as the best initial test for the detection of renal artery stenosis. Computed tomography angiography (CTA) and magnetic resonance angiography (MRA) are useful diagnostic imaging studies for detecting renal artery stenosis in patients where duplex ultrasonography is difficult. If duplex ultrasound, CTA, and MRA are indeterminate or pose a risk of significant renal impairment, renal angiography is useful for a definitive diagnosis of RAS. Medical management of RAS focuses on controlling renovascular hypertension, together with aggressive lifestyle modification and control of atherosclerotic disease risk factors. Restoring renal artery patency by revascularization in atherosclerotic RAS may help in the management of hypertension and minimize renal dysfunction.
206

Gauss-newton Based Learning For Fully Recurrent Neural Networks

Vartak, Aniket Arun 01 January 2004 (has links)
This thesis discusses novel off-line and on-line learning approaches for Fully Recurrent Neural Networks (FRNNs). The most popular algorithm for training FRNNs, Real-Time Recurrent Learning (RTRL), employs gradient descent to find the optimum weight vector of the recurrent neural network. Within this research, new off-line and on-line variations of RTRL based on the Gauss-Newton method are presented. The method is an approximate Newton's method tailored to the specific optimization problem (non-linear least squares), aiming to speed up FRNN training. The new approach is a robust and effective compromise between the original gradient-based RTRL (low computational complexity, slow convergence) and Newton-based variants of RTRL (high computational complexity, fast convergence). By gathering information over time to form Gauss-Newton search vectors, the new learning algorithm, GN-RTRL, converges faster to a better-quality solution than the original algorithm. Experimental results reflect these qualities of GN-RTRL, as well as the fact that GN-RTRL may in practice have lower computational cost than the original RTRL.
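The Gauss-Newton step at the heart of GN-RTRL solves a non-linear least-squares problem with the update w <- w - (J^T J)^{-1} J^T r, where r is the residual vector and J its Jacobian. A minimal sketch on a toy one-parameter problem; GN-RTRL applies this step to the FRNN's weight vector, whereas the model here is purely an illustration:

```python
import numpy as np

def gauss_newton(residual, jacobian, w0, steps=20):
    """Generic Gauss-Newton iteration for a non-linear least-squares problem."""
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        r = residual(w)                            # residual vector at w
        J = jacobian(w)                            # Jacobian of r w.r.t. w
        w = w - np.linalg.solve(J.T @ J, J.T @ r)  # approximate Newton step
    return w

# Toy problem: recover a in y = exp(a * x) from noiseless samples.
x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * x)
residual = lambda w: np.exp(w[0] * x) - y
jacobian = lambda w: (x * np.exp(w[0] * x)).reshape(-1, 1)

w = gauss_newton(residual, jacobian, [0.0])
print(round(w[0], 4))  # -> 0.7
```

Compared with plain gradient descent on the same residual, the J^T J curvature term gives much faster convergence at the cost of solving a linear system per step, which mirrors the complexity trade-off described in the abstract.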
207

Characterizing the Informativity of Level II Book Data for High Frequency Trading

Nielsen, Logan B. 10 April 2023 (has links) (PDF)
High Frequency Trading (HFT) algorithms are automated feedback systems interacting with markets to maximize returns on investments. These systems can read different resolutions of market information at any given time: Level I information is the minimal information about an equity (essentially its price), while Level II information is the full order book for that equity at that time. This paper presents a study using Recurrent Neural Network (RNN) models to predict the spread of the DOW Industrial 30 index traded on NASDAQ, with Level I and Level II data as inputs. The results show that Level II data does not significantly improve spread prediction less than 100 milliseconds into the future, while it becomes increasingly informative for predictions further ahead. This suggests that HFT algorithms should not attempt to make use of Level II information and should instead reallocate that computational power toward improved trading performance, while slower trading algorithms may well benefit from processing the complete order book.
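The predicted quantity, the spread, comes straight from the top of the book: Level I is the best bid and ask, and Level II is the remaining depth whose predictive value the study measures. A small sketch; the dictionary layout is an assumed representation, not the paper's data format:

```python
def spread(book):
    """Bid-ask spread from a Level II order book snapshot.

    book: {"bids": [(price, size), ...], "asks": [(price, size), ...]}.
    Only the best bid and ask (Level I) are needed for the spread itself;
    the deeper levels are the extra Level II information.
    """
    best_bid = max(price for price, _ in book["bids"])
    best_ask = min(price for price, _ in book["asks"])
    return best_ask - best_bid

book = {"bids": [(99.98, 300), (99.97, 150)],
        "asks": [(100.01, 200), (100.03, 400)]}
print(round(spread(book), 2))  # -> 0.03
```

A Level I feature vector would carry only `(best_bid, best_ask)`, while a Level II vector would flatten several (price, size) levels per side; the study's comparison is essentially between RNNs trained on these two input widths.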
208

Flexible Joint Hierarchical Gaussian Process Model for Longitudinal and Recurrent Event Data

Su, Weiji 22 October 2020 (has links)
No description available.
209

Interpretable natural language processing models with deep hierarchical structures and effective statistical training

Zhaoxin Luo (17328937) 03 November 2023 (has links)
<p dir="ltr">The research focuses on improving natural language processing (NLP) models by integrating the hierarchical structure of language, which is essential for understanding and generating human language. The main contributions of the study are:</p><ol><li><b>Hierarchical RNN Model:</b> Development of a deep Recurrent Neural Network model that captures both explicit and implicit hierarchical structures in language.</li><li><b>Hierarchical Attention Mechanism:</b> Use of a multi-level attention mechanism to help the model prioritize relevant information at different levels of the hierarchy.</li><li><b>Latent Indicators and Efficient Training:</b> Integration of latent indicators using the Expectation-Maximization algorithm and reduction of computational complexity with Bootstrap sampling and layered training strategies.</li><li><b>Sequence-to-Sequence Model for Translation:</b> Extension of the model to translation tasks, including a novel pre-training technique and a hierarchical decoding strategy to stabilize latent indicators during generation.</li></ol><p dir="ltr">The study claims enhanced performance in various NLP tasks with results comparable to larger models, with the added benefit of increased interpretability.</p>
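The multi-level attention mechanism in item 2 can be sketched as softmax pooling applied twice: first over word vectors within each sentence, then over the resulting sentence vectors. The dimensions, queries, and random vectors below are illustrative only, not the model's actual parameters:

```python
import numpy as np

def attend(vectors, query):
    """Softmax attention pooling: weight each vector by its similarity to the query."""
    scores = vectors @ query
    weights = np.exp(scores - scores.max())  # stabilized softmax
    weights /= weights.sum()
    return weights @ vectors

# Hierarchical pooling: words -> sentence vectors -> document vector.
rng = np.random.default_rng(0)
doc = [rng.normal(size=(5, 8)), rng.normal(size=(7, 8))]  # 2 sentences of word vectors
word_query = rng.normal(size=8)   # learned in the real model; random here
sent_query = rng.normal(size=8)

sentences = np.stack([attend(words, word_query) for words in doc])
doc_vector = attend(sentences, sent_query)
print(doc_vector.shape)  # -> (8,)
```

The attention weights at each level are also what make such models interpretable: inspecting them shows which words drive each sentence vector and which sentences drive the document vector.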
210

Translational Studies of Human Papillomavirus

Bedard, Mary 02 June 2023 (has links)
No description available.
