21

An Enhanced Learning for Restricted Hopfield Networks

Halabian, Faezeh 10 June 2021 (has links)
This research investigates a training method for the Restricted Hopfield Network (RHN), a subcategory of Hopfield Networks. Hopfield Networks are recurrent neural networks proposed in 1982 by John Hopfield. They are useful for applications such as pattern restoration, pattern completion/generalization, and pattern association. In this study, we propose an enhanced training method for the RHN which not only improves the convergence of the training sub-routine but is also shown to enhance the learning capability of the network. In particular, after describing the architecture and components of the model, we propose a modified variant of SPSA which, in conjunction with back-propagation through time, results in a training algorithm with enhanced convergence for the RHN. The trained network is also shown to achieve better memory recall in the presence of noisy/distorted input. We perform several experiments, using various datasets, to verify the convergence of the training sub-routine, evaluate the impact of different parameters of the model, and compare the performance of the trained RHN in recreating distorted input patterns against a conventional RBM, a conventional Hopfield network, and other training methods.
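The thesis pairs a modified SPSA variant with back-propagation through time; that modification is not reproduced here. As a rough illustration of the SPSA ingredient alone, the sketch below shows a plain two-measurement SPSA gradient estimate in Python/NumPy, assuming a generic `loss(weights)` callable and illustrative gain constants.

```python
import numpy as np

def spsa_gradient(loss, w, c=1e-2, rng=np.random.default_rng(0)):
    """Two-measurement SPSA estimate of the gradient of `loss` at `w`."""
    delta = rng.choice([-1.0, 1.0], size=w.shape)        # Rademacher perturbation
    return (loss(w + c * delta) - loss(w - c * delta)) / (2 * c) / delta

# Illustrative run on a toy quadratic loss; the iterate drifts toward zero.
w = np.ones(4)
loss = lambda v: float(np.sum(v ** 2))
for k in range(200):
    a_k = 0.1 / (k + 1) ** 0.602                         # standard SPSA gain decay
    w = w - a_k * spsa_gradient(loss, w)
```

Only two loss evaluations are needed per step regardless of the number of weights, which is what makes SPSA-style estimates attractive when exact gradients of recurrent dynamics are awkward to obtain.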
22

Depression tendency detection of Chinese texts in social media data based on Convolutional Neural Networks and Recurrent Neural Networks

Xu, Kaiwei, Fei, Yuhang January 2022 (has links)
No description available.
23

Energy Predictions of Multiple Buildings using Bi-directional Long short-term Memory

Gustafsson, Anton, Sjödal, Julian January 2020 (has links)
The process of monitoring the energy consumption of a building is time-consuming. Therefore, a feasible approach using transfer learning is presented to decrease the time needed to collect the required large dataset. The technique applies a bidirectional long short-term memory recurrent neural network using sequence-to-sequence prediction. The idea involves a training phase that extracts information and patterns from a building for which a reasonably sized dataset is available. The validation phase uses a dataset that is not sufficient in size. This dataset was acquired through a related paper, so the results can be validated accordingly. The conducted experiments include four cases that involve different strategies in the training and validation phases and different percentages of fine-tuning. Our proposed model generated better scores in terms of prediction performance compared to the related paper.
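As an illustration of the kind of model the abstract describes, here is a minimal bidirectional LSTM forecaster in PyTorch, followed by the transfer-learning step of freezing the recurrent layers and fine-tuning only the output head; the feature count, hidden size, 24-step horizon, and the freeze/fine-tune split are assumptions, not the thesis configuration.

```python
import torch
import torch.nn as nn

class BiLSTMForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=64, horizon=24):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, horizon)   # map last state to a forecast

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)              # (batch, seq_len, 2 * hidden)
        return self.head(out[:, -1, :])    # next `horizon` consumption values

# Pretrain on the data-rich building, then freeze the recurrent layers and
# fine-tune only the head on the building with the small dataset.
model = BiLSTMForecaster()
for p in model.lstm.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
```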
24

Text-Based Speech Video Synthesis from a Single Face Image

Zheng, Yilin January 2019 (has links)
No description available.
25

Well On/Off Time Classification Using RNNs and a Developed Well Simulator to Generate Realistic Well Production Data

AlHammad, Yousef 07 1900 (has links)
Supervised machine learning (ML) projects require data for model training, validation, and testing. However, the confidential nature of field and well production data often hinders the progress of ML projects. To address this issue, we developed a well simulator that generates realistic well production data based on physical governing differential equations. The simulation models the reservoir, wellbore, flowline, and choke, coupled through transient nodal analysis, to solve for transient flow rate, pressure, and temperature as a function of variable choke opening over time, in addition to a wide range of static parameters for each component. The simulator's output is then perturbed using the gauge transfer function to introduce systematic and random errors, creating a dataset for ML projects without the need for confidential production data. We then generated a simulated dataset to train a recurrent neural network (RNN) on the task of classifying well on/off times. This task typically requires a significant number of man-hours to manually filter and verify data for hundreds or thousands of wells. Our RNN model achieves high accuracy in classifying the correct on/off labels, representing a promising step towards a fully automated rate allocation process. Our simulator for well production data can be used for other ML projects, circumventing the need for confidential data and enabling the study and development of different ML models to streamline and automate various oil and gas work processes. Overall, the success of our RNN model demonstrates the potential of ML to improve the operational efficiency of these work processes.
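A hypothetical sketch of the classification stage described above: a small GRU reads simulated well sequences and emits a per-timestep on/off probability. The three-feature input (standing in for rate, pressure, and temperature), layer sizes, and placeholder labels are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class OnOffClassifier(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_features)
        h, _ = self.rnn(x)                 # (batch, time, hidden)
        return self.head(h)                # per-timestep logit; sigmoid > 0.5 => "on"

model = OnOffClassifier()
sequences = torch.randn(8, 500, 3)                     # simulated rate/pressure/temperature
labels = (torch.rand(8, 500, 1) > 0.5).float()         # placeholder on/off labels
loss = nn.BCEWithLogitsLoss()(model(sequences), labels)
loss.backward()
```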
26

LSTM Based Deep Learning Models for Prediction of Univariate Time Series Data (An Experiment to Predict New Daily Cases of Covid-19)

Zarean, Zeinab 15 September 2022 (has links)
No description available.
27

The Convolutional Recurrent Structure in Computer Vision Applications

Xie, Dong 12 1900 (has links)
By organically fusing the methods of convolutional neural networks (CNN) and recurrent neural networks (RNN), this dissertation focuses on applications in optical character recognition and image classification. The first part of this dissertation presents an end-to-end novel receipt recognition system for capturing effective information from receipts (CEIR). The main contributions of this part are threefold. First, this research develops a preprocessing method for receipt images. Second, a modified connectionist text proposal network is introduced to perform text detection. Third, the CEIR combines a convolutional recurrent neural network with connectionist temporal classification using maximum entropy regularization as the loss function to update the network weights and extract the characters from receipts. The CEIR system is validated on the scanned receipts optical character recognition and information extraction (SROIE) database. Furthermore, the CEIR system is robust and can be extended to a variety of scenarios beyond receipts. For the convolutional recurrent structure applied to land use image classification, this dissertation proposes a novel deep learning model, the convolutional recurrent land use classifier (CRLUC), which further improves the accuracy of classifying remote sensing land use images. In addition, a convolutional fully-connected neural network with a hard sample memory pool structure (CFMP) is proposed to tackle remote sensing land use image classification tasks. The CRLUC and CFMP performances are tested on popular datasets. Experimental studies show the proposed algorithms can classify images with higher accuracy and fewer training episodes than popular image classification algorithms.
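As a compact illustration of the convolutional recurrent structure with CTC training, the PyTorch sketch below feeds CNN feature maps to a bidirectional LSTM and trains with plain CTC loss; the maximum-entropy regularization used in the dissertation is omitted, and the channel counts, 37-symbol alphabet, and input geometry are assumptions.

```python
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, n_classes=37):                   # blank + 36 characters (assumed)
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(64 * 8, 128, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(256, n_classes)

    def forward(self, img):                             # img: (N, 1, 32, W)
        f = self.cnn(img)                               # (N, 64, 8, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)            # (N, W/4, 512): one step per column
        seq, _ = self.rnn(f)
        return self.fc(seq).log_softmax(-1)             # (N, W/4, n_classes)

model, ctc = TinyCRNN(), nn.CTCLoss(blank=0)
images = torch.randn(4, 1, 32, 128)                     # toy text-line crops
log_probs = model(images).permute(1, 0, 2)              # CTC expects (T, N, C)
targets = torch.randint(1, 37, (4, 10))                 # dummy character indices
loss = ctc(log_probs, targets,
           torch.full((4,), 32, dtype=torch.long),      # input length per sample
           torch.full((4,), 10, dtype=torch.long))      # target length per sample
```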
28

Predicting Customer Churn Using Recurrent Neural Networks / Prediktera kundbeteende genom användning av återkommande neurala nätverk

Ljungehed, Jesper January 2017 (has links)
Churn prediction is used to identify customers that are becoming less loyal and is an important tool for companies that want to stay competitive in a rapidly growing market. In retail, a dynamic definition of churn is needed to identify churners correctly. Customer Lifetime Value (CLV) is the monetary value of a customer relationship. No change in CLV for a given customer indicates a decrease in loyalty. This thesis proposes a novel approach to churn prediction. The proposed model uses a Recurrent Neural Network to identify churners based on Customer Lifetime Value time series regression. The results show that the model performs better than random. This thesis also investigated the use of the K-means algorithm as a replacement for a rule-extraction algorithm. The K-means algorithm contributed to a more comprehensive analytical context for the churn prediction of the proposed model. / Churn prediction is used to identify customers who are on their way to becoming less loyal and is a helpful tool for running a competitive business. In retail, a dynamic definition of churn is needed to identify churning customers correctly. Customer lifetime value is a measure of the monetary value of a customer relationship. A stalled change in this value indicates a decrease in the customer's loyalty. This report proposes a new method for churn prediction. The proposed method consists of a recurrent neural network used to identify churn by predicting customers' lifetime values. The results show that the proposed model performs better than a random baseline. The report also investigates the use of a k-means algorithm as a substitute for a rule-extraction algorithm. The k-means algorithm contributed to a more comprehensive analysis of the churn prediction.
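A rough sketch of the CLV-regression idea in PyTorch: an LSTM forecasts a customer's next CLV value from the historical series, and a stagnating forecast is read as a churn signal. The weekly granularity, window length, and flatness threshold are illustrative assumptions rather than the thesis's setup.

```python
import torch
import torch.nn as nn

class CLVForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, clv):                # clv: (batch, weeks, 1)
        h, _ = self.lstm(clv)
        return self.out(h[:, -1, :])       # next-week CLV estimate

model = CLVForecaster()
clv_history = torch.rand(16, 52, 1)        # one year of weekly CLV per customer
next_clv = model(clv_history)              # (16, 1)
# A forecast that barely moves from the last observed value flags churn risk.
churn_risk = (next_clv - clv_history[:, -1, :]).abs() < 0.01
```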
29

Characterizing the Informativity of Level II Book Data for High Frequency Trading

Nielsen, Logan B. 10 April 2023 (has links) (PDF)
High Frequency Trading (HFT) algorithms are automated feedback systems interacting with markets to maximize returns on investments. These systems have the potential to read different resolutions of market information at any given time, where Level I information is the minimal information about an equity, essentially its price, and Level II information is the full order book for that equity at that time. This paper presents a study of using Recurrent Neural Network (RNN) models to predict the spread of the DOW Industrial 30 index traded on NASDAQ, using Level I and Level II data as inputs. The results show that Level II data does not significantly improve spread prediction when predicting less than 100 milliseconds into the future, while it becomes increasingly informative for spread predictions further into the future. This suggests that HFT algorithms should not attempt to make use of Level II information and should instead reallocate that computational power to improved trading performance, while slower trading algorithms may very well benefit from processing the complete order book.
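To make the Level I versus Level II comparison concrete, the sketch below instantiates the same GRU-based spread predictor twice, once on best bid/ask features and once on a deeper book representation; the feature counts and prediction horizon are assumptions, not the models used in the paper.

```python
import torch
import torch.nn as nn

class SpreadPredictor(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, ticks, n_features)
        h, _ = self.rnn(x)
        return self.head(h[:, -1, :])      # predicted spread at a fixed horizon

level1_model = SpreadPredictor(n_features=2)    # best bid and best ask only
level2_model = SpreadPredictor(n_features=40)   # price/size at 10 bid and 10 ask levels

ticks = torch.randn(32, 200, 2)                 # toy Level I input windows
predicted_spread = level1_model(ticks)          # (32, 1)
```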
30

Interpretable natural language processing models with deep hierarchical structures and effective statistical training

Zhaoxin Luo (17328937) 03 November 2023 (has links)
The research focuses on improving natural language processing (NLP) models by integrating the hierarchical structure of language, which is essential for understanding and generating human language. The main contributions of the study are:

1. Hierarchical RNN Model: development of a deep Recurrent Neural Network model that captures both explicit and implicit hierarchical structures in language.
2. Hierarchical Attention Mechanism: use of a multi-level attention mechanism to help the model prioritize relevant information at different levels of the hierarchy.
3. Latent Indicators and Efficient Training: integration of latent indicators using the Expectation-Maximization algorithm and reduction of computational complexity with Bootstrap sampling and layered training strategies.
4. Sequence-to-Sequence Model for Translation: extension of the model to translation tasks, including a novel pre-training technique and a hierarchical decoding strategy to stabilize latent indicators during generation.

The study claims enhanced performance in various NLP tasks with results comparable to larger models, with the added benefit of increased interpretability.
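A minimal hierarchical-attention sketch in PyTorch, in the spirit of the first two contributions (word-level RNN, attention-pooled sentence vectors, sentence-level RNN, document vector); the latent indicators, EM training, Bootstrap sampling, and translation extension are not reproduced, and the vocabulary and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                          # h: (batch, steps, dim)
        w = torch.softmax(self.score(h), dim=1)    # attention weights over steps
        return (w * h).sum(dim=1)                  # weighted summary vector

class HierarchicalClassifier(nn.Module):
    def __init__(self, vocab=10000, emb=64, hidden=64, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.word_rnn = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.word_attn = Attention(2 * hidden)
        self.sent_rnn = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.sent_attn = Attention(2 * hidden)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, docs):                       # docs: (batch, sentences, words)
        b, s, n = docs.shape
        x = self.embed(docs.view(b * s, n))        # encode every sentence's words
        h, _ = self.word_rnn(x)
        sent_vecs = self.word_attn(h).view(b, s, -1)
        h, _ = self.sent_rnn(sent_vecs)            # encode the document over sentences
        return self.out(self.sent_attn(h))

logits = HierarchicalClassifier()(torch.randint(0, 10000, (2, 6, 20)))
```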
