41 |
Stability and Switchability in Recurrent Neural Networks
Perumal, Subramoniam January 2008 (has links)
No description available.
|
42 |
Prediction of manufacturing operations sequence using recurrent neural networks
Mehta, Manish P. January 1997 (has links)
No description available.
|
43 |
[pt] DETECTOR DE ASSINATURAS DE GÁS EM LEVANTAMENTOS SÍSMICOS UTILIZANDO LSTM / [en] DIRECT HYDROCARBON INDICATORS BASED ON LSTM
LUIZ FERNANDO TRINDADE SANTOS 02 April 2020 (has links)
[pt] Detecting hydrocarbon reservoirs from a seismic survey is a complex task that requires specialized professionals and a great deal of time. For this reason, much current research seeks to automate this task using deep neural networks. Following the success of deep convolutional networks (CNNs) in identifying objects in images and videos, CNNs have been used as detectors of geological events in seismic images. Training a modern deep neural network, however, requires hundreds of thousands of labeled samples. If we treat seismic data as images, hydrocarbon reservoirs usually form a small sub-image unable to provide that many samples. The methodology proposed in this dissertation treats the seismic data as a set of traces, and the samples that feed the neural network are fragments of a one-dimensional signal resembling a sound or voice signal. With this input, a single labeled reservoir in a seismic image usually already provides the number of labeled samples needed for training. Another important aspect of our proposal is the use of a recurrent neural network: a hydrocarbon reservoir influences a seismic trace not only at its own location but throughout the trace that follows. We therefore propose a Long Short-Term Memory (LSTM) network to characterize regions with gas signatures in seismic images. This dissertation also details the implementation of the proposed methodology and the tests performed on the public Netherlands F3-Block seismic data. The results, evaluated by sensitivity, specificity, accuracy, and AUC, were all excellent, above 95 percent. / [en] Detecting hydrocarbon reservoirs from a seismic survey is a complex task, requiring specialized professionals and considerable time. Consequently, many authors
today seek to automate this task by using deep neural networks. Following the
success of deep convolutional networks, CNNs, in the identification of objects
in images and videos, CNNs have been used as detectors of geological events
in seismic images. Training a deep neural network, however, requires hundreds of thousands of labeled samples, that is, samples for which we know the response the network must provide. If we treat seismic data as images, the hydrocarbon
reservoirs usually constitute a small sub-image unable to provide so many samples.
The methodology proposed in this dissertation treats the seismic data as a set of traces, and the samples that feed the neural network are fragments of a one-dimensional signal resembling a sound or voice signal. A labeled reservoir seismic
image usually provides the required number of labeled one-dimensional samples for
training. Another important aspect of our proposal is the use of a recurrent neural
network. The influence of a hydrocarbon reservoir on a seismic trace occurs not only
in its location but throughout the trace that follows. For this reason, we propose
the use of a Long Short-Term Memory, LSTM, network to characterize regions
that present gas signatures in seismic images. This dissertation further details the
implementation of the proposed methodology and test results on the Netherlands
F3-Block public seismic data. The results on this data set, evaluated by sensitivity,
specificity, accuracy and AUC indexes, are all excellent, above 95 percent.
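To make the trace-fragment approach concrete, here is a minimal numpy sketch of an LSTM pass over a one-dimensional trace window feeding a binary gas/no-gas readout. All shapes, parameter names, and the random weights are illustrative stand-ins; the dissertation's actual network, training procedure, and data pipeline are not reproduced here.

```python
import numpy as np

def lstm_forward(x, W, U, b, h0=None, c0=None):
    """Run a single-layer LSTM over a 1-D trace window.

    x: (T, d) sequence of trace samples (d=1 for a raw seismic trace).
    W: (4h, d), U: (4h, h), b: (4h,) stacked gate parameters
       in the order [input, forget, cell, output].
    Returns the final hidden state h_T, shape (h,).
    """
    hdim = U.shape[1]
    h = np.zeros(hdim) if h0 is None else h0
    c = np.zeros(hdim) if c0 is None else c0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        gates = W @ x[t] + U @ h + b
        i, f, g, o = np.split(gates, 4)
        i, f, o = sig(i), sig(f), sig(o)
        c = f * c + i * np.tanh(g)       # cell update
        h = o * np.tanh(c)               # hidden state
    return h

def classify_trace_window(x, params):
    """Binary gas/no-gas score for one trace fragment."""
    h = lstm_forward(x, params["W"], params["U"], params["b"])
    logit = params["w_out"] @ h + params["b_out"]
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(0)
hdim, d, T = 8, 1, 32
params = {
    "W": rng.normal(scale=0.1, size=(4 * hdim, d)),
    "U": rng.normal(scale=0.1, size=(4 * hdim, hdim)),
    "b": np.zeros(4 * hdim),
    "w_out": rng.normal(scale=0.1, size=hdim),
    "b_out": 0.0,
}
window = rng.normal(size=(T, d))   # stand-in for a labeled trace fragment
score = classify_trace_window(window, params)
```

A trained version of such a readout is what the sensitivity/specificity/AUC figures above would be computed from.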
|
44 |
High-Dimensional Generative Models for 3D Perception
Chen, Cong 21 June 2021 (has links)
Modern robotics and automation systems require high-level reasoning capability in representing, identifying, and interpreting the three-dimensional data of the real world. Understanding the world's geometric structure from visual data is known as 3D perception. The necessity of analyzing irregular and complex 3D data has led to the development of high-dimensional frameworks for data learning. Here, we design several sparse learning-based approaches for high-dimensional data that effectively tackle multiple perception problems, including data filtering, data recovery, and data retrieval. The frameworks offer generative solutions for analyzing complex and irregular data structures without prior knowledge of the data.
The first part of the dissertation proposes a novel method that simultaneously filters point cloud noise and outliers and completes missing data by utilizing a unified framework consisting of a novel tensor data representation, an adaptive feature encoder, and a generative Bayesian network. In the next section, a novel multi-level generative chaotic Recurrent Neural Network (RNN) is proposed, using a sparse tensor structure for image restoration. In the last part of the dissertation, we discuss detection followed by localization, where features are extracted from sparse tensors for data retrieval. / Doctor of Philosophy / The development of automation systems and robotics has brought the modern world unrivaled affluence and convenience. However, current automated tasks are mainly simple repetitive motions. Tasks that require more artificial capability with advanced visual cognition remain an unsolved problem for automation. Many high-level cognition-based tasks require accurate visual perception of the environment and of dynamic objects from the data received by the optical sensor. The capability to represent, identify, and interpret complex visual data in order to understand the geometric structure of the world is 3D perception. To better tackle the existing 3D perception challenges, this dissertation proposes a set of generative learning-based frameworks on sparse tensor data for various high-dimensional robotics perception applications: underwater point cloud filtering, image restoration, deformation detection, and localization.
Underwater point cloud data is relevant for many applications such as environmental monitoring or geological exploration. The data collected with sonar sensors is, however, subject to different types of defects, including holes, noisy measurements, and outliers. In the first chapter, we propose a generative model for point cloud data recovery using Variational Bayesian (VB) sparse tensor factorization methods to tackle these three defects simultaneously. In the second part of the dissertation, we propose an image restoration technique to tackle missing data, which is essential for many perception applications. An efficient generative chaotic RNN framework is introduced for recovering the sparse tensor from a single corrupted image for various types of missing data. In the last chapter, a multi-level CNN for high-dimensional tensor feature extraction for underwater vehicle localization is proposed.
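As a rough illustration of the underlying idea in the first chapter (fitting a low-rank tensor model to the observed entries only, so that the factorization fills in holes), here is a hypothetical plain gradient-descent CP factorization in numpy. It is a simplified stand-in under that assumption, not the Variational Bayesian method from the dissertation; all sizes and hyperparameters are invented.

```python
import numpy as np

def cp_recover(Y, mask, rank, iters=3000, lr=0.02, seed=0):
    """Fill in missing entries of a 3-way tensor by fitting a rank-`rank`
    CP factorization to the observed entries only (mask == 1), using
    plain gradient descent on the masked squared error."""
    rng = np.random.default_rng(seed)
    I, J, K = Y.shape
    A = rng.normal(scale=0.3, size=(I, rank))
    B = rng.normal(scale=0.3, size=(J, rank))
    C = rng.normal(scale=0.3, size=(K, rank))
    for _ in range(iters):
        X = np.einsum("ir,jr,kr->ijk", A, B, C)   # current reconstruction
        R = mask * (X - Y)                        # residual on observed cells
        A -= lr * np.einsum("ijk,jr,kr->ir", R, B, C)
        B -= lr * np.einsum("ijk,ir,kr->jr", R, A, C)
        C -= lr * np.einsum("ijk,ir,jr->kr", R, A, B)
    return np.einsum("ir,jr,kr->ijk", A, B, C)

# Synthetic low-rank tensor with roughly 30% of entries missing.
rng = np.random.default_rng(1)
a, b, c = rng.normal(size=(6, 2)), rng.normal(size=(7, 2)), rng.normal(size=(5, 2))
Y_true = np.einsum("ir,jr,kr->ijk", a, b, c)
mask = (rng.random(Y_true.shape) > 0.3).astype(float)
X_hat = cp_recover(Y_true * mask, mask, rank=2)
fit_err = np.linalg.norm(mask * (X_hat - Y_true))
```

The VB approach additionally places priors on the factors and infers posterior uncertainty, which this sketch omits.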
|
45 |
One Size Does Not Fit All: Optimizing Sequence Length with Recurrent Neural Networks for Spectrum Sensing
Moore, Megan O'Neal 28 June 2021 (has links)
With the increase in spectrum congestion, intelligent spectrum sensing systems have become more important than ever before. In the field of Radio Frequency Machine Learning (RFML), techniques like deep neural networks and reinforcement learning have been used to develop more complex spectrum sensing systems that are not reliant on expert features. Architectures like Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown great promise for applications like automated modulation classification, signal detection, and specific emitter ID. Research in these areas has primarily focused on "one size fits all" networks that assume a fixed signal length in both training and inference. However, since some signals are more complex than others, due to channel conditions, transmitter/receiver effects, etc., being able to dynamically utilize just enough of the received symbols to make a reliable decision allows for more efficient decision making in applications such as electronic warfare and dynamic spectrum sharing. Additionally, the operator may want to get to the quickest possible decision.
Recurrent neural networks have been shown to outperform other architectures when processing temporally correlated data, such as wireless communication signals. However, compared to other architectures such as CNNs, RNNs can suffer from drastically longer training and evaluation times due to their inherent sample-by-sample data processing. While traditional usage of both architectures typically assumes a fixed observation interval during both training and testing, the sample-by-sample processing capability of recurrent neural networks opens the door to "decoupling" these intervals. This is invaluable in real-world applications because it relaxes the typical requirement of a fixed time duration for the signals of interest. This work illustrates the benefits and considerations involved in "decoupling" these observation intervals for spectrum sensing applications. In particular, this work shows that, intuitively, recurrent neural networks can be leveraged to process less data (i.e. shorter observation intervals) for simpler inputs (less complicated signal types or channel conditions). Less intuitively, this work shows that the "decoupling" depends on appropriate training to avoid bias and ensure generalization. / Master of Science
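The "decoupling" idea, consuming only as many samples as a given input needs, can be sketched with a toy recurrent cell that emits a confidence at every sample and stops once that confidence crosses a threshold. The one-unit hand-set weights below are purely illustrative and untrained, chosen so that a strong signal triggers an early decision while a weak one runs the full observation window; nothing here comes from the thesis itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify_early_exit(x, w_h, w_x, w_out, threshold=0.95, min_steps=4):
    """Process a signal sample-by-sample with an Elman-style recurrent
    cell and stop as soon as the confidence for either class crosses
    `threshold`.  Returns (class probability, samples consumed)."""
    h = np.zeros(w_h.shape[0])
    p = 0.5
    for t, xt in enumerate(x, start=1):
        h = np.tanh(w_h @ h + w_x * xt)      # recurrent state update
        p = sigmoid(w_out @ h)               # per-sample confidence
        if t >= min_steps and max(p, 1 - p) >= threshold:
            return p, t                      # early exit: enough evidence
    return p, len(x)                         # used the full window

# Hand-set one-unit cell that integrates evidence (illustrative, untrained).
w_h, w_x, w_out = np.array([[0.95]]), np.array([0.5]), np.array([4.0])
easy = np.ones(64)          # high-amplitude, unambiguous signal
hard = 0.2 * np.ones(64)    # weak signal: evidence accumulates slowly
p_easy, steps_easy = classify_early_exit(easy, w_h, w_x, w_out)
p_hard, steps_hard = classify_early_exit(hard, w_h, w_x, w_out)
```

With these weights the strong signal exits as soon as `min_steps` allows, while the weak signal never reaches the threshold and consumes the whole window, mirroring the shorter-interval-for-simpler-input observation above.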
|
46 |
Greedy Inference Algorithms for Structured and Neural Models
Sun, Qing 18 January 2018 (has links)
A number of problems in Computer Vision, Natural Language Processing, and Machine Learning produce structured outputs in high-dimensional space, which makes searching for the globally optimal solution extremely expensive. Thus, greedy algorithms, which trade precision for efficiency, are widely used. Unfortunately, they generally lack theoretical guarantees.
In this thesis, we prove that greedy algorithms are effective and efficient for searching for multiple top-scoring hypotheses from structured (neural) models: 1) Entropy estimation. We aim to find deterministic samples that are representative of a Gibbs distribution via a greedy strategy. 2) Searching for a set of diverse and high-quality bounding boxes. We formulate this problem as the constrained maximization of a monotone submodular function, for which a greedy algorithm has a near-optimality guarantee. 3) Fill-in-the-blank. The goal is to generate missing words conditioned on context given an image. We extend Beam Search, a greedy algorithm applicable to unidirectional expansion, to bidirectional neural models in which both past and future information must be considered.
We test our proposed approaches on a series of Computer Vision and Natural Language Processing benchmarks and show that they are effective and efficient. / Ph. D. / Rapid progress has been made in Computer Vision (e.g., detecting what and where objects are shown in an image), Natural Language Processing (e.g., translating a sentence from English to Chinese), and Machine Learning (e.g., inference over graphical models). However, a number of problems produce structured outputs in high-dimensional space. For example, semantic segmentation requires predicting the labels (e.g., dog, cat, or person) of all super-pixels, so the search space is huge, say L<sup>n</sup>, where L is the number of object labels and n is the number of super-pixels. Thus, searching for the globally optimal solution is often intractable. Instead, we aim to prove that greedy algorithms that produce reasonable, e.g., near-optimal, solutions are effective and efficient. Three tasks are studied in this thesis: 1) Entropy estimation. We search for a finite number of semantic segmentations that are representative and diverse, so that we can approximate the entropy of the distribution over the output space obtained by applying an existing model to the image. 2) Searching for a set of diverse bounding boxes that are most likely to contain an object. We formulate this problem as an optimization problem for which a greedy algorithm has a theoretical guarantee. 3) Fill-in-the-blank. We generate missing words in blanks around which context is available. We tested our proposed approaches on a series of Computer Vision and Natural Language Processing benchmarks, e.g., MS COCO and PASCAL VOC, and show that they are indeed effective and efficient.
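Task 2 above relies on the classic greedy guarantee for monotone submodular maximization. A minimal self-contained illustration, with candidate boxes reduced to hypothetical sets of grid cells they cover, can be sketched as follows; the actual objective in the thesis (diversity plus quality scores) is richer than plain coverage.

```python
def greedy_coverage(candidates, k):
    """Greedy maximization of the monotone submodular coverage
    function f(S) = |union of cells covered by boxes in S|, subject
    to |S| <= k.  The classic result (Nemhauser et al.) guarantees
    f(S_greedy) >= (1 - 1/e) * f(S_opt) for such functions."""
    covered, chosen = set(), []
    for _ in range(k):
        best, best_gain = None, 0
        for i, cells in enumerate(candidates):
            if i in chosen:
                continue
            gain = len(cells - covered)      # marginal coverage gain
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:                     # no remaining box adds coverage
            break
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

# Boxes represented by the grid cells they cover (invented example).
boxes = [
    {1, 2, 3, 4},   # box 0
    {3, 4, 5},      # box 1: overlaps box 0 heavily
    {6, 7},         # box 2: disjoint from box 0
    {1, 2},         # box 3: subset of box 0
]
chosen, covered = greedy_coverage(boxes, k=2)
```

Greedy picks box 0 first (largest gain) and then box 2 rather than the higher-raw-coverage box 1, because marginal gain, not absolute size, drives selection; this is exactly what yields diverse picks.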
|
47 |
Reconfigurable neurons - making the most of configurable logic blocks (CLBs)
Ghani, A., See, Chan H., Migdadi, Hassan S.O., Asif, Rameez, Abd-Alhameed, Raed, Noras, James M. January 2015 (has links)
No / An area-efficient hardware architecture that maps fully parallel cortical columns onto Field Programmable Gate Arrays (FPGAs) is presented in this paper. To demonstrate the concept of this work, the proposed architecture is shown at the system level and benchmarked with image and speech recognition applications. The spatio-temporal nature of spiking neurons allows such architectures to be mapped onto FPGAs, in which communication can be performed through the use of spikes and signals can be represented in binary form. The process and viability of designing and implementing multiple recurrent neural reservoirs with a novel multiplier-less reconfigurable architecture are described.
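The flavor of a multiplier-less spiking-neuron update, the kind of arithmetic such CLB-based architectures favor, can be conveyed with shifts and adds only. The integer leaky integrate-and-fire sketch below is a generic illustration under that assumption, not the paper's actual CLB mapping; all constants are invented.

```python
def lif_step(v, spike_in, weight_shift=2, leak_shift=3, threshold=64):
    """One multiplier-less leaky integrate-and-fire update on integer
    state: leak via an arithmetic right shift (v -= v >> leak_shift),
    synaptic input via a power-of-two weight (1 << weight_shift).
    Returns (new membrane potential, output spike)."""
    v -= v >> leak_shift          # leak: v *= (1 - 2**-leak_shift), no multiplier
    if spike_in:
        v += 1 << weight_shift    # add synaptic weight (a power of two)
    if v >= threshold:            # fire and reset
        return 0, 1
    return v, 0

# Drive the neuron with a burst of input spikes and count output spikes.
v, out_spikes = 0, 0
for t in range(100):
    v, s = lif_step(v, spike_in=(t < 80), weight_shift=3)
    out_spikes += s
```

Because every operation is a shift, add, or compare, one such neuron fits naturally into a handful of configurable logic blocks, which is the resource argument the paper makes at the architecture level.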
|
48 |
Exploration and Evaluation of RNN Models on Low-Resource Embedded Devices for Human Activity Recognition / Undersökning och utvärdering av RNN-modeller på resurssvaga inbyggda system för mänsklig aktivitetsigenkänning
Björnsson, Helgi Hrafn, Kaldal, Jón January 2023 (has links)
Human activity data is typically represented as time series data, and RNNs, often with LSTM cells, are commonly used for recognition in this field. However, RNNs and LSTM-RNNs are often too resource-intensive for real-time applications on resource-constrained devices, making them unsuitable. This thesis project was carried out at Wrlds AB, Stockholm. At Wrlds, all machine learning runs in the cloud, but the company has been attempting to run its AI algorithms on its embedded devices. The main task of this project was to investigate alternative network structures that minimize the size of the networks used on human activity data. This thesis investigates the use of FastGRNN, a deep learning algorithm developed by Microsoft researchers, to classify human activity on resource-constrained devices. The FastGRNN algorithm was compared to state-of-the-art RNNs, LSTM, GRU, and simple RNN, in terms of accuracy, classification time, memory usage, and energy consumption. This research is limited to implementing the FastGRNN algorithm on Nordic SoCs using their SDK and TensorFlow Lite Micro. The results of this thesis show that the proposed network has performance similar to LSTM networks in terms of accuracy while being both considerably smaller and faster, making it a promising solution for human activity recognition on embedded devices with limited computational resources that merits further investigation. / Human activity recognition data is typically represented as time series, where an RNN model with an LSTM architecture is usually the obvious choice. However, this architecture is very resource-intensive for real-time applications, which creates problems on resource-constrained hardware. This thesis project was carried out in collaboration with Wrlds Technologies AB. At Wrlds, machine learning models run in the cloud and locally on mobile phones. Wrlds has now begun an effort to run models directly on small embedded systems.
The thesis evaluates FastGRNN, a neural network architecture developed by Microsoft for use on resource-constrained hardware. The FastGRNN algorithm was compared with established architectures, LSTM, GRU, and a simple RNN, in terms of accuracy, classification time, memory usage, and energy consumption. The work only evaluates the FastGRNN algorithm on Nordic SoCs, using their SDK and TensorFlow Lite Micro. The results of this thesis show that the evaluated network has performance similar to an LSTM network while being considerably smaller in size and therefore faster, meaning that FastGRNN shows promising results for human activity recognition on embedded systems with limited computational capacity.
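For reference, the FastGRNN cell itself (Kusupati et al., 2018) is compact enough to sketch in a few lines of numpy: the gate and candidate share the same W and U, so the cell adds only two bias vectors and two scalars on top of a vanilla RNN, which is where the size savings come from. The weights below are random placeholders, not a trained activity-recognition model, and the embedded C/TensorFlow Lite Micro implementation used in the thesis is not shown.

```python
import numpy as np

def fastgrnn_cell(x_t, h_prev, W, U, b_z, b_h, zeta=1.0, nu=1e-4):
    """One FastGRNN step.  W and U are shared between the update gate
    and the candidate state; zeta and nu are scalar (trainable in the
    original formulation) parameters."""
    pre = W @ x_t + U @ h_prev                   # shared pre-activation
    z = 1.0 / (1.0 + np.exp(-(pre + b_z)))       # update gate
    h_tilde = np.tanh(pre + b_h)                 # candidate state
    return (zeta * (1.0 - z) + nu) * h_tilde + z * h_prev

rng = np.random.default_rng(0)
d, hdim = 3, 5                                   # e.g. 3-axis accelerometer input
W = rng.normal(scale=0.3, size=(hdim, d))
U = rng.normal(scale=0.3, size=(hdim, hdim))
b_z, b_h = np.zeros(hdim), np.zeros(hdim)

h = np.zeros(hdim)
for x_t in rng.normal(size=(20, d)):             # run over a short sensor window
    h = fastgrnn_cell(x_t, h, W, U, b_z, b_h)
```

Compared with an LSTM's four gate matrices, this single shared pair of matrices is what makes the cell attractive on memory-limited SoCs.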
|
49 |
Tracking a ball during bounce and roll using recurrent neural networks / Följning av en boll under studs och rull med hjälp av återkopplande neurala nätverk
Rosell, Felicia January 2018 (has links)
In many types of sports, on-screen graphics, such as a reconstructed ball trajectory, can be displayed for spectators or players in order to increase understanding. One sub-problem of trajectory reconstruction is tracking of ball positions, which is a difficult problem due to the fast and often complex ball movement. Historically, physics-based techniques have been used to track ball positions, but this thesis investigates using a recurrent neural network design in the application of tracking bouncing golf balls. The network is trained and tested on synthetically created golf ball shots, created to imitate balls shot out from a golf driving range. It is found that the trained network succeeds in tracking golf balls during bounce and roll, with an error rate of under 11 %. / On-screen graphics, such as a reconstructed ball trajectory, can be used in many types of sports to increase a spectator's or player's understanding. To reconstruct ball trajectories, one must first solve the sub-problem of tracking a ball's positions. Tracking ball positions is a difficult problem because of the fast and often complex ball movement. Previously, physics-based techniques have been used to follow ball positions, but this thesis examines a method based on recurrent neural networks for tracking the trajectory of a bouncing golf ball. The network is trained and tested on synthetically created golf shots, with trajectories created to imitate shots from a driving range. After training, the network succeeded in following golf balls during bounce and roll with an error of under 11 %.
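The tracking setup, folding each observed ball position into a recurrent state and reading out a prediction of the next position, can be sketched with a hand-set linear recurrence that implements constant-velocity extrapolation. The thesis's trained RNN learns richer bounce-and-roll dynamics from data, so this is only a structural illustration with invented weights.

```python
import numpy as np

def track_step(h, obs, A, B, C):
    """One recurrent step: fold the new observed 2-D ball position
    into the hidden state, then read out a prediction of the next
    position as C @ h."""
    h = A @ h + B @ obs
    return h, C @ h

I2, Z2 = np.eye(2), np.zeros((2, 2))
A = np.block([[Z2, I2], [Z2, Z2]])   # shift: keep the more recent stored position
B = np.vstack([Z2, I2])              # write the new observation into the state
C = np.hstack([-I2, 2 * I2])         # predict: 2*p_t - p_{t-1} (constant velocity)

# A straight-line trajectory; a real shot would bounce and decelerate.
positions = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
h = np.concatenate([positions[0], positions[0]])  # seed with the first point
for obs in positions[1:]:
    h, pred = track_step(h, obs, A, B, C)
# pred now extrapolates one step past the last observation
```

An LSTM replaces the fixed A, B, C with learned nonlinear updates, which is what lets it handle the velocity reversals at each bounce.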
|
50 |
Predicting customer purchase behavior within Telecom : How Artificial Intelligence can be collaborated into marketing efforts / Förutspå köpbeteenden inom telekom : Hur Artificiell Intelligens kan användas i marknadsföringsaktiviteter
Forslund, John, Fahlén, Jesper January 2020 (has links)
This study aims to investigate the implementation of an AI model that predicts customer purchases in the telecom industry. The thesis also outlines how such an AI model can assist decision-making in marketing strategies. It is concluded that designing the AI model with a Recurrent Neural Network (RNN) architecture with a Long Short-Term Memory (LSTM) layer allows for a successful implementation with satisfactory model performance. Stepwise instructions for constructing such a model are presented in the methodology section of the study. The RNN-LSTM model further serves as an assisting tool for marketers to assess, in a quantitative way, how a consumer's website behavior affects their purchase behavior over time, by observing what the authors refer to as the Customer Purchase Propensity Journey (CPPJ). The firm empirical basis of the CPPJ can help organizations improve their allocation of marketing resources, as well as benefit the organization's online presence by allowing for personalization of the customer experience. / This study investigates the implementation of an AI model that predicts customer purchases in the telecom industry. It also aims to show how such an AI model can support decision-making in marketing strategies. By designing the AI model with a Recurrent Neural Network (RNN) architecture with a Long Short-Term Memory (LSTM) layer, the study concludes that such a design enables a successful implementation with satisfactory model performance. Step-by-step instructions for constructing the model are provided in the methodology section of the study. The RNN-LSTM model can advantageously be used as an assisting tool for marketers to assess, in a quantitative way, how a customer's behavior pattern on a website affects their purchase behavior over time, by observing the framework the authors call the Customer Purchase Propensity Journey (CPPJ). The empirical basis of the CPPJ can help organizations improve the allocation of marketing resources, and benefit their digital presence by enabling more relevant personalization of the customer experience.
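The CPPJ idea, a per-time-step purchase propensity read off a recurrent state as website behavior accumulates, can be illustrated with a toy decayed-evidence accumulator. The feature set, weights, and decay below are invented for illustration and are not the study's trained RNN-LSTM; an LSTM layer would learn this temporal weighting from data instead.

```python
import numpy as np

def propensity_journey(events, w, decay=0.8):
    """Toy purchase-propensity curve over one customer's session.
    `events` is a (T, F) matrix of behavioral features per time step
    (e.g. page views, cart adds); the recurrent state is an
    exponentially decayed evidence accumulator, and each step emits
    sigmoid(w . state), i.e. the propensity at that moment."""
    h = np.zeros(events.shape[1])
    curve = []
    for e in events:
        h = decay * h + e                       # recurrent evidence update
        curve.append(1.0 / (1.0 + np.exp(-w @ h)))
    return np.array(curve)

# Features: [page_view, add_to_cart]; cart adds weigh more (assumed).
w = np.array([0.2, 1.5])
session = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
curve = propensity_journey(session, w)
# The propensity curve rises as stronger purchase signals accumulate,
# which is the journey a marketer would inspect under the CPPJ framing.
```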
|