1 |
Aplicação de Deep Learning em dados refinados para Mineração de Opiniões / Application of Deep Learning to Refined Data for Opinion Mining
Jost, Ingo 26 February 2015 (has links)
Deep Learning is a sub-area of Machine Learning that has achieved satisfactory results in several application areas and is implemented by different algorithms, such as Stacked Auto-encoders or Deep Belief Networks. This work proposes a model that applies a classifier based on Deep Learning techniques to Opinion Mining, an area under constant study because corporations need to understand how customers perceive their products and services. The growth of Opinion Mining is also favored by the collaborative Web 2.0 environment, in which many tools allow users to express opinions. The data used in the experiments were refined in the preprocessing step so that Deep Learning, one of whose main tasks is feature selection, could be applied to refined rather than raw data. This refinement strategy, combined with the promising technology of Deep Learning, produced results competitive with related studies and opens perspectives for extending this work. Work supported by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior).
|
2 |
Automated Prediction of Solar Flares Using SDO Data. The Development of An Automated Computer System for Predicting Solar Flares Based on SDO Satellite Data Using HMI Images Analysis, Visualisation, and Deep Learning Technologies
Abed, Ali K. January 2021 (has links)
Nowadays, space weather has become an international concern because of its potentially catastrophic effects on space-borne and ground-based systems and on industry, and hence on our daily lives. One of the main solar activities considered a major driver of space weather is the solar flare, an enormous eruption in the Sun's atmosphere that occurs when magnetic energy stored in twisted magnetic fields, usually near sunspots, is suddenly released. Flare occurrence is still not fully understood. Flares can affect the Earth through the release of massive quantities of charged particles and electromagnetic radiation. Investigating the associations between solar flares and sunspot groups helps clarify the possible cause-and-effect relationships between flares and sunspot features. This thesis proposes a new approach, developed by integrating advances in image processing, machine learning, and deep learning with advances in solar physics, to extract valuable knowledge from historical solar data related to sunspot regions and flares.
This dissertation aims to achieve the following:
1) We developed a new prediction algorithm based on the Automated Solar Activity Prediction (ASAP) system. The proposed algorithm updates ASAP by extending the training process and optimizing the learning rules to improve performance. Two neural networks are used in the proposed approach: the first predicts whether a specific sunspot class at a particular time is likely to produce a significant flare, and the second predicts the type of that flare, X- or M-class (a simplified sketch of this two-stage cascade is given after the abstract).
2) We proposed a new system, ASAP_Deep, built on top of the ASAP system introduced in [6] and extended with an updated deep-learning-based prediction capability. We successfully apply a Convolutional Neural Network (CNN) to the sunspot-group images without any pre-processing or feature extraction. Our system's results are considerably better, especially for the false alarm ratio (FAR), which reduces the losses caused by the protection measures companies apply. The proposed system also achieves relatively high True Skill Statistic (TSS) and Heidke Skill Score (HSS) values.
3) We presented a novel system that uses Deep Belief Networks (DBNs) to predict solar flare occurrence. The input data are SDO/HMI Intensitygram and Magnetogram images, and the model outputs a binary "Flare or No-Flare" decision for significant (M- and X-class) flares. In addition, we created a dataset of sunspot groups extracted from SDO/HMI Intensitygram images. We compared the results of the complete proposed system with those of three previous flare-forecast models using several statistical metrics.
In our view, these methods and results represent an excellent initial step toward enhancing the accuracy of flare forecasting, improving our understanding of flare occurrence, and developing efficient flare prediction systems. The systems, their implementation, the results, and future work are described in this dissertation.
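The two-stage cascade in item 1 can be illustrated with a small, hedged sketch: one classifier flags sunspot regions likely to flare, and a second classifier, trained only on flaring regions, separates M- from X-class events. The feature vectors, network sizes, and data below are synthetic stand-ins, not the actual ASAP features or training set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for sunspot-group features (e.g. McIntosh class encoding,
# area, magnetic complexity); the real ASAP feature set is not reproduced here.
n_samples, n_features = 2000, 6
X = rng.normal(size=(n_samples, n_features))
will_flare = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)      # stage-1 label
flare_class = (X[:, 2] > 0.5).astype(int)                     # 0 = M, 1 = X

# Stage 1: flare / no-flare.
stage1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
stage1.fit(X, will_flare)

# Stage 2: trained only on flaring regions, predicts M- vs X-class.
flaring = will_flare == 1
stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
stage2.fit(X[flaring], flare_class[flaring])

# At prediction time, stage 2 is consulted only where stage 1 predicts a flare.
pred_flare = stage1.predict(X)
pred_class = np.where(pred_flare == 1, stage2.predict(X), -1)  # -1 = no flare
print((pred_class >= 0).sum(), "regions predicted to flare")
```

Cascading keeps the rare X-versus-M decision separate from the much more common no-flare case, which mirrors the two-network structure described above.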
|
3 |
Application of Convolutional Deep Belief Networks to Domain Adaptation
Liu, Ye 09 September 2014 (has links)
No description available.
|
4 |
Hardware Implementation and Applications of Deep Belief Networks
Imbulgoda Liyangahawatte, Gihan Janith Mendis January 2016 (has links)
No description available.
|
5 |
Machine Learning Methods for Articulatory Data
Berry, Jeffrey James January 2012 (has links)
Humans make use of more than just the audio signal to perceive speech. Behavioral and neurological research has shown that a person's knowledge of how speech is produced influences what is perceived. With methods for collecting articulatory data becoming more ubiquitous, methods for extracting useful information are needed to make these data useful to speech scientists and for speech technology applications. This dissertation presents feature extraction methods for ultrasound images of the tongue and for data collected with an Electro-Magnetic Articulograph (EMA). The usefulness of these features is tested in several phoneme classification tasks.

Feature extraction methods for ultrasound tongue images presented here consist of automatically tracing the tongue surface contour using a modified Deep Belief Network (DBN) (Hinton et al. 2006), and of methods inspired by research in face recognition that use the entire image. The tongue tracing method consists of training a DBN as an autoencoder on concatenated images and traces, and then retraining the first two layers to accept only the image at runtime. This 'translational' DBN (tDBN) method is shown to produce traces comparable to those made by human experts. An iterative bootstrapping procedure is presented for using the tDBN to assist a human expert in labeling a new data set. Tongue contour traces are compared with the Eigentongues method of Hueber et al. (2007) and with a Gabor Jet representation in a 6-class phoneme classification task using Support Vector Classifiers (SVC), with Gabor Jets performing best. These SVC methods are compared to a tDBN classifier, which extracts features from raw images and classifies them with accuracy only slightly lower than the Gabor Jet SVC method.

For EMA data, supervised binary SVC feature detectors are trained for each feature in three versions of Distinctive Feature Theory (DFT): Preliminaries (Jakobson et al. 1954), The Sound Pattern of English (Chomsky and Halle 1968), and Unified Feature Theory (Clements and Hume 1995). Each of these feature sets, together with a fourth, unsupervised feature set learned using Independent Components Analysis (ICA), is compared on its usefulness in a 46-class phoneme recognition task. Phoneme recognition is performed using a linear-chain Conditional Random Field (CRF) (Lafferty et al. 2001), which takes advantage of the temporal nature of speech by looking at observations adjacent in time. Results of the phoneme recognition task show that Unified Feature Theory performs slightly better than the other versions of DFT. Surprisingly, ICA actually performs worse than running the CRF on raw EMA data.
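As a rough, hedged illustration of the 'translational' idea above, the sketch below trains a single-hidden-layer autoencoder (plain numpy, not a layer-wise-pretrained DBN) to reconstruct the concatenated [image, trace] vector from an input whose trace portion is zeroed out; at runtime the predicted contour is read off the trace part of the reconstruction of an image alone. The dimensions and synthetic data are assumptions for illustration only, and the thesis's layer-wise pretraining and retraining steps are collapsed into one denoising-style objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: flattened ultrasound image and tongue-trace vector.
n_img, n_trace, n_hidden, n_samples = 256, 32, 64, 500

# Synthetic stand-ins for real data: images and traces that share structure.
latent = rng.normal(size=(n_samples, 8))
images = np.tanh(latent @ rng.normal(size=(8, n_img)))
traces = np.tanh(latent @ rng.normal(size=(8, n_trace)))

X = np.hstack([images, traces])                                # full target
X_run = np.hstack([images, np.zeros((n_samples, n_trace))])    # image-only input

# One-hidden-layer autoencoder trained to reconstruct the full [image, trace]
# vector from the image-only input (mean-squared-error loss, plain gradient descent).
W1 = rng.normal(scale=0.1, size=(n_img + n_trace, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_img + n_trace));  b2 = np.zeros(n_img + n_trace)
lr = 0.01

for epoch in range(200):
    H = np.tanh(X_run @ W1 + b1)        # encode from image-only input
    R = H @ W2 + b2                     # reconstruct image + trace
    err = R - X                         # target is the full vector
    dW2 = H.T @ err / n_samples
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)    # backprop through tanh
    dW1 = X_run.T @ dH / n_samples
    db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    if epoch % 50 == 0:
        print(epoch, "trace reconstruction MSE:", float((err[:, n_img:] ** 2).mean()))

# At runtime, the trace portion of the reconstruction is the predicted contour.
predicted_traces = (np.tanh(X_run @ W1 + b1) @ W2 + b2)[:, n_img:]
print(predicted_traces.shape)
```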
|
6 |
Deep neural networks and their implementation
Vojt, Ján January 2016 (has links)
Deep neural networks represent an effective and universal model capable of solving a wide variety of tasks. This thesis is focused on three different types of deep neural networks: the multilayer perceptron, the convolutional neural network, and the deep belief network. All of the discussed network models are implemented on parallel hardware and thoroughly tested for various choices of the network architecture and its parameters. The implemented system is accompanied by detailed documentation of the architectural decisions and proposed optimizations. The efficiency of the implemented framework is confirmed by the results of the performed tests. A significant part of this thesis is also devoted to testing other existing frameworks that support deep neural networks. This comparison indicates that our implementations of multilayer perceptrons and convolutional neural networks outperform the tested rival frameworks. The deep belief network implementation performs slightly better for RBM layers with up to 1000 hidden neurons, but noticeably worse than the tested rival framework for larger RBM layers.
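A rough, hedged sketch of the kind of layer-size comparison described above: timing CPU training of a single RBM layer for several hidden-layer sizes with scikit-learn's BernoulliRBM. This is an illustrative stand-in, not the thesis's parallel implementation or its benchmark protocol; the data and sizes are assumed.

```python
import time
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
# Synthetic binary training data standing in for e.g. binarized image patches.
X = (rng.random((2000, 784)) > 0.5).astype(np.float64)

# Train one RBM layer for several hidden-layer sizes and record wall-clock time.
for n_hidden in (250, 500, 1000, 2000):
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       batch_size=64, n_iter=5, random_state=0)
    start = time.perf_counter()
    rbm.fit(X)
    elapsed = time.perf_counter() - start
    print(f"{n_hidden:5d} hidden units: {elapsed:6.2f} s")
```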
|
7 |
Deep neural networks and their application for image data processing
Golovizin, Andrey January 2016 (has links)
In the area of image recognition, the so-called deep neural networks are currently among the most promising models. They often achieve considerably better results than traditional techniques, even without excessive task-oriented preprocessing. This thesis is devoted to the study and analysis of three basic variants of deep neural networks: the neocognitron, convolutional neural networks, and deep belief networks. Based on extensive testing of the described models on the standard task of handwritten digit recognition, convolutional neural networks seem to be the most suitable for the recognition of general image data. Therefore, we have also used them to classify images from two very large data sets, CIFAR-10 and ImageNet. In order to optimize the architecture of the applied networks, we have proposed a new pruning algorithm based on Principal Component Analysis.
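The abstract does not detail the proposed PCA-based pruning algorithm, so the sketch below shows one plausible, assumed reading only: apply PCA to a hidden layer's activations and keep roughly as many units as are needed to explain a chosen fraction of the activation variance. All data, names, and thresholds here are illustrative assumptions, not the thesis's method.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical hidden-layer activations collected over a validation set:
# 5000 examples, 512 hidden units, but with an effective rank of only ~40.
latent = rng.normal(size=(5000, 40))
activations = np.maximum(latent @ rng.normal(size=(40, 512)), 0.0)  # ReLU-like

# Estimate how many units the layer "really" needs: the number of principal
# components required to explain 99% of the activation variance.
pca = PCA().fit(activations)
cumulative = np.cumsum(pca.explained_variance_ratio_)
needed = int(np.searchsorted(cumulative, 0.99)) + 1
print(f"keep ~{needed} of 512 units")   # suggests pruning the layer to ~needed
```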
|
8 |
Robot semantic place recognition based on deep belief networks and a direct use of tiny images
Hasasneh, Ahmad 23 November 2012 (has links) (PDF)
Usually, human beings are able to quickly distinguish between different places solely from their visual appearance. This is because they organize their space into discrete units. These units, called "semantic places", are characterized by their spatial extent and their functional unity. Such a semantic category can thus be used as contextual information that fosters object detection and recognition. Recent work in semantic place recognition seeks to endow robots with similar capabilities. Contrary to classical localization and mapping work, this problem is usually addressed as a supervised learning problem. The question of semantic place recognition in robotics, i.e., the ability to recognize the semantic category of the place to which a scene belongs, is therefore a major requirement for the future of autonomous robotics. An autonomous service robot must indeed be able to recognize the environment in which it lives and to easily learn the organization of this environment in order to operate and interact successfully. To achieve that goal, different methods have already been proposed, some based on the identification of objects as a prerequisite to the recognition of scenes, and some based on a direct description of scene characteristics. If we hypothesize that objects are more easily recognized when the scene in which they appear has been identified, the second approach seems more suitable. It is, however, strongly dependent on the nature of the image descriptors used, which are usually derived empirically from general considerations on image coding.

Compared to these many proposals, another, more theoretically grounded approach to image coding has emerged in the last few years. Energy-based models of feature extraction, which minimize an energy function that reflects the quality of the image reconstruction, have led to Restricted Boltzmann Machines (RBMs), which can code an image as the superposition of a limited number of features taken from a larger alphabet. It has also been shown that this process can be repeated in a deep architecture, leading to a sparse and efficient representation of the initial data in the feature space. A complex classification problem in the input space is thus transformed into an easier one in the feature space. This approach has been successfully applied to the identification of tiny images from the 80-million-image database of MIT.

In the present work, we demonstrate that semantic place recognition can be achieved on the basis of tiny images instead of conventional Bag-of-Words (BoW) methods, using Deep Belief Networks (DBNs) for image coding. We show that, after appropriate coding, a softmax regression in the projection space is sufficient to achieve promising classification results. To our knowledge, this approach has not yet been investigated for scene recognition in autonomous robotics. We compare our methods with state-of-the-art algorithms using a standard robot localization database. We study the influence of system parameters and compare different conditions on the same dataset. These experiments show that our proposed model, while being very simple, leads to state-of-the-art results on a semantic place recognition task.
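A minimal, hedged sketch of the pipeline described above: greedy, layer-wise RBM feature learning on tiny images followed by a softmax classifier over place categories. The images, labels, and layer sizes are synthetic stand-ins, not the thesis's dataset or exact architecture.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Synthetic stand-ins for tiny images (e.g. 32x32 grayscale, flattened and
# scaled to [0, 1]) and their semantic place labels (kitchen, corridor, ...).
n_samples, n_pixels, n_places = 1200, 32 * 32, 5
X = rng.random((n_samples, n_pixels))
y = rng.integers(0, n_places, size=n_samples)

# Two stacked RBMs play the role of the DBN feature extractor; a multinomial
# logistic regression (softmax) classifies places in the learned feature space.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=10,
                          random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10,
                          random_state=0)),
    ("softmax", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

With random data the accuracy is only at chance level; the point is the structure: unsupervised, layer-wise RBM coding followed by a simple softmax regression, as the abstract describes.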
|
9 |
DRESS & GO: Deep belief networks and Rule Extraction Supported by Simple Genetic Optimization
Švaralová, Monika January 2018 (has links)
Recent developments in social media and web technologies offer new opportunities to access, analyze and process ever-increasing amounts of fashion-related data. In the appealing context of design and fashion, our main goal is to automatically suggest fashionable outfits based on preferences extracted from real-world data, provided either by individual users or gathered from the internet. In our case, the clothing items have the form of 2D images. Especially for visual data processing tasks, recent deep neural network models are known to surpass human performance. This fact inspired us to apply the idea of transfer learning to understand the actual variability in clothing items. The principle of transfer learning consists in extracting the internal representations formed in large convolutional networks pre-trained on general datasets, e.g., ImageNet, and visualizing their (similarity) structure. Together with transfer learning, clustering algorithms and image color schemes can be utilized when searching for related outfit items. Viable means of generating new outfits include deep belief networks and genetic algorithms enhanced by a convolutional network that models outfit fitness. Although fashion-related recommendations remain highly subjective, the results we have achieved...
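A hedged sketch of the retrieval step mentioned above: cluster clothing items in a transferred feature space and suggest related items from the query's cluster. The feature vectors here are random stand-ins for activations extracted from a pre-trained convolutional network; the thesis's actual transfer-learning pipeline, DBN, and genetic optimization are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Stand-ins for clothing-item features extracted from a pre-trained CNN
# (e.g. an ImageNet model); real activations would be used in practice.
n_items, n_features = 500, 128
item_features = rng.normal(size=(n_items, n_features))

# Group items into visually similar clusters.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(item_features)

# To suggest items related to a query garment, look within its cluster and
# rank the members by cosine similarity in the transferred feature space.
query = 42
members = np.flatnonzero(kmeans.labels_ == kmeans.labels_[query])
sims = cosine_similarity(item_features[query:query + 1],
                         item_features[members]).ravel()
related = members[np.argsort(-sims)][1:6]   # top 5, skipping the query itself
print("items related to", query, ":", related.tolist())
```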
|
10 |
Automated sleep scoring using unsupervised learning of meta-features / Automatiserad sömnmätning med användning av oövervakad inlärning av meta-särdrag
Olsson, Sebastian January 2016 (has links)
Sleep is an important part of life, as it affects one's performance during all waking hours. The study of sleep and wakefulness is therefore of great interest, particularly to the clinical and medical fields where sleep disorders are diagnosed. When studying sleep, it is common to talk about different types, or stages, of sleep. A common task in sleep research is to determine the sleep stage of the sleeping subject as a function of time; this process is known as sleep stage scoring. In this study, I seek to determine whether there is any benefit to using unsupervised feature learning in the context of electroencephalogram-based (EEG) sleep scoring. More specifically, the effect of generating and making use of new feature representations for hand-crafted features of sleep data (meta-features) is studied. For this purpose, two scoring algorithms have been implemented and compared. Both scoring algorithms involve segmentation of the EEG signal, feature extraction, feature selection and classification using a support vector machine (SVM). Unsupervised feature learning was implemented in the form of a dimensionality-reducing deep belief network (DBN) through which the feature space was processed. Both scorers were shown to have a classification accuracy of about 76%. The application of unsupervised feature learning did not affect the accuracy significantly. It is speculated that, with a better choice of DBN parameters in future work, the accuracy may improve significantly.
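A hedged sketch of the two scoring pipelines compared above: hand-crafted epoch features go either straight into feature selection and an SVM, or first through an unsupervised layer (here a single RBM standing in for the dimensionality-reducing DBN) that produces meta-features. The data, feature counts, and parameters are assumptions, not the study's recordings or configuration.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import BernoulliRBM
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-ins for hand-crafted features of 30-second EEG epochs (spectral band
# powers, entropy measures, ...) and their sleep-stage labels (W, N1, N2, N3, REM).
n_epochs, n_features, n_stages = 1000, 40, 5
X = rng.normal(size=(n_epochs, n_features))
y = rng.integers(0, n_stages, size=n_epochs)

# Baseline scorer: feature selection followed by an SVM classifier.
baseline = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),
    ("svm", SVC(kernel="rbf", C=1.0)),
])

# Meta-feature scorer: an RBM re-represents the hand-crafted features
# before the SVM.
meta = Pipeline([
    ("scale", MinMaxScaler()),               # RBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=15, learning_rate=0.05, n_iter=20,
                         random_state=0)),
    ("svm", SVC(kernel="rbf", C=1.0)),
])

for name, model in [("baseline", baseline), ("meta-features", meta)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:14s} accuracy: {acc:.2f}")
```

On real data this comparison would mirror the study's question: whether the meta-feature representation changes the SVM's accuracy relative to the hand-crafted features alone.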
|