About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Multimodal Deep Learning for Multi-Label Classification and Ranking Problems

Dubey, Abhishek January 2015 (has links) (PDF)
In recent years, deep neural network models have been shown to outperform many state-of-the-art algorithms. The reason is that unsupervised pretraining of multi-layered deep neural networks learns better features, which in turn improves many supervised tasks. These models not only automate the feature extraction process but also provide robust features for various machine learning tasks. However, unsupervised pretraining and feature extraction with multi-layered networks are restricted to the input features and do not extend to the output. The performance of many supervised learning algorithms (or models) depends on how well the output dependencies are handled by these algorithms [Dembczyński et al., 2012]. Adapting standard neural networks to handle these output dependencies for a specific type of problem has been an active area of research [Zhang and Zhou, 2006, Ribeiro et al., 2012]. On the other hand, inference on multimodal data is considered a difficult problem in machine learning, and recently ‘deep multimodal neural networks’ have shown significant results [Ngiam et al., 2011, Srivastava and Salakhutdinov, 2012]. Several problems, such as classification with complete or missing modality data and generating the missing modality, have been shown to perform very well with these models. In this work, we consider three nontrivial supervised learning tasks: (i) multi-class classification (MCC), (ii) multi-label classification (MLC) and (iii) label ranking (LR), listed in order of increasing output complexity. While multi-class classification deals with predicting one class for every instance, multi-label classification deals with predicting more than one class for every instance, and label ranking deals with assigning a rank to each label for every instance. Most work in this field centres on formulating new error functions that force the network to capture the output dependencies. The aim of our work is to adapt neural networks so that feature extraction for the output (and its dependencies) is handled implicitly in the network structure, removing the need for hand-crafted error functions. We show that multimodal deep architectures can be adapted to these types of problems (or data) by considering the labels as one of the modalities. This also brings unsupervised pretraining to the output along with the input. We show that these models not only outperform standard deep neural networks, but also outperform standard adaptations of neural networks to the individual domains under various metrics over the several data sets we consider. We observe that the advantage of our models over the others grows as the complexity of the output/problem increases.
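The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the general idea it describes: treating the label vector as a second modality in a shared multimodal network, so that representation learning covers the output as well as the input. The architecture, layer sizes, and losses are illustrative assumptions, not the author's actual model.

```python
# Hypothetical sketch: labels treated as a second modality in a shared
# multimodal network (sizes and losses are illustrative assumptions).
import torch
import torch.nn as nn

class LabelsAsModalityNet(nn.Module):
    def __init__(self, n_features, n_labels, hidden=128, joint=64):
        super().__init__()
        self.enc_x = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.enc_y = nn.Sequential(nn.Linear(n_labels, hidden), nn.ReLU())
        self.joint = nn.Sequential(nn.Linear(2 * hidden, joint), nn.ReLU())
        self.dec_x = nn.Linear(joint, n_features)   # reconstruct the inputs
        self.dec_y = nn.Linear(joint, n_labels)     # reconstruct / predict labels

    def forward(self, x, y=None):
        hx = self.enc_x(x)
        # When the label modality is missing (e.g. at test time), use zeros.
        hy = self.enc_y(y) if y is not None else torch.zeros_like(hx)
        z = self.joint(torch.cat([hx, hy], dim=1))
        return self.dec_x(z), self.dec_y(z)

# Toy usage: joint reconstruction loss over both modalities.
model = LabelsAsModalityNet(n_features=20, n_labels=5)
x = torch.randn(8, 20)
y = torch.randint(0, 2, (8, 5)).float()
x_hat, y_logits = model(x, y)
loss = nn.MSELoss()(x_hat, x) + nn.BCEWithLogitsLoss()(y_logits, y)
loss.backward()
```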
2

Forexový automatický obchodní systém založený na neuronových sítích / Forex automated trading system based on neural networks

Kačer, Petr January 2015 (has links)
The main goal of this thesis is to create a forex automated trading system that allows trading strategies to be added as modules, and to implement a trading strategy module based on neural networks. The created trading system consists of a client part for the MetaTrader 4 trading platform and a server GUI application. Trading strategy modules are implemented as dynamic libraries. The proposed trading strategy uses multilayer neural networks to predict the direction of a 45-minute moving average of close prices over a one-hour time horizon. The neural networks were able to find a relationship between the inputs and the output and to predict a drop or growth with a success rate higher than 50 %. In live demo trading, the strategy was profitable for the currency pair EUR/USD but losing for the currency pair GBP/USD. In tests on historical data from 2014, the strategy was profitable for EUR/USD when trading in the direction of the long-term trend. When trading against the trend for EUR/USD, and when trading both with and against the trend for GBP/USD, the strategy was losing.
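The abstract does not reproduce the strategy's code (the real system runs as a MetaTrader 4 client plus a server application). As a rough, hypothetical illustration of the prediction step it describes, the Python sketch below trains a small multilayer network to predict whether a 45-minute moving average of close prices rises or falls one hour ahead; the synthetic price series, lag features, and network sizes are assumptions.

```python
# Illustrative sketch (not the thesis code): predict the direction of a
# 45-minute moving average of close prices one hour (60 minutes) ahead.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
close = np.cumsum(rng.normal(0, 1e-4, 20_000)) + 1.10    # synthetic 1-minute closes

ma = np.convolve(close, np.ones(45) / 45, mode="valid")  # 45-minute moving average

horizon, n_lags = 60, 30
X, y = [], []
for t in range(n_lags, len(ma) - horizon):
    X.append(ma[t - n_lags:t])                 # recent moving-average values
    y.append(int(ma[t + horizon] > ma[t]))     # 1 = growth, 0 = drop
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
clf.fit(X[:split], y[:split])
print("directional accuracy:", clf.score(X[split:], y[split:]))
```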
3

Development of Neural Networks Using Deterministic Transforms

Grau Jurado, Pol January 2021 (has links)
Deep neural networks have been a leading research topic within the machine learning field for the past few years. The introduction of graphical processing units (GPUs) and other hardware advances made the training of deep neural networks possible; previously, training was infeasible because of the huge number of training samples required. The newly introduced architectures have outperformed classical methods on various classification and regression problems. With the introduction of 5G technology and its low-latency, online applications, research on decreasing the computational cost of deep learning architectures while maintaining state-of-the-art performance has gained considerable interest. This thesis focuses on the Self Size-estimating Feedforward Network (SSFN), a feedforward multilayer network. SSFN has a low-complexity training procedure because a random matrix instance is used in its weights: its weight matrices are trained using a layer-wise convex optimization approach (supervised training) combined with a random matrix instance (unsupervised training). The use of deterministic transforms to replace the random matrix instances in the SSFN weight matrices is explored. Deterministic transforms automatically reduce the computational complexity, as their structure allows them to be computed by fast algorithms. Several deterministic transforms, such as the discrete cosine transform, the Hadamard transform and the wavelet transform, among others, are investigated. To this end, two methods based on statistical parameters of the features are developed and applied at each layer to decide which deterministic transform to use. The effectiveness of the proposed approach is illustrated with SSFN on object classification tasks using several benchmark datasets. The results show performance similar to the original SSFN and consistency across the different datasets, demonstrating the possibility of introducing deterministic transforms in machine learning research.
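The summary does not spell out the layer-wise selection rules, so the sketch below is only a hypothetical Python illustration of the general idea: comparing fast deterministic transforms (here the DCT and the Hadamard transform) on a layer's features using a simple statistic, such as energy compaction, and keeping the better one. The criterion, sizes, and data are assumptions, not the two methods developed in the thesis.

```python
# Hypothetical sketch: pick a deterministic transform for a layer by an
# energy-compaction statistic of the transformed features (illustrative only).
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def energy_compaction(Z, k=16):
    """Fraction of total energy captured by the k largest coefficients per sample."""
    E = np.sort(Z ** 2, axis=1)[:, ::-1]
    return np.mean(E[:, :k].sum(axis=1) / (E.sum(axis=1) + 1e-12))

def choose_transform(H):
    """H: (n_samples, dim) layer features; dim assumed to be a power of two."""
    dim = H.shape[1]
    candidates = {
        "dct": dct(H, axis=1, norm="ortho"),
        "hadamard": H @ (hadamard(dim) / np.sqrt(dim)),  # orthonormal Hadamard
    }
    scores = {name: energy_compaction(Z) for name, Z in candidates.items()}
    return max(scores, key=scores.get), scores

H = np.random.default_rng(0).normal(size=(256, 64))
best, scores = choose_transform(H)
print(best, scores)
```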
4

Detekce fibrilace síní v EKG / ECG based atrial fibrillation detection

Prokopová, Ivona January 2020 (has links)
Atrial fibrillation is one of the most common cardiac rhythm disorders, with ever-increasing prevalence and incidence in the Czech Republic and abroad. The incidence of atrial fibrillation is reported at 2-4 % of the population, but because its course is often asymptomatic, the real prevalence is even higher. The aim of this work is to design an algorithm for automatic detection of atrial fibrillation in ECG records. In the practical part of the work, such a detection algorithm is proposed. For the detection itself, the k-nearest neighbours method, the support vector machine and a multilayer neural network were used to classify ECG signals, using features describing the variability of RR intervals and the presence of the P wave in the ECG recordings. The best detection was achieved by a multilayer neural network classifier with two hidden layers, with the following performance: sensitivity 91.23 %, specificity 99.20 %, PPV 91.23 %, F-measure 91.23 % and accuracy 98.53 %.
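The abstract does not list the exact features, so the snippet below is a small numpy sketch of standard RR-interval variability measures (SDNN, RMSSD, pNN50) of the kind such detectors typically feed into a classifier, e.g. a two-hidden-layer neural network; these are textbook HRV formulas, not necessarily the features used in the thesis.

```python
# Hypothetical sketch: RR-interval variability features of the kind used to
# flag atrial fibrillation (irregular RR intervals, absent P waves).
import numpy as np

def rr_features(r_peak_times_s):
    """Compute standard HRV statistics from R-peak times given in seconds."""
    rr = np.diff(r_peak_times_s) * 1000.0        # RR intervals in ms
    drr = np.diff(rr)                            # successive RR differences
    return {
        "sdnn": rr.std(ddof=1),                  # overall RR variability
        "rmssd": np.sqrt(np.mean(drr ** 2)),     # short-term variability
        "pnn50": np.mean(np.abs(drr) > 50.0),    # fraction of |dRR| > 50 ms
    }

# Toy usage with irregular (AF-like) beat times:
beats = np.cumsum(np.random.default_rng(1).uniform(0.5, 1.2, size=60))
print(rr_features(beats))
```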
