  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Temporal Encoder-Decoder Approach to Extracting Blood Volume Pulse Signal Morphology from Face Videos

Li, Fulan 05 July 2023 (has links)
This thesis considers methods for extracting blood volume pulse (BVP) representations from video of the human face. Whereas most previous systems have been concerned with estimating vital signs such as average heart rate, this thesis addresses the more difficult problem of recovering BVP signal morphology. We present a new approach that is inspired by temporal encoder-decoder architectures that have been used for audio signal separation. As input, the system accepts a temporal sequence of RGB (red, green, blue) values that have been spatially averaged over a small portion of the face. The output of the system is a temporal sequence that approximates a BVP signal. To reduce noise in the recovered signal, a separate processing step extracts individual pulses and performs normalization and outlier removal. After these steps, the extracted pulse shapes are sufficiently distinct to support biometric authentication. Our findings demonstrate the effectiveness of our approach in extracting BVP signal morphology from facial videos, which presents exciting opportunities for further research in this area. The source code is available at https://github.com/Adleof/CVPM-2023-Temporal-Encoder-Decoder-iPPG / Master of Science / This thesis considers methods for extracting blood volume pulse (BVP) representations from video of the human face. We present a new approach that is inspired by a method that has been used for audio signal separation. The output of our system is an approximation of the BVP signal of the person in the video. Our method can extract a signal that is sufficiently distinct to support biometric authentication. Our findings demonstrate the effectiveness of our approach in extracting BVP signal morphology from facial videos, which presents exciting opportunities for further research in this area.
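For a concrete picture of the kind of model the abstract describes, a minimal sketch follows (PyTorch, with illustrative layer sizes; it is not the thesis's exact architecture): a strided 1D convolutional encoder compresses the spatially averaged RGB sequence and a transposed-convolution decoder expands it back into a pulse-like signal of the same length.

```python
# Minimal sketch of a temporal encoder-decoder for pulse-signal recovery.
# Layer sizes and kernel choices are illustrative assumptions.
import torch
import torch.nn as nn

class TemporalEncoderDecoder(nn.Module):
    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        # Encoder: strided 1D convolutions compress the temporal axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        # Decoder: transposed convolutions restore the original length.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(hidden, hidden, kernel_size=8, stride=2, padding=3),
            nn.ReLU(),
            nn.ConvTranspose1d(hidden, 1, kernel_size=8, stride=2, padding=3),
        )

    def forward(self, rgb_sequence):
        # rgb_sequence: (batch, time, 3) -> (batch, 3, time) for Conv1d.
        x = rgb_sequence.transpose(1, 2)
        z = self.encoder(x)
        bvp = self.decoder(z)          # (batch, 1, time)
        return bvp.squeeze(1)          # (batch, time)

# Example: a 10-second clip at 30 fps, averaged over one facial region.
video_rgb = torch.randn(1, 300, 3)
model = TemporalEncoderDecoder()
print(model(video_rgb).shape)          # torch.Size([1, 300])
```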
2

Encoder-Decoder Networks for Cloud Resource Consumption Forecasting

Mejdi, Sami January 2020 (has links)
Excessive resource allocation in telecommunications networks can be prevented by forecasting the resource demand when dimensioning the networks and then allocating the necessary resources accordingly, an ongoing effort toward more sustainable development. In this work, traffic data from cloud environments that host deployed virtualized network functions (VNFs) of an IP Multimedia Subsystem (IMS) was collected along with the computational resource consumption of the VNFs. A supervised learning approach was adopted to address the forecasting problem by considering encoder-decoder networks. These networks were applied to forecast future resource consumption of the VNFs by regarding the problem as a time series forecasting problem and recasting it as a sequence-to-sequence (seq2seq) problem. Different encoder-decoder network architectures were then utilized to forecast the resource consumption. The encoder-decoder networks were compared against a widely deployed classical time series forecasting model that served as a baseline. The results show that while the considered encoder-decoder models failed to outperform the baseline model in overall Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), their forecasting capabilities were more resilient to degradation over time. This suggests that encoder-decoder networks are more appropriate for long-term forecasting, which is in agreement with related literature. Furthermore, the encoder-decoder models achieved competitive performance compared to the baseline, despite limited hyperparameter tuning and the absence of more sophisticated functionality such as attention. This work has shown that there is indeed potential for deep learning applications in forecasting cloud resource consumption. / Swedish abstract (translated): Excessive allocation of resources in telecommunications networks can be prevented by forecasting resource needs when dimensioning these networks, which contributes to more sustainable development. For this project, traffic data from the cloud environments hosting active virtual network functions (VNFs) of an IP Multimedia Subsystem (IMS) was collected together with the resource consumption of these components. This thesis examines how effectively supervised machine learning, in the form of encoder-decoder networks, can be used to forecast the resource needs of the VNFs mentioned above. The encoder-decoder networks are applied by treating the collected data as a time series; the problem of predicting the evolution of the time series is then formulated as a sequence-to-sequence (seq2seq) problem. In this work, a collection of encoder-decoder networks with different architectures was used to forecast the resource consumption, and these were compared with a popular model from classical time series analysis. The results show that the encoder-decoder networks failed to outperform the classical time series model in terms of Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). However, the encoder-decoder networks showed considerably more resistance to performance degradation over time than the classical time series model, which indicates that encoder-decoder networks are suitable for forecasting over a longer time horizon. In addition, the encoder-decoder networks showed a competitive ability to predict the correct resource need, despite limited tuning of the hyperparameters and without more sophisticated functionality, such as attention, being implemented.
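As an illustration of the seq2seq recasting described in the abstract, the hedged sketch below (PyTorch, with assumed layer sizes and forecast horizon, not the thesis's exact configuration) uses an LSTM encoder to summarise a history window and an LSTM decoder to roll out a multi-step forecast autoregressively.

```python
# Minimal sketch of an LSTM encoder-decoder for multi-step time series forecasting.
# Layer sizes, lookback, and horizon are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=32, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, history):
        # history: (batch, lookback, n_features)
        _, state = self.encoder(history)       # encoder state summarises the window
        step = history[:, -1:, :]              # seed with the last observation
        outputs = []
        for _ in range(self.horizon):          # autoregressive roll-out
            out, state = self.decoder(step, state)
            step = self.head(out)
            outputs.append(step)
        return torch.cat(outputs, dim=1)       # (batch, horizon, n_features)

# Example: forecast 12 future steps from 48 past resource-load samples.
model = Seq2SeqForecaster()
print(model(torch.randn(4, 48, 1)).shape)      # torch.Size([4, 12, 1])
```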
3

Semi Supervised Learning for Accurate Segmentation of Roughly Labeled Data

Rajan, Rachel 01 September 2020 (has links)
No description available.
4

Joint random linear network coding and convolutional code with interleaving for multihop wireless network

Susanto, Misfa, Hu, Yim Fun, Pillai, Prashant January 2013 (has links)
Abstract: Error control techniques are designed to ensure reliable data transfer over unreliable communication channels that are frequently subjected to channel errors. In this paper, the effect of applying a convolutional code to the Scattered Random Network Coding (SRNC) scheme over a multi-hop wireless channel was studied. An interleaver was implemented for bit scattering in SRNC with the purpose of dividing the encoded data into protected blocks and vulnerable blocks, achieving error diversity within one modulation symbol while randomising errored bits in both blocks. By combining the interleaver with the convolutional encoder, the network decoder in the receiver has enough correctly received network-coded blocks to perform the decoding process efficiently. Extensive simulations were carried out to study the performance of three systems: 1) SRNC with convolutional encoding; 2) SRNC alone; and 3) a system without convolutional encoding or interleaving. Simulation results in terms of block error rate for a 2-hop wireless transmission scenario over an Additive White Gaussian Noise (AWGN) channel are presented. The results show that the system with interleaving and convolutional coding achieved better performance, with a coding gain of at least 1.29 dB over system 2 and of 2.08 dB on average over system 3 at a block error rate of 0.01.
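The interleaving idea mentioned above can be illustrated with a small toy sketch (not the paper's SRNC implementation): a block interleaver writes coded bits row-wise into a matrix and reads them column-wise, so that a burst of channel errors is spread across many coded blocks.

```python
# Toy block interleaver: write row-wise, read column-wise, and invert at the receiver.
# Matrix dimensions are illustrative assumptions.
import numpy as np

def interleave(bits, rows, cols):
    """Write row-wise into a rows x cols matrix, read column-wise."""
    return np.asarray(bits).reshape(rows, cols).T.flatten()

def deinterleave(bits, rows, cols):
    """Inverse operation performed at the receiver."""
    return np.asarray(bits).reshape(cols, rows).T.flatten()

data = np.arange(12)                        # stand-in for coded bits
tx = interleave(data, rows=3, cols=4)
rx = deinterleave(tx, rows=3, cols=4)
assert np.array_equal(rx, data)             # round-trip recovers the original order
print(tx)                                   # [ 0  4  8  1  5  9  2  6 10  3  7 11]
```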
5

Monocular Depth Estimation Using Deep Convolutional Neural Networks

Larsson, Susanna January 2019 (has links)
For a long time, stereo cameras have been deployed in visual Simultaneous Localization And Mapping (SLAM) systems to obtain 3D information. Even though stereo cameras show good performance, their main disadvantage is the complex and expensive hardware setup they require, which limits the use of such systems. A simpler and cheaper alternative is a monocular camera; however, monocular images lack the important depth information. Recent works have shown that having access to depth maps in a monocular SLAM system is beneficial, since they can be used to improve the 3D reconstruction. This work proposes a deep neural network that predicts dense high-resolution depth maps from monocular RGB images by casting the problem as a supervised regression task. The network architecture follows an encoder-decoder structure in which multi-scale information is captured and skip connections are used to recover details. The network is trained and evaluated on the KITTI dataset, achieving results comparable to state-of-the-art methods. With further development, this network shows good potential to be incorporated into a monocular SLAM system to improve the 3D reconstruction.
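A minimal sketch of the encoder-decoder-with-skip-connections structure described above is given below (PyTorch, illustrative channel counts); the thesis network is larger, but the pattern of downsampling, upsampling, concatenating skip features, and regressing a one-channel depth map is the same.

```python
# Tiny encoder-decoder depth regressor with one skip connection.
# Channel counts and depths are illustrative assumptions.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class DepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(3, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = block(64 + 32, 32)        # skip features concatenated here
        self.out = nn.Conv2d(32, 1, 1)       # one-channel depth map

    def forward(self, rgb):
        s1 = self.enc1(rgb)                  # full-resolution features (the skip)
        s2 = self.enc2(self.pool(s1))        # half-resolution features
        up = self.up(s2)                     # upsample back to full resolution
        d = self.dec(torch.cat([up, s1], dim=1))
        return self.out(d)                   # dense depth, same H x W as the input

print(DepthNet()(torch.randn(1, 3, 64, 128)).shape)  # torch.Size([1, 1, 64, 128])
```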
6

Generative, Discriminative, and Hybrid Approaches to Audio-to-Score Automatic Singing Transcription / 自動歌声採譜のための生成的・識別的・混成アプローチ

Nishikimi, Ryo 23 March 2021 (has links)
Kyoto University / New degree system, doctoral program / Doctor of Informatics / Kou No. 23311 / Joho-haku No. 747 / 新制||情||128 (University Library) / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Associate Professor Kazuyoshi Yoshii, Professor Tatsuya Kawahara, Professor Ko Nishino, Professor Hisashi Kashima / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
7

Building high-quality datasets for abstractive text summarization : A filtering‐based method applied on Swedish news articles

Monsen, Julius January 2021 (has links)
With an increasing amount of information on the internet, automatic text summarization could potentially make content more readily available to a larger variety of people. Training and evaluating text summarization models require datasets of sufficient size and quality. Today, most such datasets are in English, and for smaller languages such as Swedish it is not easy to obtain corresponding datasets with manually written summaries. This thesis proposes methods for compiling high-quality datasets suitable for abstractive summarization from a large amount of noisy data through characterization and filtering. The data used consists of Swedish news articles and their preambles, which are used here as summaries. Different filtering techniques are applied, yielding five different datasets. Furthermore, summarization models are implemented by warm-starting an encoder-decoder model with BERT checkpoints and fine-tuning it on the different datasets. The fine-tuned models are evaluated with ROUGE metrics and BERTScore. All models achieve significantly better results when evaluated on filtered test data than on unfiltered test data. Moreover, the models trained on the most heavily filtered, and therefore smallest, dataset achieve the best results on the filtered test data. The trade-off between dataset size and quality, as well as other methodological implications of the data characterization, the filtering, and the model implementation, are discussed, leading to suggestions for future research.
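Warm-starting an encoder-decoder from BERT checkpoints can be done with the Hugging Face transformers library, as in the hedged sketch below; the checkpoint name and generation settings are illustrative assumptions rather than the thesis's exact setup.

```python
# Sketch of warm-starting a seq2seq summarizer from BERT checkpoints.
# "KB/bert-base-swedish-cased" is an assumed Swedish BERT checkpoint;
# the input text and decoding parameters are placeholders.
from transformers import AutoTokenizer, EncoderDecoderModel

checkpoint = "KB/bert-base-swedish-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Encoder and decoder are initialised from the same BERT weights; the decoder's
# cross-attention layers are newly initialised and must be learned during fine-tuning.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint, checkpoint)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

article = "En lång nyhetsartikel ..."   # placeholder article text
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```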
8

Semantic Segmentation of Urban Scene Images Using Recurrent Neural Networks

Daliparthi, Venkata Satya Sai Ajay January 2020 (has links)
Background: An autonomous driving vehicle receives pixel-wise data from RGB cameras, point-wise depth information from the cameras, and other sensor data as input. The computer inside the vehicle processes the input data and provides the desired output, such as steering angle, torque, and brake. To make accurate decisions, the computer should be completely aware of its surroundings and understand each pixel in the driving scene. Semantic segmentation is the task of assigning a class label (such as car, road, pedestrian, or sky) to each pixel in a given image, so a better-performing semantic segmentation algorithm will contribute to the advancement of the autonomous driving field. Research Gap: Traditional methods, such as handcrafted features and feature extraction methods, were mainly used to solve semantic segmentation. Since the rise of deep learning, most works use deep learning to deal with semantic segmentation, and the most commonly used architecture has been the Convolutional Neural Network (CNN). Even though some works have made use of Recurrent Neural Networks (RNNs), the effect of RNNs on semantic segmentation has not yet been thoroughly studied. Our study addresses this research gap. Idea: After going through the existing literature, we came up with the idea of using RNNs as an add-on module to augment the skip connections in semantic segmentation networks through residual connections. Objectives and Method: The main objective of our work is to improve the performance of semantic segmentation networks by using RNNs. An experiment was chosen as the methodology for our study. We propose three novel architectures, called UR-Net, UAR-Net, and DLR-Net, by applying our idea to the existing networks U-Net, Attention U-Net, and DeepLabV3+, respectively. Results and Findings: We show empirically that our proposed architectures improve the segmentation of edges and boundaries. Through our study, we found that there is a trade-off between using RNNs and the inference time of the model: if RNNs are used to improve the performance of a semantic segmentation network, some extra seconds must be traded off during inference. Conclusion: Because of this added inference time, our findings will not contribute to the autonomous driving field, where better performance is needed in real time, but they will contribute to the advancement of biomedical image segmentation, where doctors can trade those extra seconds during inference for better performance.
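One possible reading of the proposed add-on module is sketched below (PyTorch): a GRU scans a skip connection's feature map row by row and its output is added back residually. This is an illustrative interpretation with assumed shapes, not the exact module used in UR-Net, UAR-Net, or DLR-Net.

```python
# Hedged sketch: an RNN refines skip-connection features and is combined residually.
# Treating feature-map rows as sequences is an illustrative assumption.
import torch
import torch.nn as nn

class ResidualRNNSkip(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gru = nn.GRU(channels, channels, batch_first=True)

    def forward(self, skip):
        # skip: (batch, channels, height, width)
        b, c, h, w = skip.shape
        seq = skip.permute(0, 2, 3, 1).reshape(b * h, w, c)   # each row becomes a sequence
        refined, _ = self.gru(seq)
        refined = refined.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return skip + refined                                  # residual combination

features = torch.randn(2, 32, 16, 16)
print(ResidualRNNSkip(32)(features).shape)   # torch.Size([2, 32, 16, 16])
```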
9

Analysis of Transactional Data with Long Short-Term Memory Recurrent Neural Networks

Nawaz, Sabeen January 2020 (has links)
An issue authorities and banks face is fraud related to payments and transactions, where huge monetary losses occur to a party or where money laundering schemes are carried out. Previous work in the field of machine learning for fraud detection has addressed the issue as a supervised learning problem. In this thesis, we propose a model which can be used in a fraud detection system with transactions and payments that are unlabeled. The proposed model is a Long Short-Term Memory auto-encoder decoder network (LSTM-AED) which is trained and tested on transformed data. The data is transformed by reducing it to principal components and clustering it with K-means. The model is trained to reconstruct the sequence with high accuracy. Our results indicate that the LSTM-AED performs better than a random sequence-generating process in learning and reconstructing a sequence of payments. We also found that a huge loss of information occurs in the pre-processing stages. / Swedish abstract (translated): Unauthorized transactions and payment fraud can lead to large financial losses for banks and authorities. In machine learning, this problem has previously been handled with classifiers via supervised learning. In this thesis we propose a model that can be used in a system for detecting fraud. The model is applied to unlabeled data with many different variables. The model used is a Long Short-Term Memory in an auto-encoder decoder network. The data is transformed with PCA and clustered with K-means. The model is trained to reconstruct a sequence of payments with high accuracy. Our results show that the LSTM-AED performs better than a model that only guesses the next point in the sequence. The results also show that much of the information in the data is lost when it is preprocessed and transformed.
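A hedged sketch of the pipeline described above follows: transaction features are reduced with PCA and an LSTM auto-encoder is trained to reconstruct the sequence, so that high reconstruction error can flag unusual payment sequences. Layer sizes, sequence lengths, and the omission of the K-means clustering step are illustrative simplifications, not the thesis's exact setup.

```python
# Sketch of PCA preprocessing followed by an LSTM auto-encoder reconstruction.
# All dimensions are illustrative assumptions; training loop omitted.
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, seq):
        _, (h, _) = self.encoder(seq)                           # summarise the sequence
        repeated = h[-1].unsqueeze(1).repeat(1, seq.size(1), 1) # repeat summary per step
        decoded, _ = self.decoder(repeated)
        return self.out(decoded)                                # reconstruction

# Toy data: 100 sequences of 20 transactions with 10 raw features each.
raw = torch.randn(100, 20, 10)
pca = PCA(n_components=4).fit(raw.reshape(-1, 10).numpy())
reduced = torch.tensor(
    pca.transform(raw.reshape(-1, 10).numpy()), dtype=torch.float32
).reshape(100, 20, 4)

model = LSTMAutoencoder(n_features=4)
error = nn.functional.mse_loss(model(reduced), reduced)         # reconstruction error
print(error.item())
```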
10

Deep neural semantic parsing: translating from natural language into SPARQL / Análise semântica neural profunda: traduzindo de linguagem natural para SPARQL

Luz, Fabiano Ferreira 07 February 2019 (has links)
Semantic parsing is the process of mapping a natural-language sentence into a machine-readable, formal representation of its meaning. The LSTM Encoder-Decoder is a neural architecture with the ability to map a source language into a target one. We are interested in the problem of mapping natural language into SPARQL queries, and we seek to contribute strategies that do not rely on handcrafted rules, high-quality lexicons, manually built templates, or other handmade complex structures. In this context, we present two contributions to the problem of semantic parsing, starting from the LSTM encoder-decoder. While natural language has well-defined vector representation methods that use a very large volume of texts, formal languages, like SPARQL queries, suffer from a lack of suitable methods for vector representation. In the first contribution we improve the vector representation of SPARQL. We start by obtaining an alignment matrix between the two vocabularies, natural-language and SPARQL terms, which allows us to refine a vectorial representation of SPARQL items. With this refinement we obtained better results in the subsequent training of the semantic parsing model. In the second contribution we propose a neural architecture, which we call Encoder CFG-Decoder, whose output conforms to a given context-free grammar. Unlike the traditional LSTM encoder-decoder, our model provides a grammatical guarantee for the mapping process, which is particularly important for practical cases where grammatical errors can cause critical failures. Results confirm that any output generated by our model obeys the given CFG, and we observe an improvement in translation accuracy when compared with other results from the literature. / Portuguese abstract (translated): Semantic parsing is the process of mapping a natural-language sentence to a formal, machine-interpretable representation of its meaning. The LSTM Encoder-Decoder is a neural network architecture capable of mapping a source sequence to a target sequence. We are interested in the problem of mapping natural language into SPARQL queries and seek to contribute strategies that do not depend on handcrafted rules, high-quality lexicons, manually built templates, or other complex handmade structures. In this context, we present two contributions to the semantic parsing problem, starting from the LSTM Encoder-Decoder architecture. While for natural language there are well-defined vector representation methods that use a very large volume of texts, formal languages, such as SPARQL queries, suffer from the lack of suitable methods for vector representation. In the first contribution, we improve the representation of SPARQL vectors. We start by obtaining an alignment matrix between the two vocabularies, natural language and SPARQL terms, which allows us to refine a vector representation of the SPARQL terms. With this refinement, we obtained better results in the subsequent training of the semantic parsing model. In the second contribution, we propose a neural architecture, which we call Encoder CFG-Decoder, whose output conforms to a given context-free grammar. Unlike the traditional LSTM Encoder-Decoder model, our model provides a grammatical guarantee for the mapping process, which is particularly important for practical cases in which grammatical errors can cause critical failures in a compiler or interpreter. The results confirm that any output generated by our model obeys the given CFG, and we observe an improvement in translation accuracy compared with other results from the literature.
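The grammar-guarantee idea can be illustrated with a simplified, hedged sketch: at each decoding step the decoder's scores are masked so that only tokens permitted by the current grammar state can be emitted. The toy "grammar" below is far smaller than a real SPARQL CFG, and the stubbed scorer stands in for a trained decoder.

```python
# Toy grammar-constrained decoding: mask out tokens the grammar does not allow.
# The grammar, vocabulary, and random scorer are illustrative assumptions.
import numpy as np

# For each decoding state, the set of legal next tokens (a toy SPARQL skeleton).
ALLOWED = {
    "start":  {"SELECT"},
    "SELECT": {"?x"},
    "?x":     {"WHERE"},
    "WHERE":  {"{"},
    "{":      {"triple"},
    "triple": {"}"},
    "}":      set(),
}
VOCAB = ["SELECT", "?x", "WHERE", "{", "triple", "}"]

def constrained_decode(score_fn, max_len=10):
    state, output = "start", []
    for _ in range(max_len):
        legal = ALLOWED[state]
        if not legal:                                        # grammar says we are done
            break
        scores = score_fn(output)                            # decoder scores (stubbed)
        masked = [s if tok in legal else -np.inf for tok, s in zip(VOCAB, scores)]
        token = VOCAB[int(np.argmax(masked))]
        output.append(token)
        state = token
    return output

# Stub "decoder" returning random scores; a trained LSTM decoder would go here.
print(constrained_decode(lambda prefix: np.random.randn(len(VOCAB))))
# e.g. ['SELECT', '?x', 'WHERE', '{', 'triple', '}']
```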
