  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
521

Towards a Unilateral Sensor Architecture for Detecting Person-to-Person Contacts

Amara, Pavan Kumar 12 1900 (has links)
The contact patterns among individuals can significantly affect the progress of an infectious outbreak within a population. Gathering data about these interaction and mixing patterns is essential for computational modeling of infectious diseases. Various self-report approaches have been designed in different studies to collect data about contact rates and patterns. Recent advances in sensing technology provide researchers with bilateral automated data collection devices that facilitate contact gathering while overcoming the disadvantages of self-report approaches. In this study, a novel unilateral wearable sensing architecture is proposed that overcomes the limitations of bilateral sensing. Our unilateral wearable sensing system gathers contact data using hybrid sensor arrays embedded in a wearable shirt. A smartphone application transfers the collected sensor data to the cloud, where a deep learning model estimates the number of human contacts and the results are stored in a cloud database. The deep learning model was developed on hand-labelled data from multiple experiments. The model was tested and evaluated, and the results are reported in the study. A sensitivity analysis was performed to choose the most suitable image resolution and format for the model to estimate contacts and to analyze the model's consumption of computing resources.
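As a toy illustration of the counting step, here is a minimal sketch that turns a stream of proximity-sensor readings into a contact count. This is a hypothetical threshold-and-run heuristic of our own, not the thesis's deep learning model; names and parameters are illustrative.

```python
def estimate_contacts(readings, threshold=0.5, min_run=2):
    """Count runs of consecutive above-threshold samples as distinct contacts.

    Illustrative heuristic only; the thesis instead applies a deep learning
    model to hybrid sensor-array data to estimate contact counts.
    """
    contacts, run = 0, 0
    for r in readings:
        if r >= threshold:
            run += 1
        else:
            if run >= min_run:
                contacts += 1
            run = 0
    if run >= min_run:  # close a run that reaches the end of the stream
        contacts += 1
    return contacts
```

The `min_run` parameter plays the role of debouncing: a single spiky sample is not counted as a contact.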
522

Improving Image Quality in Cardiac Computed Tomography using Deep Learning / Att förbättra bildkvalitet från datortomografier av hjärtat med djupinlärning

Wajngot, David January 2019 (has links)
Cardiovascular diseases are the largest mortality factor globally, and early diagnosis is essential for a proper medical response. Cardiac computed tomography can be used to acquire images for diagnosis, but without dose reduction the radiation delivered to the patient becomes a significant risk factor. By reducing the dose, the image quality is often compromised and determining a diagnosis becomes difficult. This project proposes image quality enhancement with deep learning. A cycle-consistent generative adversarial network was fed low- and high-quality images with the purpose of learning to translate between them. By using a cycle-consistency cost it was possible to train the network without paired data. With this method, a low-quality image acquired from a computed tomography scan with dose reduction could be enhanced in post-processing. The results were mixed but showed an increase in ventricular contrast and artifact mitigation. The technique comes with several problems that are yet to be solved, such as structure alterations, but it shows promise for continued development.
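The cycle-consistency cost mentioned above can be sketched in a few lines: map a sample A→B→A and penalize the difference from the original, which is what removes the need for paired data. A minimal numeric sketch with toy "generators" (plain functions standing in for the two neural networks):

```python
def cycle_consistency_loss(x, g_ab, g_ba):
    """Mean absolute error between each sample and its A->B->A reconstruction.

    g_ab and g_ba are stand-ins for the two generators of a CycleGAN-style
    model; with real networks this loss lets training proceed without
    paired low-/high-quality images.
    """
    recon = [g_ba(g_ab(v)) for v in x]
    return sum(abs(a - b) for a, b in zip(x, recon)) / len(x)
```

If the two mappings are exact inverses the loss is zero; any drift introduced by the round trip is penalized.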
523

Deep learning for reading and understanding language

Kočiský, Tomáš January 2017 (has links)
This thesis presents novel tasks and deep learning methods for machine reading comprehension and question answering, with the goal of achieving natural language understanding. First, we consider a semantic parsing task where the model understands sentences and translates them into a logical form or instructions. We present a novel semi-supervised sequential autoencoder that treats language as a discrete sequential latent variable and semantic parses as the observations. This model allows us to leverage synthetically generated unpaired logical forms, and thereby alleviate the lack of supervised training data. We show the semi-supervised model outperforms a supervised model when trained with the additional generated data. Second, reading comprehension requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess reading comprehension ability, in both artificial agents and children learning to read. We propose a new, challenging, supervised reading comprehension task: we gather a large-scale dataset of news stories from the CNN and Daily Mail websites, with Cloze-style questions created from the highlights. This dataset allows, for the first time, training deep learning models for reading comprehension. We also introduce novel attention-based models for this task and present a qualitative analysis of the attention mechanism. Finally, following the recent advances in reading comprehension in both models and task design, we propose a new task for understanding complex narratives, NarrativeQA, consisting of full texts of books and movie scripts. We collect human-written questions and answers based on high-level plot summaries.
This task is designed to encourage the development of models for language understanding; it is constructed so that successfully answering the questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard reading comprehension models struggle on the tasks presented here.
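The Cloze-style question construction from highlights can be sketched as follows. This is a hypothetical helper of our own; the actual CNN/Daily Mail pipeline additionally anonymizes entities so that questions cannot be answered from world knowledge alone.

```python
def make_cloze(highlight, entity, placeholder="@placeholder"):
    """Turn a news highlight into a Cloze query by masking one entity.

    Returns a (query, answer) pair; raises if the entity is absent.
    Simplified sketch: the real pipeline also replaces all entities
    with anonymized markers.
    """
    if entity not in highlight:
        raise ValueError("entity not found in highlight")
    return highlight.replace(entity, placeholder), entity
```

A model is then scored on recovering the answer given the full news story and the masked query.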
524

Automatic Eye-Gaze Following from 2-D Static Images: Application to Classroom Observation Video Analysis

Aung, Arkar Min 23 April 2018 (has links)
In this work, we develop an end-to-end neural network-based computer vision system to automatically identify where each person within a 2-D image of a school classroom is looking ("gaze following"), as well as who she/he is looking at. Automatic gaze following could help facilitate data-mining of large datasets of classroom observation videos that are collected routinely in schools around the world in order to understand social interactions between teachers and students. Our network is based on the architecture by Recasens et al. (2015) but is extended to (1) predict not only where, but whom the person is looking at; and (2) predict whether each person is looking at a target inside or outside the image. Since our focus is on classroom observation videos, we collect a gaze dataset (48,907 gaze annotations over 2,263 classroom images) for students and teachers in classrooms. Results of our experiments indicate that the proposed neural network can estimate the gaze target - either the spatial location or the face of a person - with substantially higher accuracy compared to several baselines.
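The "whom is looked at" decision can be sketched as a point-in-box test over detected faces. This is an illustrative stand-in for the network's learned gaze pathway, not the thesis's implementation; returning `None` corresponds to an out-of-image (or out-of-face) gaze target.

```python
def gaze_target(gaze_xy, face_boxes):
    """Return the index of the face box containing the predicted gaze point,
    or None when the point falls outside every box (e.g. target outside
    the image). Boxes are (x0, y0, x1, y1) with x0 <= x1 and y0 <= y1."""
    gx, gy = gaze_xy
    for i, (x0, y0, x1, y1) in enumerate(face_boxes):
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return i
    return None
```

In the full system the gaze point itself comes from the network's predicted spatial heatmap rather than being given directly.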
525

Aplicação de Deep Learning em dados refinados para Mineração de Opiniões

Jost, Ingo 26 February 2015 (has links)
Deep Learning is a sub-area of Machine Learning that has achieved satisfactory results in several application areas, implemented by different algorithms such as Stacked Auto-encoders or Deep Belief Networks. This work proposes a model that applies a classifier implementing Deep Learning techniques to Opinion Mining, an area that has been the subject of constant study, given corporations' need to understand how customers perceive their products and services. The growth of Opinion Mining is also favored by the collaborative Web 2.0 environment, in which many tools make it easy to express opinions. The data used in the experiments were refined in the preprocessing step so that Deep Learning, one of whose main attributions is feature selection, could be applied to refined rather than raw data. The promising technology of Deep Learning combined with this refinement strategy achieved results competitive with related studies in the experiments and opens perspectives for extending this work.
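The refinement idea can be illustrated with a simple variance-based feature filter. This is a hypothetical stand-in of ours for the preprocessing step, not the thesis's pipeline; in the thesis the feature selection itself is largely delegated to the Deep Learning model.

```python
def select_features(rows, min_variance=0.01):
    """Keep only the column indices whose variance exceeds min_variance.

    rows: list of equal-length numeric feature vectors. Near-constant
    columns carry little information for a downstream classifier, so
    dropping them is one crude form of data refinement.
    """
    n = len(rows)
    keep = []
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        if var > min_variance:
            keep.append(j)
    return keep
```

The threshold `min_variance` is an assumed tuning parameter; real pipelines would choose it by validation.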
526

Convolutional neural network reliability on an APSoC platform a traffic-sign recognition case study / Confiabilidade de uma rede neural convolucional em uma plataforma APSoC: um estudo para reconhecimento de placas de trânsito

Lopes, Israel da Costa January 2017 (has links)
Deep learning has a plethora of applications in computer vision, speech recognition, natural language processing and other areas of commercial interest. Computer vision, in turn, has many applications in distinct areas, ranging from entertainment to relevant and critical applications. Face recognition and manipulation (Snapchat) and object description in pictures (OneDrive) are examples of entertainment applications. Industrial inspection, medical diagnostics, object recognition in satellite images (used in rescue and defense missions), autonomous cars and Advanced Driver-Assistance Systems (ADAS) are examples of relevant and critical applications. Some of the most important integrated circuit companies in the world, such as Xilinx, Intel and Nvidia, are betting on dedicated platforms for accelerating the training and deployment of deep learning and other computer vision algorithms for autonomous cars and ADAS due to their high computational requirements. Thus, implementing a deep learning system that achieves high performance at low area utilization and power consumption is a big challenge. Moreover, electronic equipment for the automotive industry must be reliable even under radiation effects, manufacturing defects and aging effects, since a system failure can cause a car accident. Thus, a Convolutional Neural Network (CNN) VHSIC (Very High Speed Integrated Circuit) Hardware Description Language (VHDL) automatic generator was developed to reduce the design time associated with implementing deep learning algorithms in hardware. As a case study, a CNN was trained with the Convolutional Architecture for Fast Feature Embedding (Caffe) framework to classify 6 traffic-sign classes, achieving an average accuracy of about 89.8% on the German Traffic-Sign Recognition Benchmark (GTSRB) dataset, which contains traffic-sign images in complex scenarios. This CNN was implemented on a Zynq-7000 All-Programmable System-on-Chip (APSoC), achieving about 313 Frames Per Second (FPS) on 32x32-normalized images, with the APSoC consuming only 2.057 W, while an embedded Graphics Processing Unit (GPU), in its minimum operation mode, consumes 10 W. The reliability of the proposed CNN was investigated by emulated injection of random accumulated faults in the Programmable Logic (PL) configuration bits of the APSoC, achieving 80.5% reliability under Single-Bit Upsets (SBUs), where both critical Silent Data Corruptions (SDCs) and time-outs were considered. Regarding multiple faults, the reliability of the CNN decreases exponentially with the number of accumulated faults. Hence, the reliability of the proposed CNN must be increased by using hardening techniques during the design flow.
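Accumulated bit-flip injection can be emulated in a few lines. The sketch below is our own illustrative code, not the thesis's FPGA flow: it flips random bits of a configuration bitstream and measures the fraction of runs whose output remains correct.

```python
import random

def inject_faults(config_bits, n_faults, rng):
    """Return a copy of the bitstream with n_faults distinct random bits
    flipped, emulating accumulated upsets in configuration memory."""
    bits = list(config_bits)
    for i in rng.sample(range(len(bits)), n_faults):
        bits[i] ^= 1
    return bits

def reliability(trials, run_ok):
    """Fraction of faulty configurations whose output is still correct
    (i.e., neither a critical silent data corruption nor a time-out)."""
    return sum(1 for t in trials if run_ok(t)) / len(trials)
```

Sweeping `n_faults` upward reproduces the qualitative finding above: reliability falls as faults accumulate.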
527

Machine Learning-driven Intrusion Detection Techniques in Critical Infrastructures Monitored by Sensor Networks

Otoum, Safa 23 April 2019 (has links)
In most critical infrastructures, Wireless Sensor Networks (WSNs) are deployed due to their low cost, flexibility and efficiency, as well as their wide use across many infrastructures. Despite these advantages, WSNs introduce various security vulnerabilities, such as different types of attacks and intruders, due to the open nature of sensor nodes and unreliable wireless links. Therefore, implementing an efficient Intrusion Detection System (IDS) that achieves an acceptable security level is a challenging issue that has gained vital importance. In this thesis, we investigate the problem of security provisioning in WSN-based critical monitoring infrastructures. We propose a trust-based hierarchical model for malicious-node detection, especially against black-hole attacks. We also present various Machine Learning (ML)-driven IDS schemes for wirelessly connected sensors that track critical infrastructures, along with an in-depth analysis of the use of machine learning, deep learning, adaptive machine learning, and reinforcement learning to recognize intrusive behaviours in the monitored network. We evaluate the proposed schemes in simulations using the KDD'99 dataset of real attacks. To this end, we present the performance metrics for four different IDS schemes, namely the Clustered Hierarchical Hybrid IDS (CHH-IDS), Adaptively Supervised and Clustered Hybrid IDS (ASCH-IDS), Restricted Boltzmann Machine-based Clustered IDS (RBC-IDS) and Q-learning based IDS (QL-IDS), for detecting malicious behaviours in a sensor network. Through simulations, we analyzed all presented schemes in terms of Accuracy Rates (ARs), Detection Rates (DRs), False Negative Rates (FNRs), precision-recall ratios, F1 scores and the area under the ROC curve, which are the key performance parameters for all IDSs. We show that QL-IDS performs with approximately 100% detection and accuracy rates.
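The QL-IDS scheme rests on the tabular Q-learning update, which can be written in one line of arithmetic. A minimal sketch (state and action names here are illustrative, not from the thesis):

```python
def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward the bootstrapped
    target r + gamma * max_a' Q(s', a'). Returns the updated Q(s, a).

    q: dict mapping state -> dict of action -> value.
    """
    target = r + gamma * max(q[s_next].values())
    q[s][a] += alpha * (target - q[s][a])
    return q[s][a]
```

In an IDS setting the reward could, for example, be +1 for correctly flagging intrusive traffic; repeated updates over simulated traces drive the policy toward high detection rates.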
528

ESTIMATION OF DEPTH FROM DEFOCUS BLUR IN VIRTUAL ENVIRONMENTS COMPARING GRAPH CUTS AND CONVOLUTIONAL NEURAL NETWORK

Prodipto Chowdhury (5931032) 17 January 2019 (has links)
Depth estimation is one of the most important problems in computer vision. It has attracted a lot of attention because it has applications in many areas, such as robotics, VR and AR, and self-driving cars. Using the defocus blur of a camera lens is one method of depth estimation. In this thesis, we have researched this technique in virtual environments, creating virtual datasets for this purpose. We applied graph cuts and a convolutional neural network (DfD-Net) to estimate depth from defocus blur using a natural (Middlebury) and a virtual (Maya) dataset. Graph cuts showed similar performance on both datasets in terms of NMAE and NRMSE. However, with regard to SSIM, the performance of graph cuts is 4% better for Middlebury than for Maya. We trained the DfD-Net on the natural dataset, the virtual dataset, and a combination of both; the network trained on the virtual dataset performed best for both datasets. Comparing the two approaches, graph cuts perform 7% better than DfD-Net in terms of SSIM for Middlebury images, while for Maya images DfD-Net outperforms graph cuts by 2%. With regard to NRMSE, graph cuts and DfD-Net show similar performance for Maya images; for Middlebury images, graph cuts are 1.8% better. The algorithms show no difference in performance in terms of NMAE. The time DfD-Net takes to generate depth maps is 500 times less than graph cuts for Maya images and 200 times less for Middlebury images.
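The normalized error metrics compared above can be sketched as follows. This is our own formulation, normalizing by the ground-truth range; normalization conventions for NMAE/NRMSE vary between papers.

```python
def nmae(pred, truth):
    """Mean absolute error normalized by the ground-truth range."""
    rng = max(truth) - min(truth)
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth) / rng

def nrmse(pred, truth):
    """Root-mean-square error normalized by the ground-truth range."""
    rng = max(truth) - min(truth)
    mse = sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)
    return mse ** 0.5 / rng
```

For depth maps, `pred` and `truth` would be the flattened estimated and ground-truth depth values of one image.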
529

Forced Attention for Image Captioning

Hemanth Devarapalli (5930603) 17 January 2019 (has links)
Automatic generation of captions for a given image is an active research area in Artificial Intelligence. Architectures have evolved from classical machine learning applied to image metadata to neural networks. Two styles of architecture evolved in the neural-network space for image captioning: the Encoder-Attention-Decoder architecture and the transformer architecture. This study is an attempt to modify the attention mechanism to allow any object to be specified. An archetypical Encoder-Attention-Decoder architecture (Show, Attend, and Tell (Xu et al., 2015)) is employed as a baseline, and a modification of the Show, Attend, and Tell architecture is proposed. Both architectures are evaluated on the MSCOCO (Lin et al., 2014) dataset, and seven metrics are calculated: BLEU-1, 2, 3, 4 (Papineni, Roukos, Ward & Zhu, 2002), METEOR (Banerjee & Lavie, 2005), ROUGE-L (Lin, 2004), and CIDEr (Vedantam, Lawrence & Parikh, 2015). Finally, the statistical significance of the results is evaluated by performing paired t-tests.
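The significance check can be sketched with the paired t-statistic, hand-rolled below for clarity; in practice a library routine such as `scipy.stats.ttest_rel` would be used, which also returns the p-value.

```python
def paired_t(x, y):
    """Paired t-statistic: mean per-item difference over its standard error.

    x, y: metric scores of the two captioning models on the same test items
    (pairing matters because both models are scored on identical images).
    """
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / (var / n) ** 0.5
```

The resulting statistic is compared against the t-distribution with n-1 degrees of freedom to decide whether one model's metric gains are significant.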
530

Model-Based Iterative Reconstruction and Direct Deep Learning for One-Sided Ultrasonic Non-Destructive Evaluation

Hani A. Almansouri (5929469) 16 January 2019 (has links)
One-sided ultrasonic non-destructive evaluation (UNDE) is extensively used to characterize structures that must be inspected and maintained against defects and flaws that could affect the performance of power plants, such as nuclear power plants. Most UNDE systems send acoustic pulses into the structure of interest, measure the received waveform, and use an algorithm to reconstruct the quantity of interest. The most widely used algorithm in UNDE systems is the synthetic aperture focusing technique (SAFT), because it produces acceptable results in real time. A few regularized inversion techniques with linear models have been proposed that can improve on SAFT, but they tend to make simplifying assumptions that produce artifacts and do not address how to obtain reconstructions from large real data sets. In this thesis, we present two studies. The first covers the model-based iterative reconstruction (MBIR) technique, which resolves some of the issues in SAFT and the current linear regularized inversion techniques; the second covers the direct deep learning (DDL) technique, which further resolves issues related to non-linear interactions between the ultrasound signal and the specimen.

In the first study, we propose an MBIR algorithm designed for scanning UNDE systems. MBIR reconstructs the image by optimizing a cost function that contains two terms: the forward model, which models the measurements, and the prior model, which models the object. To further reduce artifacts, we enhance the forward model of MBIR to account for direct-arrival artifacts and isotropic artifacts. The direct-arrival signals are received directly from the transmitter without being reflected; they contain no useful information about the specimen and produce high-amplitude artifacts in regions close to the transducers. We resolve this issue by modeling these direct-arrival signals in the forward model, reducing their artifacts while retaining information from reflections off other objects. The isotropic artifacts appear when the transmitted signal is assumed to propagate equally in all directions, so we modify the forward model to account for anisotropic propagation. Next, because the transmitted signal attenuates significantly as it propagates, the reconstruction of deeper regions tends to be much dimmer than that of closer regions; we therefore combine the forward model with a spatially variant prior model that accounts for attenuation by reducing the regularization as pixels get deeper. Finally, scanning large structures requires multiple scans to cover the whole field of view. These scans are typically performed in raster order, so adjacent scans share useful correlations. Reconstructing each scan individually and stitching the results conventionally is inefficient: it can produce stitching artifacts and ignores the extra information in adjacent scans. We present an algorithm that jointly reconstructs measurements from large data sets, reducing stitching artifacts and exploiting information from adjacent scans. Using simulated and extensive experimental data, we show MBIR results and demonstrate improvements over SAFT as well as existing regularized inversion techniques. Even with these improvements, however, MBIR still exhibits some artifacts caused by the inherent non-linearity of the interaction between the ultrasound signal and the specimen.

In the second study, we propose DDL, a non-iterative model-based reconstruction method for inverting measurements based on non-linear forward models for ultrasound imaging. Our approach obtains an approximate estimate of the reconstruction using a simple linear back-projection and trains a deep neural network to refine it into the actual reconstruction. While the proposed technique shows significant enhancement over current techniques on simulated data, a modeling mismatch between the simulated training data and the real data degrades its performance on experimental data. We propose an effective solution that reduces this mismatch by adding noise to the simulation input of the training set before simulation; this trains the neural network on the general features of the system rather than the specific features of the simulator and acts as a regularizer for the network. A second issue, as in MBIR, is the attenuation of deeper reflections, for which we propose a spatially variant amplification applied to the back-projection to amplify deeper regions. To reconstruct a large field of view that requires multiple scans, we propose a joint deep neural network technique that reconstructs an image from these scans jointly. Finally, we apply DDL to simulated and experimental ultrasound data, demonstrating significant improvements in image quality compared to the delay-and-sum approach and the linear model-based reconstruction approach.
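For contrast with MBIR and DDL, the baseline delay-and-sum (SAFT-style) focusing can be sketched as below. This is a deliberately simplified 2-D pulse-echo geometry with unit sound speed and sampling interval by default; variable names are ours, not the thesis's.

```python
def delay_and_sum(waveforms, positions, point, c=1.0, dt=1.0):
    """Focus at `point` by summing each transducer's sample at the
    round-trip (pulse-echo) delay; echoes from a reflector at `point`
    add coherently, while other contributions tend to cancel.

    waveforms: one sampled A-scan per transducer.
    positions: (x, y) coordinate of each transducer.
    """
    total = 0.0
    px, py = point
    for wf, (x, y) in zip(waveforms, positions):
        dist = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
        idx = int(round(2.0 * dist / (c * dt)))  # round-trip time in samples
        if idx < len(wf):
            total += wf[idx]
    return total
```

Evaluating this sum over a grid of points yields the SAFT image; MBIR replaces this direct summation with the optimization of a forward-plus-prior cost, and DDL refines a comparable back-projection with a trained network.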
