51

Towards an Accurate ECG Biometric Authentication System with Low Acquisition Time

Arteaga Falconi, Juan Sebastian 31 January 2020 (has links)
Biometrics is the study of physical or behavioural traits that establish the identity of a person. Forensics, physical security, and cyber security are some of the main fields that use biometrics. Unlike traditional authentication systems, such as password-based ones, biometric traits cannot be lost, forgotten, or shared, because they establish the identity of a person from a physiological or behavioural characteristic rather than from what the person possesses or remembers. Biometrics has two modes of operation: identification and authentication. Identification finds the identity of a person within a group of people; authentication determines whether the claimed identity of a person is truthful. Biometric person authentication is an alternative to passwords or graphical patterns, and it prevents shoulder-surfing attacks, i.e., people watching from a short distance. Nevertheless, the biometric traits used by conventional authentication techniques such as fingerprint, face, and, to some extent, iris are easy to capture and duplicate. This poses a security risk for modern and future applications such as digital twins, where an attacker can copy and duplicate a biometric trait in order to spoof a biometric system. Researchers have proposed ECG-based biometric authentication to solve this problem: ECG authentication conceals the biometric trait and reduces the risk of an attack by duplication. However, current ECG authentication solutions require 10 or more seconds of ECG signal to produce accurate results, since accuracy is directly proportional to the length of the ECG signal used for authentication. This makes ECG authentication inconvenient to implement in an end-user product, because a user cannot wait 10 or more seconds to gain secure access to their device. This thesis addresses the problem of spoofing by proposing an accurate and secure ECG biometric authentication system that uses a relatively short ECG signal for authentication. The system consists of ECG acquisition from lead I (two electrodes), signal processing for filtering and R-peak detection, a feature extractor, and an authentication process. To evaluate the system, we developed a method to calculate the Equal Error Rate (EER) with non-normally distributed data. For the authentication process, we first propose an approach based on a Support Vector Machine (SVM) and achieve 4.5% EER with 4 seconds of ECG signal. Building on a deeper understanding of the signal, we then enhance this approach with a hybrid of Convolutional Neural Networks (CNN) and SVM: the CNN automatically detects and extracts features, and its output is fed into a one-class SVM classifier for authentication, which proved to outperform other approaches in accuracy for one-class ECG classification. This hybrid approach reduces the EER to 2.84% with 4 seconds of ECG signal. Furthermore, we investigated the combination of two different biometric techniques, fusing fingerprint with ECG at the decision level, and improved the accuracy to 0.46% EER while maintaining a short ECG signal length of 4 seconds.
Decision-level fusion requires only information that is available from any biometric technique, whereas fusion at other levels, such as feature-level fusion, requires feature information that may be incompatible or hidden. Fingerprint minutiae carry information that differs from ECG peaks and valleys, so fusion at the feature level is not possible unless the fusion algorithm provides a compatible conversion scheme. Moreover, proprietary biometric hardware does not expose its features or algorithms, so the features are hidden and inaccessible for feature-level fusion, while the decision result is always available for decision-level fusion.
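As a concrete illustration of the one-class authentication step described above, here is a minimal scikit-learn sketch. The feature dimensionality, the nu and gamma values, and the random stand-in features are illustrative assumptions; the thesis derives its features from the ECG signal (and, in the hybrid approach, from a CNN).

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
enrolled = rng.normal(size=(50, 16))   # stand-in features from the enrolled user's ECG
probe = rng.normal(size=(1, 16))       # features from a 4-second authentication attempt

# Train only on the genuine user's samples; at test time the classifier
# returns +1 for "genuine" and -1 for "impostor".
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(enrolled)
accepted = clf.predict(probe)[0] == 1
print("access granted" if accepted else "access denied")
```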
52

FPGA Acceleration of CNNs Using OpenCL

January 2020 (has links)
Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in numerous applications such as computer vision, natural language processing, and robotics. The advancement of high-performance computing systems equipped with dedicated hardware accelerators has also paved the way for the success of compute-intensive CNNs. Graphics Processing Units (GPUs), with their massive processing capability, have been of general interest for the acceleration of CNNs. Recently, Field Programmable Gate Arrays (FPGAs) have shown promise for CNN acceleration, since they offer high performance while also being re-configurable to support the evolution of CNNs. This work focuses on a design methodology to accelerate CNNs on FPGAs with low inference latency and high throughput, which are crucial for scenarios such as self-driving cars and video surveillance. It also includes optimizations that reduce resource utilization by a large margin with only a small degradation in performance, making the design suitable for low-end FPGA devices as well. FPGA accelerators often suffer from limited main-memory bandwidth, and highly parallel designs with large resource utilization often end up achieving a low operating frequency due to poor routing. This work employs data fetch and buffer mechanisms, designed specifically for the memory access pattern of CNNs, that overlap computation with memory access. It proposes a novel arrangement of the systolic processing element array that achieves high frequency and consumes fewer resources than existing works, and support has been extended to more complex CNNs for video processing. On an Intel Arria 10 GX1150, the design operates at a frequency as high as 258 MHz and performs a single inference of VGG-16 and C3D in 23.5 ms and 45.6 ms respectively, offering throughputs of 66.1 and 23.98 inferences/s. This design can outperform other FPGA 2D CNN accelerators by up to 9.7 times and 3D CNN accelerators by up to 2.7 times. / Dissertation/Thesis / Masters Thesis Computer Science 2020
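The core computational pattern behind a systolic processing-element array is a pipelined sequence of rank-1 updates. The toy NumPy sketch below simulates that dataflow in software; the output-stationary dataflow and the skew-free formulation are illustrative assumptions, not the thesis's actual hardware design.

```python
import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Simulate an output-stationary systolic array: each 'cycle' t streams
    one rank-1 update through the grid, with PE (i, j) accumulating
    A[i, t] * B[t, j] into the stationary output C[i, j]."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for t in range(k):
        C += np.outer(A[:, t], B[t, :])
    return C

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(systolic_matmul(A, B), A @ B)
```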
53

Automatická detekce událostí ve fotbalových zápasech / An automatic football match event detection

Dvonč, Tomáš January 2020 (has links)
This diploma thesis describes methods suitable for the automatic detection of events in video sequences of football matches. The first part of the work focuses on analysing the available data and designing procedures for extracting information from it. The second part deals with the implementation of selected methods and a neural network algorithm for corner kick detection. Two experiments were performed in this work: the first captures static information from a single image, and the second focuses on detection from spatio-temporal data. The output of this work is a program for automatic event detection, which can be used to interpret the results of the experiments and which may serve as a basis for gaining new knowledge about the problem and for the further development of event detection in football.
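A minimal PyTorch sketch of what a spatio-temporal corner-kick detector in the spirit of the second experiment could look like: a small 3D CNN over a short clip. The clip length, resolution, channel widths, and two-class output are assumptions, since the thesis does not publish its architecture.

```python
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # joint space-time features
    nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(32, 2),   # corner kick vs. no corner kick
)

clip = torch.randn(1, 3, 16, 112, 112)   # (batch, channels, frames, height, width)
print(detector(clip).shape)              # torch.Size([1, 2])
```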
54

Traffic Forecasting Applications Using Crowdsourced Traffic Reports and Deep Learning

Alammari, Ali 05 1900 (has links)
Intelligent transportation systems (ITS) are essential tools for traffic planning, analysis, and forecasting that can exploit the huge amount of traffic data available nowadays. In this work, we aggregated detailed traffic flow sensor data, Waze reports, OpenStreetMap (OSM) features, and weather data from the California Bay Area over 6 months. Using these data, we studied three novel ITS applications based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The first experiment analyses the relation between roadway shape and accident occurrence; its results show that the speed limit and the number of lanes are significant predictors of major accidents on highways. The second experiment presents a novel method for forecasting congestion severity using crowdsourced data only (Waze, OSM, and weather), without the need for traffic sensor data. The third experiment studies the improvement of traffic flow forecasting using accidents, number of lanes, weather, and time-related features; its results show significant performance improvements when the additional features were used.
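A minimal sketch of a recurrent forecaster in the spirit of the congestion-severity experiment: an LSTM over a window of crowdsourced features predicting a severity class. The feature width, window length, and number of severity classes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CongestionForecaster(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 32, n_classes: int = 4):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # congestion-severity classes

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])  # classify from the last hidden state

model = CongestionForecaster()
logits = model(torch.randn(2, 12, 8))   # 12 past time steps of Waze/OSM/weather features
print(logits.shape)                     # torch.Size([2, 4])
```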
55

Squeeze-and-Excitation SqueezeNext: An Efficient DNN for Hardware Deployment

Naga Venkata Sai Ravi Teja Chappa (8742342) 22 April 2020 (has links)
Convolutional neural networks are used in autonomous driving vehicles and advanced driver-assistance systems (ADAS), where they have achieved great success. Before convolutional neural networks, traditional machine learning algorithms powered driver-assistance systems. Currently, there is great exploration of architectures such as MobileNet, SqueezeNext, and SqueezeNet, which have improved CNN architectures and made them more suitable for implementation on real-time embedded systems.

This thesis proposes an efficient and compact CNN to improve the performance of existing CNN architectures. The intuition behind the proposed architecture is to replace convolution layers with a more sophisticated block module and to develop a compact architecture with competitive accuracy. It further explores the bottleneck module and the SqueezeNext basic block structure. The state-of-the-art SqueezeNext baseline architecture is used as a foundation to recreate and propose a high-performance SqueezeNext architecture. The proposed architecture is trained from scratch on the CIFAR-10 dataset, and all training and testing results are visualized with live loss and accuracy graphs. The focus of this thesis is an adaptable, flexible model for efficient CNN performance with a minimal trade-off between model accuracy, size, and speed. With a model size of 0.595 MB, an accuracy of 92.60%, and a satisfactory training and validation speed of 9 seconds, this model can be deployed on real-time autonomous system platforms such as the Bluebox 2.0 by NXP.
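For reference, a minimal PyTorch sketch of the Squeeze-and-Excitation block the title refers to: global average pooling followed by a two-layer gating MLP that reweights channels. How the block is wired into the SqueezeNext basic block is an assumption, since the abstract does not give the exact placement.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: per-channel global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                     # excitation: per-channel gates in [0, 1]
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # reweight the feature map channels

x = torch.randn(1, 64, 8, 8)
print(SEBlock(64)(x).shape)                   # torch.Size([1, 64, 8, 8])
```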
56

Todo el día, todos los días. La aparición de los canales de noticias 24 horas en Chile. / All day, every day: The emergence of 24-hour news channels in Chile.

Fuentes Muñoz, Pascale January 2010 (has links)
Thesis submitted to obtain the professional title of Journalist
57

Applying machine learning to detect structural faults in microscopic images of inserts

Fröjd, Emil January 2020 (has links)
Today, the quality control of inserts at Sandvik is done manually by examining their cross-sections through a microscope. The purpose of this project was to automate this quality control by exploring machine learning techniques for detecting structural faults in microscopic images of the inserts. To detect these faults, an image processing program was first created to extract every possible fault feature, and a convolutional neural network (CNN) was then implemented and applied to verify the faults. The error rate (ER) of extracting the correct faults in the image processing stage was 11%, and from these candidate faults the CNN could then identify the actual faults with a 4% ER. The dataset was limited in size and lacked systematic consistency in how the images were taken. As a consequence, the model could not be trained effectively, and the system therefore did not perform well enough for direct implementation. However, the system shows great potential for automating the quality control, provided that the dataset is improved by standardizing the way the images are taken and that the amount of data is increased over time.
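A hedged sketch of the two-stage idea: classical image processing proposes candidate fault regions, and each candidate patch would then be passed to the verification CNN. The Otsu thresholding, contour-based proposals, and patch size are assumptions; Sandvik's actual pipeline is not public.

```python
import cv2
import numpy as np

def candidate_regions(gray: np.ndarray, patch: int = 64):
    """Propose fault candidates as dark blobs in an 8-bit cross-section image."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w // 2, y + h // 2
        # Crop a fixed-size patch centred on the candidate for CNN verification.
        yield gray[max(cy - patch // 2, 0):cy + patch // 2,
                   max(cx - patch // 2, 0):cx + patch // 2]
```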
58

Objektklassificering med Djup Maskininlärning : med CNN (Convolutional Neural Network) / Object classification with deep machine learning: with CNN (Convolutional Neural Network)

Lindell, Linus, Medlock, Samuel, Norling, Markus January 2022 (has links)
Digitalization brings an ever-growing range of computerized technology, with machine learning at the forefront. Everything from industrial robots to self-driving cars can make use of machine learning to function, and other complex problems can be solved with machine learning as well. One problem with machine learning, however, is the energy cost of training large models, which makes more efficient models and training an important concern. In this project, machine learning models of the Convolutional Neural Network type are developed and then used to perform object classification on the CIFAR-10 dataset, which consists of 60,000 images of 32x32 pixels belonging to ten different categories. Eight models were constructed with varying numbers of convolutional layers and maximum widths of the convolutional layers, and different activation functions were tested. The model selected as the project's final product consists of eight convolutional layers with between 64 and 512 channels, giving a total of 5.7 million parameters. This network achieved an accuracy of 91% on the 10,000 test images after training for 120 epochs on the dataset's 50,000 training images. The training of this model could then be made more efficient by training on only half of the training data, which reduced the training time from about 1 hour and 12 minutes to 40 minutes, while accuracy dropped by only four percentage points, to 87%.
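A minimal PyTorch sketch of the kind of network described: eight convolutional layers with 64 to 512 channels, trained on CIFAR-10. The exact channel progression, pooling placement, and use of batch normalization are assumptions; with this particular layout the parameter count comes to about 4.7 million, so the thesis's 5.7-million-parameter configuration differs in detail.

```python
import torch
import torch.nn as nn

class Cifar10Cnn(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        widths = [64, 64, 128, 128, 256, 256, 512, 512]   # eight conv layers
        layers, in_ch = [], 3
        for i, out_ch in enumerate(widths):
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.ReLU(inplace=True)]
            if i % 2 == 1:                    # halve the resolution every two layers
                layers.append(nn.MaxPool2d(2))
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Linear(512 * 2 * 2, num_classes)  # 32x32 -> 2x2 after 4 pools

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Cifar10Cnn()
print(sum(p.numel() for p in model.parameters()))   # ~4.7M with this layout
```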
59

SUPERVISED MACHINE LEARNING (SML) IN SIMULATED ENVIRONMENTS

Rexby, Mattias January 2021 (has links)
Artificial intelligence has made a big impact on the world in recent years, and more knowledge in the subject seems to be of vital importance, as the possibilities seem endless. Is it possible to teach a computer to drive a car in a virtual environment by training a neural network to act intelligently through supervised machine learning? With less than 2 hours of data collected while personally driving the car, I show that yes, it is indeed possible. This is done by applying the techniques of supervised machine learning in conjunction with a deep convolutional neural network, through software developed to mediate between the network and the agent inside the virtual environment. I believe the dataset could have been cut down to about 10 percent of its size and still achieve the research goal. This shows not just the possibility of teaching a neural network a good policy in stochastic environments with supervised machine learning, but also that it can draw accurate (enough) conclusions to imitate human behavior when driving a car.
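A minimal behaviour-cloning sketch of this kind of setup: a convolutional network regresses a steering command from a frame and is trained on (frame, human action) pairs with a regression loss. The input resolution, network size, and loss are assumptions; the thesis does not publish its architecture.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 14 * 47, 64), nn.ReLU(),   # a 66x200 frame shrinks to a 14x47 map
    nn.Linear(64, 1),                         # predicted steering command
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 66, 200)   # stand-in for recorded game frames
actions = torch.randn(8, 1)           # stand-in for the human driver's steering

loss = nn.functional.mse_loss(policy(frames), actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```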
60

Automatic Classification of Snow Particles

Axebrink, Emma January 2021 (has links)
The simplest form of a snow particle is a hexagonal prism, which can grow into a stellar crystal by growing branches from the six corners of the prism. The snow particle is affected by the temperature and supersaturation in the air, giving it its unique form. Manual classification of snow particles based on shape is tedious work. Convolutional Neural Networks (CNNs) can therefore be of great assistance and are common in automatic image processing. From a data set consisting of 3165 images sorted into 15 shape classes, a subset of 2193 images and 7 classes was used. The selected classes had the highest number of snow particle images and were used for training, validation, and testing. Four data sets were constructed and eight models were used to classify the snow particles into the seven classes. To reduce the amount of training data needed, pretrained versions of the neural networks AlexNet and ResNet50 were used with a technique called transfer learning. The 2193 images make up the first data set, Data set 1. To handle the unbalanced classes in the first data set, the Synthetic Minority Oversampling Technique (SMOTE) was used to increase the number of snow particles in classes with few examples, creating Data set 2. A third data set was constructed to mimic a real-world application: the data for training and validation was increased with SMOTE, while the test data consisted only of real snow particles. The performance of both ResNet50 and AlexNet on the data met the requirements for a practical application. However, ResNet50 had a higher overall accuracy, 72%, compared to AlexNet's 69% on the evaluated data set; a t-test was conducted with a significance of p < 1·10⁻⁸. To enhance the shape of the snow particles, a Euclidean Distance Transform (EDT) was used, creating Data set 4; however, this did not increase the accuracy of the trained model. To increase the accuracy of the models, more training data of snow particles is needed, especially for classes with few examples. A larger data set would also allow more classes to be included in the classification.
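A minimal transfer-learning sketch in the spirit of the approach above: an ImageNet-pretrained ResNet50 (torchvision 0.13 or later) with its final layer replaced by a new 7-class head for the snow-particle shapes. Freezing the backbone is an assumption; the thesis may fine-tune more layers.

```python
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained weights (torchvision >= 0.13 API).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                     # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 7)   # new head: 7 snow-particle shape classes
```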
