51

FPGA Acceleration of CNNs Using OpenCL

January 2020 (has links)
abstract: Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in numerous applications such as computer vision, natural language processing, and robotics. The advancement of High-Performance Computing systems equipped with dedicated hardware accelerators has also paved the way towards the success of compute-intensive CNNs. Graphics Processing Units (GPUs), with massive processing capability, have been of general interest for the acceleration of CNNs. Recently, Field Programmable Gate Arrays (FPGAs) have shown promise for CNN acceleration since they offer high performance while also being re-configurable to support the evolution of CNNs. This work focuses on a design methodology to accelerate CNNs on FPGAs with low inference latency and high throughput, which are crucial for scenarios such as self-driving cars and video surveillance. It also includes optimizations which reduce resource utilization by a large margin with only a small degradation in performance, thus making the design suitable for low-end FPGA devices as well. FPGA accelerators often suffer from limited main-memory bandwidth, and highly parallel designs with large resource utilization often end up achieving low operating frequency due to poor routing. This work employs data fetch and buffer mechanisms, designed specifically for the memory access pattern of CNNs, that overlap computation with memory access. It also proposes a novel arrangement of the systolic processing element array to achieve high frequency and consume fewer resources than existing works. In addition, support has been extended to more complex CNNs for video processing. On the Intel Arria 10 GX1150, the design operates at a frequency as high as 258 MHz and performs a single inference of VGG-16 and C3D in 23.5 ms and 45.6 ms, respectively. For VGG-16 and C3D the design offers a throughput of 66.1 and 23.98 inferences/s, respectively. This design can outperform other FPGA 2D CNN accelerators by up to 9.7 times and 3D CNN accelerators by up to 2.7 times. / Dissertation/Thesis / Masters Thesis Computer Science 2020
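As a purely illustrative sketch, and not the thesis' OpenCL kernels or systolic-array design, the NumPy loop nest below shows the kind of tiled, output-stationary convolution dataflow that such accelerators implement; the tile sizes and loop ordering here are assumptions.

```python
# Illustrative only: a tiled convolution loop nest mimicking a systolic PE array.
import numpy as np

def tiled_conv2d(x, w, tile_oc=4, tile_ow=8):
    """x: (IC, H, W) input feature map, w: (OC, IC, K, K) weights; stride 1, no padding."""
    ic, h, width = x.shape
    oc, _, k, _ = w.shape
    oh, ow = h - k + 1, width - k + 1
    y = np.zeros((oc, oh, ow), dtype=np.float32)
    # Each (output-channel, output-column) tile corresponds to one pass over the PE
    # array; partial sums stay local ("output stationary") while weights and
    # activations are streamed in channel by channel from the on-chip buffers.
    for oc0 in range(0, oc, tile_oc):
        for col0 in range(0, ow, tile_ow):
            for c in range(ic):
                for ky in range(k):
                    for kx in range(k):
                        for o in range(oc0, min(oc0 + tile_oc, oc)):
                            for col in range(col0, min(col0 + tile_ow, ow)):
                                y[o, :, col] += w[o, c, ky, kx] * x[c, ky:ky + oh, col + kx]
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((3, 16, 16)).astype(np.float32)
    w = rng.standard_normal((8, 3, 3, 3)).astype(np.float32)
    print(tiled_conv2d(x, w).shape)  # (8, 14, 14)
```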
52

Automatická detekce událostí ve fotbalových zápasech / An automatic football match event detection

Dvonč, Tomáš January 2020 (has links)
This diploma thesis describes methods suitable for the automatic detection of events in video sequences of football matches. The first part of the work focuses on the analysis and creation of procedures for extracting information from the available data. The second part deals with the implementation of selected methods and a neural network algorithm for corner kick detection. Two experiments were performed in this work: the first captures static information from a single image, and the second focuses on detection from spatio-temporal data. The output of this work is a program for automatic event detection, which can be used to interpret the results of the experiments. This work may serve as a basis for gaining new knowledge about the problem and for the further development of event detection in football.
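As a rough sketch only, and not the thesis' actual network, a spatio-temporal classifier of the kind described in the second experiment could look like the PyTorch module below; the class name, clip length, input resolution, and layer sizes are all assumptions.

```python
# A hypothetical 3D CNN that maps a short video clip to a corner-kick / no-event decision.
import torch
import torch.nn as nn

class CornerKickNet3D(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),            # pool only spatially at first
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                    # then pool over time as well
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clips):                   # clips: (batch, 3, frames, H, W)
        feats = self.features(clips).flatten(1)
        return self.classifier(feats)           # logits: corner kick vs. no event

if __name__ == "__main__":
    model = CornerKickNet3D()
    dummy = torch.randn(2, 3, 8, 112, 112)      # 8-frame clips, 112x112 pixels
    print(model(dummy).shape)                   # torch.Size([2, 2])
```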
53

Traffic Forecasting Applications Using Crowdsourced Traffic Reports and Deep Learning

Alammari, Ali 05 1900 (has links)
Intelligent transportation systems (ITS) are essential tools for traffic planning, analysis, and forecasting that can utilize the huge amount of traffic data available nowadays. In this work, we aggregated detailed traffic flow sensor data, Waze reports, OpenStreetMap (OSM) features, and weather data from the California Bay Area over 6 months. Using that data, we studied three novel ITS applications using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The first experiment is an analysis of the relation between roadway shapes and accident occurrence, where results show that the speed limit and number of lanes are significant predictors of major accidents on highways. The second experiment presents a novel method for forecasting congestion severity using crowdsourced data only (Waze, OSM, and weather), without the need for traffic sensor data. The third experiment studies the improvement of traffic flow forecasting using accidents, number of lanes, weather, and time-related features, where results show significant performance improvements when the additional features were used.
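To illustrate the crowdsourced-only forecasting idea (not the dissertation's actual model), the sketch below runs an LSTM over per-interval feature vectors built from Waze report counts, weather, and OSM road attributes; the feature dimension, hidden size, and number of severity levels are assumptions.

```python
# A hypothetical recurrent congestion-severity forecaster over crowdsourced features.
import torch
import torch.nn as nn

class CongestionForecaster(nn.Module):
    def __init__(self, n_features=12, hidden=64, n_severity_levels=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_severity_levels)

    def forward(self, x):                 # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # severity logits for the next interval

if __name__ == "__main__":
    model = CongestionForecaster()
    history = torch.randn(8, 24, 12)      # 24 past intervals, 12 features each
    print(model(history).shape)           # torch.Size([8, 4])
```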
54

Squeeze-and-Excitation SqueezeNext: An Efficient DNN for Hardware Deployment

Naga Venkata Sai Ravi Teja Chappa (8742342) 22 April 2020 (has links)
Convolutional neural networks are being used in the field of autonomous driving vehicles and driver assistance systems (ADAS), and have achieved great success. Before convolutional neural networks, traditional machine learning algorithms supported driver assistance systems. Currently, much exploration is being done on architectures such as MobileNet, SqueezeNext and SqueezeNet, which have improved CNN architectures and made them more suitable for implementation on real-time embedded systems.

This thesis proposes an efficient and compact CNN to improve on the performance of existing CNN architectures. The intuition behind the proposed architecture is to replace convolution layers with a more sophisticated block module and to develop a compact architecture with competitive accuracy. It further explores the bottleneck module and the SqueezeNext basic block structure. The state-of-the-art SqueezeNext baseline architecture is used as a foundation to recreate and propose a high-performance SqueezeNext architecture. The proposed architecture is trained from scratch on the CIFAR-10 dataset, and all training and testing results are visualized with live loss and accuracy graphs. The focus of this thesis is to create an adaptable and flexible model for efficient CNN performance that performs well with a minimal trade-off between model accuracy, size, and speed. With a model size of 0.595 MB, an accuracy of 92.60%, and a satisfactory training and validation speed of 9 seconds, this model can be deployed on real-time autonomous system platforms such as the Bluebox 2.0 by NXP.
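As a minimal sketch of the core idea in the title, the PyTorch block below shows a generic squeeze-and-excitation (SE) module of the kind added to SqueezeNext-style blocks; the channel count and reduction ratio are assumptions, not the thesis' exact configuration.

```python
# A generic squeeze-and-excitation block: global pooling followed by channel-wise gating.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # "squeeze": global spatial context
        self.fc = nn.Sequential(                      # "excitation": per-channel gates
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                             # x: (batch, C, H, W)
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates                              # reweight the feature maps

if __name__ == "__main__":
    block = SEBlock(64)
    print(block(torch.randn(2, 64, 32, 32)).shape)    # torch.Size([2, 64, 32, 32])
```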
55

Todo el día, todos los días. La aparición de los canales de noticias 24 horas en Chile. / All day, every day: The emergence of 24-hour news channels in Chile.

Fuentes Muñoz, Pascale January 2010 (has links)
Thesis submitted to qualify for the professional title of Journalist
56

Applying machine learning to detect structural faults in microscopic images of inserts

Fröjd, Emil January 2020 (has links)
Today the quality control of inserts at Sandvik is done manually by looking at their cross-sections through a microscope. The purpose of this project was to automate the quality control of inserts by exploring machine learning techniques to automatically detect structural faults in microscopic images of the inserts. To detect these faults, an image processing program was first created to extract every possible fault feature, and then a convolutional neural network (CNN) was implemented and applied to verify the faults. The error rate (ER) of extracting the correct faults in the image processing step was 11%, and from these candidate faults the CNN could then identify the actual faults with a 4% ER. The dataset was limited in size and lacked systematic consistency in how the images were taken. As a consequence, the model could not be trained effectively and therefore the system did not perform well enough for direct implementation. However, the system shows great potential for automating the quality control, given that the dataset can be improved by standardizing the way the images are taken and that the amount of data can be increased over time.
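As a hedged illustration of the second stage of the pipeline described above (not the project's actual code), the sketch below classifies candidate fault patches produced by the image-processing step as real faults or false alarms; the patch size, layer widths, and class layout are assumptions.

```python
# A hypothetical CNN verifier for candidate fault regions in grayscale microscope images.
import torch
import torch.nn as nn

class FaultVerifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),                 # fault vs. no fault
        )

    def forward(self, patches):               # patches: (batch, 1, H, W) grayscale crops
        return self.net(patches)

if __name__ == "__main__":
    model = FaultVerifier()
    candidates = torch.randn(4, 1, 64, 64)    # candidate regions from the first stage
    print(model(candidates).shape)            # torch.Size([4, 2])
```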
57

Objektklassificering med Djup Maskininlärning : med CNN (Convolutional Neural Network) / Object Classification with Deep Machine Learning: using CNNs (Convolutional Neural Networks)

Lindell, Linus, Medlock, Samuel, Norling, Markus January 2022 (has links)
Digitalization brings an ever-growing range of computerized technology, with machine learning at the forefront. Everything from industrial robots to self-driving cars can benefit from machine learning in order to function, and other complex problems can also be solved with machine learning. One problem with machine learning, however, is the energy cost of training large models, which makes it important to make the models and their training more efficient. In this project, machine learning models of the Convolutional Neural Network type are developed and then used to perform object classification on the CIFAR-10 dataset, which consists of 60,000 images of 32x32 pixels belonging to ten different categories. Eight different models were constructed with varying numbers of convolutional layers and maximum widths of the convolutional layers, and different activation functions were tested. The model selected as the final product of the project consists of eight convolutional layers with between 64 and 512 channels, giving a total of 5.7 million parameters. This network achieved an accuracy of 91% on 10,000 test images after being trained for 120 epochs on the dataset's 50,000 training images. The training of this model could then be made more efficient by training on only half of the training data, which reduced the training time from about 1 hour and 12 minutes to 40 minutes, while the accuracy dropped by only four percentage points, to 87%.
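As a hedged sketch following the description above (eight convolutional layers with 64 to 512 channels, ten CIFAR-10 classes), the PyTorch model below shows one plausible arrangement; the exact layer order, pooling scheme, and use of batch normalization are assumptions, not the report's final model.

```python
# A hypothetical eight-layer CNN for CIFAR-10 with widths growing from 64 to 512 channels.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU())

class Cifar10CNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        widths = [64, 64, 128, 128, 256, 256, 512, 512]     # eight conv layers
        layers, c_in = [], 3
        for i, c_out in enumerate(widths):
            layers.append(conv_block(c_in, c_out))
            if i % 2 == 1:
                layers.append(nn.MaxPool2d(2))               # halve resolution every 2 layers
            c_in = c_out
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):                                    # x: (batch, 3, 32, 32)
        feats = self.features(x).mean(dim=(2, 3))            # global average pooling
        return self.classifier(feats)

if __name__ == "__main__":
    model = Cifar10CNN()
    print(sum(p.numel() for p in model.parameters()))        # a few million parameters
    print(model(torch.randn(2, 3, 32, 32)).shape)            # torch.Size([2, 10])
```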
58

SUPERVISED MACHINE LEARNING (SML) IN SIMULATED ENVIRONMENTS

Rexby, Mattias January 2021 (has links)
Artificial intelligence has made a big impact on the world in recent years, and more knowledge in the subject seems to be of vital importance as the possibilities seem endless. Is it possible to teach a computer to drive a car in a virtual environment by training a neural network to act intelligently through the use of supervised machine learning? With less than 2 hours of data collected while personally driving the car, I show that yes, it is indeed possible. This is done by applying the techniques of supervised machine learning in conjunction with a deep convolutional neural network, through software developed to interact between the network and the agent inside the virtual environment. I believe the dataset could have been cut down to about 10 percent of its size and still achieve the research goal. This shows not just the possibility of teaching a neural network a good policy in stochastic environments with supervised machine learning, but also that it can draw accurate (enough) conclusions to imitate human behavior when driving a car.
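As an illustrative sketch of this kind of behavioural-cloning setup (not the thesis' actual software), the snippet below regresses driving commands from a game frame with a small convolutional network; the input resolution, output signals, and layer sizes are assumptions.

```python
# A hypothetical supervised driving policy: frame in, steering and throttle out.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 2)          # steering angle, throttle

    def forward(self, frames):                # frames: (batch, 3, H, W)
        return self.head(self.encoder(frames))

if __name__ == "__main__":
    policy = DrivingPolicy()
    frame = torch.randn(1, 3, 120, 160)       # one captured game frame
    target = torch.tensor([[0.1, 0.8]])       # recorded human steering/throttle
    loss = nn.functional.mse_loss(policy(frame), target)
    loss.backward()                           # standard supervised regression step
    print(float(loss))
```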
59

Automatic Classification of Snow Particles

Axebrink, Emma January 2021 (has links)
The simplest form of a snow particle is a hexagonal prism which can grow into a stellar crystal by growing branches from the six corners of the prism. The snow particle is affected by the temperature and supersaturation in the air, giving it its unique form. Manual classification of snow particles based on shape is tedious work. Convolutional Neural Networks (CNNs) can therefore be of great assistance and are common in automatic image processing. From a data set consisting of 3165 images sorted into 15 shape classes, a subset of 2193 images and 7 classes was used. The selected classes had the highest number of snow particle images and were used for training, validation and testing. Four data sets were constructed and eight models were used to classify the snow particles into seven classes. To reduce the amount of training data needed, pretrained versions of the neural networks AlexNet and ResNet50 were used with a technique called transfer learning. The 2193 images make up the first data set, Data set 1. To handle unbalanced classes in the first data set, the Synthetic Minority Oversampling Technique (SMOTE) was used to increase the number of snow particles in classes with few examples, creating Data set 2. A third data set was constructed to mimic a real-world application: the data for training and validation was increased with SMOTE, while the test data consisted only of real snow particles. The performance of both ResNet50 and AlexNet on the data met the requirements for a practical application. However, ResNet50 had a higher overall accuracy, 72%, compared to AlexNet's 69% on the evaluated data set. A t-test was conducted with a significance of p < 1·10^-8. To enhance the shape of the snow particles a Euclidean Distance Transform (EDT) was used, creating Data set 4. However, this did not increase the accuracy of the trained model. To increase the accuracy of the models, more training data of snow particles is needed, especially for classes with few examples. A larger data set would also allow more classes to be included in the classification.
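As a minimal sketch of the transfer-learning setup described above (assuming torchvision, and not the thesis' actual training code), the snippet below loads an ImageNet-pretrained ResNet50 and replaces its final layer for the seven snow-particle classes; the freezing strategy and input size are assumptions.

```python
# Transfer learning sketch: reuse pretrained ResNet50 features, retrain only the head.
import torch
import torch.nn as nn
from torchvision import models

def build_snow_classifier(num_classes=7, freeze_backbone=True):
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False           # keep the pretrained features fixed
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head
    return model

if __name__ == "__main__":
    model = build_snow_classifier()
    images = torch.randn(2, 3, 224, 224)      # resized snow-particle images
    print(model(images).shape)                # torch.Size([2, 7])
```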
60

Dynamic Hand Gesture Recognition Using Ultrasonic Sonar Sensors and Deep Learning

Lin, Chiao-Shing 03 March 2022 (has links)
The space of hand gesture recognition using radar and sonar is dominated mostly by radar applications. In addition, the machine learning algorithms used by these systems are typically based on convolutional neural networks, with some applications exploring the use of long short-term memory networks. The goal of this study was to design and build a sonar system that can classify hand gestures using a machine learning approach. Secondly, the study aims to compare convolutional neural networks to long short-term memory networks as a means to classify hand gestures using sonar. A Doppler sonar system was designed and built to be able to sense hand gestures. The sonar system is a multi-static system containing one transmitter and three receivers, and it can measure the Doppler frequency shifts caused by dynamic hand gestures. Since the system uses three receivers, three different Doppler frequency channels are measured. Three additional differential frequency channels are formed by computing the differences between the frequencies of the receivers. These six channels are used as inputs to the deep learning models. Two different deep learning algorithms were used to classify the hand gestures: a Doppler biLSTM network [1] and a CNN [2]. Six basic hand gestures, two in each of the x-, y- and z-axes, and two rotational hand gestures were recorded using both left and right hands at different distances. Ten-fold cross-validation is used to evaluate the networks' performance and classification accuracy. The LSTM was able to classify the six basic gestures with an accuracy of at least 96%, but with the addition of the two rotational gestures the accuracy drops to 47%. This result is acceptable since the basic gestures are more commonly used than rotational gestures. The CNN was able to classify all the gestures with an accuracy of at least 98%. Additionally, the LSTM network is able to classify separate left- and right-hand gestures with an accuracy of 80%, and the CNN with an accuracy of 83%. The study shows that the CNN is the most widely used algorithm for hand gesture recognition as it can consistently classify gestures with various degrees of complexity. The study also shows that the LSTM network can classify hand gestures with a high degree of accuracy. More experimentation, however, needs to be done in order to increase the complexity of recognisable gestures.
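As a rough illustration of the bidirectional-LSTM classifier idea (not the thesis' network from [1]), the sketch below treats the six Doppler/differential channels as the per-time-step feature vector and classifies the sequence into gesture classes; the sequence length, hidden size, and class count are assumptions.

```python
# A hypothetical biLSTM gesture classifier over six Doppler frequency channels.
import torch
import torch.nn as nn

class GestureBiLSTM(nn.Module):
    def __init__(self, n_channels=6, hidden=128, n_gestures=8):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_gestures)

    def forward(self, x):                  # x: (batch, time_steps, 6 Doppler channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # gesture logits from the final time step

if __name__ == "__main__":
    model = GestureBiLSTM()
    sweep = torch.randn(4, 100, 6)         # 100 time steps of Doppler measurements
    print(model(sweep).shape)              # torch.Size([4, 8])
```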
