151

EXPLORATION OF DEEP LEARNING APPLICATIONS ON AN AUTONOMOUS EMBEDDED PLATFORM (BLUEBOX 2.0)

Dewant Katare (8082806) 06 December 2019 (has links)
An autonomous vehicle depends on a combination of the latest technologies, or ADAS safety features, such as Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), Automatic Parking, Blind Spot Monitoring, Forward Collision Warning or Avoidance (FCW or FCA), and Lane Departure Warning. The current trend is to implement these features with artificial or deep neural networks in place of the traditionally used algorithms. Recent research in deep learning and the development of capable processors for autonomous or self-driving cars has shown great promise, but hardware deployment remains difficult because of limited resources such as memory, computational power, and energy. Deploying several of the mentioned ADAS safety features with multiple sensors and individual processors increases integration complexity and also distributes the system, which is pivotal for autonomous vehicles.

This thesis tackles two important ADAS safety features, forward collision warning and object detection, using machine learning and deep neural networks, and their deployment on an autonomous embedded platform.

This thesis proposes the following:
1. A machine-learning-based approach for the forward collision warning system in an autonomous vehicle.
2. 3-D object detection using Lidar and camera, primarily based on Lidar point clouds.

The proposed forward collision warning model is based on a forward-facing automotive radar providing sensed input values such as acceleration, velocity, and separation distance to a classifier that, using a supervised learning model, alerts the driver of a possible collision. Decision Trees, Linear Regression, Support Vector Machines, Stochastic Gradient Descent, and a fully connected neural network are used for prediction.

The second proposed method uses an object detection architecture that combines 2D object detectors with contemporary 3D deep learning techniques. In this approach, a 2D object detector is applied first to propose 2D bounding boxes on the images or video frames. A 3D object detection technique then instance-segments the point clouds and, based on the raw point cloud density, predicts a 3D bounding box around each previously segmented object.
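For illustration, a hedged sketch of the forward-collision-warning classifier stage: radar-derived features (velocity, acceleration, separation distance) fed to several scikit-learn classifiers standing in for the models listed in the abstract. The file name, column layout, and label convention are assumptions, not the thesis dataset.

```python
# Hypothetical sketch of a supervised forward-collision-warning classifier.
# The CSV name, column layout, and label convention are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier
from sklearn.neural_network import MLPClassifier

# Each row: ego velocity (m/s), relative acceleration (m/s^2), separation distance (m), label
X = np.loadtxt("radar_samples.csv", delimiter=",", usecols=(0, 1, 2))
y = np.loadtxt("radar_samples.csv", delimiter=",", usecols=3, dtype=int)  # 1 = warn, 0 = safe

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "svm": SVC(kernel="rbf"),
    "sgd": SGDClassifier(),
    "fully_connected": MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))  # held-out accuracy per candidate model
```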
152

Using active learning for semi-automatically labeling a dataset of fisheye distorted images for object detection

Bourghardt, Olof January 2022 (has links)
Self-driving vehicles have become a hot topic in industry in recent years, and companies all around the globe are attempting to solve the complex task of developing vehicles that can safely navigate roads and traffic without the assistance of a driver. As deep learning and computer vision become more streamlined, and with the possibility of using fisheye cameras as a cheap alternative to external sensors, some companies have begun researching assisted driving for vehicles such as electric scooters, both to prevent injuries and accidents by detecting dangerous situations and to promote a sustainable infrastructure. Training such a model, however, requires gathering large amounts of data that need to be labeled by a human annotator. This process is expensive, time consuming, and requires extensive quality checking, which can be difficult for companies to afford. This thesis presents an application for semi-automatically labeling a dataset with the help of a human annotator and an object detector. The application trains an object detector within an active learning framework on a small amount of labeled data sampled from the WoodScape dataset of fisheye-distorted images, and uses the knowledge of the trained model, assisted by a human annotator, to label more data. The thesis examines the labels produced with this application and compares them with the quality of the annotations in the WoodScape dataset. Results show that the model could not produce annotations of comparable quality to the WoodScape dataset, so the human annotator had to label all of the data, and the model achieved an accuracy of 0.00099 mAP.
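A minimal sketch of a pool-based active-learning loop of the kind described above, assuming a hypothetical detector object with train/predict methods, a human annotate callback, and least-confidence sampling; none of these names, thresholds, or batch sizes come from the thesis application.

```python
# Minimal pool-based active-learning loop for semi-automatic labeling (a sketch;
# the detector interface, confidence threshold, and batch size are assumptions).
def active_labeling_loop(detector, labeled, unlabeled, annotate, rounds=5, batch=100, conf_thr=0.8):
    for _ in range(rounds):
        detector.train(labeled)                                # retrain on all labels so far
        scored = [(img, detector.predict(img)) for img in unlabeled]
        # Least-confident sampling: send the detector's weakest predictions to the human first.
        scored.sort(key=lambda item: item[1].confidence)
        for img, pred in scored[:batch]:
            # Accept the model's boxes when it is confident, otherwise ask the human annotator.
            boxes = pred.boxes if pred.confidence >= conf_thr else annotate(img)
            labeled.append((img, boxes))
            unlabeled.remove(img)
    return labeled
```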
153

East, West, South, North, and Center - Live Electronic Music based on Neural Network, Board Game, and Data-driven Instrument

Mu, Yunze 24 May 2022 (has links)
No description available.
154

Target Recognition and Following in Small Scale UAVs

Lindgren, Ellen January 2022 (has links)
The UAV industry has experienced a boost in recent years, and developments on both the hardware and algorithmic side have enabled smaller and more accessible drones with increased functionality. This thesis investigates the possibilities of autonomous target recognition and tracking in small, low-cost drones that are commercially available today. The design and deployment of an object recognition and tracking algorithm on a Crazyflie 2.1, a palm-sized quadcopter weighing a few tens of grams, is presented. The hardware is extended with an expansion board called the AI-deck, featuring a fixed, front-facing camera and a GAP8 processor for machine learning inference. The aim is to create a vision-based autonomous control system for target recognition and following, with all computations executed onboard and without any dependence on external input. A MobileNet-SSD object detector trained to detect human bodies is used to detect a person in images from the onboard camera. Proportional controllers that process the output of the detection algorithm are implemented for motion control of the Crazyflie, moving the drone to the desired position. The final implementation is tested indoors and proved able to detect a target and follow simple movements of a person in front of the drone. However, the reliability and speed of the detection need to be improved to achieve a satisfactory result.
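A sketch of the proportional control step, assuming a bounding box from the onboard detector; the camera resolution, gains, target box size, and the hover-setpoint call in the final comment are assumptions, not values from the thesis.

```python
# Proportional controllers turning a person detection into motion commands (a sketch;
# resolution, gains, and setpoints are assumptions).
IMG_W, IMG_H = 324, 244          # onboard camera resolution (assumed)
TARGET_AREA = 0.15               # desired box area fraction, i.e. desired following distance
K_YAW, K_FWD = 90.0, 1.5         # proportional gains

def follow_command(box):
    """box = (x_min, y_min, x_max, y_max) in pixels from the onboard detector."""
    cx = (box[0] + box[2]) / 2.0
    area = (box[2] - box[0]) * (box[3] - box[1]) / float(IMG_W * IMG_H)
    yaw_rate = K_YAW * (cx / IMG_W - 0.5)      # turn toward the person (deg/s)
    forward = K_FWD * (TARGET_AREA - area)     # advance until the box reaches the target size (m/s)
    return forward, yaw_rate

# e.g. something like cf.commander.send_hover_setpoint(forward, 0.0, yaw_rate, 0.8)
# would then be issued to the Crazyflie (interface assumed).
```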
155

Computer Vision and Machine Learning for a Spoon-feeding Robot : A prototype solution based on ABB YuMi and an Intel RealSense camera

Loffreno, Michele January 2021 (has links)
A lot of people worldwide are affected by limitations and disabilities that make even essential actions and everyday tasks, such as eating, hard to do. This thesis considers the impact of robotics on the lives of elderly people, or people with any kind of disability, for whom everyday actions such as eating are difficult. The aim is to study the implementation of a robotic system that achieves an automatic feeding process. Different kinds of robots and solutions were taken into account, for instance the Obi and the prototype realized by Washington University. The system considered uses an RGBD camera, an Intel RealSense D400 series camera, to detect pieces of cutlery and food on a table, and a robotic arm, an ABB YuMi, to pick up the identified objects. The spoon detection is based on the pre-trained convolutional neural network AlexNet provided by MATLAB. Two detectors were implemented: the first can detect up to four different objects (spoon, plate, fork, and knife), while the second can detect only a spoon and a plate. Different algorithms based on morphology were tested to compute the pose of the detected objects. RobotStudio was used to establish a connection between MATLAB and the robot. The goal was to make the whole process as automated as possible. The neural network trained on two objects reached 100% accuracy during the training test. The detector based on it was tested on the real system: it was possible to detect the spoon and the plate and to draw a well-centered bounding box. The accuracy reached can be considered satisfying, since it was possible to grasp a spoon with the YuMi based on a picture of the table. It was noticed that the lighting condition is the key factor in either obtaining a satisfying result or missing the detection of the spoon. The best result was achieved when the light was uniform and there were no reflections or shadows on the objects. The pictures that gave the best detection results were taken in an apartment. Despite the limitations of the interface between MATLAB and the controller of the YuMi, a good level of automation was reached.
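The thesis computes object pose with morphology-based algorithms in MATLAB; as a rough illustration only, an OpenCV equivalent of such a step might look like the following, with the thresholding scheme and grasp-point convention being assumptions.

```python
# Hypothetical morphology-based pose estimate for a detected spoon
# (the thesis uses MATLAB; this OpenCV sketch and its thresholds are assumptions).
import cv2
import numpy as np

def spoon_pose(image_bgr, box):
    x1, y1, x2, y2 = box                                   # bounding box from the CNN detector
    roi = cv2.cvtColor(image_bgr[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (cx, cy), (w, h), angle = cv2.minAreaRect(max(contours, key=cv2.contourArea))
    return (x1 + cx, y1 + cy), angle                        # grasp point and in-plane orientation
```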
156

NOVEL ENTROPY FUNCTION BASED MULTI-SENSOR FUSION IN SPACE AND TIME DOMAIN: APPLICATION IN AUTONOMOUS AGRICULTURAL ROBOT

Md Nazmuzzaman Khan (10581479) 07 May 2021 (has links)
How can we transform an agricultural vehicle into an autonomous weeding robot? A robot that can run autonomously through a vegetable field, classify multiple types of weeds from a real-time video feed, and then spray specific herbicides based on the previously classified weeds. In this research, we answer some of the theoretical and practical challenges regarding this transformation.

First, we propose a solution for real-time crop row detection for autonomous navigation of an agricultural vehicle using domain knowledge and an unsupervised machine learning based approach. We implement a projective transformation to map the camera image plane to an image plane exactly at the top of the crop rows, so that parallel crop rows remain parallel. Then we use color-based segmentation to differentiate crop and weed pixels from the background. We implement the hierarchical density-based spatial clustering of applications with noise (HDBSCAN) algorithm to differentiate between the crop row clusters and weed clusters.

Finally, we use random sample consensus (RANSAC) for robust line fitting through the detected crop row clusters. We test our algorithm against four well-established methods for crop row detection in terms of processing time and accuracy. Our proposed method, Clustering Algorithm based RObust LIne Fitting (CAROLIF), shows significantly better accuracy compared to three other methods, with an average intersection over union (IoU) value of 73%. We also test our algorithm on a video taken from an agricultural vehicle at a corn field in Indiana. CAROLIF shows promising results under lighting variation, vibration, and unusual crop-weed growth.

Then we propose a robust weed classification system based on a convolutional neural network (CNN) and a novel decision-level evidence-based multi-sensor fusion algorithm. We create a small dataset of three different weeds (giant ragweed, pigweed, and cocklebur) commonly found in corn fields. We train three different CNN architectures on our dataset. Based on classification accuracy and inference time, we choose a VGG16 architecture with transfer learning for real-time weed classification.

To create a robust and stable weed classification pipeline, a multi-sensor fusion algorithm based on Dempster-Shafer (DS) evidence theory with a novel entropy function is proposed. The proposed entropy function is inspired by Shannon and Deng entropy but captures uncertainty better in certain scenarios under the DS framework. Our proposed algorithm has two advantages compared to other sensor fusion algorithms. First, it can be applied in both the space and time domains to fuse results from multiple sensors and produce more robust results. Second, it can detect which sensor in the array is faulty and compensate for it by assigning it a lower weight in real time. Our proposed algorithm calculates the evidence distance from each sensor and determines whether one sensor agrees or disagrees with another. It then rewards the sensors that agree according to their information quality, which is calculated using our novel entropy function. The proposed algorithm can combine highly conflicting evidence from multiple sensors and overcomes the limitation of the original DS combination rule. Tested on real and simulated data, it shows a better convergence rate, anti-disturbance ability, and transition property than other methods in the open literature.

Finally, we present a fuzzy-logic-based approach to measure the confidence of a detected object's bounding box (BB) position from a CNN detector. The CNN detector gives the position of the BB along with a percentage accuracy for the object inside the BB on each image plane. But how do we know for sure that the position of the BB is correct? When an object is detected by multiple cameras, the position of the BB on each camera image plane may appear in different places depending on the detection accuracy and the position of the cameras; in 3D space, however, the object is at the exact same position for both cameras. We use this relation between the camera image planes to create a fuzzy fusion system that calculates a confidence value for the detection. Based on the fuzzy rules and the accuracy of the BB position, the system gives confidence values at three levels ('Low', 'OK', and 'High'). The proposed system succeeds in giving the correct confidence score for scenarios where objects are correctly detected, partially detected, and incorrectly detected.
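For context, the standard Dempster-Shafer combination rule that the thesis builds on can be sketched as below; the weed classes are taken from the abstract, but the mass values and the two-camera setup are invented for illustration, and the thesis's distance-weighted novel-entropy correction is not shown.

```python
# Dempster's rule of combination for two evidence sources (a sketch of the standard
# DS rule only; masses and the two-camera scenario are illustrative assumptions).
def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass, e.g. {frozenset({'ragweed'}): 0.7, ...}."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                 # mass assigned to incompatible hypotheses
    norm = 1.0 - conflict                           # the original rule degrades as conflict -> 1
    return {h: v / norm for h, v in combined.items()}

cam1 = {frozenset({"ragweed"}): 0.7, frozenset({"pigweed"}): 0.2,
        frozenset({"ragweed", "pigweed"}): 0.1}
cam2 = {frozenset({"ragweed"}): 0.6, frozenset({"cocklebur"}): 0.3,
        frozenset({"ragweed", "cocklebur"}): 0.1}
print(dempster_combine(cam1, cam2))                 # fused belief over the weed classes
```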
157

Validation of a real-time automated production-monitoring system

Dimovski, David, Hammargren Andersson, Johan January 2021 (has links)
In today’s industry, companies are, to an increasing degree, beginning to embrace the concept of Industry 4.0. One of these companies is Diab, which has a factory in Laholm where they manufacture composite material. Some of the machines at the factory are older, with outdated control systems, and require a way to log data in real time. The goal of the project is to create a working prototype system that can monitor the production flow in real time by using sensors to collect data about the work efficiency of a machine, measuring the idle time when the machine is working and when it is not, and storing this data in a database accessible through a graphical user interface (GUI). The purpose is to investigate the requirements for a fully operational system and what it takes to maintain it, in order to judge whether the company should develop the system itself or buy/license it from a third party. The system was built using a NodeMCU ESP32, a Raspberry Pi 4B, and a SparkFun Distance Sensor Breakout VL53L1X; the NodeMCU ESP32 was programmed with the Arduino IDE, and Java was used to develop the server on the Raspberry Pi and, together with MariaDB, to store the data. The tests that were conducted showed that the data could be displayed within a second in the created GUI but could not guarantee a reading of every passing block; nevertheless, the system gave a good overview of the workflow of the machine. An improvement of the system using vision-based object detection is suggested. A real-time overview of the production opens future possibilities for optimizing the production flow and, with an improved system, can increase the automation of the production, bringing the company closer to the concept of Industry 4.0.
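As a rough, hedged stand-in for the logging pipeline (the thesis uses a Java server with MariaDB), the following Python sketch listens for distance readings from the ESP32 and stores them with timestamps; the message format, port, and SQLite database are assumptions made for illustration.

```python
# Hedged Python stand-in for the production-logging server (the thesis uses Java + MariaDB;
# here a TCP listener stores each sensor reading in SQLite for illustration only).
import socket
import sqlite3
import time

db = sqlite3.connect("production_log.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, distance_mm INTEGER)")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5000))              # port chosen arbitrarily
server.listen(1)
conn, _ = server.accept()                   # the NodeMCU ESP32 connects and streams readings

while True:
    line = conn.recv(64).decode().strip()   # assumed message format, e.g. "distance:432"
    if not line:
        break
    distance = int(line.split(":")[1])
    db.execute("INSERT INTO readings VALUES (?, ?)", (time.time(), distance))
    db.commit()                             # a GUI can poll this table to plot idle vs. active time
```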
158

Genre-based Video Clustering using Deep Learning : By Extraction feature using Object Detection and Action Recognition

Vellala, Abhinay January 2021 (has links)
Social media has become an integral part of the Internet, with users across the world sharing content such as images, text, and videos. A huge amount of data is being generated, and it has become a challenge for social media platforms to group this content for further use, such as recommending a video. In particular, grouping videos based on similarity requires extracting features. This thesis investigates potential approaches to extracting features that can help determine the similarity between videos. Features of the given videos are extracted using object detection and action recognition. A bag-of-features representation is used to build a vocabulary of all the features and transform the data into a form useful for clustering videos. A probabilistic model-based clustering method, the multinomial mixture model, is used to determine the underlying clusters in the data by maximizing the expected log-likelihood and estimating the parameters of the data as well as the probabilities of the clusters. The clusters are analyzed to understand each genre based on its dominant actions and objects. The Bayesian Information Criterion (BIC) and Akaike Information Criterion (AIC) are used to determine the optimal number of clusters for the given videos. The AIC/BIC scores reached their minimum at 32 clusters, which was chosen as the optimal number of clusters. The data is labeled with the genres, and logistic regression is performed to check the cluster performance on test data, achieving 96% accuracy.
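A compact sketch of the model-selection step: bag-of-features count vectors scored with BIC/AIC over an increasing number of mixture components. scikit-learn has no multinomial mixture model, so GaussianMixture stands in here, and the toy count matrix is invented for illustration.

```python
# Sketch of bag-of-features clustering with information-criterion model selection.
# GaussianMixture stands in for the multinomial mixture; the counts are invented.
import numpy as np
from sklearn.mixture import GaussianMixture

# Rows = videos, columns = counts of detected objects/actions (bag-of-features).
X = np.array([[12, 0, 3, 1],
              [10, 1, 4, 0],
              [0, 8, 0, 7],
              [1, 9, 1, 6],
              [5, 5, 2, 2]], dtype=float)

best_k, best_bic = None, np.inf
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(X)
    bic, aic = gm.bic(X), gm.aic(X)        # lower scores indicate a better model/complexity trade-off
    if bic < best_bic:
        best_k, best_bic = k, bic
print("optimal number of clusters:", best_k)
```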
159

Use of Thermal Imagery for Robust Moving Object Detection

Bergenroth, Hannah January 2021 (has links)
This work proposes a system that utilizes both infrared and visual imagery to create a more robust object detection and classification system. The system consists of two main parts: a moving object detector and a target classifier. The first stage detects moving objects in the visible and infrared spectra using background subtraction based on Gaussian mixture models. Low-level fusion is performed to combine the foreground regions from the respective domains. For the second stage, a convolutional neural network (CNN) pre-trained on the ImageNet dataset is used to classify the detected targets into one of the pre-defined classes: human and vehicle. The performance of the proposed object detector is evaluated on multiple video streams recorded in different areas and under various weather conditions, which form a broad basis for testing the suggested method. The accuracy of the classifier is evaluated on experimentally generated images from the moving object detection stage, supplemented with the publicly available CIFAR-10 and CIFAR-100 datasets. The low-level fusion method is shown to be more effective than using either domain separately in terms of detection results. (The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.)
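A sketch of the two-stream detection stage using OpenCV's MOG2 background subtractor on the visible and infrared streams, followed by OR-fusion of the foreground masks; the video sources, threshold value, and morphological cleanup are assumptions.

```python
# Sketch of the two-stream moving-object detector: MOG2 background subtraction on the
# visible and infrared frames, then low-level OR-fusion of the foreground masks.
# Video paths, threshold, and cleanup kernel are assumptions.
import cv2
import numpy as np

vis_cap, ir_cap = cv2.VideoCapture("visible.mp4"), cv2.VideoCapture("thermal.mp4")
vis_bg, ir_bg = cv2.createBackgroundSubtractorMOG2(), cv2.createBackgroundSubtractorMOG2()

while True:
    ok1, vis = vis_cap.read()
    ok2, ir = ir_cap.read()
    if not (ok1 and ok2):
        break
    vis_fg = vis_bg.apply(vis)                                   # per-pixel Gaussian mixture model
    ir_fg = ir_bg.apply(cv2.resize(ir, (vis.shape[1], vis.shape[0])))
    fused = cv2.bitwise_or(vis_fg, ir_fg)                        # low-level fusion of foreground regions
    _, fused = cv2.threshold(fused, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    fused = cv2.morphologyEx(fused, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(fused, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # contours -> candidate moving objects passed to the CNN classifier (human / vehicle)
```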
160

Intelligent Collision Prevention System For SPECT Detectors by Implementing Deep Learning Based Real-Time Object Detection

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
The SPECT-CT machines manufactured by Siemens consist of two heavy detector heads (~1500 lbs each) that are moved into various configurations for radionuclide imaging. These detectors are driven by large torque delivered by motors in the gantry that enable linear and rotational motion. If the detectors collide with large objects – stools, tables, patient extremities, etc. – they are very likely to damage the objects and be damaged as well. This research work proposes an intelligent real-time object detection system to prevent collisions between detector heads and external objects in the path of the detector's motion by implementing an end-to-end deep learning object detector. The research extensively documents the work done in identifying the most suitable object detection framework for this use case; collecting and processing the image dataset of target objects; training the deep neural net to detect target objects; deploying the trained network in live demos through a real-time object detection application written in Python; improving the model's performance; and investigating methods to stop detector motion upon detecting external objects in the collision region. We successfully demonstrated that a Caffe version of MobileNet-SSD can be trained and deployed to detect target objects entering the collision region in real time by following the methodologies outlined in this paper. We then laid out the future work required to bring this system into production, such as training the model to detect all possible objects that may be found in the collision region, controlling the activation of the RTOD application, and efficiently stopping the detector motion.
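A hedged sketch of such a real-time detection loop using a trained Caffe MobileNet-SSD through OpenCV's DNN module; the file names, camera index, confidence threshold, and the stop-motion hook are assumptions rather than the production application.

```python
# Hedged sketch of a real-time detector loop with a Caffe MobileNet-SSD via OpenCV DNN
# (model file names, camera index, and the 0.5 threshold are assumptions).
import cv2

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")
cap = cv2.VideoCapture(0)                      # camera watching the collision region

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()                 # shape (1, 1, N, 7): [_, class, conf, x1, y1, x2, y2]
    for i in range(detections.shape[2]):
        if detections[0, 0, i, 2] > 0.5:       # an object has entered the collision region
            # here the application would signal the gantry controller to stop detector motion
            print("obstacle detected, confidence", float(detections[0, 0, i, 2]))
```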
