1

Intelligent Collision Prevention System For SPECT Detectors by Implementing Deep Learning Based Real-Time Object Detection

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
The SPECT-CT machines manufactured by Siemens consist of two heavy detector heads (~1500 lbs each) that are moved into various configurations for radionuclide imaging. These detectors are driven by large torque, delivered by motors in the gantry, that enables linear and rotational motion. If the detectors collide with large objects – stools, tables, patient extremities, etc. – they are very likely to damage the objects and be damaged themselves. This research proposes an intelligent real-time object detection system that prevents collisions between detector heads and external objects in the path of the detectors' motion by implementing an end-to-end deep learning object detector. The research extensively documents all the work done in identifying the most suitable object detection framework for this use case; collecting and processing the image dataset of target objects; training the deep neural net to detect target objects; deploying the trained deep neural net in live demos via a real-time object detection application written in Python; improving the model's performance; and finally investigating methods to stop detector motion upon detecting external objects in the collision region. We successfully demonstrated that a Caffe version of MobileNet-SSD can be trained and deployed to detect target objects entering the collision region in real time by following the methodologies outlined in this paper. We then laid out the future work that must be done to bring this system into production, such as training the model to detect all possible objects that may be found in the collision region, controlling the activation of the RTOD application, and efficiently stopping the detector motion.
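The per-frame decision logic the abstract describes – run the detector on each frame and flag confident detections whose bounding box enters the collision region – can be sketched in plain Python. The collision-region coordinates, detection tuple format, and helper names below are illustrative assumptions, not the thesis's actual code:

```python
# Sketch of the per-frame collision check described in the abstract.
# The detection format (label, confidence, box) and the region coordinates
# are illustrative; the real system post-processes MobileNet-SSD output.

CONFIDENCE_THRESHOLD = 0.30  # the demos used a 30% minimum confidence score


def boxes_overlap(box_a, box_b):
    """Axis-aligned overlap test; boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2


def objects_in_collision_region(detections, collision_region):
    """Filter raw detections to confident ones whose box enters the region.

    Each detection is (label, confidence, (x1, y1, x2, y2)).
    """
    return [
        (label, conf, box)
        for label, conf, box in detections
        if conf >= CONFIDENCE_THRESHOLD and boxes_overlap(box, collision_region)
    ]
```

In the live application, a non-empty result for a frame would be the trigger for issuing the detector-stop command.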
2

Code Files

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
1) real_time_object_detection.py: Python script for deploying the trained deep neural network on a live stream.
2) augmentation.py: Python script for augmenting detector images.
3) tcp_send_command.py: Python script for sending a system-stop CPI command to the Gateway as a CPI message.
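The role of tcp_send_command.py – pushing a stop command to the gateway over TCP – might be sketched as below. The host, port, and message payload are placeholders; the actual CPI message format is not given here:

```python
import socket

# Placeholder payload: the real script sends a CPI-formatted system-stop
# command whose wire format is not public.
STOP_MESSAGE = b"SYSTEM_STOP\n"


def send_stop_command(host, port, message=STOP_MESSAGE, timeout=2.0):
    """Open a TCP connection to the gateway and send a stop message.

    Returns the number of bytes written so the caller can log the send.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(message)
    return len(message)
```

In the full system, this sender would be invoked by the real-time detection loop as soon as an object is flagged inside the collision region.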
3

Demos after First Training Run

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
Demos of deploying the caffemodel trained for 16,000 iterations after the initial training session, covering the three scenarios outlined in the paper, with a minimum confidence score of 30% for detections.
4

Combo 5 and Combo 15 Demos

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
Demos of deploying the combo 5 caffemodel trained for 18,000 iterations and the combo 15 caffemodel trained for 25,000 iterations.
5

Object Detection in Paddy Field for Robotic Combine Harvester Based on Semantic Segmentation

Zhu, Jiajun 25 September 2023 (has links)
Kyoto University / New-system doctoral program / Doctor of Agricultural Science / Kō No. 24913 / Nō-haku No. 2576 / Shinsei||Nō||1103 (University Library) / Kyoto University Graduate School of Agriculture, Division of Environmental Science and Technology / (Chief examiner) Prof. Michihisa Iida, Prof. Naoki Kondo, Prof. Ryozo Noguchi / Fulfills Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DFAM
6

Machine vision for automation of earth-moving machines : Transfer learning experiments with YOLOv3

Borngrund, Carl January 2019 (has links)
This master thesis investigates the possibility of creating a machine vision solution for the automation of earth-moving machines. This research matters because, without some type of vision system, it is not possible to create a fully autonomous earth-moving machine that can safely be used around humans or other machines. Cameras were used as the primary sensors, as they are cheap, provide high resolution, and are the type of sensor that most closely mimics the human vision system. The purpose of this master thesis was to use existing real-time object detectors together with transfer learning and examine whether they can successfully be used to extract information in environments such as construction, forestry, and mining. The amount of data needed to successfully train a real-time object detector was also investigated. Furthermore, the thesis examines whether there are situations that are especially difficult for the defined object detector, how reliable the object detector is, and finally how service-oriented architecture principles can be used to create deep learning systems. To investigate these questions, three data sets were created in which different properties were varied: light conditions, ground material, and dump truck orientation. The data sets were created using a toy dump truck together with a similarly sized wheel loader with a camera mounted on the roof of its cab. The first data set contained only indoor images, in which the dump truck was placed in different orientations but neither the light nor the ground material changed. The second data set contained images in which the light source was kept constant but the dump truck orientation and ground material changed. The last data set contained images in which all properties were varied. The real-time object detector YOLOv3 was used to examine how a real-time object detector would perform depending on which of the three data sets it was trained on.
No matter the data set, it was possible to train a model to perform real-time object detection. Using an Nvidia 980 Ti, the inference time of the model was around 22 ms, which is more than enough to classify videos running at 30 fps. All three data sets converged to a training loss of around 0.10. The data set containing the most varied data, i.e. the one in which all properties were changed, performed considerably better, reaching a validation loss of 0.164, whereas the indoor data set, containing the least varied data, only reached a validation loss of 0.257. The size of the data set was also a factor in performance, although it was not as important as having varied data. The results also showed that all three data sets could reach a mAP score of around 0.98 using transfer learning.
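When retraining YOLOv3 on a custom class set via transfer learning, the standard configuration change is to set each YOLO layer's class count and resize the convolutional layer feeding it to (classes + 5) × anchors-per-scale filters, since each anchor predicts 4 box coordinates, 1 objectness score, and one score per class. A small helper makes the arithmetic explicit (the function name is ours, not from the thesis):

```python
def yolo_v3_head_filters(num_classes, anchors_per_scale=3):
    """Output filters for the conv layer feeding each YOLOv3 detection head.

    Each anchor predicts 4 box coordinates + 1 objectness score + one score
    per class, so filters = (num_classes + 5) * anchors_per_scale.
    """
    return (num_classes + 5) * anchors_per_scale
```

For a single-class detector like the dump-truck model in this thesis, that gives 18 filters per head; the COCO-pretrained default (80 classes) uses 255.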
7

VISUAL DETECTION OF PERSONAL PROTECTIVE EQUIPMENT & SAFETY GEAR ON INDUSTRY WORKERS

Strand, Fredrik, Karlsson, Jonathan January 2022 (has links)
Workplace injuries are common in today's society due to a lack of adequately worn safety equipment. A system that only admits appropriately equipped personnel can be created to improve working conditions and worker safety. The goal is thus to develop a system that will improve construction workers' safety. Building such a system necessitates computer vision, which entails object recognition, facial recognition, and human recognition, among other things. The basic idea is first to detect the human and remove the background to speed up the process and avoid potential interference. After that, the cropped image is subjected to facial and object recognition. The code is written in Python and includes libraries such as OpenCV, face_recognition, and CVZone. Among the algorithms chosen were YOLOv4 and Histogram of Oriented Gradients. The results were measured at distances of three and five meters. As a result of the system's pipeline, algorithms, and software, a mean average precision of 99% and 89% was achieved at the respective distances. At both three and five meters, the model achieved a precision rate of 100%. The recall rates were 96% - 100% at 3 m and 54% - 100% at 5 m. Finally, the frame rate was measured at 1.2 fps on a system without a GPU. / Workplace injuries are common in today's society because safety equipment is not used or is used incorrectly. The goal is therefore to build a robust system that will improve safety. A system that only grants access to personnel with the correct protective equipment can be created to improve working conditions and worker safety. Building such a system requires computer vision, which involves object recognition, facial recognition, and human recognition, among other things. The basic idea is first to detect the human and remove the background to make the process more efficient and avoid potential interference. Facial and object recognition are then applied to the cropped image.
The code is written in Python and includes libraries such as OpenCV, face_recognition, and CVZone. Some of the algorithms chosen were YOLOv4 and Histogram of Oriented Gradients. The results were measured at distances of three and five meters. The system's pipeline, algorithms, and software yielded a mean average precision across all classes of 99% and 89% at the respective distances. At three and five meters, the model achieved a precision of 100%. Recall reached values between 96% - 100% at a distance of 3 meters and 54% - 100% at 5 meters. Finally, the frame rate was measured at 1.2 fps on a system without a GPU.
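The two-stage pipeline described – isolate the person first, then run equipment checks on the cropped region – can be sketched with plain box arithmetic. The detector calls themselves (YOLOv4, HOG) are stood in for by placeholder inputs, and both helper names are ours:

```python
def crop_box(box, frame_w, frame_h, margin=0.05):
    """Pad a person bounding box (x1, y1, x2, y2) by a small margin, so
    equipment at the box edge is not cut off, then clamp it to the frame."""
    x1, y1, x2, y2 = box
    dx = int((x2 - x1) * margin)
    dy = int((y2 - y1) * margin)
    return (max(0, x1 - dx), max(0, y1 - dy),
            min(frame_w, x2 + dx), min(frame_h, y2 + dy))


def is_properly_equipped(person_box, equipment_boxes, required_labels):
    """True if every required equipment label has a detection whose box
    center lies inside the person's box.

    equipment_boxes is a list of (label, (x1, y1, x2, y2)) pairs, the kind
    of output an object detector's post-processing step might produce.
    """
    x1, y1, x2, y2 = person_box
    found = set()
    for label, (ex1, ey1, ex2, ey2) in equipment_boxes:
        cx, cy = (ex1 + ex2) / 2, (ey1 + ey2) / 2
        if x1 <= cx <= x2 and y1 <= cy <= y2:
            found.add(label)
    return required_labels <= found
```

A gate controller would admit the worker only when `is_properly_equipped` returns True for the required set (e.g. helmet and vest).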
