121

Synchronizace pohybu průmyslového robotu s pohybem pásového dopravníku / Synchronization of the robot motion with a moving conveyor belt

Nagy, Marek January 2014
This diploma thesis focuses on synchronizing the motion of an industrial robot with a moving conveyor belt. It presents the basic principles and the possibilities of using such applications, describes the individual elements used in the application together with their purpose and function, and gives an overview of the proposed program code for the programmable logic controller, the smart camera, and the robot. The result is a functional demonstration application with a KUKA robot.
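As an illustration of the synchronization idea described in this abstract, the sketch below shows the core of conveyor tracking: the smart camera reports where a part was at the moment of image capture, the conveyor encoder reports how far the belt has moved since, and the robot target is the detected position shifted along the belt axis. This is not the thesis's actual KUKA/PLC code; the names, the one-dimensional belt model, and the calibration constant are assumptions made for illustration.

```python
# Hypothetical, simplified conveyor-tracking sketch (not from the thesis).
MM_PER_ENCODER_COUNT = 0.05   # assumed belt calibration (mm per encoder count)

def part_position_now(detected_xy_mm, encoder_at_detection, encoder_now):
    """Return the part's current (x, y) in mm, assuming the belt moves along +x."""
    belt_travel_mm = (encoder_now - encoder_at_detection) * MM_PER_ENCODER_COUNT
    x, y = detected_xy_mm
    return (x + belt_travel_mm, y)

# Example: a part seen at (120.0, 35.0) mm when the encoder read 10000 counts;
# by the time the robot acts, the encoder reads 14000 counts.
print(part_position_now((120.0, 35.0), 10000, 14000))   # -> (320.0, 35.0)
```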
122

Visual Communication Console : Sharing Safety-Critical Information between Heavy Vehicles and Vulnerable Road Users

Gomli, Dastan, Lindström, Erik January 2019
Background. Over the years between 2013 and 2017, accidents between Heavy Goods Vehicles and pedestrians have increased. Leading causes stem from inattentiveness and a lack of communication between drivers and pedestrians. With the advent of autonomous vehicles, which are expected to reduce accidents, uncertainties remain in how communication and trust between humans and machines will be formed. Objectives. The research aim has been to understand the difficulties and problems surrounding heavy vehicles, and the problems that today's heavy vehicle operators face, from which a technical solution that addresses the uncovered needs is developed. Methods. Design Research Methodology and the MSPI Innovation Process have been used in combination for acquiring and validating information around the problem. Shadowing sessions and unstructured interviews have been used for acquiring information, and literature reviews have been done to find academic validation for hypotheses stated throughout the research. From the information gathered, iterative prototypes have been built. Results. From the needfinding that was conducted, safety around trucks became the field on which the scope of the research was focused. Due to the larger size of trucks, decision-making through eye contact and determining intentions is harder when dealing with heavy vehicles, leaving pedestrians uncertain about how to act around them. With the operators of these vehicles finding the unpredictable nature of pedestrians and cyclists in traffic to be troublesome and a safety concern, the research aim was set around addressing these needs. A communication console was developed that is able to communicate safety-critical information between heavy vehicle operators and vulnerable road users, as a means of reducing front collisions between said parties. Conclusions. The console has been developed through iterative prototyping and testing, with design requirements acquired through learnings and feedback gathered from each iteration. The resulting communication console is able to share the critical information sought by pedestrians for decision-making, primarily eye contact and the intentions of oncoming vehicles. The system serves as a proof of concept that could, through extensive traffic safety testing, help reduce front collisions between Heavy Goods Vehicles and Vulnerable Road Users, as well as, through further development, become the central communication console for autonomous vehicles to ensure partnership and intuitive communication between these and their surroundings.
123

Neural network based fault detection on painted surface

Augustian, Midhumol January 2017
Machine vision systems combined with classification algorithms are being used increasingly for different applications in the age of automation. One such application is the quality control of painted automobile parts. The fundamental elements of a machine vision system include the camera, the illumination, the image acquisition software, and the computer vision algorithms. The traditional way of thinking puts too much importance on the camera and ignores the other elements when designing a machine vision system. In this thesis work, it is shown that selecting appropriate illumination for the surface being examined is equally important in a machine vision system for inspecting specular surfaces. Knowledge about the nature of the surface and about the type and properties of the defects to be detected and classified are important factors when choosing the illumination for the machine vision system. The main illumination setups tested were bright field, dark field, and structured illumination; of the three, dark field and structured illumination gave the best results. This thesis work proposes a machine vision system based on dark-field illumination for fault detection on specular painted surfaces. A single-layer Artificial Neural Network model is employed for the classification of defects in intensity images of the painted surface acquired with this machine vision system. The results of this research work showed that the quality of the images and the size of the data set used for training the neural network model play a vital role in the performance of the classifier.
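As a rough illustration of the single-layer classifier mentioned in this abstract, the following is a minimal single-layer neural network (logistic output trained with gradient descent) written in NumPy. The thesis's actual preprocessing, features, and training setup are not given in the abstract, so the flattened-patch representation, learning rate, and epoch count here are assumptions.

```python
# Minimal single-layer ANN sketch for defect / defect-free classification
# of flattened intensity patches (illustrative only, not the thesis's code).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_single_layer(X, y, lr=0.1, epochs=500):
    """X: (n_samples, n_pixels) flattened intensity patches scaled to [0, 1];
    y: (n_samples,) labels, 1 = defect, 0 = defect-free."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)            # forward pass
        grad = p - y                      # gradient of the cross-entropy loss
        w -= lr * (X.T @ grad) / len(y)   # gradient-descent update of weights
        b -= lr * grad.mean()             # and of the bias
    return w, b

def predict(X, w, b, threshold=0.5):
    """Return 1 (defect) where the predicted probability exceeds the threshold."""
    return (sigmoid(X @ w + b) >= threshold).astype(int)
```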
124

Computer Vision and Machine Learning for a Spoon-feeding Robot : A prototype solution based on ABB YuMi and an Intel RealSense camera

Loffreno, Michele January 2021
Many people worldwide are affected by limitations and disabilities that make even essential everyday tasks, such as eating, hard. The impact of robotics on the lives of elderly people, or of people with any kind of impairment that makes everyday actions such as eating difficult, was considered. The aim of this thesis is to study the implementation of a robotic system in order to achieve an automatic feeding process. Different kinds of robots and solutions were taken into account, for instance the Obi and the prototype developed at Washington University. The system considered uses an RGB-D camera, an Intel RealSense D400 series camera, to detect pieces of cutlery and food on a table, and a robotic arm, an ABB YuMi, to pick up the identified objects. The spoon detection is based on the pre-trained convolutional neural network AlexNet provided by MATLAB. Two detectors were implemented: the first can detect up to four different objects (spoon, plate, fork, and knife), while the second can detect only the spoon and the plate. Different algorithms based on morphology were tested in order to compute the pose of the detected objects. RobotStudio was used to establish a connection between MATLAB and the robot, with the goal of making the whole process as automated as possible. The neural network trained on two objects reached 100% accuracy during training. The detector based on it was tested on the real system; it was possible to detect the spoon and the plate and to draw a well-centred bounding box. The accuracy reached can be considered satisfying, since it was possible to grasp a spoon with the YuMi based on a picture of the table. It was noticed that the lighting conditions are the key factor in obtaining a satisfying result or missing the detection of the spoon. The best results were achieved when the light was uniform and there were no reflections or shadows on the objects; the pictures that gave the best detection results were taken in an apartment. Despite the limitations of the interface between MATLAB and the controller of the YuMi, a good level of automation was reached. The influence of lighting conditions in this setting is discussed and some practical suggestions and considerations are made.
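The abstract describes transfer learning from MATLAB's pre-trained AlexNet for a two-class spoon/plate detector. The sketch below shows an analogous setup in PyTorch/torchvision rather than MATLAB, purely for illustration: load an ImageNet-pretrained AlexNet, freeze the convolutional features, and replace the final fully connected layer with a two-class head. It is not the thesis's code, and the preprocessing values are the standard ImageNet ones, assumed here.

```python
import torch.nn as nn
from torchvision import models, transforms

# Load an ImageNet-pretrained AlexNet (requires a recent torchvision), freeze
# the convolutional features, and swap in a two-class head (spoon vs. plate).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                  # keep the pretrained features fixed
model.classifier[6] = nn.Linear(4096, 2)     # new 2-class output layer

# Standard ImageNet preprocessing (assumed; the thesis's pipeline is not given).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```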
125

Machine vision diagnosis of eyes for vitamin A conditions in Japanese black cattle / 黒毛和牛のビタミンA計測のためのマシンビジョンによる眼球診断

Han, Shuqing 24 March 2014
Kyoto University / 0048 / New-system course doctorate / Doctor of Agricultural Science / Degree No. 甲第18322号 / 農博第2047号 / 新制||農||1021 (University Library) / 学位論文||H26||N4829 (Faculty of Agriculture Library) / 31180 / Division of Environmental Science and Technology, Graduate School of Agriculture, Kyoto University / (Chief examiner) Professor 近藤 直, Professor 松井 徹, Associate Professor 小川 雄一 / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DFAM
126

Double Lighting Machine Vision System for Rice Quality Evaluation / コメの品質評価のためのダブルライティングマシンビジョンシステム

Mahirah, Binti Jahari 24 November 2017
Kyoto University / 0048 / New-system course doctorate / Doctor of Agricultural Science / Degree No. 甲第20767号 / 農博第2250号 / 新制||農||1054 (University Library) / 学位論文||H29||N5087 (Faculty of Agriculture Library) / Division of Environmental Science and Technology, Graduate School of Agriculture, Kyoto University / (Chief examiner) Professor 近藤 直, Professor 清水 浩, Professor 飯田 訓久 / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DGAM
127

Potato Shape Grading Using Depth Imaging / 深度イメージングを用いたジャガイモの形状評価

Su, Qinghua 23 May 2018
Kyoto University / 0048 / New-system course doctorate / Doctor of Agricultural Science / Degree No. 甲第21278号 / 農博第2294号 / 新制||農||1062 (University Library) / 学位論文||H30||N5142 (Faculty of Agriculture Library) / Division of Environmental Science and Technology, Graduate School of Agriculture, Kyoto University / (Chief examiner) Professor 近藤 直, Professor 清水 浩, Professor 飯田 訓久 / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DGAM
128

Deep Convolutional Neural Network's Applicability and Interpretability for Agricultural Machine Vision Systems / 深層畳み込みニューラルネットワークの農業用マシンビジョンシステムへの適用性と説明力

Harshana, Habaragamuwa 26 November 2018
Kyoto University / 0048 / New-system course doctorate / Doctor of Agricultural Science / Degree No. 甲第21429号 / 農博第2307号 / 学位論文||H30||N5157 (Faculty of Agriculture Library) / Division of Environmental Science and Technology, Graduate School of Agriculture, Kyoto University / (Chief examiner) Professor 近藤 直, Associate Professor 小川 雄一, Professor 飯田 訓久 / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DGAM
129

A Data Augmentation Methodology for Class-imbalanced Image Processing in Prognostic and Health Management

Yang, Shaojie January 2020
No description available.
130

Quadcopter stabilization based on IMU and Monocamera Fusion

Pérez Rodríguez, Arturo January 2023
Unmanned aerial vehicles (UAVs), commonly known as drones, have revolutionized numerous fields ranging from aerial photography to surveillance and logistics. Achieving stable flight is essential for their successful operation, ensuring accurate data acquisition, reliable manoeuvring, and safe operation. This thesis explores the feasibility of employing a frontal mono camera and sensor fusion techniques to enhance drone stability during flight. The objective of this research is to investigate whether a frontal mono camera, combined with sensor fusion algorithms, can be used to effectively stabilize a drone in various flight scenarios. By leveraging machine vision techniques and integrating data from onboard gyroscopes, the proposed approach aims to provide real-time feedback for controlling the drone. The methodology for this study involves the Crazyflie 2.1 drone platform equipped with a frontal camera and an Inertial Measurement Unit (IMU). The drone’s flight data, including position, orientation, and velocity, is continuously monitored and analyzed using Kalman Filter (KF). This algorithm processes the data from the camera and the IMU to estimate the drone’s state accurately. Based on these estimates, corrective commands are generated and sent to the drone’s control system to maintain stability. To evaluate the effectiveness of the proposed system, a series of flight tests are conducted under different environmental conditions and flight manoeuvres. Performance metrics such as drift, level of oscillations, and overall flight stability are analyzed and compared against baseline experiments with conventional stabilization methods. Additional simulated tests are carried out to study the effect of the communication delay. The expected outcomes of this research will contribute to the advancement of drone stability systems. If successful, the implementation of a frontal camera and sensor fusion can provide a cost-effective and lightweight solution for stabilizing drones.
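As an illustration of the camera/IMU fusion described in this abstract, here is a minimal one-dimensional Kalman filter sketch in which the IMU acceleration drives the prediction step and a camera-derived position drives the correction step. The state layout, noise covariances, and constant-velocity model are assumptions made for illustration, not the Crazyflie implementation from the thesis.

```python
# Illustrative 1-D Kalman filter fusing IMU (predict) and camera (update).
import numpy as np

dt = 0.01                              # assumed 100 Hz loop rate
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition (pos, vel)
B = np.array([[0.5 * dt**2], [dt]])    # how IMU acceleration enters the state
H = np.array([[1.0, 0.0]])             # camera measures position only
Q = 1e-4 * np.eye(2)                   # assumed process noise covariance
R = np.array([[5e-3]])                 # assumed camera measurement noise

x = np.zeros((2, 1))                   # state estimate: [position, velocity]
P = np.eye(2)                          # state covariance

def predict(accel_imu):
    """Propagate the state using the IMU acceleration."""
    global x, P
    x = F @ x + B * accel_imu
    P = F @ P @ F.T + Q

def update(pos_camera):
    """Correct the prediction with a camera position measurement."""
    global x, P
    y = np.array([[pos_camera]]) - H @ x   # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
```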
