101

Detekce překážek / The obstacle detection

Hradiský, Marek January 2016 (has links)
This diploma thesis deals with the design and processing of video captured by a Raspberry Pi and with the possibilities of object detection in video. The thesis describes the Raspberry Pi as well as video capture with the PiCamera module, and includes a section on video processing with OpenCV.
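The abstract itself contains no code. As an illustration of the kind of obstacle-detection step such a pipeline typically performs, here is a minimal frame-differencing sketch; NumPy arrays stand in for PiCamera frames, and the function name and thresholds are our own assumptions, not the thesis's:

```python
import numpy as np

def detect_motion(prev_frame, frame, threshold=25, min_changed=50):
    """Flag motion when enough pixels differ between consecutive grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = int((diff > threshold).sum())
    return changed >= min_changed, changed

# Simulated frames: a bright "obstacle" appears in the second frame.
prev = np.zeros((120, 160), dtype=np.uint8)
cur = prev.copy()
cur[40:60, 60:90] = 200          # 20x30 = 600 changed pixels
moved, n = detect_motion(prev, cur)
```

In a real deployment the two frames would come from successive PiCamera captures, and OpenCV routines such as `cv2.absdiff` and `cv2.threshold` would replace the NumPy arithmetic.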
102

Trasování pohybu objektů s pomocí počítačového vidění / Object tracking using computer vision

Klapal, Matěj January 2017 (has links)
This diploma thesis deals with the possibilities of tracking object movement using computer vision algorithms. The first chapters review methods used for background subtraction, list basic detection approaches, and mention algorithms that allow tracking and movement prediction. The next part of the work describes the algorithms implemented in the resulting software and its graphical user interface. An evaluation and comparison of the original and modified algorithms concludes the text.
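The thesis does not publish its implementation here. As a hedged sketch of one of the background-subtraction methods commonly reviewed in such work (a running-average background model; class and parameter names are ours):

```python
import numpy as np

class RunningAverageBackground:
    """Maintain a running-average background model and subtract it from new frames."""

    def __init__(self, first_frame, alpha=0.05, threshold=30):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha            # how quickly the model adapts
        self.threshold = threshold    # per-pixel foreground threshold

    def apply(self, frame):
        mask = np.abs(frame - self.bg) > self.threshold            # foreground mask
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame  # update model
        return mask

bg = RunningAverageBackground(np.zeros((100, 100)))
frame = np.zeros((100, 100))
frame[10:20, 10:20] = 255     # a moving object enters the scene
mask = bg.apply(frame)
```

OpenCV ships production versions of this idea (e.g. `cv2.createBackgroundSubtractorMOG2`); the sketch only shows the principle.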
103

Camera pose estimation with moving Aruco-board : Retrieving camera pose in a stereo camera tolling system application. / Kamerapositionskalibrering med Aruco-tavla i rörelse.

Isaksson, Jakob, Magnusson, Lucas January 2020 (has links)
Stereo camera systems can be utilized for applications such as position estimation, distance measuring, and 3D modelling. However, this requires the cameras to be calibrated. This paper proposes a traditional calibration solution with ArUco markers mounted on a vehicle to estimate the pose of a stereo camera system in a tolling environment. Our method is based on Perspective-n-Point, which presumes the intrinsic matrix to be already known. The goal is to find each camera's pose by identifying the marker corners in pixel coordinates as well as in world coordinates. Our tests show a worst-case error of 21.5 cm and a potential for centimetre accuracy. The paper also verifies validity by testing the obtained pose estimation live in the camera system. The paper concludes that the method has potential for higher accuracy than obtained in our experiment, which was limited by several factors. Further work would focus on enlarging the markers and widening the distance between them.
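Perspective-n-Point (in OpenCV, `cv2.solvePnP`) recovers the pose (R, t) that best explains the observed pixel coordinates of known world points under a known intrinsic matrix K. The forward model it inverts is the pinhole projection, sketched below with made-up intrinsics and marker-corner coordinates (all numbers are illustrative, not from the paper):

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project world points into pixel coordinates given intrinsics K and pose (R, t)."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world frame -> camera frame
    uvw = K @ cam                             # camera frame -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T               # perspective divide

K = np.array([[800.0, 0.0, 320.0],           # assumed focal lengths / principal point
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])  # camera 4 m in front of the markers
corners = np.array([[-0.5, -0.5, 0.0],       # hypothetical marker corners (metres)
                    [ 0.5,  0.5, 0.0]])
pixels = project_points(corners, K, R, t)
```

PnP runs this mapping in reverse: given `pixels`, `corners`, and `K`, it solves for R and t, which is why the intrinsic matrix must be known in advance.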
104

Automatiserad symboltestning i det grafiska gränssnittet på Gripen NG / Automated testing of symbols in the graphical interface in Gripen NG

Forsberg, Martin, Lindroth, Linnea January 2020 (has links)
Saab is in the final phase of development of the Gripen Next Generation and is therefore looking for a method to autonomously regression-test the symbols in the graphical interface of the cockpit. This thesis evaluates existing methods of automated GUI-based testing for automatically comparing the graphical interface on a screen, and implements the most suitable method as a proof of concept. The goal of the implementation is to show that the method can find the problems that can arise from software updates and to ensure that no software update adversely affects the symbols in the graphical interface.
The implementation is a form of Visual GUI Testing, the third generation of automated GUI-based testing, and was made in C++ with the OpenCV library for image handling and comparison. The test material consists of images and a video collected with an internal screen-recording function in the graphical interface. The results indicate that it is possible to verify static symbols in the graphical interface, but that verification of dynamic symbols requires further work, as the conditions for identical tests are not achievable in the current situation.
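The core of such visual regression testing is comparing a captured screen region against a reference image with some tolerance. A minimal sketch of that comparison (the thesis used C++ with OpenCV; here the idea is shown in NumPy, and the tolerance values are our assumptions):

```python
import numpy as np

def symbols_match(expected, actual, tolerance=10, max_bad_fraction=0.001):
    """Pass if only a tiny fraction of pixels deviate beyond the per-pixel tolerance."""
    bad = np.abs(expected.astype(int) - actual.astype(int)) > tolerance
    return bad.mean() <= max_bad_fraction

ref = np.full((50, 50), 128, dtype=np.uint8)   # reference render of a symbol
same = ref.copy()                              # unchanged after a software update
broken = ref.copy()
broken[:10, :10] = 0                           # a corrupted symbol region
```

A per-pixel tolerance absorbs compression and anti-aliasing noise, while the area threshold still catches a genuinely missing or distorted symbol.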
105

Desenvolvimento de Estrutura Robótica para Aquisição e Classificação de Imagens (ERACI) de Lavoura de Cana-de-Açúcar / Development of a Robotic Structure for Image Acquisition and Classification (ERACI) in Sugarcane Fields

Cardoso, José Ricardo Ferreira January 2020 (has links)
Advisor: Carlos Eduardo Angeli Furlani / Abstract: Digital agriculture has contributed to improving efficiency in the application of inputs and in planting at predetermined locations, resulting in increased productivity. In this context, the application of Digital Image Processing techniques, as well as the use of systems based on Artificial Intelligence, has increasingly gained the attention of researchers seeking to apply them in the most diverse settings. With the objective of developing a robotic system that uses computer vision to analyze an image and detect the presence of sugarcane or weeds, as well as the absence of any plant, the project unified knowledge from these two areas of computer science with robotics and agriculture, culminating in the development of a robotic structure built with free tools, such as the modular software and hardware aimed at teaching computer science in schools. The result is a software and hardware structure that captures and stores images in a database, and that allows authorized users to classify images through an Android application. By checking the accuracy delivered by the Machine Learning algorithms with cyclic injection, and by analyzing the response time, it was found that the system is able, with this information, to generate classifiers that are remotely loaded onto the robotic device (DRR), which in turn was able to classify images in sugarcane fields in real time. / Master's
106

Determine the time of an analogue watch using Computer Vision

Tell, Amanda, Hägred, Carl January 2022 (has links)
This paper explores approaches to determining the time shown on an analogue watch by developing two systems using the design-and-creation method. The aim is to see to what extent a computer can determine the time, by comparing the two systems, and how well it can deal with contextual set-up variations such as design, orientation, and lighting conditions. The first system uses OpenCV to find the watch hands and geometry to calculate the time. The second system uses Machine Learning, building a Neural Network in TensorFlow to classify images with a multi-labelling approach. The results show that in a fixed environment the geometric system performs better than the Machine Learning model: the geometric system predicted correctly with an accuracy of 80%, whereas the best Machine Learning model reached 74%. The accuracy of the model did increase when adding data augmentation; however, there was no significant difference when further adding synthetic data. Under contextual set-up variations, the model performed poorly, at 21%.
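The geometric step is plain clock arithmetic once the hand angles are known: the minute hand sweeps 6 degrees per minute and the hour hand 30 degrees per hour. A minimal sketch of that final conversion (the angle-measurement stage with OpenCV is omitted, and the function name is ours):

```python
def time_from_angles(hour_angle, minute_angle):
    """Recover clock time from hand angles measured clockwise from 12 o'clock (degrees)."""
    minute = round(minute_angle / 6) % 60    # minute hand: 6 degrees per minute
    hour = int(hour_angle // 30) % 12        # hour hand: 30 degrees per hour
    return hour, minute

# At 3:30 the minute hand points at 180 degrees and the hour hand at 105
# (halfway between the 3 and the 4), so the angles decode back to (3, 30).
```

In practice the hour hand's fractional offset can also cross-check the minute reading, which helps when a detected hand angle is noisy.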
107

Öppen källkodslösning för datorseende : Skapande av testmiljö och utvärdering av OpenCV / Open source solution for computer vision : Creating a test environment and evaluating OpenCV

Lokkin, Caj, Bragd, Sebastian January 2021 (has links)
Computer vision is a subject in computer science that has evolved over many years, and its functionality is more accessible than ever. Among other things, it can be used for non-contact measurement to locate, verify, and detect defects in objects. The question is whether it is possible to create an open-source solution for computer vision with performance equivalent to available commercial ones. In other words, can a company using a closed-source commercial program instead use a free open-source library and produce equivalent results?
In this report we describe the design of a prototype that uses the open-source library for computer vision, OpenCV. In order to evaluate the prototype, we let it identify blocks in a tower in an image across a series of test cases. We compare the results from the prototype with those obtained with a commercial solution created with the program "Vision Builder for Automated Inspection". Results from the cases tested show that OpenCV has performance and functionality equivalent to the commercial solution, but with some limitations. As OpenCV focuses on programmatic development of computer vision solutions, the quality of the resulting solution depends on the user's skills in programming and program design. Based on the tests we performed, we believe that OpenCV can replace a licensed commercial program, but the license cost may be replaced by other development costs.
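Locating a known block in an image is a classic template-matching task; OpenCV provides it as `cv2.matchTemplate` (for example with the `TM_SQDIFF` score). A hedged NumPy sketch of the same sum-of-squared-differences search, on a toy image (the data and function name are illustrative, not from the report):

```python
import numpy as np

def match_template(image, template):
    """Exhaustive SSD template search; returns the (x, y) top-left corner of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = np.sum((image[y:y + th, x:x + tw] - template) ** 2)
            if best is None or score < best:
                best, best_pos = score, (x, y)
    return best_pos

img = np.zeros((30, 30))
img[12:17, 20:25] = 1.0        # a 5x5 "block" in the tower image
block = np.ones((5, 5))        # template of one block face
```

The brute-force loops are only for clarity; the library versions compute the same scores with optimized correlation.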
108

Intelligent Collision Prevention System For SPECT Detectors by Implementing Deep Learning Based Real-Time Object Detection

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
<p>The SPECT-CT machines manufactured by Siemens consist of two heavy detector heads (~1500 lbs each) that are moved into various configurations for radionuclide imaging. These detectors are driven by high-torque motors in the gantry that enable linear and rotational motion. If the detectors collide with large objects – stools, tables, patient extremities, etc. – they are very likely to damage the objects and be damaged as well. This research work proposes an intelligent real-time object detection system to prevent collisions between detector heads and external objects in the path of the detector’s motion by implementing an end-to-end deep learning object detector. The research documents the work done in identifying the most suitable object detection framework for this use case; collecting and processing the image dataset of target objects; training the deep neural net to detect target objects; deploying the trained network in live demos through a real-time object detection application written in Python; improving the model’s performance; and investigating methods to stop detector motion upon detecting external objects in the collision region. We successfully demonstrated that a <i>Caffe</i> version of <i>MobileNet-SSD</i> can be trained and deployed to detect target objects entering the collision region in real time by following the methodologies outlined in this paper. We then laid out the future work required to bring this system into production, such as training the model to detect all possible objects that may be found in the collision region, controlling the activation of the RTOD application, and efficiently stopping the detector motion.</p>
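The decision the abstract describes, stopping motion when a confident detection enters the collision region, reduces to a box-overlap test on the detector outputs. A minimal sketch (the region coordinates, data layout, and names are hypothetical, not from the thesis):

```python
def boxes_intersect(a, b):
    """Axis-aligned overlap test between two (x1, y1, x2, y2) boxes."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

# Hypothetical band of the frame adjacent to the detector head's path.
COLLISION_REGION = (0, 300, 640, 480)

def should_stop(detections, min_confidence=0.3):
    """Stop detector motion if any confident detection enters the collision region.

    `detections` is a list of (confidence, (x1, y1, x2, y2)) pairs, the kind of
    output an SSD-style detector produces after post-processing.
    """
    return any(conf >= min_confidence and boxes_intersect(box, COLLISION_REGION)
               for conf, box in detections)

# A stool detected at 85% confidence inside the region would trigger a stop;
# the same box at 20% confidence, or a confident detection elsewhere, would not.
```

The 30% confidence floor mirrors the threshold mentioned in the demo entry below, though the exact gating logic used in the demos is not specified here.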
109

Code Files

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
1) real_time_object_detection.py: Python script for deploying the trained deep neural network on a live stream.
2) augmentation.py: Python script for augmenting Detector images.
3) tcp_send_command.py: Python script for sending a system-stop CPI command to the Gateway as a CPI message.
110

Demos after First Training Run

Tahrir Ibraq Siddiqui (11173185) 23 July 2021 (has links)
Demos of deploying the caffemodel trained for 16,000 iterations after the initial training session, in the three scenarios outlined in the paper, with a minimum confidence score of 30% for detections.
