71 |
Datainsamling till simulering med hjälp av videokamera och bildbehandling / Data acquisition for simulation using video camera and image processing. Saiti, Adel; Ringbom, Jonas. January 2019 (has links)
Syftet med studien är att undersöka möjligheten att använda en videokamera och bildbehandlingsalgoritmer för att inhämta data till simulering genom att spåra personers rörelse. Den teoretiska referensramen och litteraturstudien används för att få en fördjupad kunskap om simulering och hur personer spåras med spårningsalgoritmer. För att undersöka möjligheten har en kod skapats som använder bildbehandlingsalgoritmer från OpenCV. Algoritmerna som används har utvärderats med fyra experiment i två olika miljöer, en affärsmiljö och en industrimiljö. Experimenten har använts till att spela in videomaterial på personernas förflyttning i miljöerna. Videomaterialen har använts med den skapade koden och bildbehandlingsalgoritmerna för att analysera spårningsalgoritmernas prestanda och om tider kan erhållas. Resultatet från analysen påvisar att tider kan erhållas om en person spåras i videoscenen. De erhållna tiderna har jämförts med manuella tidsstudier och påvisar att medelfelet är 0,1 sekunder och standardavvikelsen är 0,27 sekunder. När det är flera personer som spåras i videoscenen visar resultatet att det inte är möjligt att erhålla tider till simulering. Detta beror på att algoritmerna misslyckas att spåra; faktorer som sammanfogning, färg, riktning, ocklusion och förflyttning av statiska objekt påverkar spårningen i algoritmerna. Detta bidrar till att tiderna som erhålls inte är tillförlitliga och därmed har tiderna inte jämförts med manuella tidsstudier. / The purpose of the study is to investigate the possibility of using a video camera and image processing algorithms to obtain data for simulation by tracking people's movement. The theoretical frame of reference and literature studies are used to gain in-depth knowledge about simulation and how people are tracked with tracking algorithms. To investigate the possibility, a program has been created that uses image processing algorithms from OpenCV.
The algorithms used have been evaluated with four experiments in two different environments, one store environment and one industrial environment. The experiments were used to record video of people's movements in these two environments. The video recordings were processed with the created program and the image processing algorithms to analyze the performance of the tracking algorithms and whether times can be obtained. The result of the analysis shows that times can be obtained if a single person is tracked in the video scene. The obtained times have been compared with manual time studies. The result shows that the average error is 0.1 seconds and the standard deviation is 0.27 seconds. When several people are tracked in the video scene, the result shows that it is not possible to obtain times for simulation. This is because the algorithms fail to track; factors such as merging, color, direction, occlusion and movement of static objects affect the tracking. As a consequence, the times obtained are not reliable and have therefore not been compared with manual time studies.
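As a rough illustration of the comparison described above, the following sketch computes the same two statistics (mean error and standard deviation between tracked and manually clocked times). The numbers are invented for illustration; the thesis code itself is not reproduced here.

```python
# Illustrative sketch (not the thesis code): comparing times obtained from a
# video tracker with manual time-study references. All numbers are made up.

def error_stats(tracked, manual):
    """Return (mean error, standard deviation) of per-observation errors."""
    errors = [t - m for t, m in zip(tracked, manual)]
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / n
    return mean, var ** 0.5

tracked_times = [12.4, 8.1, 15.3, 9.9]   # seconds, from the video tracker
manual_times  = [12.3, 8.0, 15.0, 10.0]  # seconds, from a manual stopwatch study

mean_err, std_err = error_stats(tracked_times, manual_times)
print(f"mean error: {mean_err:+.2f} s, std dev: {std_err:.2f} s")
```

A small mean error with a larger standard deviation, as reported in the abstract, indicates that the tracker is unbiased on average but individual timings scatter around the manual reference.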
|
72 |
Automated system tests with image recognition : focused on text detection and recognition / Automatiserat systemtest med bildigenkänning : fokuserat på textdetektering och igenkänning. Olsson, Oskar; Eriksson, Moa. January 2019 (has links)
Today's airplanes and modern cars are equipped with displays to communicate important information to the pilot or driver. These displays need to be tested for safety reasons; displays that fail can be a huge safety risk and lead to catastrophic events. Today, displays are tested by checking the output signals or with the help of a person who validates the physical display manually. However, this technique is very inefficient and can lead to important errors going unnoticed. MindRoad AB is searching for a solution where validation of the display is made with a camera pointed at it; text and numbers are then recognized using a computer vision algorithm and validated in a time-efficient and accurate way. This thesis compares three different text detection algorithms, EAST, SWT and Tesseract, to determine the most suitable for continued work. The chosen algorithm is then optimized, and the possibility of developing a program which meets MindRoad AB's expectations is investigated. As a result, several algorithms were combined into a fully working program to detect and recognize text in industrial displays.
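A common way to compare text *detection* algorithms such as those named above is intersection-over-union (IoU) between a detected box and a ground-truth box. The sketch below shows that metric on invented boxes; the 0.5 threshold is a conventional choice, not a figure taken from the thesis.

```python
# Hedged sketch: IoU between two axis-aligned boxes (x, y, width, height),
# usable to score a text detector against hand-labeled ground truth.

def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = min(ax + aw, bx + bw) - ix
    ih = min(ay + ah, by + bh) - iy
    if iw <= 0 or ih <= 0:
        return 0.0  # boxes do not overlap
    inter = iw * ih
    return inter / (aw * ah + bw * bh - inter)

detected = (10, 10, 80, 20)   # box reported by the detector
truth    = (12, 11, 78, 20)   # hand-labeled text region
print(f"IoU = {iou(detected, truth):.3f}")  # conventionally a hit if >= 0.5
```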
|
73 |
Machine Vision and Autonomous Integration Into an Unmanned Aircraft System. Alexander, Josh; Blake, Sam; Clasby, Brendan; Shah, Anshul Jatin; Van Horne, Chris; Van Horne, Justin. 10 1900 (has links)
The University of Arizona's Aerial Robotics Club (ARC) sponsored two senior design teams to compete in the 2011 AUVSI Student Unmanned Aerial Systems (SUAS) competition. These teams successfully designed and built a UAV platform in-house that was capable of autonomous flight, capturing aerial imagery, and filtering for target recognition, but it required excessive computational hardware and contained software bugs that limited the system's capability. A new multi-discipline team of undergraduates was recruited to completely redesign and optimize the system in an attempt to reach true autonomous real-time target recognition with reasonable COTS hardware.
|
74 |
Σχεδιασμός - υλοποίηση ενσωματωμένου συστήματος για κίνηση και έλεγχο οχήματος με τη βοήθεια ασύρματης επικοινωνίας / Design and implementation of an embedded system for movement and control of a vehicle using wireless communication. Γιαννόπουλος, Ευθύμιος. 13 January 2015 (has links)
H συγκεκριμένη διπλωματική εργασία αποτελεί συνέχεια μιας σειράς διπλωματικών εργασιών οι οποίες είχαν ως πηγή έμπνευσης το ρομποτικό όχημα (robot rover) Sojourner το οποίο είχε σταλθεί στον πλανήτη Άρη από τη NASA το 1997.
Συγκεκριμένα είχε υλοποιηθεί παλαιότερα μια διπλωματική εργασία στην οποία σκοπός ήταν η κατασκευή ενός ενσωματωμένου συστήματος και του αντίστοιχου οχήματος που θα ελέγχεται στα πρότυπα του ρομποτικού οχήματος Sojourner, όσο αυτό είναι εφικτό. Πιο συγκεκριμένα είχε κατασκευαστεί ένα όχημα και είχε γίνει ο σχεδιασμός και η υλοποίηση ενός ενσωματωμένου συστήματος βασισμένο στον επεξεργαστή της Intel 8086.
Σκοπός στη συγκεκριμένη διπλωματική εργασία, είναι ο σχεδιασμός και η υλοποίηση ενός ενσωματωμένου συστήματος, το οποίο θα στηρίζεται σε ένα πιο σύγχρονο υπολογιστικό σύστημα και ταυτόχρονα η υλοποίηση ενός μηχανισμού, με τον οποίο το όχημα θα μπορεί να ανιχνεύει στοιχειωδώς την ύπαρξη κάποιου πιθανού εμποδίου.
Αρχικά επιλέχθηκε να υλοποιηθεί μηχανισμός στερεοσκοπικής όρασης προκειμένου να πραγματοποιείται η αναγνώριση πιθανού εμποδίου. Η επιλογή αυτή αποτέλεσε σημαντικό παράγοντα για τον μετέπειτα προγραμματισμό του συστήματος. Στη συνέχεια έγινε ο σχεδιασμός του ενσωματωμένου συστήματος και η επιλογή του hardware που το απαρτίζει. Οι επιλογές που έγιναν προσπάθησαν να αντισταθμίσουν τους παράγοντες κόστους και ταχύτητας.
Το επόμενο βήμα αποτέλεσε η software υλοποίηση, δηλαδή ο προγραμματισμός του ενσωματωμένου συστήματος. Αρχικά υλοποιήθηκαν οι ρουτίνες για την κίνηση. Κατόπιν αναπτύχθηκε το πρόγραμμα που αφορά την στερεοσκοπική όραση και στο τέλος έγινε ο προγραμματισμός του radio module. Στη συνέχεια συνδυάστηκαν όλα τα παραπάνω μαζί ώστε να αποτελέσουν την τελική εφαρμογή.
Στο τέλος πραγματοποιήθηκε η αξιολόγηση της επιλογής του υπολογιστικού συστήματος που χρησιμοποιήθηκε, καθώς και εξήχθησαν συμπεράσματα για πιθανές μελλοντικές προσθήκες και δυνατότητες. / This diploma thesis is part of a series of diploma theses inspired by NASA's robot rover Sojourner, which was sent to Mars in 1997.
Specifically, in an earlier diploma thesis, an embedded system and the corresponding vehicle were designed and implemented, controlled on the model of the Sojourner rover as far as feasible. More specifically, a vehicle was constructed, and an embedded system based on the Intel 8086 processor was designed and implemented.
The goal of this diploma thesis is the design and implementation of an embedded system based on a more modern computer system, together with a mechanism by which the vehicle can perform elementary detection of a possible obstacle.
In the first stage, stereo vision was chosen as the method by which the vehicle would recognize a possible obstacle. That choice was a significant factor for the later programming of the system. Afterwards, the embedded system was designed and the hardware composing it was chosen; these choices aimed at a trade-off between cost and speed.
The next step was the software implementation, i.e. the programming of the embedded system, carried out in three parts. First, the movement routines responsible for moving the vehicle were programmed. Second, the program responsible for stereo vision was written, and third, the radio module was programmed. Finally, these three programs were combined to produce the final application.
In the end, the choice of the computer system used was evaluated, as well as the system as a whole, and conclusions were drawn about possible future additions and capabilities.
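The elementary idea behind stereo obstacle detection, as described above, can be sketched as follows: a patch from the left image is searched for along the same row of the right image, and the horizontal shift (disparity) is large for near objects. The toy 1-D "image rows", patch size and threshold below are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch: sum-of-absolute-differences (SAD) matching along one
# scanline of a rectified stereo pair. A large disparity means a near object.

def best_disparity(left, right, pos, patch=3, max_d=6):
    """Return the disparity with the lowest SAD cost for the patch at `pos`."""
    template = left[pos:pos + patch]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        if pos - d < 0:
            break  # candidate window would fall outside the right image
        window = right[pos - d:pos - d + patch]
        cost = sum(abs(a - b) for a, b in zip(template, window))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left_row  = [0, 0, 0, 9, 8, 7, 0, 0, 0, 0]
right_row = [0, 9, 8, 7, 0, 0, 0, 0, 0, 0]  # same feature, shifted by 2 pixels
d = best_disparity(left_row, right_row, pos=3)
print(f"disparity = {d}, obstacle near!" if d >= 2 else f"disparity = {d}")
```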
|
75 |
Δημιουργία χάρτη βάθους σκηνής σε υπολογιστικά συστήματα παράλληλης επεξεργασίας / Creation of a scene depth map on parallel-processing computing systems. Παπαϊωάννου, Μαγδαληνή. 12 June 2015 (has links)
Σκοπός της παρούσας εργασίας ήταν η μελέτη της μεθόδου κατασκευής του χάρτη βάθους μιας σκηνής από δύο εικόνες της, οι οποίες προσομοιάζουν την ανθρώπινη διοφθαλμική όραση. Η μέθοδος αναλύθηκε στους βασικούς της αλγορίθμους, και εξετάστηκε κατά πόσο και με ποιόν τρόπο θα μπορούσαν αυτοί να παραλληλοποιηθούν. Το OpenCL framework και η OpenCV βιβλιοθήκη μελετήθηκαν, και βρέθηκαν κατάλληλες και ικανές για την παραλληλοποίηση ενός αλγορίθμου υπολογιστικής όρασης. Με χρήση των παραπάνω υλοποιήθηκαν ενδεικτικά κάποιοι αλγόριθμοι και υπολογίστηκε το σχετικό βάθος των χαρακτηριστικών σημείων των εικόνων. Τέλος έγινε αξιολόγηση των αλγορίθμων ως προς την ταχύτητα και την ποιότητα των αποτελεσμάτων. / The goal of the present thesis was to study the creation of a scene's depth map, using a pair of images simulating human binocular vision. First, the whole method was broken down into its elementary algorithms. Then it was examined whether and how these algorithms could be parallelized. The OpenCL framework and the OpenCV library were studied and found adequate and capable of parallelizing computer vision algorithms, so they were used to implement some indicative algorithms, and the relative depth of image feature points was calculated. Finally, the algorithms were evaluated according to speed and quality of results.
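The relation a depth map rests on, for a rectified stereo pair, is that depth is inversely proportional to disparity: Z = f·B/d, where f is the focal length in pixels and B the baseline between the two cameras. The sketch below illustrates this with invented calibration values; the thesis itself computes disparities with OpenCV/OpenCL.

```python
# Minimal sketch of depth from disparity for a rectified stereo pair.
# focal_px and baseline_m are invented calibration values for illustration.

def depth_from_disparity(d_pixels, focal_px=700.0, baseline_m=0.06):
    if d_pixels <= 0:
        return float("inf")  # zero disparity: point effectively at infinity
    return focal_px * baseline_m / d_pixels

for d in (42.0, 21.0, 7.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.2f} m")
```

Halving the disparity doubles the estimated depth, which is why disparity errors matter most for distant points.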
|
76 |
Uma interface para o controle de robôs móveis por intermédio de gestos. Albuquerque, Oto Emerson de. 27 April 2016
Previous issue date: 2016-04-27 / Agência Nacional do Petróleo - ANP / Esse trabalho tem como objetivo o estudo de técnicas de visão computacional para o reconhecimento de gestos humanos, bem como sua aplicação ao controle de robôs móveis. São analisados os desafios e propostas algumas soluções para o desenvolvimento de uma interface simples, a qual permita o reconhecimento e a interpretação dos gestos gerados pela mão humana, no intuito de controlar os movimentos de um robô. Neste contexto, foi desenvolvido um programa em C++ para captar as imagens geradas por uma câmera de alta definição e, a partir da informação extraída das imagens, definir qual movimento seria realizado pelo robô. Para auxiliar a tarefa de reconhecimento e processamento da imagem captada foram utilizadas funções da biblioteca OpenCV. Os resultados obtidos com a interface desenvolvida confirmam e demonstram a viabilidade de aplicação desta tecnologia. / The main objective of this work was to enable the recognition of human gestures through the development of a computer program. The program captures the gestures executed by the user through a camera attached to the computer and sends the command corresponding to each gesture to the robot. In total, five gestures made by the human hand were interpreted. The software (developed in C++) made extensive use of computer vision concepts and the open-source OpenCV library, which directly affect the overall efficiency of the control of mobile robots. The computer vision concepts include the use of filters to smooth/blur the image for noise reduction, the choice of a color space suited to the task, and other information useful for manipulating digital images. The OpenCV library was essential to the project, providing functions for filter control, image borders, image area, the geometric center of borders, conversion between color spaces, convex hull and convexity defects, plus all the means necessary for characterizing image features. During development several problems appeared, such as false positives (noise), poor performance caused by inserting many filters with oversized masks, and problems arising from the choice of color space for processing human skin tones. However, after developing seven versions of the control software, it was possible to minimize the occurrence of false positives through a better use of filters combined with a well-dimensioned mask size (tested at run time), all associated with programming logic refined over the construction of the seven versions. The resulting software met the established requirements. After completion of the control software, the overall effectiveness of the various programs was measured; in particular, version V reached 84.75%, version VI 93.00% and version VII 94.67%, showing that the final program performed well in interpreting gestures. This proved that mobile robot control through human gestures is possible without external accessories, giving better mobility and lower cost of maintaining such a system. The great merit of the program was its capacity to help demystify the man/machine interface, since it offers an easy and intuitive way to control mobile robots. Another important observation is that it is not necessary to be close to the mobile robot in order to control it: the program only needs the address that the Robotino passes to it via the network or Wi-Fi.
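One building block named in the abstract above is the convex hull of the hand contour (OpenCV provides cv::convexHull for this; convexity defects, the gaps between hull and contour, are what finger counting relies on). Below is the classic monotone-chain hull algorithm in plain Python on an invented point set, as an illustration of the geometry involved.

```python
# Hedged sketch: convex hull via Andrew's monotone chain, on invented points.
# In the thesis this role is played by OpenCV's convexHull/convexityDefects.

def cross(o, a, b):
    """2-D cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates

contour = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 1)]  # inner points drop out
print(convex_hull(contour))  # → [(0, 0), (4, 0), (4, 4), (0, 4)]
```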
|
77 |
Localização e mapeamento simultâneos (SLAM) visual usando sensor RGB-D para ambientes internos e representação de características / Guapacha, Jovanny Bedoya. January 2017 (has links)
Advisor: Suely Cunha Amaro Mantovani / Abstract: The creation of robots that can operate autonomously in controlled and uncontrolled environments has been one of the main objectives of mobile robotics. For a robot to navigate in an unknown indoor environment, it must localize itself and at the same time build a map of the surrounding environment; this problem is called Simultaneous Localization and Mapping (SLAM). To solve the SLAM problem, this work proposes the use of an RGB-D sensor with 6 degrees of freedom to perceive the environment, mounted on a robot. The SLAM problem can be solved by accurately estimating the pose (position and orientation) and the trajectory of the sensor in the environment, supporting the construction of a three-dimensional (3D) map. This estimation involves the consecutive capture of frames of the environment provided by the RGB-D sensor, in which the most salient image points are determined using visual features given by the ORB algorithm. Then, consecutive frames are compared and the geometric transformations between them are computed using bPROSAC, an algorithm for eliminating atypical correspondences. Finally, inconsistencies are corrected to reconstruct the 3D map and obtain a more accurate estimate of the robot's trajectory, using non-linear optimization techniques. Experiments are carried out to show the construction of the map and the performance of the proposal. / Doctorate
|
78 |
Localização e mapeamento simultâneos (SLAM) visual usando sensor RGB-D para ambientes internos e representação de características / Simultaneous localization and mapping (SLAM) visual using RGB-D sensor for indoor environments and characteristics representation. Guapacha, Jovanny Bedoya [UNESP]. 04 September 2017 (has links)
Previous issue date: 2017-09-04 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / A criação de robôs que podem operar autonomamente em ambientes controlados e não controlados tem sido, um dos principais objetivos da robótica móvel. Para que um robô possa navegar em um ambiente interno desconhecido, ele deve se localizar e ao mesmo tempo construir um mapa do ambiente que o rodeia, a este problema dá-se o nome de Localização e Mapeamento Simultâneos- SLAM. Tem-se como proposta neste trabalho para solucionar o problema do SLAM, o uso de um sensor RGB-D, com 6 graus de liberdade para perceber o ambiente, o qual é embarcado em um robô. O problema do SLAM pode ser solucionado estimando a pose - posição e orientação, e a trajetória do sensor no ambiente, de forma precisa, justificando a construção de um mapa em três dimensões (3D). Esta estimação envolve a captura consecutiva de frames do ambiente fornecidos pelo sensor RGB-D, onde são determinados os pontos mais acentuados das imagens através do uso de características visuais dadas pelo algoritmo ORB. Em seguida, a comparação entre frames consecutivos e o cálculo das transformações geométricas são realizadas, mediante o algoritmo de eliminação de correspondências atípicas, bPROSAC. Por fim, uma correção de inconsistências é efetuada para a reconstrução do mapa 3D e a estimação mais precisa da trajetória do robô, utilizando técnicas de otimização não lineares. Experimentos são realizados para mostrar a construção do mapa e o desempenho da proposta. / The robots creation that can operate autonomously in controlled and uncontrolled environments has been, one of the main objectives of mobile robotics. In order for a robot to navigate in an unknown internal environment, it must locate yourself and at the same time construct a map of the surrounding environment this problem is called Simultaneous Location and Mapping - SLAM. 
To solve the SLAM problem, this work proposes the use of an RGB-D sensor with 6 degrees of freedom to perceive the environment, which is mounted on a robot. The SLAM problem can be solved by accurately estimating the pose (position and orientation) and the path of the sensor/robot in the environment, supporting the construction of a 3D map. This estimation involves the consecutive capture of frames of the environment provided by the RGB-D sensor, in which the most salient points of the images are determined using visual features given by the ORB algorithm. Then, consecutive frames are compared and the geometric transformations between them are computed using bPROSAC, an algorithm for eliminating atypical correspondences. Finally, inconsistencies are corrected to reconstruct the 3D map and obtain a more accurate estimate of the robot trajectory, using non-linear optimization techniques. Experiments are carried out to show the construction of the map and the performance of the proposal.
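The outlier-rejection step described above belongs to the RANSAC family of methods. bPROSAC itself is not specified here, so the sketch below shows the plain RANSAC idea on invented 2-D feature matches, estimating a pure translation; PROSAC-style variants mainly change how candidate samples are drawn, which is omitted.

```python
# Hedged sketch: RANSAC-style rejection of atypical correspondences, on
# invented matches. Real SLAM front-ends estimate a full rigid transform.
import random

def ransac_translation(matches, tol=1.0, iters=50, seed=0):
    """matches: list of ((x, y), (x', y')). Returns ((dx, dy), inlier count)."""
    rng = random.Random(seed)
    best_t, best_inliers = (0.0, 0.0), -1
    for _ in range(iters):
        (x, y), (u, v) = rng.choice(matches)   # hypothesis from one match
        dx, dy = u - x, v - y
        inliers = sum(
            1 for (px, py), (qx, qy) in matches
            if abs(qx - px - dx) <= tol and abs(qy - py - dy) <= tol
        )
        if inliers > best_inliers:
            best_t, best_inliers = (dx, dy), inliers
    return best_t, best_inliers

good = [((i, i), (i + 5, i + 2)) for i in range(8)]   # true shift is (5, 2)
bad  = [((0, 0), (40, -7)), ((3, 1), (-9, 9))]        # atypical (outlier) matches
t, n = ransac_translation(good + bad)
print(f"estimated shift {t}, {n} of {len(good) + len(bad)} matches are inliers")
```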
|
79 |
Framställning av mätmetod för att upptäcka defekta luftmunstycken : Framställa en säker och tillförlitlig mätmetod för att mäta mängd vatten i 50 provrör. Potros, Bashar. January 2018 (has links)
För att upptäcka defekta luftmunstycken har Ecco Finishing AB i Skara tagit fram en ny provutrustning som ska ersätta en otillförlitlig och osäker befintlig provmaskin. Ecco Finishing AB vill hitta en tillförlitlig och säker mätmetod som ska mäta mängd vatten i 50 provrör. Examensarbetets övergripande mål är att hitta en noggrann och repeterbar mätmetod för nivåmätning av vätska i provrören. Två mätmetoder utvärderades som är mest lämpliga för nivåmätningen: visionsystem och mätning genom vägning. Anledningen till att valet hamnade på dessa två mätmetoder är provutrustningens provrör, dels att det är många mätpunkter och för att det är små provrör. Det gjordes tjugo experiment för visionsystem och tjugo experiment för vägningsmetod för att utvärdera och beskriva för- och nackdelar. Experimenten av visionsystem och vägning gjordes först i laborationsfas för att sedan testas på företagets befintliga provutrustning. Resultaten av mätningarna sparades i ett Excel-ark som användes för att utvärdera insamlade data. Utvärderingarna jämfördes mot uppsatta mål: tillförlitlighet, noggrannhet, repeterbarhet, automatisk rapportering av resultat och tid för mätningen. Visionsystem rekommenderas för fortsatt arbete och implementation på den befintliga provutrustningen. / To detect defective air nozzles, Ecco Finishing AB in Skara has developed new test equipment to replace an unreliable and uncertain existing test machine. Ecco Finishing AB wants to find a reliable and safe measurement method that will measure the amount of water in 50 test tubes. The overall goal of the thesis is to find a precise and repeatable measurement method for level measurement of fluid in the test tubes. Two measurement methods most suitable for level measurement were evaluated: vision systems and measurement by weighing.
The reason for choosing these two measurement methods is the test tubes of the test equipment: there are many measuring points and the test tubes are small. Twenty experiments for vision systems and twenty experiments for the weighing method were made to evaluate and describe pros and cons. The experiments with vision systems and weighing were first made in a laboratory phase and then tested on the company's existing test equipment. The results of the measurements were saved in an Excel sheet used to evaluate the collected data. The evaluations were compared against the set goals: reliability, accuracy, repeatability, automatic reporting of results and time of measurement. Vision systems are recommended for continued work and implementation on the existing test equipment.
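The vision-based idea described above can be reduced to a very small sketch: in a thresholded camera image, the liquid level in one test tube is the count of "liquid" pixels in that tube's column, converted to millilitres by a calibration factor. The 0/1 column and the calibration value below are invented, not figures from the thesis.

```python
# Illustrative sketch: liquid level from one tube's binarized pixel column.
# ml_per_pixel is a hypothetical calibration constant.

def level_ml(column, ml_per_pixel=0.25):
    """column: top-to-bottom 0/1 pixels for one tube, 1 = liquid."""
    return sum(column) * ml_per_pixel

tube = [0, 0, 0, 1, 1, 1, 1, 1, 1, 1]  # liquid fills the lower 7 pixels
print(f"{level_ml(tube):.2f} ml")
```

Repeating such a reading across the 50 tube columns, over repeated fillings, is what the repeatability evaluation above compares against weighing.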
|
80 |
Detecting Sitting People : Image classification on a small device to detect sitting people in real-time video. Olsson, Jonathan. January 2017 (has links)
The area of computer vision has made big improvements in the last decades, and the area of electronics and small computers has improved equally. Together, these areas have made it more feasible to build small, standalone systems for object detection in live video. This project's main objective is to examine whether a small device, e.g. a Raspberry Pi 3, can manage an implementation of an object detection algorithm, called Viola-Jones, to count the occupancy of sitting people in a room with a camera. This study is done by creating an application with the OpenCV library, together with the language C++, and then testing whether the application can run on the small device. Whether or not the application will detect people depends on the models used; therefore three are tested: Haar Face, Haar Upper body and Haar Upper body MCS. The library's object detection function takes some parameters that work as settings for the detection algorithm. The parameters therefore need to be tailored for each model and use case for optimal performance. A function was created to find the accuracy of different parameter values by brute force. The test showed that the Haar Face model was the most accurate. All the models, with their most optimal parameters, were then speed-tested with an FPS test on the Raspberry Pi. The result shows whether or not the Raspberry Pi can manage the application with the models. All models could be run, and the Haar Face model was the fastest. As the system uses cameras, some ethical aspects are discussed regarding what people might think of top-corner cameras.
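The brute-force parameter search described above can be sketched as a grid search. OpenCV's detectMultiScale does take scaleFactor and minNeighbors parameters, but the scoring function below is a hypothetical stand-in for "run the detector on labeled test images and score the hits", since the thesis' measurements are not reproduced; the grids are likewise invented.

```python
# Hedged sketch: brute-force grid search over detector parameters.
# accuracy() is a made-up stand-in for evaluating a real cascade detector.

def accuracy(scale_factor, min_neighbors):
    """Hypothetical score; a real version would count correct detections."""
    return 1.0 - abs(scale_factor - 1.1) - 0.02 * abs(min_neighbors - 4)

best = max(
    ((sf, mn) for sf in (1.05, 1.1, 1.2, 1.3) for mn in range(2, 7)),
    key=lambda p: accuracy(*p),
)
print(f"best parameters: scaleFactor={best[0]}, minNeighbors={best[1]}")
```

The same loop structure works with any per-model scoring function, which is why the parameters can be tailored per model and use case as the abstract describes.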
|