121 |
Framställning av mätmetod för att upptäcka defekta luftmunstycken : Framställa en säker och tillförlitlig mätmetod för att mäta mängd vatten i 50 provrör / Preparation of measurement method to detect defective air nozzles : Produce a safe and reliable measurement method for measuring the amount of water in 50 test tubes
Potros, Bashar January 2018
To detect defective air nozzles, Ecco Finishing AB in Skara has developed new test equipment to replace an unreliable and unsafe existing test machine. Ecco Finishing AB wants a reliable and safe measurement method for measuring the amount of water in 50 test tubes. The overall goal of the thesis is to find an accurate and repeatable method for measuring the liquid level in the test tubes. Two measurement methods deemed most suitable for the level measurement were evaluated: a vision system and measurement by weighing. These two methods were chosen because of the test equipment's tubes: there are many measuring points and the tubes are small. Twenty experiments were made with the vision system and twenty with the weighing method to evaluate and describe their pros and cons. The experiments were first performed in a laboratory phase and then tested on the company's existing test equipment. The measurement results were saved in an Excel sheet used to evaluate the collected data. The evaluations were compared against the set goals: reliability, accuracy, repeatability, automatic reporting of results and measurement time. The vision system is recommended for continued work and implementation on the existing test equipment.
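The two candidate methods lend themselves to a compact illustration. The following sketch is hypothetical and not from the thesis: the function names, the intensity threshold, and the water density are all assumptions. It shows the core of each idea: finding the meniscus row in a vertical pixel strip through a test tube, and converting a net weight into a volume.

```python
import numpy as np

def level_from_profile(column, threshold=0.5):
    """Vision idea: return the first row (from the top) whose intensity
    falls below `threshold` -- a crude stand-in for finding the liquid
    meniscus in a vertical strip of pixels through a test tube."""
    below = np.where(column < threshold)[0]
    return int(below[0]) if below.size else None

def volume_from_weight(gross_g, tare_g, density_g_per_ml=1.0):
    """Weighing idea: net mass divided by density gives volume."""
    return (gross_g - tare_g) / density_g_per_ml

# Synthetic strip: bright air above, dark liquid from row 40 down.
strip = np.ones(100)
strip[40:] = 0.2
print(level_from_profile(strip))       # 40, the row of the liquid surface
print(volume_from_weight(61.5, 11.5))  # 50.0 ml of water
```

Repeatability could then be assessed exactly as in the thesis: run either function over repeated measurements and compare the spread of the results.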
|
122 |
Kontrola průmyslové montáže pomocí kamery / Automated Camera Measurement in the Industrial Process
Sedlář, Martin January 2012
This master's thesis deals with contactless camera-based detection of the presence and correct orientation of assembled parts in an industrial process. The main aim of the work is the design and realization of a graphical user interface and an algorithm for an automated camera measurement system in the industrial process.
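A common classical technique for this kind of presence-and-orientation check is template matching; the sketch below is an illustration of that general idea, not the thesis' actual algorithm. It scores a part image against the four 90-degree rotations of a reference template using zero-mean normalized cross-correlation.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size arrays."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def check_part(image, template, score_min=0.9):
    """Presence/orientation check: try the template in all four 90-degree
    rotations and report whether the part is present and how it is turned."""
    scores = [ncc(image, np.rot90(template, k)) for k in range(4)]
    best = int(np.argmax(scores))
    return scores[best] >= score_min, best * 90

# Asymmetric dummy part (an "L" shape), imaged upside down.
template = np.zeros((8, 8))
template[0, :] = 1.0
template[:, 0] = 1.0
image = np.rot90(template, 2)
present, angle = check_part(image, template)
print(present, angle)  # True 180
```

A real system would slide the template over the whole frame; here the part is assumed to be pre-cropped to keep the sketch short.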
|
123 |
Detekce ochranných pomůcek v obrazovém signálu / Detection of security aids in image signal
Burdík, Vojtěch January 2014
This work is devoted to the relatively young field of computer vision. It focuses on recognizing people, locating them, and detecting the colour of the clothing they wear. The aim is to build an algorithm that locates a person in a picture and tests the colours of their clothing and helmet. OpenCV library functions were used for image processing, and a program solving this problem was assembled from these algorithms. The output of the program is an answer as to what colour the person is wearing at the stated locations; if the clothing and helmet are the same colour, the person is evaluated as properly dressed. The resulting program is then broken down, and parts of the code are described in detail in this work, with an explanation of how to use each OpenCV function in the program correctly.
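The colour test at the heart of this can be sketched roughly as follows; the palette, the region sizes, and the nearest-colour rule are illustrative assumptions, not the actual OpenCV-based program.

```python
import numpy as np

COLORS = {                     # crude reference palette (RGB), assumed
    "red":    (200, 30, 30),
    "yellow": (220, 210, 40),
    "blue":   (30, 60, 200),
}

def dominant_color(roi):
    """Name the mean colour of an image region by its nearest reference
    colour -- a simplified stand-in for the per-region colour tests."""
    mean = roi.reshape(-1, 3).mean(axis=0)
    return min(COLORS, key=lambda n: np.linalg.norm(mean - COLORS[n]))

def properly_dressed(helmet_roi, vest_roi):
    """The person passes if helmet and vest come out as the same colour."""
    return dominant_color(helmet_roi) == dominant_color(vest_roi)

# Two regions cropped from a (hypothetical) person detection.
helmet = np.full((10, 10, 3), (215, 205, 45), dtype=float)
vest   = np.full((20, 10, 3), (225, 215, 35), dtype=float)
print(dominant_color(helmet), properly_dressed(helmet, vest))
```

In the real program the regions would come from a person detector rather than being given directly, and the colour comparison would typically run in HSV space to be robust to lighting.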
|
124 |
Klasifikace a rozpoznávání patologických nálezů v obrazech sítnice oka / Classification and Recognition of Pathologic Foundings in Eye Retina Images
Macek, Ján January 2016
Diabetic retinopathy and age-related macular degeneration are two of the most common retinal diseases today, and both can lead to partial or complete loss of sight. It is therefore necessary to create new approaches that detect these diseases and inform the patient about his condition in advance. The main objective of this work is to design and implement an algorithm for classifying the aforementioned retinal diseases based on images of the patient's retina. The first part of this work describes in detail each stage of each disease and its most frequent symptoms. There is also a chapter about the fundus camera, a tool for imaging the human eye retina. The second part proposes an approach for the classification of diabetic retinopathy and age-related macular degeneration, together with a chapter on algorithmic methods that can be used for image processing and object detection. The last part contains the test results and their evaluation, including an assessment of the success of the proposed and implemented methods.
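As a rough illustration of image-based screening of this kind (not the method implemented in the thesis), the sketch below derives two toy features from the green channel, where retinal lesions contrast best, and applies an invented threshold rule in place of the trained classifier.

```python
import numpy as np

def lesion_fractions(green, bright_thr=0.85, dark_thr=0.15):
    """Two illustrative features from the green channel: the fraction of
    unusually bright pixels (candidate exudates) and of unusually dark
    pixels (candidate haemorrhages).  Intensities assumed in [0, 1]."""
    return float((green > bright_thr).mean()), float((green < dark_thr).mean())

def classify(features, limit=0.01):
    """Toy threshold rule standing in for the trained classifier."""
    bright, dark = features
    return "suspected pathology" if bright > limit or dark > limit else "no finding"

rng = np.random.default_rng(0)
healthy = rng.uniform(0.4, 0.6, size=(64, 64))   # bland synthetic retina
diseased = healthy.copy()
diseased[10:18, 10:18] = 1.0                     # simulated exudate cluster
print(classify(lesion_fractions(healthy)))       # "no finding"
print(classify(lesion_fractions(diseased)))      # "suspected pathology"
```

Real pipelines first mask out the optic disc and blood vessels, which would otherwise trigger both features; that step is omitted here.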
|
125 |
Detecting and comparing Kanban boards using Computer Vision / Detektering och jämförelse av Kanbantavlor med hjälp av datorseende
Behnam, Humam January 2022
This thesis investigates the problem of detecting and tracking sticky notes on Kanban boards using classical computer vision techniques. Some alternatives for digitizing sticky notes currently exist, but none keep track of notes that have already been digitized, so duplicate notes can be created when scanning multiple images of the same Kanban board. Kanban boards are widely used in various industries, and being able to recognize, and possibly in the future even digitize, entire Kanban boards could provide users with extended functionality. The implementation presented in this thesis is able to, given two images, detect the Kanban board in each image and rectify it. The rectified images are sent to the Google Cloud Vision API for text detection and are then used to detect all the sticky notes. The positional information of the notes and columns of the Kanban boards is then used to filter the text detection, finding the text inside each note as well as the header text of each column. Between the two images, the columns are compared and matched, as are notes of the same color. If columns or notes in one image have no match in the second image, the boards are concluded to be different and the user is informed of why. If all columns and notes in one image have matches in the second image but some notes have moved, the user is informed of which notes have moved and how. The experiments conducted on the implementation show that it works well, but it is confined to strict requirements, making it unsuitable for commercial use. The biggest remaining problem is to make the implementation more general with respect to the Kanban board layout, the sticky note shapes and colors, and their actual content.
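The matching and move-reporting step can be sketched in a few lines. Here each note is identified by its colour and text (which is how the thesis pairs notes up between images), and a board is reduced to a mapping from note to column; this is an illustrative simplification, not the actual implementation.

```python
def diff_boards(before, after):
    """Compare two scanned snapshots of the same Kanban board.
    Each board maps a note id (colour, text) to the column it sits in.
    Returns notes that moved and notes present in only one snapshot."""
    moved = {n: (before[n], after[n])
             for n in before.keys() & after.keys() if before[n] != after[n]}
    only_before = sorted(before.keys() - after.keys())
    only_after = sorted(after.keys() - before.keys())
    return moved, only_before, only_after

before = {("yellow", "write report"): "To do",
          ("pink", "fix login bug"): "Doing"}
after  = {("yellow", "write report"): "Doing",
          ("pink", "fix login bug"): "Doing",
          ("yellow", "deploy"): "To do"}
moved, gone, new = diff_boards(before, after)
print(moved)  # the "write report" note moved from "To do" to "Doing"
```

The hard part, of course, is producing the two mappings from raw photos (rectification, note detection, OCR); once they exist, the comparison itself is this simple.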
|
126 |
Utilizing machine learning in wildlife camera traps for automatic classification of animal species : An application of machine learning on edge devices
Erlandsson, Niklas January 2021
A rapid global decline in biodiversity has been observed in the past few decades, especially in large vertebrates and the habitats supporting these animal populations. This widely accepted fact has made it very important to understand how animals respond to modern ecological threats and how ecosystems function. The motion-activated camera (also known as a camera trap) is a common research tool in this field, being well suited for non-invasive observation of wildlife. The images captured by camera traps in biological studies need to be classified to extract information, a traditionally manual and time-intensive process. Recent studies have shown that machine learning (ML) can automate this process while maintaining high accuracy. Until recently, the use of machine learning has required significant computing power, relying on data being processed after collection or transmitted to the cloud. This need for connectivity introduces potentially unsustainable overheads that can be addressed by placing computational resources on the camera trap itself and processing data locally, an approach known as edge computing. Applying edge computing to camera traps enables the use of ML in environments with slow or non-existent network access, since their functionality no longer relies on connectivity. This project shows the feasibility of running machine learning algorithms for species identification on low-cost hardware with computing power similar to that of common edge and IoT devices, achieving real-time performance and maintaining energy efficiency sufficient for more than 12 hours of runtime on battery power.
Accuracy results were mixed, indicating the need for network models more tailored to this task and the importance of high-quality images for classification.
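Two things the project hinges on, on-device inference and battery life, can be sketched together. The quantised single-layer "classifier", the species labels, and the battery figures below are invented for illustration; real deployments would run a full convolutional network through an embedded runtime.

```python
import numpy as np

def classify_frame(frame_q, w_q, labels):
    """Single int8-quantised layer standing in for the species classifier:
    integer multiply-accumulate, then argmax over species logits -- the
    arithmetic pattern that low-power edge hardware is built for."""
    logits = frame_q.astype(np.int32) @ w_q.astype(np.int32)
    return labels[int(np.argmax(logits))]

def runtime_hours(battery_wh, avg_power_w):
    """Crude battery-life estimate: capacity over average draw."""
    return battery_wh / avg_power_w

labels = ["empty", "fox", "badger"]
w_q = np.array([[ 3, -1, -1],          # toy quantised weights
                [-1,  4, -2],
                [-2, -1,  5]], dtype=np.int8)
frame_q = np.array([2, 9, 3], dtype=np.int8)  # toy quantised features
print(classify_frame(frame_q, w_q, labels))   # fox
print(runtime_hours(18.5, 1.4) > 12)          # enough for a 12-hour night
```

Keeping the multiply-accumulate in integers is what lets such models hit real-time rates within a small power budget on edge hardware.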
|
127 |
Investigations of stereo setup for Kinect
Manuylova, Ekaterina January 2012
The main purpose of this work is to investigate the behavior of the recently released Microsoft Kinect sensor, which has capabilities beyond those of ordinary cameras. Normally, two cameras are required to create a 3D reconstruction of a scene. The Kinect, thanks to its infrared projector and sensor, allows the same kind of reconstruction with a single device. However, the depth images generated by the infrared laser projector and monochrome sensor can contain undefined values. Therefore, in addition to other investigations, this project includes an idea for improving the quality of the depth images. The main aim of the work, however, is to perform a reconstruction of the scene based on the color images from a pair of Kinects, which is compared with the results generated using depth information from one Kinect. The report also describes how to check that all the performed calculations were done correctly. All the algorithms used in the project, as well as the results achieved, are described and discussed in separate chapters of this report.
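One simple way to repair undefined depth readings, offered here only as an illustration of the kind of improvement discussed (the thesis' own method may differ), is to diffuse valid neighbour values into the holes:

```python
import numpy as np

def fill_undefined(depth, invalid=0, passes=10):
    """Fill undefined depth readings (encoded as `invalid`) with the mean
    of their valid 4-neighbours, repeating until the holes close."""
    d = depth.astype(float)
    d[d == invalid] = np.nan
    for _ in range(passes):
        holes = np.isnan(d)
        if not holes.any():
            break
        padded = np.pad(d, 1, constant_values=np.nan)
        # the four shifted copies give each pixel's up/down/left/right values
        stack = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        neighbours = np.nanmean(stack, axis=0)
        d[holes] = neighbours[holes]
    return d

depth = np.full((5, 5), 800.0)   # depth in millimetres
depth[2, 2] = 0                  # one undefined reading
filled = fill_undefined(depth)
print(filled[2, 2])              # 800.0
```

Larger holes take several passes to close from the rim inwards; edge-aware filters do better near object boundaries, but the idea is the same.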
|
128 |
Face Tracking Using Optical Flow : Real-Time Optical Flow Enhanced AdaBoost Cascade Face Tracker
Ranftl, Andreas January 2014
This master thesis deals with real-time algorithms and techniques for face detection and facetracking in videos. A new approach is presented where optical flow information is incorporatedinto the Viola-Jones face detection algorithm, allowing the algorithm to update the expectedposition of detected faces in the next frame. This continuity between video frames is not exploitedby the original algorithm from Viola and Jones, in which face detection is static asinformation from previous frames is not considered.In contrast to the Viola-Jones face detector and also to the Kanade-Lucas-Tomasi tracker, theproposed face tracker preserves information about near-positives.In general terms the developed algorithm builds a likelihood map from results of the Viola-Jones algorithm, then computes the optical flow between two consecutive frames and finallyinterpolates the likelihood map in the next frame by the computed flow map. Faces get extractedfrom the likelihood map using image segmentation techniques. Compared to the Viola-Jonesalgorithm an increase in stability as well as an improvement of the detection rate is achieved.Firstly, the real-time face detection algorithm from Viola and Jones is discussed. Secondly theauthor presents methods which are suitable for tracking faces. The theoretical overview leadsto the description of the proposed face tracking algorithm. Both principle and implementationare discussed in detail. The software is written in C++ using the Open Computer Vision Libraryas well as the Matlab MEX interface.The resulting face tracker was tested on the Boston Head Tracking Database for which groundtruth information is available. The proposed face tracking algorithm outperforms the Viola-Jones face detector in terms of average detection rate and temporal consistency.
|
129 |
Raising Awareness of Computer Vision : How can a single purpose focused CV solution be improved?
Zukas, Paulius January 2018
The concept of Computer Vision is not new. On the contrary, its ideas have been shared and worked on for almost 60 years. Many use cases have been found throughout the years and various systems developed, but there is always room for improvement. An observation was made that the methods used today are generally focused on a single purpose and implemented with expensive technology, which could be improved. In this report we go through extensive research to find out whether professionally sold, expensive software can be replaced by an off-the-shelf, low-cost solution entirely designed and developed in-house. To do that, we look at the history of Computer Vision, examples of applications and algorithms, and identify general scenarios or computer vision problems which can be solved. We then take a step further and define solid use cases for each of the scenarios found. Finally, a prototype solution is designed and presented. After analysing the gathered results, we aim to convince the reader that such an application can be developed and can work efficiently in various areas, saving businesses investment.
|
130 |
[en] GENERATING SUPERRESOLVED DEPTH MAPS USING LOW COST SENSORS AND RGB IMAGES / [pt] GERAÇÃO DE MAPAS DE PROFUNDIDADE SUPER-RESOLVIDOS A PARTIR DE SENSORES DE BAIXO CUSTO E IMAGENS RGB
LEANDRO TAVARES ARAGAO DOS SANTOS 11 January 2017
[en] Three-dimensional reconstruction of real scenes has many applications. The rise of low-cost depth sensors such as the Kinect suggests the development of reconstruction systems cheaper than the existing ones. Nevertheless, the data provided by this device still fall far short of those provided by more sophisticated sensors. In academia and industry, some initiatives, described in Tong et al. [1] and in Cui et al. [2], try to solve this problem. Building on a study of those attempts, this work modifies the super-resolution algorithm described by Mitzel et al. [3] so that its calculations also take into account the colour images provided by the device, following the approach of Cui et al. [2]. This change improved the super-resolved depth maps produced, mitigating interference caused by sudden movements in the captured scene. The tests performed confirm the improvement of the generated maps and analyse the impact of CPU and GPU implementations of the algorithms in this super-resolution step. The work is restricted to this step; the subsequent stages of 3D reconstruction were not implemented.
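The guiding idea, using the RGB image to decide which low-resolution depth samples belong together, is the essence of joint bilateral upsampling. The sketch below is a simplification with a range kernel only, not the modified Mitzel et al. algorithm; it shows how a depth edge snaps to the position the guide image dictates.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, sigma=0.1):
    """Each high-res pixel averages nearby low-res depth samples, weighted
    by how close the high-res guide value is to the guide value at each
    sample location (range kernel only, for brevity)."""
    scale = guide_hr.shape[0] // depth_lr.shape[0]
    guide_lr = guide_hr[::scale, ::scale]     # guide seen at low resolution
    H, W = guide_hr.shape
    h, w = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y // scale, x // scale   # parent low-res cell
            weights, values = [], []
            for sy in range(max(cy - 1, 0), min(cy + 2, h)):
                for sx in range(max(cx - 1, 0), min(cx + 2, w)):
                    d = guide_hr[y, x] - guide_lr[sy, sx]
                    weights.append(np.exp(-d * d / (2 * sigma ** 2)))
                    values.append(depth_lr[sy, sx])
            out[y, x] = np.average(values, weights=weights)
    return out

# Low-res depth with a step edge; the high-res guide says where it truly is.
depth_lr = np.array([[1.0, 1.0, 5.0, 5.0]] * 4)
guide_hr = np.zeros((8, 8))
guide_hr[:, 4:] = 1.0                         # intensity edge at column 4
up = joint_bilateral_upsample(depth_lr, guide_hr)
print(round(up[0, 3], 2), round(up[0, 4], 2))  # 1.0 5.0: edge stays sharp
```

A plain bilinear upsample would blur the depth step across several columns; the guide-weighted average keeps it sharp, which is exactly the benefit of bringing the colour image into the calculation.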
|