201 |
Eismo dalyvių kelyje atpažinimas naudojant dirbtinius neuroninius tinklus ir grafikos procesorių / On-road vehicle recognition using neural networks and graphics processing unit. Kinderis, Povilas, 27 June 2014 (has links)
Kasmet daugybė žmonių būna sužalojami autoįvykiuose, iš kurių dalis sužalojimų būna rimti arba pasibaigia mirtimi. Dedama vis daugiau pastangų kuriant įvairias sistemas, kurios padėtų mažinti nelaimių skaičių kelyje. Tokios sistemos gebėtų perspėti vairuotojus apie galimus pavojus, atpažindamos eismo dalyvius ir sekdamos jų padėtį kelyje. Eismo dalyvių kelyje atpažinimas iš vaizdo yra pakankamai sudėtinga, daug skaičiavimų reikalaujanti problema. Šiame darbe šiai problemai spręsti pasitelkti stereo vaizdai, nesugretinamumo žemėlapis bei konvoliuciniai neuroniniai tinklai. Konvoliuciniai neuroniniai tinklai reikalauja daug skaičiavimų, todėl jie optimizuoti pasitelkus grafikos procesorių ir OpenCL. Gautas iki 33,4% spartos pagerėjimas lyginant su centriniu procesoriumi. Stereo vaizdai ir nesugretinamumo žemėlapis leidžia atmesti didelius kadro regionus, kurių nereikia klasifikuoti su konvoliuciniu neuroniniu tinklu. Priklausomai nuo scenos vaizde, reikalingų klasifikavimo operacijų skaičius sumažėja vidutiniškai apie 70-95% ir tai leidžia kadrą apdoroti atitinkamai greičiau. / Many people are injured in car accidents each year, and some of these injuries are serious or fatal. Much effort is being put into developing systems that could help reduce the number of accidents on the road. Such systems could warn drivers of potential danger by recognizing on-road vehicles and tracking their position on the road. Recognizing on-road vehicles in images is a complex and computationally very intensive problem. In this work, stereo images, a disparity map and convolutional neural networks are used to solve this problem. Convolutional neural networks are computationally intensive, so they are optimized using a GPU and OpenCL, giving a speed improvement of up to 33.4% compared to the CPU. The stereo images and the disparity map allow large regions of the frame to be discarded without being classified by the convolutional neural network. Depending on the scene in the image, the number of required classification operations decreases by about 70-95% on average, which allows the frame to be processed correspondingly faster.
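A minimal OpenCV sketch of the pruning idea described above: compute a disparity map from a rectified stereo pair and keep only regions whose disparity suggests a nearby object, so that the (separately trained) CNN classifier only has to run on those candidate windows. The block-matcher settings and thresholds are illustrative assumptions, not values from the thesis.

```cpp
#include <opencv2/opencv.hpp>

// Sketch: disparity-based pruning of candidate regions before CNN classification.
// Assumes left/right images are already rectified; parameter values are illustrative.
std::vector<cv::Rect> candidateRegions(const cv::Mat& leftGray, const cv::Mat& rightGray)
{
    // Block matcher: 64 disparity levels, 21x21 matching window (assumed settings).
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    cv::Mat disp16;
    bm->compute(leftGray, rightGray, disp16);   // fixed-point disparity (scaled by 16)

    cv::Mat disp;
    disp16.convertTo(disp, CV_8U, 255.0 / (64 * 16.0));  // normalize to 0..255

    // Keep pixels whose disparity is large enough to correspond to a nearby object.
    cv::Mat mask;
    cv::threshold(disp, mask, 40, 255, cv::THRESH_BINARY);  // threshold chosen arbitrarily

    // Group the remaining pixels into blobs; each bounding box is a candidate
    // window that would then be passed to the CNN classifier.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> candidates;
    for (const auto& c : contours) {
        cv::Rect r = cv::boundingRect(c);
        if (r.area() > 400)          // discard tiny blobs (noise)
            candidates.push_back(r);
    }
    return candidates;
}
```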
202 |
Development of Star Tracker Attitude and Position Determination System for Spacecraft Maneuvering and Docking Facility. Dikmen, Serkan, January 2016 (has links)
Attitude and position determination systems in satellites are absolutely necessary to keep the desired trajectory. A very accurate, reliable and widely used sensor for attitude determination is the star tracker, which orients itself in space by observing star constellations and comparing them with known star patterns. For on-ground tests of movements and docking maneuvers of spacecraft, the new Spacecraft Maneuvering and Docking (SMD) facility at the chair of Aerospace Information Technology at the University of Würzburg has been built. Air bearing systems on the space vehicles help to create a microgravity-like environment on a smooth surface and simulate an artificial space-like surrounding. A new star-tracker-based optical sensor for indoor application needs to be developed in order to obtain the attitude and position of the vehicles. The main objective of this thesis is first to research feasible star tracking algorithms for the SMD facility and then to implement a star detection software framework with newly developed voting methods that gives the star tracker system fully autonomous attitude determination and position tracking. Together with image processing techniques, the software framework is embedded into a controller board. The thesis also proposes a wireless network system for the facility, in which all devices on the vehicles can communicate uniquely within the same network, and introduces a ground station developed to monitor the star tracker process. Multiple test results with different scenarios for position tracking and attitude determination, along with discussions and suggestions for improvements, complete the thesis.
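As a rough illustration of the first stage such a star tracker needs — extracting star centroids from a camera frame before any identification or voting can happen — the following OpenCV sketch thresholds the image and computes centroids from image moments. The threshold and blob-size limits are assumptions for illustration; the thesis' own voting-based identification is not reproduced here.

```cpp
#include <opencv2/opencv.hpp>

// Sketch: extract star centroids from a grayscale frame (a typical first step
// before star identification). Threshold and area limits are illustrative.
std::vector<cv::Point2f> detectStarCentroids(const cv::Mat& frameGray)
{
    cv::Mat bin;
    cv::threshold(frameGray, bin, 120, 255, cv::THRESH_BINARY);  // assumed fixed threshold

    std::vector<std::vector<cv::Point>> blobs;
    cv::findContours(bin, blobs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2f> centroids;
    for (const auto& b : blobs) {
        double area = cv::contourArea(b);
        if (area < 2.0 || area > 200.0)       // reject noise and large bright patches
            continue;
        cv::Moments m = cv::moments(b);
        if (m.m00 > 0.0)                      // centroid from image moments
            centroids.emplace_back(static_cast<float>(m.m10 / m.m00),
                                   static_cast<float>(m.m01 / m.m00));
    }
    return centroids;   // these points would feed the star identification / voting stage
}
```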
203 |
Real-time Detection And Tracking Of Human Eyes In Video Sequences. Savas, Zafer, 01 September 2005 (has links) (PDF)
Robust, non-intrusive human eye detection has been a fundamental and challenging problem in computer vision. Not only is it a problem in its own right; it can also be used to ease the task of locating other facial features for recognition and human-computer interaction purposes. Many previous works can determine the locations of the human eyes, but the main goal in this thesis is not only a vision system with eye detection capability: our aim is to design a real-time, robust, scale-invariant eye tracker that also indicates eye movement using the motion of the pupil. Our eye tracker algorithm is implemented using the Continuously Adaptive Mean Shift (CAMSHIFT) algorithm proposed by Bradski and the Eigenface method proposed by Turk & Pentland. Previous works on scale-invariant object detection using the Eigenface method mostly depend on a limited number of user-predefined scales, which causes speed problems; so, to avoid this problem, an adaptive Eigenface method using the information extracted from the CAMSHIFT algorithm is implemented to obtain fast and scale-invariant eye tracking.
First of all, the human face in the input image captured by the camera is detected using the CAMSHIFT algorithm, which tracks the outline of an irregularly shaped object that may change size and shape during tracking, based on the color of the object. The face area is passed through a number of preprocessing steps, such as color space conversion and thresholding, to obtain better results during the eye search. After these preprocessing steps, search areas for the left and right eyes are determined using the geometrical properties of the human face, and in order to locate each eye individually the training images are resized using the width information supplied by the CAMSHIFT algorithm. The search regions for the left and right eyes are passed individually to the eye detection algorithm to determine the exact location of each eye. After the eyes are detected, each eye region is passed to the pupil detection and eye area detection algorithms, which are based on the Active Contours method, to locate the pupil and the eye area. Finally, by comparing the geometrical location of the pupil with the eye area, human gaze information is extracted.
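A minimal sketch of the CAMSHIFT face-tracking step described above, using OpenCV's hue back-projection and cv::CamShift. The histogram setup and the initial search window are assumptions for illustration; the eye-region geometry and Eigenface matching from the thesis are not shown.

```cpp
#include <opencv2/opencv.hpp>

// Sketch: track a face region with CAMSHIFT from a precomputed hue histogram.
// 'window' is the current search window (e.g. from an initial detection).
cv::RotatedRect trackFaceCamshift(const cv::Mat& frameBGR, const cv::Mat& hueHist,
                                  cv::Rect& window)
{
    cv::Mat hsv, hue, backproj;
    cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);

    // Extract the hue channel and back-project the model histogram onto it.
    hue.create(hsv.size(), CV_8U);
    int fromTo[] = {0, 0};
    cv::mixChannels(&hsv, 1, &hue, 1, fromTo, 1);

    float hueRange[] = {0.f, 180.f};
    const float* ranges[] = {hueRange};
    cv::calcBackProject(&hue, 1, 0, hueHist, backproj, ranges);

    // CAMSHIFT adapts the window size and orientation to the tracked color blob.
    cv::RotatedRect face = cv::CamShift(backproj, window,
        cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1.0));
    return face;   // the face box would then define the eye search regions
}
```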
As a result of this thesis, a software application named “TrackEye” has been built, with a user interface that indicates the locations of the eye areas and pupils, provides various output screens for human-computer interaction, and offers controls for testing the effects of color space conversions and thresholding types during object tracking.
204 |
Robot Goalkeeper : A robotic goalkeeper based on machine vision and motor control. Adeboye, Taiyelolu, January 2018 (has links)
This report shows a robust and efficient implementation of a speed-optimized algorithm for object recognition, 3D real-world localization and tracking in real time. It details a design focused on detecting and following objects in flight, applied to a football in motion. The overall goal of the design was to develop a system capable of recognizing an object and estimating its present and near-future location, while also actuating a robotic arm in response to the motion of the ball in flight. The implementation made use of image processing functions in C++, an NVIDIA Jetson TX1 and Stereolabs' ZED stereoscopic camera, connected to an embedded system controller for the robot arm. The image processing was done against a textured background, and the 3D location coordinates were used to correct a Kalman filter model that estimated and predicted the ball location. A capture and processing speed of 59.4 frames per second was obtained with good accuracy in depth detection, and the ball was tracked well in the tests carried out.
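The correction/prediction loop mentioned above can be sketched with OpenCV's cv::KalmanFilter, here with a constant-velocity model over the 3D ball position: six state variables (position and velocity) and three measured values (the stereo-derived position). The noise covariances and time step are illustrative guesses, not the tuned values from the report.

```cpp
#include <opencv2/opencv.hpp>

// Sketch: constant-velocity Kalman filter for a 3D ball position.
// State: [x y z vx vy vz], measurement: [x y z]. Parameter values are assumed.
cv::KalmanFilter makeBallFilter(float dt)
{
    cv::KalmanFilter kf(6, 3, 0, CV_32F);

    // x_{k+1} = x_k + v_k * dt  (per axis)
    kf.transitionMatrix = (cv::Mat_<float>(6, 6) <<
        1, 0, 0, dt, 0,  0,
        0, 1, 0, 0,  dt, 0,
        0, 0, 1, 0,  0,  dt,
        0, 0, 0, 1,  0,  0,
        0, 0, 0, 0,  1,  0,
        0, 0, 0, 0,  0,  1);

    cv::setIdentity(kf.measurementMatrix);                               // measure position only
    cv::setIdentity(kf.processNoiseCov,     cv::Scalar::all(1e-3));      // assumed tuning
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-2));      // assumed tuning
    cv::setIdentity(kf.errorCovPost,        cv::Scalar::all(1.0));
    return kf;
}

// Per frame: predict, then correct with the 3D position from the stereo camera.
cv::Point3f updateBall(cv::KalmanFilter& kf, const cv::Point3f& measured)
{
    kf.predict();
    cv::Mat z = (cv::Mat_<float>(3, 1) << measured.x, measured.y, measured.z);
    cv::Mat est = kf.correct(z);
    return cv::Point3f(est.at<float>(0), est.at<float>(1), est.at<float>(2));
}
```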
205 |
REGTEST - an Automatic & Adaptive GUI Regression Testing Tool. Forsgren, Robert; Petersson Vasquez, Erik, January 2018 (has links)
Software testing is very common and is done to increase the quality of, and confidence in, a piece of software. In this report, an idea is proposed for a GUI regression testing tool that uses image recognition to perform the steps of test cases. The problem with such a solution is that if a GUI has been changed, many test cases might break. For this reason, REGTEST was created: a GUI regression testing tool able to handle one type of change made to a GUI component, such as a change in color, shape, location or text. This kind of solution is interesting because setting up tests with such a tool can be very fast and easy, but one previously big drawback of using image recognition for GUI testing has been that it does not handle changes well. It can be compared to tools that use IDs to perform a test, where the actual visualization of a GUI component does not matter; only the ID has to stay the same. However, using such tools either requires underlying knowledge of the GUI component naming conventions or the use of tools that automatically construct XPath queries for the components. To verify that REGTEST can work as well as existing tools, a comparison was made against two professional tools called Ranorex and Kantu. In those tests, REGTEST proved very successful and performed close to, or better than, the other tools.
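The image-recognition step such a tool relies on can be illustrated with normalized template matching in OpenCV: find where a reference screenshot of a component best matches inside the current screen capture and decide whether the match is good enough to act on. The match-score threshold is an assumption; REGTEST's own change-tolerant matching is more involved than this sketch.

```cpp
#include <opencv2/opencv.hpp>
#include <optional>

// Sketch: locate a GUI component (e.g. a button) in a screenshot by template matching.
// Returns the top-left corner of the best match if the score clears an assumed threshold.
std::optional<cv::Point> findComponent(const cv::Mat& screenshot, const cv::Mat& componentImage)
{
    cv::Mat score;
    cv::matchTemplate(screenshot, componentImage, score, cv::TM_CCOEFF_NORMED);

    double maxVal = 0.0;
    cv::Point maxLoc;
    cv::minMaxLoc(score, nullptr, &maxVal, nullptr, &maxLoc);

    if (maxVal < 0.85)          // assumed similarity threshold; tune per application
        return std::nullopt;    // component not found (or changed too much)
    return maxLoc;              // click target: maxLoc + half the component size
}
```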
206 |
Automatic Volume Estimation Using Structure-from-Motion Fused with a Cellphone's Inertial Sensors. Fallqvist, Marcus, January 2017 (has links)
The thesis work evaluates a method to estimate the volume of stone and gravel piles using only a cellphone to collect video and sensor data from the gyroscopes and accelerometers. The project is commissioned by Escenda Engineering with the motivation to replace more complex and resource-demanding systems with a cheaper and easy-to-use handheld device. The implementation features popular computer vision methods such as KLT tracking, Structure-from-Motion and Space Carving, together with some sensor fusion. The results imply that it is possible to estimate volumes up to a certain accuracy, which is limited by the sensor quality, and with a bias. / I rapporten framgår hur volymen av storskaliga objekt, nämligen grus- och stenhögar, kan bestämmas i utomhusmiljö med hjälp av en mobiltelefons kamera samt interna sensorer som gyroskop och accelerometer. Projektet är beställt av Escenda Engineering med motivering att ersätta mer komplexa och resurskrävande system med ett enkelt handhållet instrument. Implementationen använder bland annat de vanligt förekommande datorseendemetoderna Kanade-Lucas-Tomasi-punktspårning, Struktur-från-rörelse och 3D-karvning tillsammans med enklare sensorfusion. I rapporten framgår att volymestimering är möjligt men noggrannheten begränsas av sensorkvalitet och en bias.
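The KLT tracking stage mentioned above can be sketched with OpenCV's pyramidal Lucas-Kanade optical flow: detect good features in one frame and track them into the next, giving the point correspondences that Structure-from-Motion needs. Feature counts and quality parameters below are illustrative defaults, not the values used in the thesis.

```cpp
#include <opencv2/opencv.hpp>

// Sketch: KLT feature tracking between two consecutive grayscale frames.
// Produces matched point pairs (previous frame -> current frame) for use in SfM.
void trackKlt(const cv::Mat& prevGray, const cv::Mat& currGray,
              std::vector<cv::Point2f>& prevPts, std::vector<cv::Point2f>& currPts)
{
    // Detect corners to track (parameters are illustrative).
    cv::goodFeaturesToTrack(prevGray, prevPts, 500, 0.01, 8.0);

    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err,
                             cv::Size(21, 21), 3);   // 21x21 window, 3 pyramid levels

    // Keep only points that were tracked successfully.
    std::vector<cv::Point2f> p0, p1;
    for (size_t i = 0; i < status.size(); ++i) {
        if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }
    }
    prevPts = p0;
    currPts = p1;   // these correspondences feed the Structure-from-Motion step
}
```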
207 |
Mobile Real-Time License Plate Recognition. Liaqat, Ahmad Gull, January 2011 (has links)
License plate recognition (LPR) systems play an important role in numerous applications, such as parking accounting systems, traffic law enforcement, road monitoring, expressway toll systems, electronic-police systems, and security systems. In recent years there has been a lot of research in license plate recognition, and many recognition systems have been proposed and used, but these systems have been developed for computers. In this project, we developed a mobile LPR system for the Android operating system (OS). LPR involves three main components: license plate detection, character segmentation and Optical Character Recognition (OCR). For license plate detection and character segmentation we used the JavaCV and OpenCV libraries, and for OCR we used tesseract-ocr. We obtained very good results using these libraries. We also stored records of license numbers in a database, using SQLite for that purpose.
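A rough OpenCV sketch of the first stage, license plate detection, as it is often done with edge and contour analysis: find rectangular regions whose proportions resemble a plate. The morphology kernel and aspect-ratio limits are assumptions for illustration, not values from this project, and the segmentation and tesseract-ocr stages are omitted.

```cpp
#include <opencv2/opencv.hpp>

// Sketch: find rectangular plate candidates by edges + morphology + contour filtering.
// All numeric limits are illustrative assumptions.
std::vector<cv::Rect> findPlateCandidates(const cv::Mat& frameGray)
{
    // Vertical edges dominate on plates because of the characters.
    cv::Mat edges;
    cv::Sobel(frameGray, edges, CV_8U, 1, 0, 3);

    cv::Mat bin;
    cv::threshold(edges, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Close gaps between character edges so the plate becomes one blob.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(17, 3));
    cv::morphologyEx(bin, bin, cv::MORPH_CLOSE, kernel);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> plates;
    for (const auto& c : contours) {
        cv::Rect r = cv::boundingRect(c);
        double aspect = static_cast<double>(r.width) / r.height;
        if (r.area() > 1000 && aspect > 2.0 && aspect < 6.0)   // plate-like proportions
            plates.push_back(r);   // each region would go to segmentation + OCR
    }
    return plates;
}
```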
208 |
Evaluating Vivado High-Level Synthesis on OpenCV Functions for the Zynq-7000 FPGA. Johansson, Henrik, January 2015 (has links)
More complex and intricate computer vision algorithms, combined with higher-resolution image streams, put ever bigger demands on processing power. CPU clock frequencies are now pushing the limits of possible speeds, so CPUs have instead started growing in number of cores. The performance of most computer vision algorithms responds well to parallel solutions. Dividing an algorithm over 4-8 CPU cores can give a good speed-up, but using chips with Programmable Logic (PL), such as FPGAs, can give even more. An interesting recent addition to the FPGA family is a System on Chip (SoC) that combines a CPU and an FPGA in one chip, such as the Zynq-7000 series from Xilinx. This tight integration between the Programmable Logic and the Processing System (PS) opens up designs where a C program can use the programmable logic to accelerate selected parts of the algorithm while still behaving like a C program. On that subject, Xilinx has introduced a new High-Level Synthesis Tool (HLST) called Vivado HLS, which can accelerate C code by synthesizing it to Hardware Description Language (HDL) code. This potentially bridges two otherwise very separate worlds: the ever-popular OpenCV library and FPGAs. This thesis focuses on evaluating Vivado HLS from Xilinx, primarily with image processing in mind, for potential use on GIMME-2: a system with a Zynq-7020 SoC and two high-resolution image sensors, tailored for stereo vision.
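To illustrate the kind of C code Vivado HLS synthesizes to hardware, here is a minimal, hypothetical pixel-stream threshold function with typical HLS pragmas (a pipelined loop and simple bus interfaces). It is only a sketch of the style, not an example from the thesis, and pragma details vary between tool versions.

```cpp
#include <stdint.h>

#define WIDTH  640   // assumed frame size for illustration
#define HEIGHT 480

// Sketch: binary threshold over a pixel buffer, written in the style Vivado HLS
// can synthesize. The pragmas request a pipelined loop and AXI-style interfaces.
void threshold_stream(const uint8_t in[WIDTH * HEIGHT],
                      uint8_t out[WIDTH * HEIGHT],
                      uint8_t thresh)
{
#pragma HLS INTERFACE m_axi     port=in     depth=307200
#pragma HLS INTERFACE m_axi     port=out    depth=307200
#pragma HLS INTERFACE s_axilite port=thresh
#pragma HLS INTERFACE s_axilite port=return

    for (int i = 0; i < WIDTH * HEIGHT; i++) {
#pragma HLS PIPELINE II=1
        out[i] = (in[i] > thresh) ? 255 : 0;   // one pixel per clock when pipelined
    }
}
```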
209 |
Kontrola zobrazení textu ve formulářích / Quality Check of Text in Forms. Moravec, Zbyněk, January 2017 (has links)
The purpose of this thesis is to check the quality of button text rendering on photographed monitors. These photographs contain a variety of image distortions, which complicates the subsequent recognition of graphic elements. The thesis outlines several possibilities for detecting buttons on forms and elaborates on the implemented detection, which is based on contour shape description. After the buttons are found, their defects are detected. Additionally, the thesis describes automatic selection of the highest-quality picture for documentation purposes.
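One way to realize the contour-based button detection described above is to extract external contours and compare each to a reference button contour using OpenCV's Hu-moment shape matching. The similarity threshold and minimum area below are assumptions for illustration, not values from the thesis.

```cpp
#include <opencv2/opencv.hpp>

// Sketch: find button-like shapes by comparing contours to a reference button contour
// using Hu-moment based shape matching. The numeric limits are illustrative assumptions.
std::vector<cv::Rect> findButtons(const cv::Mat& binaryImage,
                                  const std::vector<cv::Point>& referenceButtonContour)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binaryImage.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> buttons;
    for (const auto& c : contours) {
        if (cv::contourArea(c) < 200.0)      // skip small noise blobs
            continue;
        // Lower score = more similar shape (tolerant to scale and rotation).
        double score = cv::matchShapes(c, referenceButtonContour,
                                       cv::CONTOURS_MATCH_I1, 0.0);
        if (score < 0.1)                     // assumed similarity threshold
            buttons.push_back(cv::boundingRect(c));  // region passed on to defect checks
    }
    return buttons;
}
```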
210 |
Detekce nánosu UV lepidla / UV adhesive coating detection. Pavelka, Radek, January 2018 (has links)
This diploma thesis focuses on the design of a camera inspection system for detecting defects that appear during the application of UV luminescent glue to the bottom of a paper bag. As part of this thesis, an application was developed using a Baumer VCXG-53C industrial camera, implementing two different inspection methods: 2D cross-correlation image pattern matching against a previously user-defined pattern, and glue area measurement based on a binary segmented image. The result of this work is a fully developed inspection system, ready to be put into operation on the customer's production line.
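The second inspection method, measuring the glue area in a binary segmented image, can be sketched as follows: segment the UV-fluorescent glue by thresholding and compare the pixel count against an expected area with a tolerance. The threshold and tolerance values are illustrative assumptions, not the parameters used in the thesis.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Sketch: check glue coverage by binary segmentation and pixel counting.
// 'expectedArea' (in pixels), the tolerance and the threshold are illustrative assumptions.
bool glueAreaOk(const cv::Mat& bagGray, double expectedArea, double tolerance = 0.15)
{
    // Under UV illumination the luminescent glue appears bright; segment it.
    cv::Mat glueMask;
    cv::threshold(bagGray, glueMask, 180, 255, cv::THRESH_BINARY);   // assumed threshold

    // Remove small speckles so only the actual glue deposit is counted.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(glueMask, glueMask, cv::MORPH_OPEN, kernel);

    double measuredArea = static_cast<double>(cv::countNonZero(glueMask));
    double deviation = std::abs(measuredArea - expectedArea) / expectedArea;
    return deviation <= tolerance;   // outside tolerance -> too little or too much glue
}
```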