21 |
Vaizdo atpažinimas dirbtiniais neuroniniais tinklais / Image recognition with artificial neural networks Tamošiūnas, Darius 24 July 2014 (has links)
The thesis describes an experiment in which a program was developed, using OpenCV and the ANN error back-propagation algorithm, that is capable of detecting faces and attempting to classify them. The workflow comprised: • studying the OpenCV function library; • analysing the theoretical material on ANNs; • developing software that, using a webcam, detects and attempts to classify faces; • carrying out an experimental study; • identifying the program's weaknesses; • proposing alternative approaches. The implemented software can be used for educational purposes.
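A minimal sketch of the kind of pipeline this abstract describes, assuming a webcam, one of OpenCV's bundled Haar cascades and OpenCV's MLP module trained with error back-propagation; the crop size, layer sizes and class count are illustrative assumptions, not values from the thesis:

```python
import cv2
import numpy as np

# Hypothetical face detection + ANN classification sketch (not the thesis' code).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ann = cv2.ml.ANN_MLP_create()
ann.setLayerSizes(np.int32([32 * 32, 64, 4]))      # assumed input, hidden, class sizes
ann.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
ann.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)        # error back-propagation training
# ... ann.train(samples, cv2.ml.ROW_SAMPLE, one_hot_labels) with prepared data ...

cap = cv2.VideoCapture(0)                          # default webcam
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (32, 32))
        sample = face.reshape(1, -1).astype(np.float32) / 255.0
        # _, response = ann.predict(sample)        # needs the trained network above
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```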
|
22 |
Detekce rychlosti přibližování automobilu na základě zpracování obrazu kamery / Detection of car approach speed using camera image processing Kovář, Jan January 2010 (has links)
The thesis deals with digital image processing, from the initial acquisition of digital picture frames, through subsequent processing and segmentation, to algorithms for detecting visual shapes in the scene. Image processing is a very broad topic, so for better understanding the fundamental principles are analysed: the perception and processing of video signals, image representation, image acquisition, the filters used in digital image processing, and methods for detecting objects in an image. It is also demonstrated that the size of an object in the image depends on its distance from the camera, from which we can determine the speed at which the object approaches or moves away. For a specific distance determination we need to know the actual size of the object, because the relationship between apparent size and distance is the same for every object. Finally, this work presents the resulting image frames of an implementation using the OpenCV library.
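As a concrete illustration of the size-to-distance relationship described above, here is a small sketch based on the pinhole-camera model; the focal length and real vehicle width are assumed calibration values, not figures from the thesis:

```python
# Pinhole-camera relation: the apparent width of an object (in pixels) is
# inversely proportional to its distance, so distance = f_px * real_width / pixel_width.
F_PX = 800.0          # assumed focal length in pixels
REAL_WIDTH_M = 1.8    # assumed real width of the tracked car in metres

def distance_m(pixel_width):
    return F_PX * REAL_WIDTH_M / pixel_width

def approach_speed_mps(pixel_width_prev, pixel_width_now, dt):
    # Positive result: the car got closer during the interval dt (seconds).
    return (distance_m(pixel_width_prev) - distance_m(pixel_width_now)) / dt

# Example: a bounding box that grows from 120 px to 150 px over 0.5 s
print(approach_speed_mps(120, 150, 0.5))   # 4.8 m/s closing speed
```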
|
23 |
Mozart2000 : Music reading and piano playing robot / Notläsande och pianospelande robot Malm, Lukas, Phan, Anna January 2019 (has links)
Many industries have been transformed to better perform in today's digital age. In this project a solution for digitalizing printed sheet music as well as automating piano playing is researched, developed and built. The project was divided into three sub-systems: the first focusing on the digitalizing of sheet music, the second on identifying and classifying the notes, and the third on playing the piano. These were later combined to form a demonstrator called Mozart2000, or M2k. The result was a robot which could determine the pitch of an arbitrary note, or note combination, written in common music notation, and play it on the piano. The algorithm is based on finding coordinates for stafflines and notes using image processing, comparing them, and mapping them to output pins on a Raspberry Pi. Programming was done in Python with some functions taken from the OpenCV (Open Source Computer Vision) library. The piano-playing mechanism uses solenoids and lever arms, controlled by electrical signals from the Raspberry Pi through a driver circuit containing, among other things, transistors and freewheeling diodes; the solenoids drive custom-made fingers that strike the piano keys. Due to limits in budget and time, some restrictions were made. The note range of the robot was limited to one octave, that is, eight piano keys. Moreover, other musical information, such as rhythm and colouring, was disregarded and set to a predetermined value, and only one musical bar is analysed and played at a time. For the digitalizing part, a camera was used, taking a snapshot of one musical bar. The final solution can, however, be expanded to include additional keys and music segments by replicating the existing mechanism.
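A minimal sketch of one possible staffline-finding step of the kind described above, assuming a binarised snapshot of a single bar; the file name and thresholds are illustrative assumptions, not the authors' code:

```python
import cv2
import numpy as np

# Hypothetical staffline-finding step: in a binarised image of one bar, staff
# lines show up as rows that are almost entirely ink.
img = cv2.imread("bar.png", cv2.IMREAD_GRAYSCALE)            # assumed file name
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

row_ink = binary.sum(axis=1) / 255                           # ink pixels per row
staff_rows = np.where(row_ink > 0.5 * binary.shape[1])[0]    # rows that are mostly ink

# Group consecutive row indices into individual staff lines.
groups = np.split(staff_rows, np.where(np.diff(staff_rows) > 1)[0] + 1)
line_y = [int(g.mean()) for g in groups if len(g) > 0]
print("staff line y-coordinates:", line_y)
```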
|
24 |
Benchmarking of Vision-Based Prototyping and Testing Tools Balasubramanian, ArunKumar 21 September 2017 (has links)
The demand for Advanced Driver Assistance System (ADAS) applications is increasing day by day, and their development requires efficient prototyping and real-time testing. ADTF (Automotive Data and Time Triggered Framework) is a software tool from Elektrobit used for the development, validation and visualization of vision-based applications, mainly for ADAS and autonomous driving. With the ADTF tool, image or video data can be recorded and visualized, and data can be processed both online and offline. The development of ADAS applications needs image and video processing, and the algorithms have to be highly efficient and satisfy real-time requirements. The main objective of this research is to integrate the OpenCV library with the cross-platform ADTF. OpenCV provides efficient image-processing algorithms which can be used with ADTF for quick benchmarking and testing. An ADTF filter framework has been developed in which OpenCV algorithms can be used directly, and the framework is tested with .DAT and image files in a modular approach. CMake is also explained in this thesis as a way to build the system with ease. The ADTF filters are developed in Microsoft Visual Studio 2010 in C++, and the OpenMP API is used for parallel programming.
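A rough sketch of the kind of quick benchmarking the abstract mentions, timing a typical OpenCV stage over a recording; this is an assumed stand-alone Python setup, not the ADTF filter framework itself, and the input file name is hypothetical:

```python
import time
import cv2

# Hypothetical stand-alone timing of a typical OpenCV stage (Canny edges here)
# over a recording, to get a rough frames-per-second figure.
cap = cv2.VideoCapture("recording.avi")            # assumed input file
frames, start = 0, time.perf_counter()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)               # the stage being benchmarked
    frames += 1
cap.release()
elapsed = time.perf_counter() - start
if frames:
    print(f"{frames} frames in {elapsed:.2f} s -> {frames / elapsed:.1f} fps")
```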
|
25 |
Application of an automated labour performance measuring system at a confectionery company Van Blommestein, D.L., Matope, S., Ruthven, G., Van der Merwe, A.F. January 2013 (has links)
Published Article / This paper focuses on the implementation of a labour performance measuring system at a confectionery company. The computer-vision-based system builds on the work sampling methodology. It consists of four cameras linked to a central computer via USB extenders. The computer uses a random function in C++ to determine when measurements are to be taken. OpenCV is used to track the movement of a target worker's dominant hand at a given work station. Tracking is accomplished through the use of a colour band filter. The speed of the worker's hand is used to identify whether the worker is busy, idle or out of the frame over the course of the sampling period. Data collected by the system is written into a number of text files. The stored data is then exported to a Microsoft Excel 2007 spreadsheet, where it is analysed and a report on labour utilisation is generated.
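A minimal sketch of colour-band tracking and speed-based activity classification, assuming OpenCV 4.x; the HSV range and the speed threshold are illustrative assumptions rather than the values used in the paper:

```python
import cv2
import numpy as np

# Hypothetical colour-band tracking of a marker/glove on the worker's hand,
# followed by a speed-based busy/idle/out-of-frame decision (OpenCV 4.x assumed).
LOWER = np.array([100, 120, 70])     # assumed HSV lower bound
UPPER = np.array([130, 255, 255])    # assumed HSV upper bound
SPEED_THRESHOLD = 5.0                # assumed pixels per frame separating busy from idle

def hand_centroid(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                   # hand not visible in this frame
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def classify(prev, curr):
    if curr is None:
        return "out of frame"
    if prev is None:
        return "unknown"
    speed = np.hypot(curr[0] - prev[0], curr[1] - prev[1])
    return "busy" if speed > SPEED_THRESHOLD else "idle"
```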
|
26 |
Visual control of multi-rotor UAVs Duncan, Stuart Johann Maxwell January 2014 (has links)
Recent miniaturization of computer hardware, MEMS sensors, and high-energy-density batteries has enabled highly capable mobile robots to become available at low cost. This has driven the rapid expansion of interest in multi-rotor unmanned aerial vehicles. Another area which has expanded simultaneously is small powerful computers, in the form of smartphones, which nearly always have a camera attached and many of which now contain an OpenCL-compatible graphics processing unit. By combining the results of those two developments, a low-cost multi-rotor UAV can be produced with a low-power onboard computer capable of real-time computer vision. The system should also use general-purpose computer vision software to facilitate a variety of experiments.

To demonstrate this I have built a quadrotor UAV based on control hardware from the Pixhawk project, and paired it with an ARM-based single-board computer similar to those in high-end smartphones. The quadrotor weighs 980 g and has a flight time of 10 minutes. The onboard computer is capable of running a pose estimation algorithm above the 10 Hz requirement for stable visual control of a quadrotor.

A feature tracking algorithm was developed for efficient pose estimation, which relaxed the requirement for outlier rejection during matching. Compared with a RANSAC-only algorithm the pose estimates were less variable, with a Z-axis standard deviation of 0.2 cm compared with 2.4 cm for RANSAC. Processing time per frame was also faster with tracking, with 95% confidence that tracking would process the frame within 50 ms, while for RANSAC the 95% confidence time was 73 ms. The onboard computer ran the algorithm with a total system load of less than 25%. All computer vision software uses the OpenCV library for common computer vision algorithms, fulfilling the requirement for running general-purpose software.

The tracking algorithm was used to demonstrate the capability of the system by performing visual servoing of the quadrotor (after manual takeoff). Response to external perturbations was poor, however, requiring manual intervention to avoid crashing. This was due to poor visual controller tuning, and to variations in image acquisition and attitude estimate timing caused by using free-running image acquisition.

The system, and the tracking algorithm, serve as proof of concept that visual control of a quadrotor is possible using small low-power computers and general-purpose computer vision software.
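A minimal sketch of tracking-based pose estimation in the spirit described above, assuming a planar target with four known corners; the target size, camera intrinsics and overall structure are assumptions for illustration, not the thesis' pipeline:

```python
import cv2
import numpy as np

# Hypothetical tracking-based pose estimation: the four corners of a planar
# target are tracked frame-to-frame with pyramidal Lucas-Kanade optical flow
# (avoiding per-frame re-matching and heavy outlier rejection), then the camera
# pose is recovered with solvePnP.
object_pts = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0]],
                      dtype=np.float32)            # assumed 20 cm square target
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)  # assumed intrinsics

def track_and_estimate(prev_gray, gray, prev_pts):
    # prev_pts: the four corners from the previous frame, shape (4, 1, 2), float32.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    if next_pts is None or not status.all():
        return None, None                          # lost at least one corner
    ok, rvec, tvec = cv2.solvePnP(object_pts, next_pts.reshape(-1, 2), K, None)
    return (next_pts, tvec) if ok else (next_pts, None)
```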
|
27 |
The Design and Implementation of an Effective Vision-Based Leader-Follower Tracking Algorithm Using PI Camera Li, Songwei 08 1900 (has links)
The thesis implements a vision-based leader-follower tracking algorithm on a ground robot system. One camera, mounted on the follower, is the only sensor installed in the leader-follower system, and one sphere is the only feature installed on the leader. The camera identifies the sphere using the OpenCV library and calculates the relative position between the follower and the leader from the area and position of the sphere in the camera frame. A P controller for the follower and a P controller for the camera heading are built. The vision-based leader-follower tracking algorithm is verified in both simulation and implementation.
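A minimal sketch of sphere detection and P control of the kind the abstract outlines; the Hough-circle parameters, focal length, sphere radius and gains are assumed values for illustration, not the thesis' parameters:

```python
import cv2

# Hypothetical sphere detection and P control: find the sphere with a Hough
# circle transform, estimate range from its apparent radius, and derive simple
# proportional commands for forward motion and camera/robot heading.
F_PX = 600.0              # assumed focal length in pixels
SPHERE_RADIUS_M = 0.05    # assumed real sphere radius (5 cm)
DESIRED_RANGE_M = 1.0     # assumed desired follower-leader spacing
KP_RANGE, KP_HEADING = 0.8, 0.004   # assumed proportional gains

def follow_command(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=40, minRadius=5, maxRadius=200)
    if circles is None:
        return None                                    # sphere not found
    x, y, r = circles[0][0]                            # strongest detection
    range_m = F_PX * SPHERE_RADIUS_M / r               # pinhole-model range estimate
    forward = KP_RANGE * (range_m - DESIRED_RANGE_M)   # close in on the desired spacing
    heading = KP_HEADING * (x - frame_bgr.shape[1] / 2)  # turn to centre the sphere
    return forward, heading
```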
|
28 |
Utveckling av Mobilapplikation för Rörelseanalys med Kaskadklassificerare / Development of a Mobile Application for Motion Analysis with Cascade Classifiers Orö, Anton, Basa, Alexander, Andersson, Alexander, Loborg, Markus, Lindstén, Andreas January 2019 (has links)
This report covers the project work carried out by five students in computer engineering and software engineering at Linköping University. The project was carried out as part of the course TDDD96 Bachelor's Project in Software Engineering during the spring term of 2019. The purpose of the report is to evaluate the workflow for developing a product. The project concerns the implementation of image analysis for mobile Android devices and was commissioned by Image Systems Nordic AB. The purpose of the application was to track objects through a mobile phone camera and analyse their positions. The result of the project is the application TrackApp, which, using machine learning, could track objects in real time and on video. In addition to the product, the report discusses how the project group worked and the individual in-depth areas studied by the group members.
|
29 |
Lane-Based Front Vehicle Detection and Its Acceleration Chen, Jie-Qi 02 January 2013 (has links)
Based on the .NET Framework 4.0 development platform and the Visual C# language, this thesis presents various methods of performing lane detection and preceding-vehicle detection/tracking, with code optimization and acceleration to reduce the execution time. The thesis consists of two major parts: vehicle detection and tracking. In the detection part, driving lanes are identified first, and then the preceding vehicles between the left and right lanes are detected using the shadow information beneath vehicles. In vehicle tracking, a three-pass search method is used to find the matched vehicles based on the detection results from the previous frames. According to our experiments, the preprocessing (including color-intensity conversion) takes a significant portion of the total execution time. We propose different methods to optimize the code and speed up the software execution using pure C# pointers, OpenCV, OpenCL, etc. Experimental results show that the fastest detection/tracking speed can reach more than 30 frames per second (fps) on a PC with an i7-2600 3.4 GHz CPU. Except for the OpenCV version, which runs at 18 fps, the remaining methods reach up to 28 fps, which is close to real-time speed. We also add auxiliary vehicle information, such as the preceding-vehicle distance and a vehicle offset warning.
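A minimal sketch of shadow-based candidate detection inside a lane region of interest, written in Python for illustration (the thesis' implementation is in C#); the thresholds and size filters are assumed values:

```python
import cv2
import numpy as np

# Hypothetical shadow-based candidate search: inside the lane region of interest,
# dark, wide, flat blobs are treated as shadows underneath preceding vehicles.
def detect_vehicle_candidates(frame_bgr, roi):
    x, y, w, h = roi                                   # assumed lane ROI in pixels
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    thresh = gray.mean() - 1.5 * gray.std()            # assumed darkness cut-off
    mask = (gray < thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((3, 15), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        bx, by, bw, bh = cv2.boundingRect(c)
        if bw > 30 and bw > 2 * bh:                    # keep wide, flat blobs only
            boxes.append((x + bx, y + by, bw, bh))     # back to full-frame coordinates
    return boxes
```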
|
30 |
An Energy Efficient FPGA Hardware Architecture for the Acceleration of OpenCV Object Detection Brousseau, Braiden 21 November 2012 (has links)
The use of Computer Vision in programmable mobile devices could lead to novel and creative applications. However, the computational demands of Computer Vision are ill-suited to low-performance mobile processors. Also, the evolving algorithms, due to active research in this field, are ill-suited to dedicated digital circuits. This thesis proposes the inclusion of an FPGA co-processor in smartphones as a means of efficiently computing tasks such as Computer Vision. An open source object detection algorithm is run on a mobile device and implemented on an FPGA to motivate this proposal. Our hardware implementation presents a novel memory architecture and a SIMD processing style that achieves both high performance and energy efficiency. The FPGA implementation outperforms a mobile device by 59 times while being 13.5 times more energy efficient.
|