41. Zpracování obrazu v systému Android - odečet hodnoty elektroměru / Image processing using Android device - electricity meter value recognition. Sliž, Jiří, January 2015.
The aim of this work is to design an application for mobile devices running the Android operating system. The application captures images with the device camera and processes them with the support of the OpenCV library, with the goal of automatically recognizing the value shown on an analog electricity meter. The text first describes analog electricity meters, then characterizes the Android operating system and links this directly to the design of the application itself. The final part covers the image processing algorithms, their testing, and their implementation in the Android application.
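A minimal sketch of the kind of OpenCV pipeline such a meter-reading application might use. The thesis targets Android; this Python version only illustrates the image-processing steps (grayscale conversion, adaptive thresholding, digit-region candidates), and all thresholds and size limits below are illustrative assumptions, not values from the thesis.

```python
import cv2

def find_digit_candidates(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding copes better with uneven lighting on the meter face.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Keep roughly digit-shaped regions (taller than wide, not too small).
        if h > 20 and 0.3 < w / float(h) < 0.9:
            candidates.append((x, y, w, h))
    return candidates

if __name__ == "__main__":
    print(find_digit_candidates("meter.jpg"))
```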
42. Překlad textu v reálném čase s využitím mobilního zařízení / Real-time text translation using mobile devices. Sztefek, Lukáš, January 2014.
The aim of this thesis is the design and implementation of an Android application that serves as a real-time translator from one language to another, covering several world languages as well as Czech. Text obtained from the image is translated and replaces the original text while preserving its visual appearance as much as possible. The reader is gradually introduced to extracting text from images, translating it, displaying it on a mobile device, and implementing the whole pipeline on Android OS. In conclusion, experiments with multiple input images and a comparison with existing applications are presented.
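For illustration only, a rough sketch of the recognize-translate-redraw idea described in the abstract. The thesis does not name its OCR or translation backend, so pytesseract and the translate() placeholder below are assumptions, not the thesis's actual components.

```python
import cv2
import pytesseract

def translate(word, target_lang="cs"):
    # Hypothetical placeholder; a real application would call a translation service here.
    return word

def replace_text_in_image(image_path):
    img = cv2.imread(image_path)
    # image_to_data returns word-level bounding boxes, which lets us redraw
    # translated words roughly where the originals were.
    data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
    for i, word in enumerate(data["text"]):
        if not word.strip():
            continue
        x, y, w, h = data["left"][i], data["top"][i], data["width"][i], data["height"][i]
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 255), -1)  # erase original word
        cv2.putText(img, translate(word), (x, y + h), cv2.FONT_HERSHEY_SIMPLEX,
                    h / 30.0, (0, 0, 0), 1, cv2.LINE_AA)
    return img
```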
43. Automatické ostření s využitím CAN-EF modulu / Autofocus using CAN-EF interface. Ižarík, Marek, January 2016.
The main topic of this master's thesis is the creation, testing, and implementation of autofocus algorithms for Canon camera lenses using the CAN-EF interface, with one of the assignments being the ability to focus continuously on a vehicle in traffic monitoring. A number of criteria for assessing image sharpness are tested, and an automatic control system for the lens and camera is designed.
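The thesis compares several sharpness (focus) criteria; one widely used measure is the variance of the Laplacian, sketched below in Python with OpenCV. This is only an example criterion and not necessarily the one the thesis selected.

```python
import cv2

def laplacian_sharpness(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # A sharper image has stronger edges, hence a higher Laplacian variance.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# A simple focus sweep would step the lens through focus positions, score each
# frame with laplacian_sharpness(), and drive the lens toward the position with
# the highest score.
```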
44. Transformation of sketchy UML Class Diagrams into formal PlantUML models. Axt, Monique, January 2023.
Sketching software design models is a common and intuitive practice among software engineers. These informal sketches are transient in nature unless transformed into a formal model that can be reused and shared. Manual transformation, however, is time-consuming and redundant, and a method to automatically transform these sketches into a permanent and formal software model is lacking. This study addresses this gap by creating and testing SketchToPlantUML, a sketch recognition and transformation tool that reduces the effort of manually transforming static, sketched UML Class Diagrams (CDs) into formal models. The artefact uses the OpenCV library to preprocess images, segment UML elements, identify geometric features, classify relationships and transform the output into the equivalent, formal PlantUML model. Tested against a dataset of 70 sketched CDs, the artefact achieved overall Precision and Recall values of 88% and 86% respectively, scoring highest on classes (0.92 / 0.96) and lowest on association relationships (0.76 / 0.76). While the approach provides insight into image processing and object recognition using OpenCV, a more robust and generalised solution for automating the transformation of UML sketches into formal models is needed.
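A rough sketch of the class-box detection step described above, with assumed thresholds; the actual SketchToPlantUML tool also segments and classifies relationships, which is omitted here.

```python
import cv2

def sketch_to_plantuml(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    lines = ["@startuml"]
    for i, c in enumerate(contours):
        # Approximate the contour; four corners and sufficient area suggest a class box.
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(c) > 1000:
            lines.append(f"class Class{i}")
    lines.append("@enduml")
    return "\n".join(lines)
```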
45. On The Incorporation Of The Personality Factors Into Crowd Simulation. Jaganathan, Sivakumar, 01 January 2007.
Recently, a considerable amount of research has been performed on simulating the collective behavior of pedestrians in the street or people finding their way inside a building or a room. Comprehensive reviews of the state of the art can be found in Schreckenberg and Deo (2002) and Batty, M., DeSyllas, J. and Duxbury, E. (2003). One area lacking in all these simulation studies is accounting for the effects of human personalities on the outcome. As a result, there is a growing emphasis on researching the effects of human personalities and adding the results to the simulations to make them more realistic. This research investigated the possibility of incorporating personality factors into the crowd simulation model. The first part of this study explored the extraction of quantitative crowd motion from videos and developed a method to compare real video with the simulation output video. Several open source programs were examined and modified to obtain optical flow measurements from real videos captured at sporting events. Optical flow measurements provide information such as crowd density and the average velocity with which individuals move in the crowd, as well as other parameters. These quantifiable optical flow calculations provided a strong method for comparing simulation results with those obtained from video footage captured in real-life situations. The second part of the research focused on the incorporation of the personality factors into the crowd simulation. Existing crowd models such as Helbing-Molnár-Farkas-Vicsek (HMFV) do not take individual personality factors into account. The most common approach employed by psychologists for studying personality traits is the Big Five factors or dimensions of personality (NEO: Neuroticism, Extroversion, Openness, Agreeableness and Conscientiousness). In this research, forces related to the personality factors were incorporated into the crowd simulation models. The NEO-based forces were incorporated into an existing HMFV simulation implemented in the MASON simulation framework, and the simulation results were validated using the quantification procedures developed in the first phase. This research reports on a major expansion of a simulation of pedestrian motion based on the HMFV model by Helbing, D., I. J. Farkas, P. Molnár, and T. Vicsek (2002). Examples of actual behavior, such as a crowd exiting a church after a service, were simulated using NEO-based forces and show a striking resemblance to real behavior as rated by behavioral scientists.
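A minimal OpenCV equivalent of the optical-flow quantification described above (the dissertation modified existing open-source tools rather than using this exact code). Dense Farnebäck flow gives a per-pixel displacement field whose average magnitude approximates how fast the crowd as a whole is moving; the video filename is an assumption.

```python
import cv2
import numpy as np

def mean_crowd_speed(video_path="crowd.mp4"):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    speeds = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Average displacement magnitude over the frame (pixels per frame).
        mag = np.linalg.norm(flow, axis=2)
        speeds.append(mag.mean())
        prev_gray = gray
    cap.release()
    return float(np.mean(speeds)) if speeds else 0.0
```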
46. Computer Vision Based Model for Art Skills Assessment. Alghamdi, Asaad, 20 December 2022.
No description available.
47. Face Recognition Application Based on Embedded System. Gao, Weihao, January 2013.
No description available.
48. Application of Auto-tracking to the Study of Insect Body Kinematics in Maneuver Flight. Subramanian, Shreyas Vathul, 27 August 2012.
No description available.
49. Método de detecção automática de eixos de caminhões baseado em imagens / Automatic truck axle detection method based on images. Panice, Natália Ribeiro, 13 September 2018.
This research aims to develop an automatic truck axle detection system based on images. Two automatic systems are presented: one for extracting truck images from traffic videos recorded at six locations along a single highway in the state of São Paulo, and one for detecting the axles in those images. Both systems are based on image processing and computer vision techniques and were implemented in Python using the OpenCV and SciKit libraries. The truck image extraction step was needed to build the image database used by the axle detection step, which comprises segmentation of the truck image, the detection itself, and axle classification. For segmentation, adaptive thresholding followed by mathematical morphology was used in one variant and the LBP texture descriptor in another; for detection, the Hough transform was applied. The image save rate was 69.2%, considering all trucks fully framed in the video. Segmentation with adaptive thresholding and mathematical morphology yielded detection of 57.7% of all truck axles with a 65.6% false detection rate, while the LBP variant yielded 68.3% and 84.2%, respectively, for the same cases. The excess of false detections was a negative result and is related to issues intrinsic to the outdoor road traffic environment. Two main factors affected the results: changes in lighting and the movement of tree leaves and branches in the wind. Disregarding these two factors, the detection rates of the two segmentation variants would rise to 90.4% and 93.5%, respectively, and the false detections would change to 66.5% and 54.7%. Thus, both proposed systems can be considered promising for the stated objective.
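A sketch of the segmentation-plus-Hough stage described above, in Python with OpenCV. Applying a circular Hough transform to the morphologically closed mask is one plausible reading of the approach, and all parameter values are illustrative assumptions, not the ones tuned in the dissertation.

```python
import cv2

def detect_axles(truck_image_path):
    gray = cv2.imread(truck_image_path, cv2.IMREAD_GRAYSCALE)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 25, 5)
    # Morphological closing fills small gaps so wheels appear as solid blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Circular Hough transform: each detected circle is treated as one wheel/axle.
    circles = cv2.HoughCircles(closed, cv2.HOUGH_GRADIENT, dp=1.5, minDist=60,
                               param1=100, param2=30, minRadius=15, maxRadius=60)
    return 0 if circles is None else circles.shape[1]  # number of circles found
```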
50. Identificação automática do comportamento do tráfego a partir de imagens de vídeo / Automatic identification of traffic behavior using video images. Marcomini, Leandro Arab, 10 August 2018.
The objective of this research is to propose an automatic computational system capable of identifying, from video images, traffic behavior on highways. All code was written in Python using the OpenCV library. The first step of the proposed system is to subtract the background from each frame: three background subtraction methods available in OpenCV were tested, with performance metrics extracted from a contingency table, and MOG2 was selected as the best method, processing frames at 64 FPS with an accuracy above 95%. The second step detects, tracks, and groups features of the moving vehicles, using the Shi-Tomasi detection method together with optical flow for tracking; features are grouped by a combination of pixel distance and relative velocity. In the final step, the algorithm exports microscopic and macroscopic information to CSV report files with a defined format and produces a space-time diagram at runtime, from which information important to transportation system operators can be extracted. To validate the extracted information, vehicle counts and speeds were compared with traditional data-collection methods: the algorithm had a mean error of 12.8% for vehicle counting and about 9.9% for speed.
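A condensed sketch of the pipeline this abstract describes (MOG2 background subtraction, Shi-Tomasi features, Lucas-Kanade tracking). Parameter values and the input filename are assumptions, and the feature-grouping and CSV-export steps are only indicated in comments.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")      # assumed input file
mog2 = cv2.createBackgroundSubtractorMOG2()

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg_mask = mog2.apply(frame)            # moving-vehicle pixels
    if points is not None:
        new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        moving = new_points[status.flatten() == 1]
        # Grouping features into vehicles by pixel distance and relative velocity
        # would happen here, before exporting counts and speeds to CSV.
    prev_gray = gray
    points = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

cap.release()
```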