51.
Object detection and single-board computers : En förstudie gjord på Saab AB. Jansson, Martin; Petersson, Simon. January 2018 (has links)
Saab currently uses an outdated system to test its products. The system films from several angles and merges the video streams into a final video, from which the results of the product can be analyzed. Single-board computers (SBCs) have become increasingly popular in recent years, so Saab wants to investigate whether the older system can be replaced with SBCs and cameras. The study examines whether the BeagleBoard SBC is capable of running object detection while simultaneously filming and performing video synchronization, video encoding, and saving of the synchronized video. The investigation showed that the BeagleBoard's processor is not powerful enough to handle the object detection without hardware support. Instead, the detection needs to be performed afterwards by a computer that processes the video and extracts the objects. A better method has been proposed to make the object detection smarter and capable of learning, which will work better in Saab's case and their future work.
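The feasibility question above, whether the board can keep up with the camera, reduces to comparing per-frame processing time against the frame interval. A minimal sketch of such a check; the frame size, frame count, and the dummy processing step are placeholder assumptions, not the thesis's pipeline:

```python
import time

def measure_fps(process_frame, frames, camera_fps=30):
    """Time process_frame over a batch of frames; report achieved FPS
    and whether it meets the camera's frame rate."""
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    achieved = len(frames) / elapsed if elapsed > 0 else float("inf")
    return achieved, achieved >= camera_fps

# Placeholder workload: trivial per-frame arithmetic on fake 320x240 frames.
frames = [bytes(320 * 240) for _ in range(30)]
fps, realtime = measure_fps(lambda f: sum(f[::1000]), frames)
```

On an SBC the same harness would wrap the actual detection step, turning the "powerful enough" judgment into a measured number rather than a guess.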
52.
Uma solução de baixo custo para o processamento de imagens aéreas obtidas por Veículos Aéreos Não Tripulados / A low-cost solution for processing aerial images obtained by Unmanned Aerial Vehicles. Silva, Jonas Fernandes da. 19 February 2016 (has links)
Unmanned aerial vehicles (UAVs) are increasingly used to support a wide range of tasks around the world. The popularization of this equipment, together with advances in technology, particularly the miniaturization of processors, has extended its functionality. In agricultural applications, these devices allow production to be monitored through aerial images, from which areas of interest are identified with specific software. This research proposes a low-cost solution, deployable on small, low-power computers (embedded systems), capable of processing aerial images obtained by non-metric digital cameras mounted on a UAV to identify gaps in plantations or estimate levels of environmental degradation. Embedded systems mounted on a UAV allow the processing to be performed in real time, which contributes to preventive diagnosis, reduces response time, and can prevent crop losses. The first algorithm evaluated is based on the watershed transform, while the second uses classification based on the 1-Nearest Neighbor (1-NN) rule. The embedded systems used are the DE2i-150 and the Intel Edison, both of x86 architecture, and the Raspberry Pi 2, of ARM architecture. Accuracy levels of around 90% are achieved with the 1-NN-based algorithm. Moreover, the 1-NN technique showed higher tolerance to lighting problems, but demands more processing power than the watershed-based algorithm. The results show that the proposed system is an efficient and relatively low-cost solution compared to traditional means of monitoring and can be mounted on a UAV to perform the processing during flight.
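The 1-NN pixel classification evaluated above can be sketched in a few lines of numpy; the color samples and class labels below are illustrative assumptions, not the thesis's training data:

```python
import numpy as np

def nn1_classify(samples, labels, pixels):
    """Assign each pixel the label of its nearest training sample
    (Euclidean distance in RGB space): the 1-NN decision rule."""
    d = np.linalg.norm(pixels[:, None, :] - samples[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

# Toy training set: green = crop, brown = soil (hypothetical RGB values).
samples = np.array([[60, 140, 60], [120, 90, 60]], dtype=float)
labels = np.array([0, 1])  # 0 = crop, 1 = soil/gap
pixels = np.array([[58, 150, 55], [125, 85, 70]], dtype=float)
pred = nn1_classify(samples, labels, pixels)  # → [0, 1]
```

In the real system the samples would come from labeled aerial-image patches, and the per-pixel labels would then be grouped into regions marking planting gaps.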
53.
Método de detecção automática de eixos de caminhões baseado em imagens / Truck axle detection automatic method based on images. Natália Ribeiro Panice. 13 September 2018 (has links)
This research aims to develop an automatic truck axle detection system based on images. Two automatic systems are presented: one that extracts truck images from traffic videos recorded at six sites on a highway in the state of São Paulo, and one that detects the axles in those images. Both systems are based on Image Processing and Computer Vision techniques and were implemented in Python using the OpenCV and scikit libraries. The automatic saving of truck images was needed to build the image database used by the second method, the detection of the axles of the identified vehicles. This stage comprised segmentation of the truck image, the axle detection itself, and axle classification. For segmentation, adaptive thresholding followed by mathematical morphology was used in one configuration and the LBP texture descriptor in another; for detection, the Hough Transform. Performance analysis of these methods yielded an image-saving rate of 69.2% over all trucks fully framed in the video. Regarding detection, segmentation with adaptive thresholding and mathematical morphology detected 57.7% of all truck axles with 65.6% false detections; the LBP technique yielded, for the same cases, 68.3% and 84.2%, respectively. The excess of false detections was a weakness of the results and can be attributed to the outdoor environment, with problems generally intrinsic to traffic scenes. Two factors interfered significantly: lighting and the movement of tree leaves and branches in the wind. Disregarding the detections caused by these factors, the hit rates of the two segmentation approaches would rise to 90.4% and 93.5%, respectively, and the false detections would change to 66.5% and 54.7%. Thus, both proposed systems can be considered promising for the stated objective.
54.
Identificação automática do comportamento do tráfego a partir de imagens de vídeo / Automatic identification of traffic behavior using video images. Leandro Arab Marcomini. 10 August 2018 (has links)
The objective of this research is to propose an automatic computational system capable of identifying, from video images, traffic behavior on highways. All code was written in Python using the OpenCV library. The first step of the proposed system is to subtract the background from each video frame. Three background-subtraction methods available in OpenCV were tested, with metrics derived from a contingency matrix. MOG2 was chosen as the best method, processing frames at 64 FPS with an accuracy rate above 95%. The second step is to detect, track, and group features of the moving vehicles: the Shi-Tomasi algorithm was used for detection, optical-flow functions for tracking, and grouping was based on the distance between pixels and the relative velocity of each feature. In the final step, both microscopic and macroscopic traffic information is exported to report files in a defined CSV format. A space-time diagram is also generated at runtime, from which information important to transportation-system operations can be extracted. Vehicle counts and speeds were used to validate the extracted information against traditional collection methods: the mean counting error over all videos was 12.8%, and the speed error was around 9.9%.
55.
Avståndsvarnare till Mobiltelefon / Distance Warning for Mobile Phones. Johansson, Joakim. January 2011 (has links)
This report describes the study, development, and testing of components of an application for the Android operating system. The application is meant to measure the distance to the car ahead. In addition to distance measurement, the application's ability to calculate its own speed using GPS is tested. From these two parameters, speed and distance, together with some constants, the theoretical stopping distance of the car is calculated in order to warn the driver if the car is too close to the car ahead in relation to its own speed and stopping distance. Tests were conducted on the different applications that were programmed, and the results showed that the camera technology in the mobile phone itself limits the range of the distance-measurement application: the maximum distance achieved in these tests was approximately 5 meters. The measurements of the GPS speed-calculation application showed that it was more accurate than the speedometer of the test car. The conclusion of this thesis is that if all the parts were combined into a single application, the maximum speed at which it could be used with some functionality would be 13.8 km/h, assuming that the car ahead is at a standstill and the phone's camera is in a straight line from the license plate.
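The two calculations the application combines, a pinhole-camera distance estimate from the license plate's apparent width and a theoretical stopping distance, can be sketched as follows; the focal length, reaction time, and friction coefficient are assumed constants, not the thesis's calibrated values:

```python
PLATE_WIDTH_M = 0.52      # standard EU license plate width, in meters
FOCAL_LENGTH_PX = 800.0   # assumed camera focal length, in pixels

def distance_to_plate(plate_width_px):
    """Pinhole-camera estimate: distance = f * real_width / pixel_width."""
    return FOCAL_LENGTH_PX * PLATE_WIDTH_M / plate_width_px

def stopping_distance(speed_ms, reaction_s=1.0, friction=0.7, g=9.81):
    """Reaction distance plus braking distance (dry asphalt assumed)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * friction * g)

# At 13.8 km/h (the thesis's usable ceiling) ≈ 3.83 m/s, warn if the
# stopping distance exceeds the measured gap to the car ahead.
v = 13.8 / 3.6
warn = stopping_distance(v) >= distance_to_plate(plate_width_px=90)
```

The ~5 m camera limit reported above is what caps the usable speed: above roughly 13.8 km/h the stopping distance exceeds the farthest plate the camera can still resolve.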
56.
Systém pro sledování únavy řidiče / Driver Fatigue Monitor. Hošek, Roman. January 2012 (has links)
This diploma thesis deals with the options for image processing on mobile platforms, especially on the Android operating system, and their use in a driver-drowsiness detection system. The introductory part analyses the influence of drowsiness on drivers, focusing chiefly on microsleep, and describes existing driver-drowsiness detection systems. The thesis then covers the possibilities of image processing on mobile platforms, with emphasis on the Android operating system together with the OpenCV library known from the desktop, and compares the various options for deploying the library on a mobile platform. The chapter on image processing describes algorithms for detecting objects in an image, usable for detecting the face and eyes and estimating their pose. The practical part implements the selected methods for the Android operating system. A reference application was created to demonstrate these methods on a real device, and the individual methods are compared on the basis of time consumption, error rate, and other factors.
57.
Detecting Faulty Tape-around Weatherproofing Cables by Computer Vision. Sun, Ruiwen. January 2020 (has links)
With the rollout of 5G, more radio towers will be built and more cables installed. A large proportion of radio units, however, are mounted high up in the open, which makes it difficult for human technicians to maintain the systems. Under these circumstances, automatic detection of faults in radio installations is crucial. Cables and connectors are usually covered with weatherproofing tape, and one of the most common problems is that the tape is not wound tightly around the cable or connector. The tape then peels away from the cable and looks like a waving flag, which may seriously damage the radio system. This thesis aims at detecting this "flagging tape". Two object-detection approaches are compared: a convolutional neural network, and classical image processing with OpenCV. The former uses the YOLO (You Only Look Once) network for training and testing; in the latter, a connected-component method detects large objects such as the cables, and a line-segment detector extracts the boundary of the flagging tape. Multiple structurally and functionally distinct parameter sets were developed to find the most suitable configuration. Precision and recall are used to evaluate the quality of the system output, and larger experiments were performed with different parameters to improve it. The results show that the best way of detecting faulty weatherproofing is the image-processing method, which reaches 71% recall and 60% precision, outperforming YOLO on flagging-tape detection. The method shows the great potential of this kind of object detection, and a detailed discussion of its limitations is also presented in the thesis.
58.
Augmented reality i en industriell tillverkningsprocess / Augmented Reality in an Industrial Manufacturing Process. Wass, Anton; Löwenborg Forsberg, Eddie. January 2017 (has links)
With the digitization currently taking place in the industrial world, there is great curiosity about how future technologies such as Augmented Reality (AR) can be applied in industrial manufacturing processes. The aim of the thesis was to investigate whether and how AR technology can be used in industry to improve current work processes. Two prototypes were developed for the Microsoft HoloLens AR glasses and evaluated by comparing previous working methods with new ones. Tests of the prototypes showed that efficiency, production quality, and mobility increased for the user, at the expense of worse ergonomics.
59.
Comparative Study of Vision Camera-based Vibration Analysis with the Laser Vibrometer Method. Muralidharan, Pradeep Kumar; Yanamadala, Hemanth. January 2021 (has links)
Vibration analysis studies patterns in vibration data and measures vibration levels. It is usually performed on the time waveform of the vibration signal directly, and on the frequency spectrum derived by applying the Fourier Transform to the time waveform. Conventional vibration-analysis methods are either expensive, require a complicated setup, or both. Non-contact measurement systems, such as high-speed cameras coupled with computer-vision and motion-magnification methods, are suitable options for monitoring the vibrations of almost any system. In this work, several classic and state-of-the-art computer-vision tracking algorithms were compared. Videos at low and high frame rates were used to evaluate their ability to track the oscillatory movement that characterizes vibration, and the trackers were benchmarked against the literature and an experimental study. Two sets of experiments were carried out, one using a cantilever and another using a robot. The resonance frequencies obtained with the vision-camera method were compared against the laser-vibrometer method, which is the industry standard; the results show that the resonance frequencies from the two methods lie close to each other. The limitations of the tracking-algorithm-based approach to vibration analysis are discussed at the end. Since the methods provided are generic, they can easily be adapted to other relevant applications.
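Once a tracker yields a point's displacement over time, the resonance frequency comes from the peak of its Fourier spectrum. A sketch under simplifying assumptions (no windowing or detrending beyond mean removal; real camera signals need both):

```python
import numpy as np

def resonance_frequency(displacement, fps):
    """Dominant frequency of a tracked point's displacement signal."""
    signal = displacement - np.mean(displacement)   # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic 12 Hz oscillation sampled at 240 FPS (high-speed camera),
# with a little additive noise.
fps, f0 = 240, 12.0
t = np.arange(0, 2.0, 1.0 / fps)
rng = np.random.default_rng(0)
y = 3.0 * np.sin(2 * np.pi * f0 * t) + 0.2 * rng.standard_normal(t.size)
```

The comparison with the laser vibrometer then reduces to comparing the peak frequencies each method reports for the same excitation.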
60.
En jämförelse av Eigenface- och Fisherface-metoden tillämpade i en Raspberry Pi 2 / A comparison between Eigenfaces and Fisherfaces implemented on a Raspberry Pi 2. Dahl, Dag; Sterne, Gustaf. January 2016 (has links)
The purpose of this report is to demonstrate the possibility of using a Raspberry Pi 2 as the hardware in a face recognition system. The study shows performance differences between the Eigenface and Fisherface methods. The authors conducted an experimental study with a quantitative method in which tests constitute the empirical data, complemented by a qualitative literature review to interpret the results. The test results are presented as graphs and demonstrate both the feasibility of using a Raspberry Pi 2 as hardware in a face recognition system and the differences between the two recognition methods. The work indicates that the Raspberry Pi 2 is a suitable candidate for smaller face recognition systems, and that the Fisherface method is the better choice when implementing such a system.
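The Eigenface method at the heart of the comparison is PCA on flattened face images; a minimal numpy sketch follows (OpenCV's contrib module also provides a ready-made cv2.face.EigenFaceRecognizer_create). The random array stands in for aligned face crops:

```python
import numpy as np

def eigenfaces(images, k):
    """PCA on flattened face images: the mean face plus the top-k
    principal components ("eigenfaces")."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal components directly.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, components):
    """Represent a face by its k PCA coefficients (used for matching
    against the gallery by nearest neighbor)."""
    return components @ (image.ravel() - mean)

rng = np.random.default_rng(0)
faces = rng.random((10, 8, 8))        # stand-in for aligned face crops
mean, comps = eigenfaces(faces, k=3)
coeffs = project(faces[0], mean, comps)
```

The Fisherface method builds LDA on top of this PCA basis, optimizing for class separation rather than raw variance, which is what gives it the recognition advantage the study reports.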