61 |
UTILIZAÇÃO DE PROCESSAMENTO DIGITAL DE IMAGENS E REDES NEURAIS ARTIFICIAIS PARA O RECONHECIMENTO DE ÍNDICES DE SEVERIDADE DA FERRUGEM ASIÁTICA DA SOJA / Melo, Geisla de Albuquerque, 25 May 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / According to Embrapa (2013), Brazil is the world's second largest soybean producer, behind only the United States. Season after season, Brazil's production and planted area have been growing; however, climatic factors and crop diseases affect the fields, preventing even greater growth and causing losses to farmers. Asian rust, caused by the fungus Phakopsora pachyrhizi, is a foliar disease considered one of the most important at present because of its potential for losses. Asian rust can be mistaken for other soybean diseases, such as Bacterial Blight, Brown Spot and Bacterial Pustule, due to their similar visual appearance. Thus, the present study aimed to develop an application for mobile devices on the Android platform that performs automatic recognition of Asian soybean rust severity indices, in order to assist in early diagnosis and, consequently, in decision-making regarding the management and control of the disease. To this end, Digital Image Processing (DIP) and Artificial Neural Network (ANN) techniques were used. First, around 3,000 soybean leaves were collected in the field, of which about 2,000 were usable. They were then separated by severity index, photographed in a controlled environment, and processed to remove noise and the image background. The filtering stage of the preprocessing consisted of a median filter, a Gaussian filter, conversion to grayscale, the Canny edge detector, dilation, the findContours and drawContours operations, and finally cropping of the leaf. After that, color and texture features were extracted from the images: the mean and variance of the R, G and B channels, the texture descriptors Angular Second Moment, Entropy, Contrast, Homogeneity and Correlation, and finally the previously known severity degree. With these data, an ANN was trained using the BrNeural neural network simulator. During training, parameters such as the number of severity levels and the number of neurons in the hidden layer were varied. After training, the network architecture that gave the best results was chosen, with 78.86% accuracy for the Resilient Propagation algorithm. This network was saved in an object and embedded in the application, ready to be used with new data. Thus, the application takes a picture of the soybean leaf and filters the acquired image. It then extracts the features and passes them internally to the trained neural network, which analyzes them and reports the severity. Optionally, it is also possible to view a georeferenced map of the property, with the severities identified by small colored squares, each representing a different index. / Segundo a Embrapa (2013), o Brasil é o segundo maior produtor de soja do mundo, atrás apenas dos Estados Unidos. Safra após safra, a produção e a área plantada do Brasil vêm crescendo, entretanto, fatores climáticos e doenças da cultura vêm afetando as lavouras, impedindo um crescimento ainda maior, e causando perdas para os agricultores. A ferrugem asiática, causada pelo fungo Phakopsora pachyrhizi, é uma doença foliar, considerada uma das doenças de maior importância na atualidade, devido ao grande potencial de perdas. A ferrugem asiática pode ser confundida com outras doenças na soja, como o Crestamento Bacteriano, a Mancha Parda e a Pústula Bacteriana, devido às aparências visuais semelhantes. 
Deste modo, o presente estudo teve por objetivo desenvolver um aplicativo para dispositivos móveis que utilizam a plataforma Android, para realizar o reconhecimento automático dos índices de severidade da ferrugem asiática da soja, para auxiliar no diagnóstico precoce e, por consequência, auxiliar na tomada de decisão quanto ao manejo e controle da doença. Para isto, foram utilizadas técnicas de Processamento Digital de Imagens (PDI) e Redes Neurais Artificiais (RNA). Primeiramente, foram coletadas aproximadamente 3 mil folhas de soja em campo, onde cerca de 2 mil foram aproveitadas. Então elas foram separadas por índices de severidade, fotografadas em ambiente controlado, e após isto foram processadas com o objetivo de eliminar ruídos e o fundo das imagens. A fase de filtragem do pré-processamento consistiu nos filtros da mediana, filtro Gaussiano, transformação para escala de cinza, detector de bordas Canny, dilatação, findContours e drawContours, e por fim o recorte da folha. Após isto, foram extraídas as características de cor e textura das imagens, que foram as médias R, G e B, Variância também para os três canais R, G e B, Segundo Momento Angular, Entropia, Contraste, Homogeneidade, Correlação e, por fim, o Grau de Severidade previamente sabido. Com estes dados, foi realizado o treinamento de uma RNA através do simulador de redes neurais BrNeural. Durante o treinamento, parâmetros como quantidade de níveis de severidade e quantidade de neurônios da camada oculta foram alterados. Após o treinamento, foi escolhida a arquitetura de rede que deu melhor resultado, com 78,86% de acerto para o algoritmo Resilient-propagation. Esta rede foi salva em um objeto e inserida no aplicativo, pronta para ser utilizada com dados novos. Assim, o aplicativo tira a foto da folha de soja e faz a filtragem da imagem adquirida. Após isto, extrai as características e manda internamente para a rede neural treinada, que analisa e informa a severidade. Ainda, opcionalmente é possível ver um mapa georreferenciado da propriedade, com as severidades identificadas por pequenos quadrados coloridos, representando cada um, um índice diferente.
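A minimal Python/OpenCV sketch of a leaf preprocessing chain like the one described in this abstract (median filter, Gaussian filter, grayscale conversion, Canny edges, dilation, findContours/drawContours, crop), followed by simple per-channel color statistics. All threshold and kernel values are illustrative assumptions, not the parameters used in the thesis; the GLCM texture descriptors and the BrNeural training step are not reproduced.

```python
# Sketch of a leaf preprocessing and feature-extraction chain (assumed parameters).
import cv2
import numpy as np

def extract_leaf_features(path):
    img = cv2.imread(path)                           # BGR photo of a single leaf
    smooth = cv2.medianBlur(img, 5)                  # median filter for salt-and-pepper noise
    smooth = cv2.GaussianBlur(smooth, (5, 5), 0)     # Gaussian filter for remaining noise
    gray = cv2.cvtColor(smooth, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    edges = cv2.Canny(gray, 50, 150)                 # Canny edge detection (thresholds assumed)
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8), iterations=2)  # close the leaf outline

    # findContours / drawContours: keep the largest contour as the leaf and build a mask
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    leaf = max(contours, key=cv2.contourArea)
    mask = np.zeros(gray.shape, np.uint8)
    cv2.drawContours(mask, [leaf], -1, 255, thickness=-1)

    # crop the leaf region and drop the background
    x, y, w, h = cv2.boundingRect(leaf)
    crop = cv2.bitwise_and(img, img, mask=mask)[y:y + h, x:x + w]
    crop_mask = mask[y:y + h, x:x + w] > 0

    # color features: mean and variance of the B, G and R channels inside the leaf
    feats = []
    for c in range(3):
        channel = crop[:, :, c][crop_mask].astype(np.float64)
        feats += [channel.mean(), channel.var()]
    return feats  # texture descriptors (ASM, entropy, ...) would be computed from a GLCM

print(extract_leaf_features("leaf.jpg"))  # "leaf.jpg" is a placeholder path
```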
|
62 |
Sistema automático para obtenção de parâmetros do tráfego veicular a partir de imagens de vídeo usando OpenCV / Automatic system to obtain traffic parameters from video images based on OpenCV / Cunha, André Luiz Barbosa Nunes da, 08 November 2013 (has links)
Esta pesquisa apresenta um sistema automático para extrair dados de tráfego veicular a partir do pós-processamento de vídeos. Os parâmetros macroscópicos e microscópicos do tráfego são derivados do diagrama espaço-tempo, que é obtido pelo processamento das imagens de tráfego. A pesquisa fundamentou-se nos conceitos de Visão Computacional, programação em linguagem C++ e a biblioteca OpenCV para o desenvolvimento do sistema. Para a detecção dos veículos, duas etapas foram propostas: modelagem do background e segmentação dos veículos. Uma imagem sem objetos (background) pode ser determinada a partir das imagens do vídeo através de vários modelos estatísticos disponíveis na literatura especializada. A avaliação de seis modelos estatísticos indicou o Scoreboard (combinação de média e moda) como o melhor método de geração do background atualizado, por apresentar eficiente tempo de processamento de 18 ms/frame e 95,7% de taxa de exatidão. A segunda etapa investigou seis métodos de segmentação, desde a subtração de fundo até métodos de segmentação por textura. Dentre os descritores de textura, é apresentado o LFP, que generaliza os demais descritores. Da análise do desempenho desses métodos em vídeos coletados em campo, conclui-se que o tradicional método Background Subtraction foi o mais adequado, por apresentar o melhor tempo de processamento (34,4 ms/frame) e a melhor taxa de acertos totais com 95,1% de média. Definido o método de segmentação, foi desenvolvido um método para se definir as trajetórias dos veículos a partir do diagrama espaço-tempo. Comparando-se os parâmetros de tráfego obtidos pelo sistema proposto com medidas obtidas em campo, a estimativa da velocidade obteve uma taxa de acerto de 92,7%, comparado com medidas de velocidade feitas por um radar; por outro lado, a estimativa da taxa de fluxo de tráfego foi prejudicada por falhas na identificação da trajetória do veículo, apresentando valores ora acima, ora abaixo dos obtidos nas coletas manuais. / This research presents an automatic system to collect vehicular traffic data from video post-processing. The macroscopic and microscopic traffic parameters are derived from a space-time diagram, which is obtained by processing the traffic images. The research was based on the concepts of Computer Vision, programming in C++, and the OpenCV library to develop the system. Vehicle detection was divided into two steps: background modeling and vehicle segmentation. A background image can be determined from the video sequence through several statistical models available in the literature. The evaluation of six statistical models indicated Scoreboard (a combination of mean and mode) as the best method to obtain an updated background, achieving a processing time of 18 ms/frame and a 95.7% accuracy rate. The second step investigated six segmentation methods, from background subtraction to texture segmentation. Among the texture descriptors, the LFP is presented, which generalizes the other descriptors. Video images collected on highways were used to analyze the performance of these methods. The traditional background subtraction method was found to be the best, achieving a processing time of 34.4 ms/frame and a 95.1% accuracy rate. Once the segmentation method was chosen, a method to determine vehicle trajectories from the space-time diagram was developed. Comparing the traffic parameters obtained by the proposed system to data collected in the field, the speed estimates were found to be very good, with 92.7% accuracy when compared with radar-measured speeds. 
On the other hand, the flow rate estimates were affected by failures in identifying vehicle trajectories, which produced values sometimes above and sometimes below the manually collected data.
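A minimal Python/OpenCV sketch of the background-subtraction step described above: a background frame is maintained and each new frame is differenced against it to segment vehicles. The running-average background model, the threshold and the contour-area cutoff are simplifying assumptions; the thesis's Scoreboard (mean/mode) model and the space-time diagram construction are not reproduced here.

```python
# Sketch: segment moving vehicles by differencing each frame against a background model.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")         # placeholder video file
ok, frame = cap.read()
background = frame.astype(np.float32)         # simple running-average background (assumption)

while ok:
    cv2.accumulateWeighted(frame.astype(np.float32), background, alpha=0.01)  # slow background update
    diff = cv2.absdiff(frame, cv2.convertScaleAbs(background))                # foreground difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)                 # threshold is an assumption
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove small noise blobs

    # each sufficiently large contour is treated as a vehicle detection
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    ok, frame = cap.read()

cap.release()
```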
|
64 |
Increasing the Position Precision of a Navigation Device by a Camera-based Landmark Detection Approach / Jumani, Kashif Rashid, 24 September 2018 (has links)
The main objective of this work is to discuss a platform that can provide accurate position information to a moving object, such as a car, in poor environmental conditions where GPS signals cannot be used. The approach integrates imaging sensor data into an inertial navigation system. Navigation systems are becoming smarter and more accurate, but errors still accumulate over long distances and cause failures in determining the exact location. To increase accuracy, the front camera of the car is proposed as an additional sensor for the navigation system. Previously, this problem has been addressed with an extended Kalman filter, but a small error still remains. To determine the exact location, landmarks are detected in the real-time environment and matched against landmarks already stored in a database. Detection is challenging in an open environment, where recognition must be invariant to illumination, pose and scale, so selecting an algorithm according to the requirements is important. SIFT is a feature descriptor that creates a description of the features in an image and is known as the more accurate algorithm; Speeded Up Robust Features (SURF) is another computer vision algorithm, considered faster but less accurate than SIFT. Often the problem is not the algorithms themselves: features fail to be detected or matched because of illumination, scale and pose, and in these conditions the use of filters and other preprocessing techniques is important for better results. Better results mean that the required information can be extracted easily from the images, which is achieved with computer vision and image processing. The matched image data are then passed to the navigation data calculation unit, which produces an exact location based on the matched images and time calculation. Because the navigation data calculation unit is connected to the landmark database, the navigation system can verify that a landmark is present and matched at a given point, ensuring that the reported location is accurate. In this way, accuracy, safety and security can be assured.
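As a rough illustration of the landmark-matching step, the sketch below detects SIFT keypoints in a camera frame and matches them against a stored landmark image using OpenCV. The file names, the ratio-test threshold and the minimum match count are assumptions for illustration, not values from the thesis.

```python
# Sketch: match SIFT features between a camera frame and a stored landmark image.
import cv2

frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)        # placeholder paths
landmark = cv2.imread("landmark_db_entry.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_f, des_f = sift.detectAndCompute(frame, None)
kp_l, des_l = sift.detectAndCompute(landmark, None)

# k-nearest-neighbour matching with Lowe's ratio test to discard ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_f, des_l, k=2) if m.distance < 0.75 * n.distance]

# treat the landmark as recognized if enough consistent matches remain (threshold assumed)
if len(good) >= 20:
    print("Landmark matched:", len(good), "good correspondences")
else:
    print("Landmark not recognized")
```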
|
65 |
Mätning av skärgradshöjd på stål / Measurement of burr height on steel / Svensson, Johan, January 2016 (has links)
Idag beskärs en stålrulle inom stålindustrin i ett skärverk, stålrullarna delas till mindre delband med dålig kontroll av skärgradshöjdens kvalitet. Stickprover tas manuellt, vilket endast blir ett fåtal stickprover på en stålrulle som har 150 delband och är 30 kilometer i längd. En hel omsändning för en stålrulle kostar upp mot en miljon kronor och har en negativ klimatpåverkan. En mjukvaruprototyp för detektering av skärgradshöjd med en referenslinje togs fram. Prototypen innehöll en ljussensor, två motorer, en PC och en prototypkonstruktion. Varje uppgift i programvaran tilldelades en egen tråd. Operativsystemets, trådarnas och algoritmernas prestanda testades för mätning av exekveringstider och periodtider. Resultatet visade att en skärgradsdetektor var möjlig att realisera. Algoritmen för skärgradshöjd med referenslinje detekterade skärgradshöjden där amplituden var tillräckligt stor. / Today, steel rolls are cut in a slitting mill in the steel industry; the rolls are divided into smaller strips with poor control of the burr height quality. Samples are taken manually, and the number of samples is too low to determine the quality of a steel roll that can be divided into up to 150 strips and is 30 kilometers long. A complete resend of one steel roll costs up to a million SEK and has a negative climate impact. A software prototype for detecting burr height against a reference line was developed. The prototype contained a light sensor, two motors, a PC and a prototype construction. Each task in the software was allocated its own thread. The performance of the operating system, threads and algorithms was tested by measuring execution times and period times. The results showed that a burr height detector was possible to implement. The algorithm detected the burr height where the amplitude was sufficiently large relative to the reference line.
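A minimal sketch of a reference-line check like the one this abstract describes: a straight line is fitted to a measured height profile and points whose deviation exceeds a tolerance are reported as burrs. The sensor interface, sampling and tolerance value are assumptions for illustration, not the prototype's actual implementation.

```python
# Sketch: flag burr heights that deviate too much from a fitted reference line.
import numpy as np

def detect_burrs(positions_mm, heights_um, tolerance_um=50.0):
    """positions_mm: sample positions along the strip edge; heights_um: measured edge heights."""
    # a least-squares straight line through the profile acts as the reference line
    slope, intercept = np.polyfit(positions_mm, heights_um, deg=1)
    reference = slope * np.asarray(positions_mm) + intercept

    deviation = np.asarray(heights_um) - reference        # burr amplitude above the reference line
    burr_idx = np.nonzero(deviation > tolerance_um)[0]    # tolerance is an illustrative assumption
    return [(positions_mm[i], float(deviation[i])) for i in burr_idx]

# usage with synthetic data: a nearly flat edge with one burr of ~120 µm at position 40 mm
pos = np.arange(0, 100, 1.0)
heights = 5.0 + 0.02 * pos + np.where(pos == 40, 120.0, 0.0)
print(detect_burrs(pos, heights))
```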
|
66 |
Realtidsövervakning av multicastvideoström / Monitoring of multicast video streaming in real time / Hassan, Waleed; Hellström, Martin, January 2017 (has links)
Den enorma ökningen av multicasttjänster har visat begränsningarna hos traditionella nätverkshanteringsverktyg vid multicastkvalitetsövervakning. Det behövs någon annan form av övervakningsteknik som inte är en hårdvaruinriktad lösning, såsom ökad länkgenomströmning, buffertlängd och kapacitet, för att förbättra kundupplevelsen. I rapporten undersöks användningen av biblioteken FFmpeg och OpenCV samt no-reference image quality assessment-algoritmen BRISQUE för att förbättra tjänstekvaliteten och kundupplevelsen. Genom att upptäcka kvalitetsbrister hos bildrutor samt bitfel i videoströmmen kan QoS och QoE förbättras. Uppgiftens ändamål är att i realtid detektera avvikelser i bildkvalitet och bitfel i en multicastvideoström för att sedan notifiera tjänsteleverantören med hjälp av SNMP traps. Undersökningen visar positiva resultat med en hybridlösning med användning av både BRISQUE och FFmpeg, då båda ensamma inte är tillräckligt anpassade för multimediaövervakning. FFmpeg har möjligheten att detektera avkodningsfel som oftast beror på allvarliga bitfel, och BRISQUE-algoritmen utvecklades för att analysera bilder och bestämma bildkvaliteten. Enligt testresultaten kan BRISQUE användas för multicastvideoanalysering eftersom den subjektiva bildkvaliteten kan bestämmas med god pålitlighet. Kombinationen av dessa metoder har visat bra resultat men behöver undersökas mer för användning av multicastövervakning. / The enormous increase in multicast services has shown the limitations of traditional network management tools for multicast quality monitoring. New monitoring techniques are needed that are not hardware-based solutions, such as increased link throughput, buffer length and capacity, to enhance the quality of experience. This paper examines the use of FFmpeg and OpenCV, as well as the no-reference image quality assessment algorithm BRISQUE, to improve both the quality of service and the quality of experience. By detecting image quality deficiencies as well as bit errors in the video stream, QoS and QoE can be improved. The purpose of this project was to develop a monitoring system that can detect fluctuations in image quality and bit errors in a multicast video stream in real time and then notify the service provider using SNMP traps. The tests performed in this work show positive results for the proposed hybrid solution, since neither BRISQUE nor FFmpeg alone is sufficiently adapted for this purpose. FFmpeg can detect decoding errors that usually occur due to serious bit errors, and the BRISQUE algorithm was developed to analyse images and determine the subjective image quality. According to the test results, BRISQUE can be used for multicast video analysis because the subjective image quality can be determined with good reliability. The combination of these methods has shown good results but needs to be investigated and developed further.
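A rough sketch of the two checks described above: FFmpeg's error log is used to count decode errors in a stream segment, and a BRISQUE score is computed for a sampled frame. It assumes opencv-contrib-python with the quality module, locally available BRISQUE model/range files, and a placeholder multicast URL; the SNMP trap is only indicated by a print statement, and the alert threshold is an assumption.

```python
# Sketch: count FFmpeg decode errors and score frame quality with BRISQUE (assumed setup).
import subprocess
import cv2

STREAM = "udp://239.0.0.1:1234"   # placeholder multicast address

# 1) Bit/decode errors: let FFmpeg decode a short segment and report only errors on stderr.
result = subprocess.run(
    ["ffmpeg", "-v", "error", "-i", STREAM, "-t", "10", "-f", "null", "-"],
    capture_output=True, text=True)
decode_errors = [line for line in result.stderr.splitlines() if line.strip()]

# 2) Image quality: BRISQUE score for one sampled frame (lower is better).
#    The model/range file paths below are assumptions (files ship with opencv_contrib samples).
brisque = cv2.quality.QualityBRISQUE_create("brisque_model_live.yml", "brisque_range_live.yml")
cap = cv2.VideoCapture(STREAM)
ok, frame = cap.read()
score = brisque.compute(frame)[0] if ok else None
cap.release()

# 3) Notify the operator if either check fails (an SNMP trap would be sent here).
if decode_errors or (score is not None and score > 60):   # alert threshold is an assumption
    print("ALERT: decode errors=%d, BRISQUE=%s" % (len(decode_errors), score))
```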
|
67 |
Автоматизация процесса подсчета труб на предприятии с использованием технологий компьютерного зрения : магистерская диссертация / Automation of the process of counting pipes at the enterprise using computer vision technology / Гуськова, Д. В.; Guskova, D. V., January 2022 (has links)
В диссертации рассматривается проблема учета труб на производственных предприятиях. Целью данного исследования является предоставление автоматизированного решения проблемы, которое потребует меньше времени для подсчета труб и будет более эффективным, чем подсчет вручную. Разработан алгоритм, основанный на технологии компьютерного зрения. Для выполнения задачи компьютерного зрения была использована библиотека OpenCV, языком программирования был выбран Python. После разработки алгоритма, основанного на технологии компьютерного зрения, стал возможен автоматический подсчет труб. Дальнейшее исследование может быть проведено для удовлетворения всех необходимых потребностей предприятия. / The dissertation addresses the problem of pipe counting in manufacturing enterprises. The aim of this study is to provide an automated solution to the problem that takes less time to count pipes and is more efficient than manual counting. An algorithm based on computer vision technology was developed. The computer vision task was carried out with the Open Source Computer Vision (OpenCV) library, and Python was chosen as the programming language. After the development of the algorithm, automatic pipe counting became possible. Further research might be conducted to meet all the necessary needs of the enterprise.
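The abstract does not state which detection method the thesis uses, so the sketch below shows one plausible OpenCV approach to the same task: detecting the circular pipe ends in a photo of a stacked bundle with a Hough circle transform and counting them. All radii and thresholds are assumptions that depend on camera distance and pipe diameter.

```python
# Sketch: count circular pipe ends in a photo of a pipe bundle using a Hough transform.
import cv2
import numpy as np

img = cv2.imread("pipe_bundle.jpg")                       # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                            # suppress texture noise before the transform

# All radii and thresholds below are illustrative assumptions.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=25,
                           param1=120, param2=40, minRadius=10, maxRadius=40)

count = 0 if circles is None else circles.shape[1]
print("Detected pipes:", count)

# Optional visual check: draw the detected circles on the image.
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)
    cv2.imwrite("pipes_annotated.jpg", img)
```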
|
68 |
Digital measurement of irregularly shaped objects : Building a prototype / Henningsson, Casper; Nilsson, Joel, January 2023 (has links)
The external dimensions of products are of great importance in numerous areas, and digital measurement of them can lead to a more streamlined workflow. In this project, the suitability of sensors for measuring irregularly shaped objects is investigated and a prototype for digital measurement is built. The prototype consists of a measuring cart, a Raspberry Pi, two cameras and MQTT with a React web front-end. A measurement is started by a client, and the measurement station's established values are reported back to that client. In the results, 79% of the measured dimensions fall within a ±10 mm margin of the manually measured value. The result is based on tests of 10 objects with differing characteristics, chosen to challenge the measurement capability. The biggest challenge has been handling the objects' perspectives in relation to the cameras; this is one of the areas that could be developed further to improve the reliability of the measuring station. / Användningen av yttermått hos produkter har en stor betydelse inom en stor mängd områden och en digital mätning av detta kan medföra en mer strömlinjeformad arbetsgång. I detta projekt undersöks sensorers lämplighet för mätning av oregelbundet formade objekt och det byggs en prototyp för digital mätning. Prototypen består av en mätvagn, Raspberry Pi, två kameror och MQTT med React-baserat webbgränssnitt. Mätning startas av en klient och rapporteras tillbaka till klienten med mätstationens fastslagna mätvärden. Resultatet slutar i att 79% av uppmätta dimensioner hamnar inom ±10 mm marginal av manuellt uppmätt värde. Ett resultat som baseras på tester av 10 objekt med avvikande egenskaper för att utmana mätkapaciteten. Den största utmaningen som uppkommit har varit hanteringen av objekts perspektiv i förhållande till kamerorna. Det är ett av områdena som fortsatt har möjlighet att vidareutvecklas för att ytterligare förbättra mätstationens pålitlighet.
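A minimal sketch of one way to estimate an object's outer dimensions from a single camera image: segment the largest contour, take its minimum-area rectangle, and convert pixels to millimetres with a calibration factor. The fixed scale factor, the thresholding and the image path are assumptions; the actual prototype combines two cameras and MQTT reporting, which is not reproduced here.

```python
# Sketch: estimate an object's length/width in mm from its largest contour (assumed calibration).
import cv2

MM_PER_PIXEL = 0.42   # placeholder calibration factor from a reference object at a fixed camera height

img = cv2.imread("object_top_view.jpg")                   # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # object darker than cart bed

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
obj = max(contours, key=cv2.contourArea)                  # assume the largest blob is the object

# a minimum-area rectangle handles irregular, rotated shapes better than an axis-aligned box
(_, _), (w_px, h_px), _ = cv2.minAreaRect(obj)
length_mm = max(w_px, h_px) * MM_PER_PIXEL
width_mm = min(w_px, h_px) * MM_PER_PIXEL
print("Estimated outer dimensions: %.1f x %.1f mm" % (length_mm, width_mm))
```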
|
69 |
Review and analysis of work sampling methods : the case of an automated labour performance measurement system using the work sampling method / Van Blommenstein, D.; Matope, S.; Van der Merwe, A.F., January 2011 (has links)
Published Article / This paper analyses work sampling and time study as work measurement methods with a view to employing them in an automated labour performance measurement system. The two are compared with respect to the Hawthorne effect, labour intensiveness, cost, tediousness and knowledge extensiveness. The analysis shows that work sampling is the better option for developing an automated labour performance measurement system that employs computer vision. Web cameras feed real-time images to a central computer via USB extenders. The computer runs a standalone C++ application that uses a random function to establish when measurements are to be taken. The captured video footage is converted into a pixel matrix using OpenCV. This matrix is then filtered and analysed, enabling the tracking of a worker. The data generated are stored in text files. After the work sampling period has elapsed, the data are transferred into Microsoft Excel for analysis. Finally, a report on labour utilisation is generated in Microsoft Excel and then sent to the analyst for review.
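The sketch below illustrates the work-sampling loop the abstract describes: observation instants are drawn at random, a webcam frame is grabbed at each instant, a simple foreground check decides whether the worker is present, and the observation is appended to a text file. The original system is a C++ application; this Python/OpenCV version, its reference-image presence heuristic and its thresholds are illustrative assumptions.

```python
# Sketch: randomized work-sampling observations from a webcam, logged to a text file.
import cv2
import random
import time
from datetime import datetime

SHIFT_SECONDS = 8 * 3600
N_OBSERVATIONS = 50

# random observation instants over the shift (the random-function scheduling step)
instants = sorted(random.uniform(0, SHIFT_SECONDS) for _ in range(N_OBSERVATIONS))

cap = cv2.VideoCapture(0)                                               # USB webcam at the workstation
reference = cv2.imread("empty_workstation.jpg", cv2.IMREAD_GRAYSCALE)   # assumed reference image (same camera)
start = time.time()

with open("work_sampling_log.txt", "a") as log:
    for t in instants:
        time.sleep(max(0.0, start + t - time.time()))       # wait until the sampled instant
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, reference)                 # pixel-matrix difference against empty station
        changed = cv2.countNonZero(cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)[1])
        busy = changed > 0.05 * diff.size                   # presence heuristic; thresholds are assumptions
        log.write("%s\t%s\n" % (datetime.now().isoformat(), "working" if busy else "absent"))

cap.release()
```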
|
70 |
Mobile high-throughput phenotyping using watershed segmentation algorithm / Dammannagari Gangadhara, Shravan, January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / This research is part of BREAD PHENO, a PhenoApps BREAD project at K-State that combines contemporary advances in image processing and machine vision to deliver transformative mobile applications through established breeder networks. In this platform, novel image-analysis segmentation algorithms are being developed to model and extract plant phenotypes. As part of this research, the traditional watershed segmentation algorithm has been extended, with the primary goal of accurately counting and characterizing the seeds in an image. The new approach can be used to characterize a wide variety of crops. Furthermore, the algorithm has been ported to Android using the Android APIs, and the first user-friendly Android application implementing the extended watershed algorithm has been developed for mobile field-based high-throughput phenotyping (HTP).
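A minimal sketch of the classical marker-based watershed pipeline in OpenCV that such a seed counter typically builds on (Otsu threshold, distance transform, marker labelling, watershed, count of resulting regions). It is a standard recipe with assumed parameter values, not the thesis's extended algorithm or its Android port.

```python
# Sketch: count touching seeds with marker-based watershed segmentation (assumed parameters).
import cv2
import numpy as np

img = cv2.imread("seeds.jpg")                                  # placeholder image: seeds on a plain background
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# assumes seeds appear brighter than the background; use THRESH_BINARY_INV otherwise
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# clean the mask and separate "sure background" from "sure foreground" (seed cores)
kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
sure_bg = cv2.dilate(opened, kernel, iterations=3)
dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)     # 0.5 factor is an assumption
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

# label each seed core as a marker, reserve 0 for the unknown region, and run watershed
n_markers, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)

seed_count = len(np.unique(markers)) - 2                       # drop the background label and the -1 boundaries
print("Seeds counted:", seed_count)
```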
|