  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Moving Object Detection based on Background Modeling

Luo, Yuanqing January 2014 (has links)
Aiming at moving object detection, after studying several categories of background modeling methods, we design an improved ViBe algorithm based on an image segmentation algorithm. The ViBe algorithm builds its background model by storing a sample set for each pixel. To detect moving objects, it uses several techniques such as fast initialization, random update, and classification based on the distance between a pixel value and its sample set. In our improved algorithm, we first use multi-layer histograms to extract moving objects at the block level in a pre-processing stage. Second, we segment the blocks of moving objects with an image segmentation algorithm. The algorithm then constructs region-level information for the moving objects, and designs classification principles for regions as well as a modification mechanism among neighboring regions. In addition, to solve the problem that the original ViBe algorithm easily introduces ghost regions into the background model, the improved algorithm designs and implements a fast ghost elimination algorithm. Compared with traditional pixel-level background modeling methods, the improved method is more robust and reliable against factors such as background disturbance, noise, and the presence of moving objects in the initial stage. Specifically, our algorithm improves the precision rate from 83.17% with the original ViBe algorithm to 95.35%, and the recall rate from 81.48% to 90.25%. Considering the effect of shadows on moving object detection, this thesis also designs a shadow elimination algorithm based on a Red, Green and Illumination (RGI) color feature, which can be converted from the RGB color space, and a dynamic matching threshold. Experimental results demonstrate that the algorithm effectively reduces the influence of shadows on moving object detection. Finally, the thesis summarizes the work and discusses future directions.
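The per-pixel classification rule of ViBe that this abstract builds on can be sketched in a few lines. The parameter values below (20 samples, matching radius 20, 2 required matches, subsampling factor 16) are the defaults of the original ViBe paper, not necessarily those used in this thesis:

```python
import random

N_SAMPLES = 20    # samples stored per pixel
RADIUS = 20       # matching threshold in intensity units
MIN_MATCHES = 2   # samples that must match to call a pixel background
SUBSAMPLING = 16  # a background pixel updates its model with prob. 1/16

def is_background(pixel, samples):
    """Core ViBe test: classify one pixel against its sample set."""
    matches = sum(1 for s in samples if abs(pixel - s) < RADIUS)
    return matches >= MIN_MATCHES

def update(pixel, samples):
    """Conservative random update: only background pixels enter the model."""
    if is_background(pixel, samples) and random.randrange(SUBSAMPLING) == 0:
        samples[random.randrange(len(samples))] = pixel

# toy example: a pixel whose history hovers around intensity 100
model = [100 + random.randint(-5, 5) for _ in range(N_SAMPLES)]
print(is_background(103, model))  # stable pixel -> True (background)
print(is_background(200, model))  # large jump  -> False (foreground)
```

The random, rather than oldest-first, sample replacement is what gives ViBe its smooth exponential forgetting of old background values.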
2

Multilayer background modeling under occlusions for spatio-temporal scene analysis

Azmat, Shoaib 21 September 2015 (has links)
This dissertation presents an efficient multilayer background modeling approach to distinguish among midground objects: objects whose existence spans varying time scales between the extremes of short-term ephemeral appearances (foreground) and long-term stationary persistence (background). Traditional background modeling separates a given scene into foreground and background regions. However, the real world can be much more complex than this simple classification, and object appearance events often occur over varying time scales. There are situations in which objects appear in the scene at different points in time and become stationary; these objects can occlude one another, change positions, or be removed from the scene. Inability to deal with such scenarios involving midground objects results in errors such as ghost objects, misdetection of occluding objects, aliasing caused by objects that have left the scene but are not removed from the model, and spurious detection of new objects when existing objects are displaced. Modeling temporal layers of multiple objects allows us to overcome these errors, and enables the surveillance and summarization of scenes containing multiple midground objects.
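One way to picture the foreground/midground/background distinction is to classify each stationary region by how long it has persisted. The thresholds below are illustrative assumptions, not values taken from the dissertation:

```python
def classify_layer(persistence_s, t_fg=5.0, t_bg=3600.0):
    """Assign a temporal layer from how long a region has been stationary.

    persistence_s: seconds the region has persisted unchanged.
    t_fg, t_bg: illustrative thresholds separating the three layers.
    """
    if persistence_s < t_fg:
        return "foreground"   # ephemeral appearance
    if persistence_s < t_bg:
        return "midground"    # e.g. a parked car or a dropped bag
    return "background"       # long-term stationary scene

print(classify_layer(1.0))      # -> foreground
print(classify_layer(120.0))    # -> midground
print(classify_layer(86400.0))  # -> background
```

A multilayer model keeps one such layer per stacked object, so an occluded midground object survives in its own layer instead of being absorbed into the background.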
3

Identificação automática do comportamento do tráfego a partir de imagens de vídeo / Automatic identification of traffic behavior using video images

Marcomini, Leandro Arab 10 August 2018 (has links)
The objective of this research is to propose an automatic computational system capable of identifying, from video images, traffic behavior on highways. All code was written in Python with the OpenCV library. The first step of the proposed system is to subtract the background from each frame. We tested three background subtraction methods available in OpenCV, with metrics derived from a contingency matrix. MOG2 proved the best method for this research, processing frames at 64 FPS with more than 95% accuracy. The second step detects, tracks, and groups features of the moving vehicles. We used the Shi-Tomasi detection method together with optical flow to track features, and grouped features by a combination of pixel distance and relative velocity. In the final step, the algorithm exports both microscopic and macroscopic information to report files in a defined CSV format. The system also produces a space-time diagram at runtime, from which information important to transportation system operations can be extracted. To validate the extracted information, we compared vehicle counts and speeds against traditional collection methods: the mean counting error across all videos was 12.8%, and the speed error was around 9.9%.
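The grouping step described above, joining features that are both close in the image and moving with nearly the same velocity, can be sketched as follows. The thresholds and the union-find clustering are illustrative assumptions, not the thesis implementation:

```python
import math

def group_features(points, velocities, max_dist=30.0, max_dvel=2.0):
    """Group tracked features: two features join the same cluster when
    their pixel distance AND their relative velocity are both small."""
    n = len(points)
    parent = list(range(n))          # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            dist = math.dist(points[i], points[j])
            dvel = math.dist(velocities[i], velocities[j])
            if dist < max_dist and dvel < max_dvel:
                parent[find(j)] = find(i)  # merge the two clusters

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# two features on one vehicle, one on another moving the opposite way
pts = [(10, 10), (25, 12), (200, 10)]
vel = [(5.0, 0.0), (5.2, 0.1), (-4.0, 0.0)]
print(group_features(pts, vel))  # -> [[0, 1], [2]]
```

Requiring agreement in velocity as well as position is what keeps two vehicles that pass close to each other from being merged into one group.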
4

Object Detection From Registered Visual And Infrared Sequences With The Help Of Active Contours

Yuruk, Huseyin 01 July 2008 (has links) (PDF)
Robust object detection from registered infrared and visible image streams is proposed for outdoor surveillance. The halo effect in infrared images is exploited to extract object boundaries by fitting active contour models (snakes) to foreground regions, where these regions are detected using information from the visual and infrared domains together. Synchronization and registration are performed for each infrared and visible image pair. Various background modeling methods, such as Single Gaussian, Non-Parametric, and Mixture of Gaussians models, are implemented. For Single Gaussian and Mixture of Gaussians background modeling, the infrared, color intensity, and color channel domains are modeled separately. First, background subtraction is applied in the infrared domain to find the initial foreground regions, and these are used as a mask for foreground detection in the visible domain. After removing shadows from the foreground regions in the visible domain, a pixelwise OR operation is applied between the foreground regions of the infrared and visible pair to form the final foreground mask. For Non-Parametric background modeling, all domains are used together to extract foreground regions. For all background modeling methods, the resulting mask is used to obtain the final foreground regions in the infrared image. Finally, a snake is applied to each connected component of the foreground regions in the infrared image for object detection. Two datasets are used to demonstrate our results for human detection, with comparisons against manually segmented human regions and against other results in the literature.
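The fusion step, a pixelwise OR of the infrared and visible foreground masks after shadow removal, reduces to a few lines. The binary masks below are toy stand-ins for real detections:

```python
def fuse_masks(mask_ir, mask_vis):
    """Pixelwise OR of two binary foreground masks of equal size:
    a pixel is foreground if either modality flags it."""
    return [
        [ir | vis for ir, vis in zip(row_ir, row_vis)]
        for row_ir, row_vis in zip(mask_ir, mask_vis)
    ]

# infrared catches the warm body; the visible mask adds a region
# the infrared detector missed (e.g. a cool carried object)
ir  = [[0, 1, 1],
       [0, 1, 1],
       [0, 0, 0]]
vis = [[0, 1, 0],
       [0, 1, 0],
       [0, 1, 0]]
print(fuse_masks(ir, vis))  # -> [[0, 1, 1], [0, 1, 1], [0, 1, 0]]
```

OR fusion favors recall over precision, which suits this pipeline: the snake fitted afterwards tightens the loose combined region back to the true object boundary.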
5

Hardware Implementation Of An Active Feature Tracker For Surveillance Applications

Solmaz, Berkan 01 July 2008 (has links) (PDF)
The integration of image sensors and high-performance processors into embedded systems has enabled the development of intelligent vision systems. In this thesis, we developed an active autonomous system for surveillance applications. The proposed system automatically detects a single moving object in the field of view and tracks it over a wide area by controlling the pan-tilt-zoom features of the camera. The system can also enter an alarm state to warn the user. The processing unit of the system is a Texas Instruments DM642 Evaluation Module, a low-cost, high-performance video & imaging development platform designed for developing and evaluating video-based applications.
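The pan-tilt control loop implied above can be sketched as a simple proportional controller that steers the camera toward the tracked object's centroid. The frame size and gains are illustrative assumptions, not values from the thesis:

```python
FRAME_W, FRAME_H = 720, 576   # assumed PAL-resolution frame
K_PAN, K_TILT = 0.05, 0.05    # illustrative proportional gains

def pan_tilt_command(centroid):
    """Map the tracked object's centroid to pan/tilt corrections
    (degrees) that re-center it in the field of view."""
    cx, cy = centroid
    err_x = cx - FRAME_W / 2   # +: object right of center -> pan right
    err_y = cy - FRAME_H / 2   # +: object below center    -> tilt down
    return K_PAN * err_x, K_TILT * err_y

pan, tilt = pan_tilt_command((460, 188))
print(round(pan, 2), round(tilt, 2))  # -> 5.0 -5.0
```

A real implementation would also rate-limit the commands and widen the zoom when the object nears the frame edge, but the proportional core is the same.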
6

Vision-assisted Object Tracking

Ozertem, Kemal Arda 01 February 2012 (has links) (PDF)
In this thesis, a video tracking method is proposed that is based on both computer vision and estimation theory. For this purpose, the overall study is partitioned into four related subproblems. The first part is moving object detection; for this, two different background modeling methods are developed. The second part is feature extraction and estimation of optical flow between video frames. As the feature extraction method, a well-known corner detector is employed, applied only to the moving regions in the scene. For the feature points, optical flow vectors are calculated using an improved version of the Kanade-Lucas tracker. The resulting optical flow field between consecutive frames is used directly in the proposed tracking method. In the third part, a particle filter structure is built to carry out the tracking; the particle filter is improved by adding optical flow data to the state equation as a correction term. In the last part of the study, the performance of the proposed approach is compared against standard implementations of particle filter based trackers. Based on the simulation results in this study, it can be argued that inserting vision-based optical flow estimation into the tracking formulation improves the overall performance.
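A minimal sketch of the idea, a particle filter whose state update is corrected by a measured optical-flow vector, might look as follows. The 1-D constant-velocity model, noise levels, and weighting are assumptions for illustration, not the thesis formulation:

```python
import math
import random

def predict(particles, flow, alpha=1.0, noise=1.0):
    """Propagate particles; the measured optical flow enters the
    state equation as a correction term on the motion model."""
    return [p + alpha * flow + random.gauss(0.0, noise) for p in particles]

def reweight(particles, observation, sigma=2.0):
    """Weight particles by closeness to the observed position."""
    w = [math.exp(-((p - observation) ** 2) / (2 * sigma ** 2))
         for p in particles]
    total = sum(w) or 1.0
    return [x / total for x in w]

def resample(particles, weights):
    """Multinomial resampling proportional to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

def estimate(particles, weights):
    """Weighted mean of the particle set."""
    return sum(p * w for p, w in zip(particles, weights))

random.seed(0)
particles = [random.gauss(0.0, 3.0) for _ in range(500)]
true_pos = 0.0
for _ in range(20):        # object drifts right 2 px per frame
    true_pos += 2.0
    flow = 2.0             # optical flow measured on the object
    particles = predict(particles, flow)
    weights = reweight(particles, true_pos)
    particles = resample(particles, weights)

print(abs(estimate(particles, [1 / 500] * 500) - true_pos) < 2.0)  # -> True
```

Feeding the flow into the prediction step concentrates particles where the object actually moved, so far fewer particles are wasted than with a blind random-walk motion model.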
8

Sistema automático para obtenção de parâmetros do tráfego veicular a partir de imagens de vídeo usando OpenCV / Automatic system to obtain traffic parameters from video images based on OpenCV

André Luiz Barbosa Nunes da Cunha 08 November 2013 (has links)
This research presents an automatic system to collect vehicular traffic data by post-processing video. The macroscopic and microscopic traffic parameters are derived from a space-time diagram, which is obtained by processing the traffic images. The system was developed using Computer Vision concepts, the C++ language, and the OpenCV library. Vehicle detection was divided into two steps: background modeling and vehicle segmentation. A background image, free of objects, can be estimated from the video sequence through several statistical models available in the literature. An evaluation of six statistical models indicated the Scoreboard model (a combination of mean and mode) as the best method for maintaining an updated background, with a processing time of 18 ms/frame and a 95.7% accuracy rate. The second step investigated six segmentation methods, from background subtraction to texture-based segmentation. Among the texture descriptors, the LFP descriptor is presented, which generalizes the others. Analyzing the performance of these methods on videos collected in the field, the traditional background subtraction method proved the most suitable, with the best processing time (34.4 ms/frame) and the best overall accuracy rate (95.1% on average). Once the segmentation method was chosen, a method was developed to determine vehicle trajectories from the space-time diagram. Comparing the traffic parameters obtained by the proposed system with field measurements, the speed estimates achieved a 92.7% accuracy rate against radar-measured speeds; on the other hand, the flow rate estimates were affected by failures in identifying vehicle trajectories, producing values sometimes above, sometimes below those obtained by manual counts.
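The abstract names the Scoreboard model as a combination of mean and mode but does not give the exact rule; the sketch below is one plausible per-pixel combination, labeled as an assumption rather than the thesis method:

```python
from statistics import mean, mode

def scoreboard_background(history, tol=10):
    """Illustrative per-pixel background estimate combining mean and
    mode: trust the mode when the pixel history clusters around it,
    otherwise fall back to the mean. (This precise rule is an
    assumption, not taken from the thesis.)"""
    m = mode(history)
    close = [v for v in history if abs(v - m) <= tol]
    if len(close) >= len(history) // 2:
        return m                 # stable pixel: the mode is the background
    return round(mean(history))  # unstable pixel: smooth with the mean

# pixel mostly background (value ~100) with a passing vehicle (value ~30)
history = [100, 101, 100, 30, 32, 100, 99, 100]
print(scoreboard_background(history))  # -> 100
```

The appeal of mode-based estimates for traffic scenes is that the road surface dominates each pixel's history, so passing vehicles barely perturb the estimate.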
10

A Comparative Evaluation Of Foreground / Background Segmentation Algorithms

Pakyurek, Muhammet 01 September 2012 (has links) (PDF)
M.Sc. thesis, Department of Electrical and Electronics Engineering. Supervisor: Prof. Dr. Gözde Bozdagi Akar. September 2012, 77 pages. Foreground/background segmentation is a process that separates stationary objects from moving objects in a scene, and it plays a significant role in computer vision applications. In this study, several foreground/background segmentation algorithms are analyzed by changing their critical parameters individually to observe the sensitivity of the algorithms to common difficulties in background segmentation applications: illumination level, camera view angle, noise level, and object range. The study comprises two main parts. In the first part, some well-known algorithms based on pixel difference, probability, and codebooks are explained and implemented, with implementation details provided. The second part evaluates the performance of the algorithms by comparing the foreground/background regions indicated by each algorithm against ground truth. To this end, metrics including precision, recall, and F-measure are defined first. Then, dataset videos covering different scenarios are run through each algorithm to compare their performance. Finally, the performance of each algorithm, along with the optimal values of its parameters, is reported based on F-measure.
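The evaluation metrics named above follow directly from the per-pixel contingency counts; a minimal sketch with toy counts:

```python
def segmentation_metrics(tp, fp, fn):
    """Precision, recall and F-measure from per-pixel counts:
    tp = foreground pixels correctly detected,
    fp = background pixels wrongly marked foreground,
    fn = foreground pixels missed."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# toy counts for one frame
p, r, f = segmentation_metrics(tp=900, fp=100, fn=300)
print(round(p, 2), round(r, 2), round(f, 2))  # -> 0.9 0.75 0.82
```

Because F-measure is the harmonic mean of precision and recall, ranking algorithms by it penalizes the common failure modes symmetrically: inflating the foreground (low precision) and missing objects (low recall).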
