  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Pattern Acquisition Methods for Information Extraction Systems

Marcińczuk, Michał January 2007 (has links)
This master's thesis deals with Event Recognition in the reports of Polish stockholders. Event Recognition is one of the Information Extraction tasks. This thesis provides a comparison of two approaches to Event Recognition: manual and automatic. The manual approach uses regular expressions, which serve as a baseline for the automatic approach. In the automatic approach, three Machine Learning methods were applied. In the initial experiment, the Decision Trees, naive Bayes and Memory Based Learning methods are compared. A modification of the standard Memory Based Learning method is presented, whose goal is to create a classifier that uses only positive examples in the classification task. The performance of the modified Memory Based Learning method is presented and compared to the baseline and to the other Machine Learning methods. The initial experiment uses one type of annotation, the meeting date. The final experiment is conducted using three types of annotations: the meeting time, the meeting date and the meeting place. The experiments show that the classification can be performed using only one class of instances with the same level of performance.
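As a rough, hypothetical sketch of the positive-examples-only idea described in this abstract (not the thesis' actual implementation): a memory-based learner can store only the positive training instances and label a test instance positive when its distance to the nearest stored example falls below a threshold. The class name, threshold, and feature vectors below are illustrative assumptions.

```python
# Sketch of a memory-based (nearest-neighbor style) classifier that stores
# only positive examples; a test instance is labeled positive when it lies
# close enough to some stored positive instance.
import math

class PositiveOnlyMBL:
    def __init__(self, threshold):
        self.threshold = threshold
        self.examples = []            # stored positive feature vectors

    def fit(self, positives):
        self.examples = [list(x) for x in positives]

    def _dist(self, a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    def predict(self, x):
        # positive iff the nearest stored positive example is within the threshold
        nearest = min(self._dist(x, e) for e in self.examples)
        return nearest <= self.threshold

clf = PositiveOnlyMBL(threshold=1.0)
clf.fit([[0.0, 0.0], [0.2, 0.1]])
print(clf.predict([0.1, 0.0]))   # True: near a stored positive
print(clf.predict([5.0, 5.0]))   # False: far from all positives
```

The threshold plays the role of the negative class: instead of learning a boundary between positives and negatives, the classifier rejects anything far from the stored positive region.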
12

Reconhecimento de atividades suspeitas em ambiente externo via análise de vídeo infravermelho / Recognition of suspicious activities in an outdoor environment via infrared video analysis

Fernandes, Henrique Coelho 26 October 2011 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Surveillance has become, in recent years, ubiquitous in our society. Every day the presence of intelligent surveillance systems in our everyday life grows more noticeable. This is due both to the technological advances achieved in recent decades (increases in storage capacity and processing speed, miniaturization of devices such as biometric detectors and video cameras) and to the constant feeling of insecurity experienced in several countries. Following the dark days of 9/11, security and surveillance became paramount. This work studies techniques for the development of a surveillance system for an outdoor parking lot based on a stationary camera. Since in an outdoor parking lot it is very important that surveillance is carried out both day and night, in this work we use an infrared camera to record images. An infrared camera allows objects of interest in the scene to be seen even at night. The images used for the experiments in this work were recorded by the student on the Laval University campus (Canada) during an internship he held in the "Canada Research Chair in Multipolar Infrared Vision". A surveillance system based on video cameras is usually composed of three parts: (i) motion detection, (ii) tracking and (iii) event management. In this work, we use a dynamic background subtraction technique to detect motion (motion segmentation). This technique adapts to abrupt changes in the scene's illumination, making it robust to such changes. Besides, we use flow analysis to restrict the segmentation process to regions of the scene where there is motion. The object tracking technique used is based on a two-phase cycle: prediction and correction. The events of interest which occur in the monitored area are modeled explicitly and then recognized and interpreted. The main goal of this project is to recognize suspicious events. Experimental results show that such techniques are suitable for a surveillance system for an outdoor parking lot based on an infrared stationary camera.
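The dynamic background-subtraction step described in this abstract can be illustrated with a minimal sketch: a running-average background model slowly absorbs gradual illumination changes, while pixels that deviate from it beyond a threshold are flagged as motion. The function name, blending factor, and threshold are illustrative assumptions, not the system's actual implementation.

```python
# Adaptive background subtraction on toy 1-D grayscale "frames": the model
# is blended toward each new frame, and large per-pixel deviations from the
# current model are reported as motion.
def update_and_segment(background, frame, alpha=0.05, threshold=30):
    """Return (new_background, motion_mask) for 1-D grayscale images."""
    mask = []
    new_bg = []
    for b, f in zip(background, frame):
        mask.append(abs(f - b) > threshold)        # large deviation -> motion
        new_bg.append((1 - alpha) * b + alpha * f) # slowly absorb illumination drift
    return new_bg, mask

bg = [100, 100, 100]
frame = [102, 180, 99]           # middle pixel: a moving object
bg, mask = update_and_segment(bg, frame)
print(mask)                      # [False, True, False]
```

A real system would operate on 2-D images and typically slow the blending where motion is detected, so moving objects are not absorbed into the background model.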
13

Recognition Of Complex Events In Open-source Web-scale Videos: Features, Intermediate Representations And Their Temporal Interactions

Bhattacharya, Subhabrata 01 January 2013 (has links)
Recognition of complex events in consumer-uploaded Internet videos, captured under real-world settings, has emerged as a challenging area of research across both the computer vision and multimedia communities. In this dissertation, we present a systematic decomposition of complex events into hierarchical components and make an in-depth analysis of how existing research is being used to cater to various levels of this hierarchy, and identify three key stages where we make novel contributions, keeping complex events in focus. These are listed as follows: (a) Extraction of novel semi-global features – firstly, we introduce a Lie-algebra based representation of the dominant camera motion present while capturing videos and show how this can be used as a complementary feature for video analysis. Secondly, we propose compact clip-level descriptors of a video based on the covariance of appearance and motion features, which we further use in a sparse coding framework to recognize realistic actions and gestures. (b) Construction of intermediate representations – we propose an efficient probabilistic representation from low-level features computed from videos, based on Maximum Likelihood Estimates, which demonstrates state-of-the-art performance in large-scale visual concept detection, and finally, (c) Modeling temporal interactions between intermediate concepts – using block Hankel matrices and harmonic analysis of slowly evolving Linear Dynamical Systems, we propose two new discriminative feature spaces for complex event recognition and demonstrate significantly improved recognition rates over previously proposed approaches.
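Contribution (a) mentions clip-level descriptors based on the covariance of appearance and motion features. As a hypothetical illustration of the underlying idea (not the dissertation's actual descriptor), the covariance matrix of a set of per-point feature vectors gives a fixed-size summary of a clip regardless of its length:

```python
# Covariance descriptor: summarize a variable-length set of feature vectors
# (e.g. per-pixel appearance/motion features) by their d-by-d covariance matrix.
def covariance_descriptor(features):
    """features: list of equal-length feature vectors; returns the sample covariance."""
    n = len(features)
    d = len(features[0])
    mean = [sum(f[j] for f in features) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        centered = [f[j] - mean[j] for j in range(d)]
        for i in range(d):
            for j in range(d):
                cov[i][j] += centered[i] * centered[j] / (n - 1)
    return cov

feats = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]   # toy, perfectly correlated features
C = covariance_descriptor(feats)
print(C[0][0], C[0][1])   # 1.0 2.0
```

Because the descriptor size depends only on the feature dimension d, clips of different lengths become directly comparable, which is what makes covariance descriptors attractive for sparse coding and classification.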
14

Arcabouço para análise de eventos em vídeos. / Framework for analyzing events in videos.

SILVA, Adson Diego Dionisio da. 07 May 2018 (has links)
Automatic recognition of relevant events in videos involving sets of actions or interactions between objects can improve surveillance systems, smart-city applications, the monitoring of people with physical or mental disabilities, among others. However, designing a framework that can be adapted to several situations without requiring an expert in the technologies involved remains a challenge. In this context, this work is based on the creation of a rule-based generic framework for event detection in video. To create the rules, users form logical expressions using first-order logic (FOL) and relate the terms with Allen's interval algebra, adding a temporal context to the rules. Since it is a framework, it is extensible and may receive additional modules for performing new detections and inferences. Experimental evaluation was performed using test videos available on YouTube, involving a traffic scenario with red-light-running events, and videos from a live camera on the Camerite website, containing car-parking events. The focus of the work was not to create object detectors (e.g. for cars or people) better than those existing in the state of the art, but to propose and develop a generic and reusable framework that integrates different computer vision techniques. The accuracy in the detection of the events was within the range of 83.82% to 90.08% with 95% confidence. Maximum accuracy (100%) was obtained when the object detectors were replaced by manually assigned labels, which indicated the effectiveness of the inference engine developed for the framework.
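The rule mechanism described in this abstract, first-order-style rules over Allen's interval relations, can be sketched as follows. The relation names follow Allen's algebra, but the rule, the event names, and the intervals are hypothetical, and several of the thirteen relations are collapsed for brevity:

```python
# Classify the temporal relation between two intervals a=(start, end) and
# b=(start, end), then use "during" inside a toy detection rule.
def allen_relation(a, b):
    """Return a (simplified) Allen relation between closed intervals a and b."""
    (as_, ae), (bs, be) = a, b
    if ae < bs: return "before"
    if be < as_: return "after"
    if ae == bs: return "meets"
    if as_ == bs and ae == be: return "equal"
    if as_ > bs and ae < be: return "during"
    if as_ < bs and ae > be: return "contains"
    return "overlaps"   # remaining relations collapsed for brevity

red_light = (0, 100)        # interval while the light is red
car_crossing = (40, 55)     # interval while a car crosses the stop line
# Rule sketch: RanRedLight(c) <- Crossing(c, I1) and RedLight(I2) and during(I1, I2)
print(allen_relation(car_crossing, red_light))   # during
```

Encoding the temporal context as an interval relation, rather than as raw timestamps, is what lets one generic rule engine cover events as different as red-light running and car parking.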
15

Robust Subspace Estimation Using Low-rank Optimization: Theory and Applications in Scene Reconstruction, Video Denoising, and Activity Recognition

Oreifej, Omar 01 January 2013 (has links)
In this dissertation, we discuss the problem of robust linear subspace estimation using low-rank optimization and propose three formulations of it. We demonstrate how these formulations can be used to solve fundamental computer vision problems, and provide superior performance in terms of accuracy and running time. Consider a set of observations extracted from images (such as pixel gray values, local features, trajectories, etc.). If the assumption that these observations are drawn from a linear subspace (or can be linearly approximated) is valid, then the goal is to represent each observation as a linear combination of a compact basis, while maintaining a minimal reconstruction error. One of the earliest, yet most popular, approaches to achieve that is Principal Component Analysis (PCA). However, PCA can only handle Gaussian noise, and thus suffers when the observations are contaminated with gross and sparse outliers. To this end, in this dissertation, we focus on estimating the subspace robustly using low-rank optimization, where the sparse outliers are detected and separated through the ℓ1 norm. The robust estimation has a two-fold advantage: First, the obtained basis better represents the actual subspace because it does not include contributions from the outliers. Second, the detected outliers are often of a specific interest in many applications, as we will show throughout this thesis. We demonstrate four different formulations and applications for low-rank optimization. First, we consider the problem of reconstructing an underwater sequence by removing the turbulence caused by the water waves. The main drawback of most previous attempts to tackle this problem is that they heavily depend on modelling the waves, which in fact is ill-posed since the actual behavior of the waves along with the imaging process is complicated and includes several noise components; therefore, their results are not satisfactory.
In contrast, we propose a novel approach which outperforms the state-of-the-art. The intuition behind our method is that in a sequence where the water is static, the frames would be linearly correlated. Therefore, in the presence of water waves, we may consider the frames as noisy observations drawn from the subspace of linearly correlated frames. However, the noise introduced by the water waves is not sparse, and thus cannot directly be detected using low-rank optimization. Therefore, we propose a data-driven two-stage approach, where the first stage “sparsifies” the noise, and the second stage detects it. The first stage leverages the temporal mean of the sequence to overcome the structured turbulence of the waves through an iterative registration algorithm. The result of the first stage is a high-quality mean and a better-structured sequence; however, the sequence still contains unstructured sparse noise. Thus, we employ a second stage in which we extract the sparse errors from the sequence through rank minimization. Our method converges faster, and drastically outperforms the state of the art on all testing sequences. Secondly, we consider a closely related situation where an independently moving object is also present in the turbulent video. More precisely, we consider video sequences acquired in desert battlefields, where atmospheric turbulence is typically present, in addition to independently moving targets. Typical approaches for turbulence mitigation follow averaging or de-warping techniques. Although these methods can reduce the turbulence, they distort the independently moving objects, which can often be of great interest. Therefore, we address the problem of simultaneous turbulence mitigation and moving object detection. We propose a novel three-term low-rank matrix decomposition approach in which we decompose the turbulence sequence into three components: the background, the turbulence, and the object.
We simplify this extremely difficult problem into a minimization of the nuclear norm, the Frobenius norm, and the ℓ1 norm. Our method is based on two observations: First, the turbulence causes dense and Gaussian noise, and therefore can be captured by the Frobenius norm, while the moving objects are sparse and thus can be captured by the ℓ1 norm. Second, since the object's motion is linear and intrinsically different from the Gaussian-like turbulence, a Gaussian-based turbulence model can be employed to enforce an additional constraint on the search space of the minimization. We demonstrate the robustness of our approach on challenging sequences which are significantly distorted with atmospheric turbulence and include extremely tiny moving objects. In addition to robustly detecting the subspace of the frames of a sequence, we consider using trajectories as observations in the low-rank optimization framework. In particular, in videos acquired by moving cameras, we track all the pixels in the video and use that to estimate the camera motion subspace. This is particularly useful in activity recognition, which typically requires standard preprocessing steps such as motion compensation, moving object detection, and object tracking. The errors from the motion compensation step propagate to the object detection stage, resulting in missed detections, which further complicates the tracking stage, resulting in cluttered and incorrect tracks. In contrast, we propose a novel approach which does not follow the standard steps, and accordingly avoids the aforementioned difficulties. Our approach is based on Lagrangian particle trajectories, which are a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions of a scene. This is done in frames of unaligned video, and no object detection is required. In order to handle the moving camera, we decompose the trajectories into their camera-induced and object-induced components.
Having obtained the relevant object motion trajectories, we compute a compact set of chaotic invariant features, which captures the characteristics of the trajectories. Consequently, an SVM is employed to learn and recognize the human actions using the computed motion features. We performed intensive experiments on multiple benchmark datasets, and obtained promising results. Finally, we consider a more challenging problem referred to as complex event recognition, where the activities of interest are complex and unconstrained. This problem typically poses significant challenges because it involves videos of highly variable content, noise, length, frame size, etc. In this extremely challenging task, high-level features have recently shown a promising direction, as in [53, 129], where core low-level events referred to as concepts are annotated and modelled using a portion of the training data, and then each event is described using its content of these concepts. However, because of the complex nature of the videos, both the concept models and the corresponding high-level features are significantly noisy. In order to address this problem, we propose a novel low-rank formulation, which combines the precisely annotated videos used to train the concepts with the rich high-level features. Our approach finds a new representation for each event, which is not only low-rank, but also constrained to adhere to the concept annotation, thus suppressing the noise and maintaining a consistent occurrence of the concepts in each event. Extensive experiments on the large-scale real-world TRECVID Multimedia Event Detection 2011 and 2012 datasets demonstrate that our approach consistently improves the discriminativity of the high-level features by a significant margin.
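The low-rank-plus-sparse separation that recurs throughout this abstract can be illustrated with a schematic Robust PCA loop: singular-value thresholding recovers the low-rank part, while entrywise soft thresholding (the proximal step of the ℓ1 norm) isolates sparse outliers. This is a generic textbook-style sketch under assumed parameters, not any of the dissertation's actual formulations.

```python
# Decompose M into low-rank L plus sparse S with a basic augmented-Lagrangian
# iteration (alternating singular-value and soft thresholding).
import numpy as np

def rpca(M, lam=None, n_iter=100):
    lam = lam if lam is not None else 1.0 / np.sqrt(max(M.shape))
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    mu = 1.0
    for _ in range(n_iter):
        # low-rank update: singular-value thresholding of (M - S + Y/mu)
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: entrywise soft thresholding (the l1 proximal step)
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual update enforces M = L + S at convergence
        Y += mu * (M - L - S)
        mu = min(mu * 1.05, 1e6)
    return L, S

background = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 6.0))  # rank-1 "frames"
M = background.copy()
M[2, 3] += 10.0                 # one gross sparse outlier
L, S = rpca(M)
print(abs(S[2, 3]) > 8)         # the outlier lands in the sparse term
```

In the dissertation's applications the rows or columns of M would be video frames or trajectories, so L captures the correlated background or camera motion while S captures turbulence residue or moving objects.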
