121 |
Facet model optic flow and rigid body motion / Lee, Jongsoo / January 1985
The dissertation uses the facet model technique to compute the optic flow field directly from a time sequence of image frames. Two techniques, one iterative and one non-iterative, determine 3D motion parameters and surface structure (relative depth) from the computed optic flow field. Finally, we discuss a technique for image segmentation based on multi-object motion, using both the optic flow and its time derivative.
The facet model technique computes optic flow locally by solving over-constrained linear equations obtained from a fit over 3D (row, column, and time) neighborhoods in an image sequence. The iterative technique computes motion parameters and surface structure by using each to update the other; it essentially applies the least-squares method to the relationship between the optic flow field and rigid body motion. The non-iterative technique computes the motion parameters by solving a linear system derived from that relationship, and then computes the relative depth of each pixel from the computed motion parameters. The technique also estimates the errors in both the computed motion parameters and the relative depth when the optic flow is perturbed. / Ph.D.
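The local estimation step described above can be illustrated with a short sketch. This is not the dissertation's cubic facet fit; it is a generic local least-squares flow estimate in which each pixel of a small neighborhood contributes one brightness-constancy constraint, and the resulting over-constrained linear system is solved for a single flow vector. All names and the synthetic data are assumptions for illustration.

```python
import numpy as np

def local_flow(Ix, Iy, It):
    """Estimate one (u, v) flow vector from gradient samples taken over a
    small (row, column, time) neighborhood.

    Each pixel contributes one linear constraint Ix*u + Iy*v = -It, so the
    neighborhood yields an over-constrained system solved by least squares.
    """
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: temporal gradients generated from a known flow (1.5, -0.5)
rng = np.random.default_rng(0)
Ix = rng.normal(size=(5, 5))
Iy = rng.normal(size=(5, 5))
It = -(Ix * 1.5 + Iy * -0.5)   # exact brightness-constancy residuals
u, v = local_flow(Ix, Iy, It)
```

With noise-free constraints the least-squares solution recovers the generating flow exactly; real image gradients would of course be noisy, which is the motivation for fitting over a neighborhood rather than a single pixel.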
|
122 |
Unsupervised self-adaptive abnormal behavior detection for real-time surveillance / January 2009
Yu, Tsz Ho. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 95-100). / Abstract also in Chinese.
Contents:
1 Introduction
 1.1 Surveillance and Computer Vision
 1.2 The Need for Abnormal Behavior Detection
  1.2.1 The Motivation
  1.2.2 Choosing the Right Surveillance Target
 1.3 Abnormal Behavior Detection: An Overview
  1.3.1 Challenges in Detecting Abnormal Behaviors
  1.3.2 Limitations of Existing Approaches
  1.3.3 New Design Concepts
  1.3.4 Requirements for Abnormal Behavior Detection
 1.4 Contributions
  1.4.1 An Unsupervised Experience-based Approach for Abnormal Behavior Detection
  1.4.2 Motion Histogram Transform: A Novel Feature Descriptor
  1.4.3 Real-time Algorithm for Abnormal Behavior Detection
 1.5 Thesis Organization
2 Literature Review
 2.1 From Segmentation to Visual Tracking
  2.1.1 Environment Modeling and Segmentation
  2.1.2 Spatial-temporal Feature Extraction
 2.2 Detecting Irregularities in Videos
  2.2.1 Model-based Methods
  2.2.2 Non-model-based Methods
3 Design Framework
 3.1 Dynamic Scene and Behavior Model
  3.1.1 Image Sequences and Video
  3.1.2 Motions and Behaviors in Video
  3.1.3 Discovering Abnormal Behavior
  3.1.4 Problem Definition
  3.1.5 System Assumptions
 3.2 Methodology
  3.2.1 Potential Improvements
  3.2.2 The Design Framework
4 Implementation
 4.1 Preprocessing
  4.1.1 Data Input
  4.1.2 Motion Detection
  4.1.3 The Gaussian Mixture Background Model
 4.2 Feature Extraction
  4.2.1 Optical Flow Estimation
  4.2.2 Motion Histogram Transforms
 4.3 Feedback Learning
  4.3.1 The Observation Matrix
  4.3.2 Eigenspace Transformation
  4.3.3 Self-adaptive Update Scheme
  4.3.4 Summary
 4.4 Classification
  4.4.1 Detecting Abnormal Behavior via Statistical Saliencies
  4.4.2 Determining Feedback
 4.5 Localization and Output
 4.6 Conclusion
5 Experiments
 5.1 Experiment Setup
 5.2 A Summary of Experiments
 5.3 Experiment Results: Part 1
 5.4 Experiment Results: Part 2
 5.5 Experiment Results: Part 3
 5.6 Experiment Results: Part 4
 5.7 Analysis and Conclusion
6 Conclusions
 6.1 Application Extensions
 6.2 Limitations
  6.2.1 Surveillance Range
  6.2.2 Preparation Time for the System
  6.2.3 Calibration of Background Model
  6.2.4 Instability of Optical Flow Feature Extraction
  6.2.5 Lack of 3D Information
  6.2.6 Dealing with Complex Behavior Patterns
  6.2.7 Potential Improvements
  6.2.8 New Method for Classification
  6.2.9 Introduction of Dynamic Texture as a Feature
  6.2.10 Using a Multiple-camera System
 6.3 Summary
Bibliography
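The background-modeling step listed in Chapter 4.1.3 can be sketched in simplified form. The code below is a per-pixel running Gaussian, a single-mode stand-in for the full Gaussian mixture model, and all parameter values (learning rate, threshold, initial variance) are illustrative assumptions: pixels that deviate from the learned background statistics are flagged as foreground, and the statistics are updated only where the pixel still matches the background.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running-Gaussian background model (a simplified,
    single-mode stand-in for a Gaussian mixture background model)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 15.0 ** 2)  # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var       # foreground where deviation is large
        # Update background statistics only at background pixels.
        a = np.where(fg, 0.0, self.alpha)
        self.mean += a * (frame - self.mean)
        self.var += a * (d2 - self.var)
        return fg

# Toy usage: a static 100-intensity background, one pixel jumps to 200
bg = RunningGaussianBackground(np.full((4, 4), 100.0))
frame = np.full((4, 4), 100.0)
frame[1, 1] = 200.0
mask = bg.apply(frame)
```

A real surveillance pipeline would keep several Gaussians per pixel and handle shadows and gradual illumination change; this sketch only shows the thresholding-and-update loop that such models share.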
|
123 |
A hierarchical graphical model for recognizing human actions and interactions in video / Park, Sangho / 28 August 2008
Not available.
|
124 |
Organization of the cerebellum: correlating biochemistry, physiology and anatomy in the ventral uvula of pigeons / Graham, David / Unknown Date
No description available.
|
125 |
Binocular vision and three-dimensional motion perception: the use of changing disparity and inter-ocular velocity differences / Grafton, Catherine E. / January 2011
This thesis investigates the use of binocular information for motion-in-depth (MID) perception. There are at least two types of binocular information available to the visual system from which to derive a perception of MID: changing disparity (CD) and inter-ocular velocity differences (IOVD). In the following experiments, we manipulate the availability of CD and IOVD information in order to assess the relative influence of each on MID judgements. In the first experiment, we assessed the relative effectiveness of CD and IOVD information for MID detection, and whether the two types of binocular information are processed by separate mechanisms with differing characteristics. Our results suggest that both CD and IOVD information can be utilised for MID detection, yet the relative dependence on each varies between observers. We then explored the contribution of CD and IOVD information to time-to-contact (TTC) perception, whereby an observer judges the time at which an approaching stimulus will contact them. We confirmed that adding congruent binocular information to looming stimuli can influence TTC judgements, but that binocular information indicating no motion has no influence. Further, we found that observers could utilise both CD and IOVD for TTC judgements, although once again individual receptiveness to CD and/or IOVD information varied. Thus, we demonstrate that the human visual system is able to process both CD and IOVD information, but that the influence of each cue on an individual's perception is independent of the other.
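The relationship between the two cues can be sketched numerically. For a point approaching along the midline, the time derivative of the disparity signal (CD) and the difference of the two monocular image velocities (IOVD) are, by construction, the same quantity; this is why cue-isolating stimuli are needed experimentally to separate them. All geometry values below are illustrative assumptions.

```python
import numpy as np

# Illustrative viewing geometry (all values are assumptions):
f, I = 0.017, 0.065               # focal length and inter-ocular separation (m)
t = np.linspace(0.0, 1.0, 101)    # one second of approach
Z = 2.0 - 1.0 * t                 # object moves in depth from 2 m to 1 m
xL = f * (I / 2) / Z              # left-eye image position of a midline point
xR = f * (-I / 2) / Z             # right-eye image position

disparity = xL - xR               # relative disparity = f * I / Z
cd = np.gradient(disparity, t)    # changing-disparity (CD) signal
iovd = np.gradient(xL, t) - np.gradient(xR, t)  # inter-ocular velocity difference
```

Numerically `cd` and `iovd` are identical here; the thesis's point is that, despite carrying the same nominal MID signal, the two cues are weighted differently by different observers, implying separate underlying mechanisms.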
|
126 |
Bayesian 3D multiple people tracking using multiple indoor cameras and microphones / Lee, Yeongseon / January 2009
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Russell M. Mersereau; Committee Member: Biing Hwang (Fred) Juang; Committee Member: Christopher E. Heil; Committee Member: Georgia Vachtsevanos; Committee Member: James H. McClellan. Part of the SMARTech Electronic Thesis and Dissertation Collection.
|
127 |
Motion segmentation by optical flow [Segmentação de movimento por fluxo ótico] / Kuiaski, José Rosa / 24 August 2012
Motion perception is an essential feature for the survival of several species. In nature, it is through motion that a prey perceives the arrival of a predator and decides in which direction to flee, and that a predator detects the presence of prey and decides where to attack. The human visual system is more sensitive to motion than to static imagery, and it is able to separate motion information due to egomotion from that due to animated objects in the environment. The Ecological Theory of Gibson (1979) provides a basis for understanding how this process of perception occurs, and leads to the concept of the vector field of Optical Flow, through which motion is represented computationally. The main objective of this work is to reproduce this behaviour computationally, for possible applications in autonomous navigation and in video processing with unknown self-motion. For this, we use Optical Flow estimation techniques from the literature, such as those proposed by Lucas and Kanade (1981) and Farneback (1994). First, we assess the possibility of using a statistical technique for blind source separation, the so-called Independent Component Analysis, based on the work of Bell and Sejnowski (1997), which shows that this technique, when applied to imagery, yields edge filters. Then, we assess the use of the Focus of Expansion for translational motion. Experimental results show that the second approach, using the Focus of Expansion, is more viable than Independent Component Analysis.
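The Focus of Expansion approach can be sketched as a least-squares problem. This is an illustrative reconstruction, not the author's implementation: under pure translation every flow vector points radially away from the FOE, so each vector contributes one linear constraint on the FOE's position. The synthetic flow field and the FOE location are assumptions.

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares Focus of Expansion for a purely translational flow
    field: each flow vector v at image point p satisfies (p - foe) x v = 0,
    giving one linear equation per vector in the unknown foe = (fx, fy)."""
    vx, vy = flows[:, 0], flows[:, 1]
    px, py = points[:, 0], points[:, 1]
    A = np.column_stack([vy, -vx])     # vy*fx - vx*fy = px*vy - py*vx
    b = px * vy - py * vx
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic expanding field radiating from an assumed FOE at (3, -2)
rng = np.random.default_rng(1)
pts = rng.uniform(-10, 10, size=(50, 2))
true_foe = np.array([3.0, -2.0])
flows = 0.2 * (pts - true_foe)         # purely radial outflow
est = focus_of_expansion(pts, flows)
```

With exact radial flow the estimate recovers the FOE; with real flow estimates the same system is solved in the presence of noise, which is where the least-squares formulation earns its keep.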
|
129 |
Bayesian 3D multiple people tracking using multiple indoor cameras and microphones / Lee, Yeongseon / 13 May 2009
This thesis presents Bayesian joint audio-visual tracking of the 3D locations of multiple people, and of the current speaker, in a real conference environment. To achieve this objective, it draws on several research areas: acoustic-feature detection, visual-feature detection, non-linear Bayesian filtering, data association, and sensor fusion. For acoustic-feature detection, time-delay-of-arrival (TDOA) estimation is used to detect multiple sources, and localization performance using TDOAs is analyzed for different microphone configurations. For visual-feature detection, Viola-Jones face detection is used to initialize the locations of unknown multiple objects; corner features, derived from the face-detection results, are then used for robust motion detection. Simple point-to-line correspondences between multiple cameras, obtained from fundamental matrices, determine which features are more reliable. For data association and sensor fusion, a Monte Carlo JPDAF and a data association with IPPF (DA-IPPF) are implemented within a particle-filtering framework. Three tracking scenarios, acoustic-only, visual-only, and joint acoustic-visual, are demonstrated using the proposed algorithms. Finally, a real-time implementation of the joint acoustic-visual tracking system using a PC, four cameras, and six microphones is described in two parts: system implementation and real-time processing.
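The TDOA step can be illustrated with a minimal sketch. Assumed details: a white-noise source and an integer-sample delay; a real system would typically apply GCC-PHAT-style spectral weighting before locating the peak. The delay between two microphone signals is read off the peak of their cross-correlation.

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the time delay (in seconds) of sig_b relative to sig_a
    from the peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

# Simulated pair of microphone signals: mic2 hears the source 37 samples late
fs = 8000
rng = np.random.default_rng(2)
src = rng.normal(size=2000)
delay_samples = 37                  # assumed inter-microphone delay
mic1 = src
mic2 = np.concatenate([np.zeros(delay_samples), src[:-delay_samples]])
tdoa = estimate_tdoa(mic1, mic2, fs)
```

A set of such pairwise TDOAs across the microphone array is what the localization stage then inverts to obtain a 3D source position.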
|