  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Unsupervised self-adaptive abnormal behavior detection for real-time surveillance. / 實時無監督自適應異常行為檢測系統 / Shi shi wu jian du zi shi ying yi chang xing wei jian ce xi tong

January 2009 (has links)
Yu, Tsz Ho. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 95-100). / Abstract also in Chinese.

Contents:
1 Introduction (p.2)
  1.1 Surveillance and Computer Vision (p.3)
  1.2 The Need for Abnormal Behavior Detection (p.3)
    1.2.1 The Motivation (p.3)
    1.2.2 Choosing the Right Surveillance Target (p.5)
  1.3 Abnormal Behavior Detection: An Overview (p.6)
    1.3.1 Challenges in Detecting Abnormal Behaviors (p.6)
    1.3.2 Limitations of Existing Approaches (p.8)
    1.3.3 New Design Concepts (p.9)
    1.3.4 Requirements for Abnormal Behavior Detection (p.10)
  1.4 Contributions (p.11)
    1.4.1 An Unsupervised Experience-based Approach for Abnormal Behavior Detection (p.11)
    1.4.2 Motion Histogram Transform: A Novel Feature Descriptor (p.12)
    1.4.3 Real-time Algorithm for Abnormal Behavior Detection (p.12)
  1.5 Thesis Organization (p.13)
2 Literature Review (p.14)
  2.1 From Segmentation to Visual Tracking (p.14)
    2.1.1 Environment Modeling and Segmentation (p.15)
    2.1.2 Spatial-temporal Feature Extraction (p.18)
  2.2 Detecting Irregularities in Videos (p.21)
    2.2.1 Model-based Methods (p.22)
    2.2.2 Non-model-based Methods (p.26)
3 Design Framework (p.29)
  3.1 Dynamic Scene and Behavior Model (p.30)
    3.1.1 Image Sequences and Video (p.30)
    3.1.2 Motions and Behaviors in Video (p.31)
    3.1.3 Discovering Abnormal Behavior (p.32)
    3.1.4 Problem Definition (p.33)
    3.1.5 System Assumptions (p.34)
  3.2 Methodology (p.35)
    3.2.1 Potential Improvements (p.35)
    3.2.2 The Design Framework (p.36)
4 Implementation (p.40)
  4.1 Preprocessing (p.40)
    4.1.1 Data Input (p.41)
    4.1.2 Motion Detection (p.41)
    4.1.3 The Gaussian Mixture Background Model (p.43)
  4.2 Feature Extraction (p.46)
    4.2.1 Optical Flow Estimation (p.47)
    4.2.2 Motion Histogram Transforms (p.53)
  4.3 Feedback Learning (p.56)
    4.3.1 The Observation Matrix (p.58)
    4.3.2 Eigenspace Transformation (p.58)
    4.3.3 Self-adaptive Update Scheme (p.61)
    4.3.4 Summary (p.62)
  4.4 Classification (p.63)
    4.4.1 Detecting Abnormal Behavior via Statistical Saliencies (p.64)
    4.4.2 Determining Feedback (p.65)
  4.5 Localization and Output (p.66)
  4.6 Conclusion (p.69)
5 Experiments (p.71)
  5.1 Experiment Setup (p.72)
  5.2 A Summary of Experiments (p.74)
  5.3 Experiment Results: Part 1 (p.78)
  5.4 Experiment Results: Part 2 (p.81)
  5.5 Experiment Results: Part 3 (p.83)
  5.6 Experiment Results: Part 4 (p.86)
  5.7 Analysis and Conclusion (p.86)
6 Conclusions (p.88)
  6.1 Application Extensions (p.88)
  6.2 Limitations (p.89)
    6.2.1 Surveillance Range (p.89)
    6.2.2 Preparation Time for the System (p.89)
    6.2.3 Calibration of the Background Model (p.90)
    6.2.4 Instability of Optical Flow Feature Extraction (p.91)
    6.2.5 Lack of 3D Information (p.91)
    6.2.6 Dealing with Complex Behavior Patterns (p.92)
    6.2.7 Potential Improvements (p.92)
    6.2.8 New Method for Classification (p.93)
    6.2.9 Introduction of Dynamic Texture as a Feature (p.93)
    6.2.10 Using a Multiple-camera System (p.93)
  6.3 Summary (p.94)
Bibliography (p.95)
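The pipeline outlined in the contents above (motion-histogram features, an eigenspace learned from an observation matrix, and classification by statistical saliency) can be sketched in broad strokes. The following is an illustrative NumPy toy, not the thesis's implementation; all names, data, and the anomaly score are invented:

```python
import numpy as np

def fit_eigenspace(histograms, k=3):
    """histograms: (n_frames, n_bins) observation matrix of motion histograms."""
    mean = histograms.mean(axis=0)
    centered = histograms - mean
    # Principal components of the observation matrix via SVD
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]            # top-k eigenvectors

def saliency(h, mean, basis):
    """Reconstruction error of one histogram in the learned eigenspace."""
    c = h - mean
    proj = basis.T @ (basis @ c)   # project onto eigenspace, back-project
    return float(np.linalg.norm(c - proj))

rng = np.random.default_rng(0)
normal = rng.dirichlet(np.ones(8) * 5, size=200)   # typical motion histograms
mean, basis = fit_eigenspace(normal, k=3)
typical = saliency(normal[0], mean, basis)
odd = saliency(np.eye(8)[0], mean, basis)          # all motion in a single bin
print(typical < odd)  # the atypical histogram scores higher
```

Frames whose saliency exceeds a running threshold would be flagged as abnormal; the thesis's self-adaptive update scheme then folds feedback into the eigenspace over time.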

Interpretable Machine Learning and Sparse Coding for Computer Vision

Landecker, Will 01 August 2014 (has links)
Machine learning offers many powerful tools for prediction. One of these tools, the binary classifier, is often considered a black box. Although its predictions may be accurate, we might never know why the classifier made a particular prediction. In the first half of this dissertation, I review the state of the art in interpretable methods (methods for explaining why); after noting where the existing methods fall short, I propose a new method for a particular type of black box called additive networks. I offer a proof of trustworthiness for this new method (meaning a proof that my method does not "make up" the logic of the black box when generating an explanation), and verify empirically that its explanations are sound. Sparse coding belongs to a family of methods that many researchers believe are not black boxes. In the second half of this dissertation, I review sparse coding and its application to the binary classifier. Although the goal of sparse coding is to reconstruct data (an entirely different goal than classification), many researchers note that it improves classification accuracy. I investigate this phenomenon, challenging a common assumption in the literature. I show empirically that sparse reconstruction is not necessarily the right intermediate goal when our ultimate goal is classification. Along the way, I introduce a new sparse coding algorithm that outperforms competing, state-of-the-art algorithms on a variety of important tasks.
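The sparse-coding half of the dissertation rests on inferring a sparse code for each input against a dictionary. A generic sketch of this inference step using ISTA (iterative soft-thresholding) on synthetic data follows; this is a standard algorithm, not Landecker's new one, and the dictionary and data are invented:

```python
import numpy as np

def ista(x, D, lam=0.05, steps=200):
    """Minimize ||x - D z||^2 / 2 + lam * ||z||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ z - x)           # gradient of the quadratic term
        z = z - grad / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return z

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
z_true = np.zeros(50)
z_true[[3, 17]] = [1.5, -2.0]              # a 2-sparse ground-truth code
x = D @ z_true
z = ista(x, D)
print(np.count_nonzero(np.abs(z) > 1e-3))  # only a few active atoms
```

In a classification pipeline of the kind the dissertation studies, the code `z` (rather than the raw `x`) would be fed to the binary classifier.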

Representations and matching techniques for 3D free-form object and face recognition

Mian, Ajmal Saeed January 2007 (has links)
[Truncated abstract] The aim of visual recognition is to identify objects in a scene and estimate their pose. Object recognition from 2D images is sensitive to illumination, pose, clutter and occlusions. Object recognition from range data on the other hand does not suffer from these limitations. An important paradigm of recognition is model-based whereby 3D models of objects are constructed offline and saved in a database, using a suitable representation. During online recognition, a similar representation of a scene is matched with the database for recognizing objects present in the scene... The tensor representation is extended to automatic and pose invariant 3D face recognition. As the face is a non-rigid object, expressions can significantly change its 3D shape. Therefore, the last part of this thesis investigates representations and matching techniques for automatic 3D face recognition which are robust to facial expressions. A number of novelties are proposed in this area along with their extensive experimental validation using the largest available 3D face database. These novelties include a region-based matching algorithm for 3D face recognition, a 2D and 3D multimodal hybrid face recognition algorithm, fully automatic 3D nose ridge detection, fully automatic normalization of 3D and 2D faces, a low cost rejection classifier based on a novel Spherical Face Representation, and finally, automatic segmentation of the expression insensitive regions of a face.

Image Based Attitude And Position Estimation Using Moment Functions

Mukundan, R 07 1900 (has links) (PDF)
No description available.
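No abstract is available for this entry, so as context only: moment-function methods classically estimate in-plane attitude from second-order central image moments, via theta = 0.5 * atan2(2*mu11, mu20 - mu02). A generic sketch of that standard formula (not necessarily the thesis's method):

```python
import numpy as np

def orientation(img):
    """In-plane orientation of an intensity blob from central moments."""
    ys, xs = np.nonzero(img)
    w = img[ys, xs].astype(float)
    m00 = w.sum()
    cx, cy = (w * xs).sum() / m00, (w * ys).sum() / m00   # centroid
    mu20 = (w * (xs - cx) ** 2).sum()
    mu02 = (w * (ys - cy) ** 2).sum()
    mu11 = (w * (xs - cx) * (ys - cy)).sum()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# An elongated blob along the x-axis should give an angle near 0.
img = np.zeros((21, 21))
img[10, 2:19] = 1.0
print(round(float(orientation(img)), 3))  # 0.0
```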


Nicholas M Chimitt (16680375) 28 July 2023 (has links)
Imaging at range for biometric, scientific, or military applications often suffers from degradation by the atmosphere. These degradations, caused by the non-uniformity of the atmospheric medium, can be modeled as turbulence. Dating back to the days of Kolmogorov in the 1940s, the field has had many successes in modeling, and some in mitigating, the effects of turbulence in images. Today, modern restoration methods are often learning-based solutions that require a large amount of training data. This places atmospheric turbulence mitigation at an interesting point in its history: simulators that accurately capture the effects of the atmosphere were developed without any consideration of deep learning methods, and often lack critical requirements of today's solutions.

In this work, we describe a simulator that is not only fast and accurate but also end-to-end differentiable, allowing for end-to-end training with a reconstruction network. This simulation, which we refer to as Zernike-based simulation, performs at a similar level of accuracy as its purely optics-based counterparts while being up to 1000x faster. To achieve this we combine theoretical developments, engineering efforts, and learning-based solutions. Our Zernike-based simulation not only aids in applying modern solutions to this classical problem but also opens the field to new possibilities with what we refer to as computational image formation.
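A drastically simplified, illustrative toy of Zernike-style degradation, far simpler than the simulator described above: the low-order Zernike tilt terms translate the image, and higher-order aberrations are crudely approximated here by Gaussian blur. All parameters are invented:

```python
import numpy as np

def tilt_and_blur(img, rng, max_shift=2, sigma=1.0):
    """Apply a random per-frame tilt (pixel shift) and a fixed blur."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # tilt ~ translation
    # Separable Gaussian blur as a stand-in for higher-order aberrations
    r = np.arange(-3, 4)
    k = np.exp(-r ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda v: np.convolve(v, k, 'same'), 0, shifted)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'same'), 1, blurred)

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[12:20, 12:20] = 1.0                   # a toy "scene"
frames = [tilt_and_blur(clean, rng) for _ in range(5)]   # jittered, blurred stack
print(frames[0].shape)
```

A differentiable simulator of the kind described would instead sample full Zernike coefficient vectors with the correct atmospheric statistics and apply them through a point spread function, so that gradients can flow back to a reconstruction network.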

Context-based Image Concept Detection and Annotation

Unknown Date (has links)
Scene understanding attempts to produce a textual description of the visible and latent concepts in an image, to describe the real meaning of the scene. Concepts are objects, events, or relations depicted in an image. To recognize concepts, the decisions of an object detection algorithm must be enhanced from visual similarity to semantic compatibility. Semantically relevant concepts convey the most consistent meaning of the scene. Object detectors analyze visual properties (e.g., pixel intensities, texture, color gradient) of sub-regions of an image to identify objects. The initially assigned object names must be further examined to ensure they are compatible with each other and with the scene. By enforcing inter-object dependencies (e.g., co-occurrence, spatial and semantic priors) and object-to-scene constraints as background information, a concept classifier predicts the most semantically consistent set of names for the discovered objects. This additional background information describing concepts is called context. In this dissertation, a framework for context-based concept detection is presented that uses a combination of multiple contextual relationships to refine the results of underlying feature-based object detectors and produce the most semantically compatible concepts. In addition to their inability to capture semantic dependencies, object detectors suffer from the high dimensionality of the feature space, which impairs their performance. Variances in the image (i.e., quality, pose, articulation, illumination, and occlusion) can also result in low-quality visual features that reduce the accuracy of the detected concepts. The object detectors used in the context-based framework experiments in this study are based on state-of-the-art generative and discriminative graphical models. With graphical models, the relationships between model variables can be easily described and the dependencies precisely characterized.
The generative context-based implementations are extensions of Latent Dirichlet Allocation, a leading topic modeling approach that is very effective at reducing the dimensionality of the data. The discriminative context-based approach extends Conditional Random Fields, which allow efficient and precise construction of the model by specifying and including only the cases that are related to and influence it. The dataset used for training and evaluation is MIT SUN397. The experiments show an overall 15% increase in annotation accuracy and a 31% improvement in the semantic saliency of the annotated concepts. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2016. / FAU Electronic Theses and Dissertations Collection
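The contextual-refinement idea described above, detector scores rescored by a co-occurrence prior, can be illustrated with an invented toy. The labels, scores, and prior below are hypothetical, and this exhaustive two-region search is far simpler than the LDA and CRF models the dissertation extends:

```python
import numpy as np

labels = ["car", "road", "whale", "ocean"]
# Per-region detector scores: visually, region 1 looks slightly more like
# "whale" than "car" -- a plausible low-level confusion.
det = np.array([[0.45, 0.05, 0.50, 0.00],    # region 1
                [0.10, 0.80, 0.00, 0.10]])   # region 2
# Log co-occurrence prior: car+road and whale+ocean are compatible pairs.
co = np.log(np.array([[1.0, 4.0, 0.1, 0.1],
                      [4.0, 1.0, 0.1, 0.5],
                      [0.1, 0.1, 1.0, 4.0],
                      [0.1, 0.5, 4.0, 1.0]]))

# Score every label pair: detector log-likelihoods plus the context prior.
best, best_score = None, -np.inf
for i in range(len(labels)):
    for j in range(len(labels)):
        s = np.log(det[0, i] + 1e-9) + np.log(det[1, j] + 1e-9) + co[i, j]
        if s > best_score:
            best, best_score = (labels[i], labels[j]), s
print(best)  # context flips region 1 from "whale" to "car"
```

With the detector alone, region 1 would be labeled "whale" (0.50 vs 0.45); the co-occurrence prior with "road" overturns that, which is exactly the kind of semantic-consistency correction the framework formalizes.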

Fundamental numerical schemes for parameter estimation in computer vision.

Scoleri, Tony January 2008 (has links)
An important research area in computer vision is parameter estimation. Given a mathematical model and a sample of image measurement data, key parameters are sought to encapsulate geometric properties of a relevant entity. An optimisation problem is often formulated in order to find these parameters. This thesis presents an elaboration of fundamental numerical algorithms for estimating parameters of multi-objective models of importance in computer vision applications. The work examines ways to solve unconstrained and constrained minimisation problems from the viewpoints of theory, computational methods, and numerical performance. The research starts by considering a particular form of multi-equation constraint function that characterises a wide class of unconstrained optimisation tasks. Increasingly sophisticated cost functions are developed within a consistent framework, ultimately resulting in the creation of a new iterative estimation method. The scheme operates in a maximum likelihood setting and yields near-optimal estimates of the parameters. Salient features of the method are that it has simple update rules and exhibits fast convergence. Then, to accommodate models with functional dependencies, two variants of this initial algorithm are proposed. These methods are improved again by reshaping the objective function in a way that presents the original estimation problem in a reduced form. This procedure leads to a novel algorithm with enhanced stability and convergence properties. To extend the capacity of these schemes to deal with constrained optimisation problems, several a posteriori correction techniques are proposed to impose the so-called ancillary constraints. This work culminates in two methods which can tackle ill-conditioned constrained functions. The combination of the previous unconstrained methods with these post-hoc correction schemes provides an array of powerful constrained algorithms.
The practicality and performance of the methods are evaluated on two specific applications: one is planar homography matrix computation and the other is trifocal tensor estimation. In the case of fitting a homography to image data, only the unconstrained algorithms are necessary. For the problem of estimating a trifocal tensor, significant work is done first on expressing sets of usable constraints, especially the ancillary constraints which are critical to ensure that the computed object conforms to the underlying geometry. Here, the post-correction schemes must evidently be incorporated into the computational mechanism. For both example problems, the performance of the unconstrained and constrained algorithms is compared to existing methods. Experiments reveal that the new methods match a state-of-the-art technique in accuracy but surpass it in execution speed. / Thesis (Ph.D.) - University of Adelaide, School of Mathematical Sciences, Discipline of Pure Mathematics, 2008
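For the homography application mentioned above, the classical baseline is the Direct Linear Transform (DLT): stack two linear equations per point correspondence and take the null vector of the resulting system via SVD. A minimal sketch of that standard method (not the thesis's new estimators), verified on synthetic exact correspondences:

```python
import numpy as np

def dlt_homography(src, dst):
    """src, dst: (n, 2) arrays of corresponding points, n >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # From u = (h1.X)/(h3.X) and v = (h2.X)/(h3.X), X = (x, y, 1):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)     # null vector = homography up to scale
    return H / H[2, 2]

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
H_true = np.array([[1.2, 0.1, 3.0], [0.0, 0.9, -1.0], [0.001, 0.0, 1.0]])
pts = (H_true @ np.c_[src, np.ones(4)].T).T
dst = pts[:, :2] / pts[:, 2:]                # project to inhomogeneous points
H = dlt_homography(src, dst)
print(np.allclose(H, H_true, atol=1e-6))
```

With noisy data, DLT is only a starting point; maximum-likelihood refinements of the kind the thesis develops iterate from such an initial estimate.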

Learning General Features From Images and Audio With Stacked Denoising Autoencoders

Nifong, Nathaniel H. 23 January 2014 (has links)
One of the most impressive qualities of the brain is its neuro-plasticity. The neocortex has roughly the same structure throughout its whole surface, yet it is involved in a variety of different tasks from vision to motor control, and regions which once performed one task can learn to perform another. Machine learning algorithms which aim to be plausible models of the neocortex should also display this plasticity. One such candidate is the stacked denoising autoencoder (SDA). SDAs have shown promising results in the field of machine perception, where they have been used to learn abstract features from unlabeled data. In this thesis I develop a flexible distributed implementation of an SDA and train it on images and audio spectrograms to experimentally determine properties comparable to neuro-plasticity. Specifically, I compare the visual-auditory generalization of a multi-level denoising autoencoder trained with greedy layer-wise pre-training (GLWPT) to that of one trained without. I test the hypothesis that multi-modal networks will perform better than uni-modal networks due to the greater generality of the features that may be learned. Furthermore, I also test the hypothesis that the magnitude of improvement gained from this multi-modal training is greater when GLWPT is applied than when it is not. My findings indicate that these hypotheses were not confirmed, but that GLWPT still helps multi-modal networks adapt to their second sensory modality.
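A single denoising-autoencoder layer of the kind stacked in an SDA can be sketched in a few lines of NumPy. This toy (invented data, tied weights, plain SGD) is not the thesis's distributed implementation; it only shows the core idea of reconstructing clean inputs from corrupted ones:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.random((256, 4))
X = Z @ rng.random((4, 16)) / 4.0            # structured toy "data" in [0, 1)
W = rng.standard_normal((16, 8)) * 0.1       # tied encoder/decoder weights
b, c = np.zeros(8), np.zeros(16)             # hidden / visible biases

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def recon_mse():
    return float(np.mean((sigmoid(sigmoid(X @ W + b) @ W.T + c) - X) ** 2))

before = recon_mse()
lr = 0.5
for _ in range(200):
    noisy = X * (rng.random(X.shape) > 0.3)   # masking corruption (30% dropped)
    h = sigmoid(noisy @ W + b)                # encode the corrupted input
    r = sigmoid(h @ W.T + c)                  # decode with tied weights
    dr = (r - X) * r * (1 - r)                # error against the *clean* data
    dh = (dr @ W) * h * (1 - h)               # backprop through the encoder
    W -= lr * (noisy.T @ dh + dr.T @ h) / len(X)   # tied-weight gradient
    b -= lr * dh.mean(axis=0)
    c -= lr * dr.mean(axis=0)
after = recon_mse()
print(after < before)  # denoising training reduces reconstruction error
```

Stacking repeats this recipe, feeding each layer's hidden activations `h` to the next layer, which is what greedy layer-wise pre-training (GLWPT) refers to in the abstract above.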
