1

Investigation of Computer Vision Techniques for Object Classification on an Intelligent Wheelchair System for the Cognitively Impaired

Oramasionwu, Paul 09 December 2013 (has links)
The purpose of this research was to investigate object classification algorithms for the application of wheelchair interaction with the environment for the cognitively impaired wheelchair user. Towards this end, top-performing object classification algorithms were trained on images of the target object classes (chair, dresser, and sink/washbasin) obtained from the internet and tested on images of the target object classes obtained in the home and patient-room environments; these algorithms were Locality-constrained Linear Coding (LLC) [1], Kernel Descriptors (KDES) [2], and Hierarchical Matching Pursuit (HMP) [3]. It was found that HMP achieved the highest overall classification accuracy (71.3%) in the home environment and LLC achieved the greatest accuracy (85.0%) in the patient-room environment. This research also sought to investigate the potential of active learning to improve upon the obtained classification performance. A maximum mean classification accuracy of 98.6% was achieved when active learning was applied.
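As an illustration of the active-learning idea mentioned in this abstract, the sketch below implements pool-based uncertainty sampling with a linear SVM on precomputed feature vectors. It is a minimal editorial example, not the thesis's actual LLC/KDES/HMP pipeline; the function name and parameters are hypothetical.

```python
# Minimal pool-based active learning loop with uncertainty sampling (sketch).
# Assumes precomputed feature vectors; not the thesis's actual pipeline.
import numpy as np
from sklearn.svm import SVC

def active_learning(X_pool, y_pool, X_test, y_test, n_init=20, n_rounds=10, batch=5):
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), n_init, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    clf = SVC(kernel="linear", probability=True)
    for _ in range(n_rounds):
        clf.fit(X_pool[labeled], y_pool[labeled])
        # Query the unlabeled samples the classifier is least confident about.
        probs = clf.predict_proba(X_pool[unlabeled])
        uncertainty = 1.0 - probs.max(axis=1)
        query = np.argsort(uncertainty)[-batch:]
        newly = [unlabeled[i] for i in query]
        labeled += newly
        unlabeled = [i for i in unlabeled if i not in newly]
    return clf.score(X_test, y_test)
```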
2

Improving Object Classification in X-ray Luggage Inspection

Shi, Xinhua 27 July 2000 (has links)
X-ray detection methods have increasingly been used as an effective means for the automatic detection of explosives. While a number of devices are now commercially available, most of these technologies are not yet mature. The purpose of this research has been to investigate methods for using x-ray dual-energy transmission and scatter imaging technologies more effectively. Following an introduction and a brief overview of x-ray detection technologies, a model of a prototype x-ray scanning system built at Virginia Tech is given. This model has primarily been used for the purpose of system analysis, design, and simulation. An algorithm is then developed to correct the non-uniformity of the transmission detectors in the prototype scanning system. The x-ray source output energy in the prototype scanning system is not monochromatic, resulting in two problems: spectrum overlap and output signal unbalance between the high and low energy levels, which degrade the performance of dual-energy x-ray sensing. A copper filter has been introduced, and a numerical optimization method to remove the thickness effect of objects has been developed, to improve the system performance. The back-scattering and forward-scattering signals are functions of the solid angles between the object and the detectors. A given object may be randomly placed anywhere on the conveyor belt, resulting in a variation in the detected signals. Both an adaptive modeling technique and a least squares method are used to decrease this distance effect. Finally, discriminant function methods have been studied experimentally, and classification rules have been obtained to separate explosives from other types of materials. In laboratory tests on various scenarios with six explosive simulants inserted, we observed improvements in classification accuracy from 60% to 80%, depending on the complexity of the luggage bags. / Ph. D.
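A minimal sketch of the discriminant-function idea applied to dual-energy features is given below. The feature construction (log-attenuation at each energy, their ratio, and a scatter measure) and the use of scikit-learn's linear discriminant analysis are editorial assumptions for illustration, not the actual features or classification rules derived in this work.

```python
# Illustrative discriminant-function classification on hypothetical dual-energy
# features; not the thesis's actual features or rules.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def build_features(I_low, I_high, I0_low, I0_high, scatter):
    # Log attenuation at each energy; their ratio is sensitive to effective
    # atomic number, which helps separate organic materials from metals.
    mu_low = -np.log(I_low / I0_low)
    mu_high = -np.log(I_high / I0_high)
    return np.column_stack([mu_low, mu_high, mu_low / mu_high, scatter])

# X = build_features(...); y = material labels (explosive vs. benign)
# clf = LinearDiscriminantAnalysis().fit(X, y)
# predictions = clf.predict(build_features(...))
```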
3

Scene Segmentation and Object Classification for Place Recognition

Cheng, Chang 01 August 2010 (has links)
This dissertation tries to solve the place recognition and loop closing problem in a way similar to the human visual system. First, a novel image segmentation algorithm is developed. The image segmentation algorithm is based on a Perceptual Organization model, which allows it to ‘perceive’ the special structural relations among the constituent parts of an unknown object and hence to group them together without object-specific knowledge. Then a new object recognition method is developed. Based on the fairly accurate segmentations generated by the image segmentation algorithm, an informative object description is built that includes not only the appearance (colors and textures), but also the parts layout and shape information. Then a novel feature selection algorithm is developed. The feature selection method can select a subset of features that best describes the characteristics of an object class. Classifiers trained with the selected features can classify objects with high accuracy. In the next step, a subset of the salient objects in a scene is selected as landmark objects to label the place. The landmark objects are highly distinctive and widely visible. Each landmark object is represented by a list of SIFT descriptors extracted from the object surface. This object representation allows us to reliably recognize an object under certain viewpoint changes. To achieve efficient scene matching, an indexing structure is developed. Both the texture feature and the color feature of objects are used as indexing features. The texture feature and the color feature are viewpoint-invariant and hence can be used to effectively find candidate objects with surface characteristics similar to a query object. Experimental results show that the object-based place recognition and loop detection method can efficiently recognize a place in a large, complex outdoor environment.
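As a simplified illustration of matching a query object against a stored landmark object's SIFT descriptors, the sketch below uses OpenCV's SIFT implementation with Lowe's ratio test; it is an editorial example that omits the indexing structure and viewpoint handling described above.

```python
# Count ratio-test matches between a query object and a stored landmark object
# (illustrative sketch; not the dissertation's full representation or indexing).
import cv2

sift = cv2.SIFT_create()
bf = cv2.BFMatcher()

def count_good_matches(query_img, landmark_img, ratio=0.75):
    _, q_desc = sift.detectAndCompute(query_img, None)
    _, l_desc = sift.detectAndCompute(landmark_img, None)
    if q_desc is None or l_desc is None:
        return 0
    matches = bf.knnMatch(q_desc, l_desc, k=2)
    # Keep matches whose best distance is clearly smaller than the second best.
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)
```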
4

Vehicle detection and classification in video sequences / Upptäckt och klassificering av fordon i videosekvenser

Böckert, Andreas January 2002 (has links)
The purpose of this thesis is to investigate the applicability of a certain model-based classification algorithm. The algorithm is centered around a flexible wireframe prototype that can instantiate a number of different vehicle classes, such as a hatchback, a pickup, or a bus, to mention a few. The parameters of the model are fitted using Newton minimization of the errors between model line segments and observed line segments. Furthermore, a number of methods for object detection based on motion are described and evaluated. Results from both experimental and real-world data are presented.
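The sketch below illustrates the line-segment fitting step, using SciPy's least-squares solver as a stand-in for the Newton minimization described in the abstract; project_model() and segment_error() are hypothetical placeholders, not functions from the thesis.

```python
# Fit wireframe model parameters by minimizing line-segment errors (sketch).
import numpy as np
from scipy.optimize import least_squares

def residuals(params, observed_segments, project_model, segment_error):
    model_segments = project_model(params)   # wireframe -> image line segments
    return np.array([segment_error(m, o)
                     for m, o in zip(model_segments, observed_segments)])

# result = least_squares(residuals, x0=initial_params,
#                        args=(observed_segments, project_model, segment_error))
```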
5

Toward The Frontiers Of Stacked Generalization Architecture For Learning

Mertayak, Cuneyt 01 September 2007 (has links) (PDF)
In pattern recognition, the “bias-variance” trade-off is a challenging issue that scientists have been working on to obtain better generalization performance over the last decades. Among many learning methods, two-layered homogeneous stacked generalization has been reported to be successful in the literature, in different problem domains such as object recognition and image annotation. The aim of this work is two-fold. First, the problems of stacked generalization are attacked by a proposed novel architecture. Then, a set of success criteria for stacked generalization is studied. A serious drawback of the stacked generalization architecture is its sensitivity to the curse of dimensionality. In order to solve this problem, a new architecture named “unanimous decision” is designed. The performance of this architecture is shown to be comparable to the two-layered homogeneous stacked generalization architecture for low numbers of classes, while it performs better than the stacked generalization architecture for higher numbers of classes. Additionally, a new success criterion for the two-layered homogeneous stacked generalization architecture is proposed based on the individual properties of the used descriptors, and it is verified on synthetic datasets.
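A minimal sketch of two-layered stacked generalization with scikit-learn is shown below: the base classifiers' predicted probabilities feed a meta-classifier. It is a generic editorial example, not the exact architecture or the “unanimous decision” variant proposed in this work.

```python
# Two-layered stacked generalization: base classifiers' class probabilities are
# stacked and classified by a meta-classifier (illustrative sketch).
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

base_learners = [
    ("svm_rbf", SVC(kernel="rbf", probability=True)),
    ("svm_lin", SVC(kernel="linear", probability=True)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(),
                           stack_method="predict_proba")
# stack.fit(X_train, y_train); stack.score(X_test, y_test)
```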
6

BRAIN-INSPIRED MACHINE LEARNING CLASSIFICATION MODELS

Amerineni, Rajesh 01 May 2020 (has links)
This dissertation focuses on the development of three classes of brain-inspired machine learning classification models. The models attempt to emulate (a) multi-sensory integration, (b) context integration, and (c) visual information processing in the brain.

The multi-sensory integration models are aimed at enhancing object classification through the integration of semantically congruent unimodal stimuli. Two multimodal classification models are introduced: the feature-integrating (FI) model and the decision-integrating (DI) model. The FI model, inspired by multisensory integration in the subcortical superior colliculus, combines unimodal features which are subsequently classified by a multimodal classifier. The DI model, inspired by integration in primary cortical areas, classifies unimodal stimuli independently using unimodal classifiers and classifies the combined decisions using a multimodal classifier. The multimodal classifier models are implemented using multilayer perceptrons and multivariate statistical classifiers. Experiments involving the classification of noisy and attenuated auditory and visual representations of ten digits are designed to demonstrate the properties of the multimodal classifiers and to compare the performances of multimodal and unimodal classifiers. The experimental results show that the multimodal classification systems exhibit an important aspect of the “inverse effectiveness principle” by yielding significantly higher classification accuracies when compared with those of the unimodal classifiers. Furthermore, the flexibility offered by the generalized models enables the simulation and evaluation of various combinations of multimodal stimuli and classifiers under varying uncertainty conditions.

The context-integrating model emulates the brain’s ability to use contextual information to uniquely resolve the interpretation of ambiguous stimuli. A deep learning neural network classification model that emulates this ability by integrating weighted bidirectional context into the classification process is introduced. The model, referred to as the CINET, is implemented using a convolutional neural network (CNN), which is shown to be ideal for combining target and context stimuli and for extracting coupled target-context features. The CINET parameters can be manipulated to simulate congruent and incongruent context environments and to manipulate target-context stimuli relationships. The formulation of the CINET is quite general; consequently, it is not restricted to stimuli in any particular sensory modality nor to the dimensionality of the stimuli. A broad range of experiments is designed to demonstrate the effectiveness of the CINET in resolving ambiguous visual stimuli and in improving the classification of non-ambiguous visual stimuli in various contextual environments. The fact that the performance improves through the inclusion of context can be exploited to design robust brain-inspired machine learning algorithms. It is interesting to note that the CINET is a classification model inspired by a combination of the brain’s ability to integrate contextual information and the CNN, which is itself inspired by the hierarchical processing of visual information in the visual cortex.

A convolutional neural network (CNN) model, inspired by the hierarchical processing of visual information in the brain, is introduced to fuse information from an ensemble of multi-axial sensors in order to classify strikes such as boxing punches and taekwondo kicks in combat sports. Although CNNs are not an obvious choice for non-array data nor for signals with non-linear variations, it is shown that CNN models can effectively classify multi-axial, multi-sensor signals. Experiments involving the classification of three-axis accelerometer and three-axis gyroscope signals measuring boxing punches and taekwondo kicks showed that the performance of the fusion classifiers was significantly superior to that of the uni-axial classifiers. Interestingly, the classification accuracies of the CNN fusion classifiers were significantly higher than those of the DTW fusion classifiers. Through training with representative signals and the local feature extraction property, the CNNs tend to be invariant to latency shifts and non-linear variations. Moreover, by increasing the number of network layers and the training set, the CNN classifiers offer the potential for even better performance as well as the ability to handle a larger number of classes. Finally, due to the generalized formulations, the classifier models can be easily adapted to classify multi-dimensional signals of multiple sensors in various other applications.
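As an editorial illustration of the decision-integrating (DI) idea, the sketch below trains two unimodal classifiers independently and a fusion classifier on their concatenated class-probability outputs, using scikit-learn MLPs; the layer sizes and modality names are assumptions, not the configurations used in the dissertation.

```python
# Decision-integrating (DI) sketch: independent unimodal classifiers, then a
# multimodal classifier trained on their combined decisions (class probabilities).
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_di_model(X_audio, X_visual, y):
    audio_clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_audio, y)
    visual_clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_visual, y)
    # Combine the two unimodal decisions into a multimodal feature vector.
    decisions = np.hstack([audio_clf.predict_proba(X_audio),
                           visual_clf.predict_proba(X_visual)])
    fusion_clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(decisions, y)
    return audio_clf, visual_clf, fusion_clf
```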
7

Fixed-wing Classification through Visually Perceived Motion Extraction with Time Frequency Analysis

Chaudhry, Haseeb 19 January 2022 (has links)
The influx of unmanned aerial systems over the last decade has increased the need for airspace awareness. Monitoring solutions such as drone detection, tracking, and classification become increasingly important for regulatory and security compliance, as well as for recognizing aircraft that may not be compliant. Vision systems offer significant size, weight, power, and cost (SWaP-C) advantages, which motivates the exploration of algorithms to further aid monitoring performance. A method to classify aircraft by using vision systems to measure their motion characteristics is explored. It builds on the assumption that at least continuous visual detection, or at most visual tracking, of an object of interest is already accomplished. Monocular vision is in part limited by range/scale ambiguity: the range and scale information of an object projected onto the image plane of a camera using a pinhole model is generally lost. Classifying the aircraft offers an indirect way to help recover scale information via its identity. The measured motion characteristics can then be used to classify the perceived object based on its unique motion profile over time, using signal classification techniques. The study is not limited to unmanned aircraft, but includes full-scale aircraft in the simulated dataset used, to provide a representative set of aircraft scale and motion. / Doctor of Philosophy / The influx of small drones over the last decade has increased the need for airspace awareness, to ensure they do not become a nuisance when operated by unqualified or ill-intentioned personnel. Monitoring airspace around locations where drone usage would be unwanted or a security issue is increasingly necessary, especially for fixed-wing (airplane) drones with greater range and endurance. This work presents a solution utilizing a single camera to address the classification part of fixed-wing drone monitoring, as cameras are extremely common, generally cheap, information-rich sensors. Once an aircraft of interest is detected, classifying it can provide additional information regarding its intentions. It can also help improve visual detection and tracking performance, since classification can change expectations of where and how the aircraft may continue to travel. Most existing visual classification works rely on features visible on the aircraft itself or its silhouette shape. This work discusses an approach to classification that characterizes the visually perceived motion of an aircraft as it flies through the air. The study is not limited to drones, but includes full-scale aircraft in the simulated dataset used. Video of an airplane is used to extract motion from each frame. This motion is condensed into a single time signal, which is then classified using a neural network trained to recognize audio samples via a time-frequency representation called a spectrogram. This transfer learning approach with ResNet-based spectrogram classification is able to achieve 90.9% precision on the simulated test set used.
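A minimal editorial sketch of the spectrogram-plus-transfer-learning idea is shown below: a motion time signal is converted to a log spectrogram and classified with a pretrained ResNet whose final layer is replaced. The sampling rate, class count, and use of torchvision's ResNet-18 are assumptions, not the exact network or training setup used in this work.

```python
# Convert a 1-D motion signal to a spectrogram image and classify it with a
# pretrained ResNet (illustrative transfer-learning sketch).
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram
from torchvision import models

def signal_to_input(x, fs=30.0):
    _, _, sxx = spectrogram(x, fs=fs)
    sxx = np.log(sxx + 1e-10)                          # log power spectrogram
    img = torch.tensor(sxx, dtype=torch.float32).unsqueeze(0).unsqueeze(0)
    img = torch.nn.functional.interpolate(img, size=(224, 224), mode="bilinear")
    return img.repeat(1, 3, 1, 1)                      # fake 3 channels for ResNet

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)          # e.g., 3 aircraft classes
# logits = model(signal_to_input(motion_signal))
```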
8

Klasifikace silniční sítě z dat leteckého laserového skenování a optických dat DPZ vysokého rozlišení / Classification of road network from airborne laser scanning data and from remote sensing images with high resolution

Kuchařová, Jana January 2013 (has links)
Object classification of land cover is currently one of the methods of remote sensing of the Earth. Classification of the road network in particular is specific because roads are covered with anthropogenic material and have different characteristics than other elements of the landscape. This work deals with the possibility of using a combination of airborne laser scanning data and high-resolution optical data for detection of the road network in a specific area. The premise is that the use of two different types of data could provide better results, because airborne laser scanning data provide very precise information about the position and height of each point, while very high-resolution satellite data represent the real landscape. Searching for suitable features and classification rules for unambiguous determination of the road network is one of the objectives of the work. Segmentation parameters are also important for the object classification. Another objective is to verify the transferability of the classification scheme to another scene. The results should indicate whether the procedure can be applied to a different location and whether the use of two types of data can bring...
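As an editorial illustration of the kind of per-segment rule that could combine the two data types, the sketch below flags road candidates using a LiDAR-derived normalized height model (nDSM) and an optical NDVI index; the features and thresholds are assumptions for illustration, not the features or rules derived in this work.

```python
# Per-segment road-candidate rule combining LiDAR height and optical NDVI
# (illustrative sketch with placeholder thresholds).
import numpy as np

def road_candidate(ndsm_seg, nir_seg, red_seg,
                   max_height=0.3, max_ndvi=0.2, max_roughness=0.1):
    ndvi = (nir_seg - red_seg) / (nir_seg + red_seg + 1e-6)
    near_ground = np.mean(ndsm_seg) < max_height     # roads lie on the terrain
    flat = np.std(ndsm_seg) < max_roughness          # low height variation
    unvegetated = np.mean(ndvi) < max_ndvi           # asphalt has low NDVI
    return near_ground and flat and unvegetated
```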
