  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Autonomous Navigation, Perception and Probabilistic Fire Location for an Intelligent Firefighting Robot

Kim, Jong Hwan 09 October 2014 (has links)
Firefighting robots are actively being researched to reduce firefighter injuries and deaths as well as to increase effectiveness in performing tasks. It has proven difficult to develop firefighting robots that autonomously locate a fire inside a structure when the fire is not in the robot's direct field of view. The sensors commonly used on robots cannot function properly in smoke-filled fire environments with high temperatures and zero visibility. Existing obstacle avoidance methods are also limited in calculating safe trajectories and solving the local-minimum problem while avoiding obstacles in real time in cluttered, dynamic environments. In addition, research on characterizing fire environments to provide firefighting robots with headings that ultimately lead them to the fire is incomplete. For use on intelligent firefighting robots, this research developed a real-time local obstacle avoidance method, local dynamic goal-based fire location, appropriate feature selection for fire environment assessment, and probabilistic classification of fire, smoke, and their thermal reflections. The real-time local obstacle avoidance method, called the weighted vector method, perceives the local environment through vectors, identifies a suitable obstacle avoidance mode by applying a decision tree, uses weighting functions to select the necessary vectors, and geometrically computes a safe heading. The method also resolves local-minimum situations by integrating global and local goals to reach the final goal. To locate a fire outside the robot's field of view, a local dynamic goal-based 'Seek-and-Find' fire algorithm was developed by fusing long-wave infrared camera images, ultraviolet radiation sensor readings, and Lidar data. The weighted vector method was applied to avoid complex static and unexpected dynamic obstacles while moving toward the fire.
This algorithm was successfully validated on a firefighting robot autonomously navigating to find a fire outside its field of view. An improved 'Seek-and-Find' fire algorithm was then developed using Bayesian classifiers to identify fire features in thermal images. This algorithm discriminates fire and smoke from thermal reflections and other hot objects, allowing a more robust heading to be predicted for the robot. To develop it, motion and texture features that can accurately separate fire and smoke from their reflections were analyzed and selected using multi-objective genetic algorithm optimization. As a result, the mean and variance of intensity, entropy, and inverse difference moment, drawn from first- and second-order statistical texture features, were chosen to probabilistically classify fire, smoke, their thermal reflections, and other hot objects simultaneously. Classification accuracy was measured at 93.2% on a test dataset not included in the original training data. In addition, the precision, recall, F-measure, and G-measure were 93.5 - 99.9% for classifying fire and smoke on the test dataset. / Ph. D.
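The final classification stage described above — Gaussian class-conditional densities over texture features such as intensity mean, intensity variance, and entropy — can be sketched with a small naïve Bayesian classifier. The training values below are invented toy numbers, not data from the thesis:

```python
import math

def gaussian_pdf(x, mean, var):
    """Univariate normal density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(samples_by_class):
    """Per-class, per-feature mean/variance estimates plus class priors."""
    total = sum(len(s) for s in samples_by_class.values())
    model = {}
    for label, samples in samples_by_class.items():
        n = len(samples)
        stats = []
        for j in range(len(samples[0])):
            col = [s[j] for s in samples]
            mean = sum(col) / n
            var = sum((v - mean) ** 2 for v in col) / n or 1e-9
            stats.append((mean, var))
        model[label] = (n / total, stats)
    return model

def classify(model, features):
    """Return (label, normalized posterior) maximizing P(class | features)."""
    scores = {}
    for label, (prior, stats) in model.items():
        p = prior
        for x, (mean, var) in zip(features, stats):
            p *= gaussian_pdf(x, mean, var)
        scores[label] = p
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())

# Invented toy regions: (intensity mean, intensity variance, entropy).
train = {
    "fire":       [(230, 900, 5.1), (240, 950, 5.3), (235, 870, 5.0)],
    "reflection": [(180, 300, 3.2), (175, 280, 3.0), (185, 320, 3.4)],
}
model = fit(train)
label, post = classify(model, (238, 910, 5.2))
```

A hot, high-entropy query region lands firmly in the "fire" class; the normalized posterior gives the probabilistic output the abstract refers to.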
2

Dynamic Spectrum Access Network Simulation and Classification of Secondary User Properties

Rebholz, Matthew John 17 June 2013 (has links)
This thesis explores the use of the Naïve Bayesian classifier as a method of determining high-level information about secondary users in a Dynamic Spectrum Access (DSA) network using a low-complexity channel sensing method. With a growing number of users generating increased demand for broadband access, finding an efficient way to utilize the limited available spectrum is a pressing current and future issue. One possible solution is DSA, which we simulate using the Universal DSA Network Simulator (UDNS), created by our team at Virginia Tech. However, DSA requires user devices to monitor large amounts of bandwidth, and those devices are often constrained in acceptable size, weight, and power, which greatly limits the usable bandwidth when complex channel sensing methods are employed. This thesis therefore focuses on energy detection for channel sensing. Constraining computing requirements by operating with limited spectrum sensing equipment allows user devices to make efficient use of the limited spectrum. The research on combining the Naïve Bayesian classifier with energy detection and the UDNS serves as a strong starting point for further work in radio classification. / Master of Science
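The low-complexity energy detection the thesis adopts can be illustrated with a minimal detector: measure the average power in a block of samples and flag the channel occupied when it exceeds the noise floor by some margin. The 3 dB threshold and the simulated signals here are illustrative assumptions, not parameters from the UDNS:

```python
import math
import random

def band_energy(samples):
    """Average power of a block of real baseband samples."""
    return sum(s * s for s in samples) / len(samples)

def channel_busy(samples, noise_power, threshold_db=3.0):
    """Flag the channel occupied when the measured energy exceeds the
    noise floor by `threshold_db` decibels (an assumed margin)."""
    margin = 10 ** (threshold_db / 10)
    return band_energy(samples) > noise_power * margin

random.seed(0)
# Idle channel: noise only, unit power.
noise = [random.gauss(0, 1) for _ in range(4096)]
# Occupied channel: a tone well above the noise floor plus noise.
tone = [3 * math.sin(0.2 * i) + random.gauss(0, 1) for i in range(4096)]

occupied = channel_busy(tone, noise_power=1.0)
idle = channel_busy(noise, noise_power=1.0)
```

A classifier like the thesis's Naïve Bayes stage would then consume the resulting busy/idle occupancy sequence rather than raw samples, which is what keeps the sensing burden low.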
3

Prediction of Optimal Bayesian Classification Performance for LADAR ATR

Greenewald, Kristjan H. 11 September 2012 (has links)
No description available.
4

Bayesian classification of DNA barcodes

Anderson, Michael P. January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Suzanne Dubnicka / DNA barcodes are short strands of nucleotide bases taken from the cytochrome c oxidase subunit 1 (COI) of the mitochondrial DNA (mtDNA). A single barcode may have the form C C G G C A T A G T A G G C A C T G . . . and typically ranges in length from 255 to around 700 nucleotide bases. Unlike nuclear DNA (nDNA), mtDNA remains largely unchanged as it is passed from mother to offspring. It has been proposed that these barcodes may be used as a method of differentiating between biological species (Hebert, Ratnasingham, and deWaard 2003). While this proposal is sharply debated among some taxonomists (Will and Rubinoff 2004), it has gained momentum and attention from biologists. One issue at the heart of the controversy is the use of genetic distance measures as a tool for species differentiation. Current methods of species classification utilize these distance measures that are heavily dependent on both evolutionary model assumptions as well as a clearly defined "gap" between intra- and interspecies variation (Meyer and Paulay 2005). We point out the limitations of such distance measures and propose a character-based method of species classification which utilizes an application of Bayes' rule to overcome these deficiencies. The proposed method is shown to provide accurate species-level classification. The proposed methods also provide answers to important questions not addressable with current methods.
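A character-based application of Bayes' rule of the kind proposed can be sketched by modeling per-position nucleotide frequencies for each species and scoring a query barcode against each profile. The eight-base sequences and species names below are toy placeholders, far shorter than real 255-700 base barcodes:

```python
import math
from collections import Counter

def train_profiles(barcodes_by_species, alpha=1.0):
    """Per-species, per-position nucleotide frequencies with Laplace smoothing."""
    profiles = {}
    for species, seqs in barcodes_by_species.items():
        pos_probs = []
        for i in range(len(seqs[0])):
            counts = Counter(seq[i] for seq in seqs)
            denom = len(seqs) + 4 * alpha
            pos_probs.append({b: (counts.get(b, 0) + alpha) / denom for b in "ACGT"})
        profiles[species] = pos_probs
    return profiles

def classify(profiles, query, priors=None):
    """Posterior over species for a query barcode via Bayes' rule."""
    logpost = {}
    for species, pos_probs in profiles.items():
        lp = math.log((priors or {}).get(species, 1.0 / len(profiles)))
        for base, probs in zip(query, pos_probs):
            lp += math.log(probs.get(base, 1e-9))
        logpost[species] = lp
    m = max(logpost.values())                 # stabilize before exponentiating
    weights = {s: math.exp(v - m) for s, v in logpost.items()}
    z = sum(weights.values())
    return {s: w / z for s, w in weights.items()}

train = {
    "sp_A": ["CCGGCATA", "CCGGCATA", "CCGGTATA"],
    "sp_B": ["CCAGCGTA", "CCAGCGTA", "CCAGCGTT"],
}
posterior = classify(train_profiles(train), "CCGGCATA")
```

Because the method scores characters directly, it needs neither an evolutionary distance model nor a clearly defined intra/interspecies "gap" — the deficiency of distance measures the abstract points out.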
5

Automatic Recognition of Artificial Objects in Side-scan Sonar Imagery

Li, Ying-Zhang 02 August 2011 (has links)
The interpretation and identification of information in side-scan sonar imagery depend mainly on visual observation and personal experience. Recent studies have sought to increase identification efficiency by using numerical analysis methods, which can reduce errors caused by differences in observer experience and by extended observation time. Positions near the center line of slant-range-corrected side-scan sonar imagery can degrade the ability of numerical methods to detect artificial objects. In principle, this problem can be solved by using a specific characteristic function to identify the existence of concrete reefs and then filtering the noise in the central-line area with a threshold value. This study developed a fully automatic sonar imagery processing system for identifying cubic concrete and cross-type protective artificial reefs in the offshore area of Taiwan. The procedures of the system are as follows: (1) Image acquisition: 500 kHz with a slant range of 75 m. (2) Feature extraction: grey-level co-occurrence matrix (i.e., entropy, homogeneity, and mean). (3) Classification: unsupervised Bayesian classifier. (4) Object identification: by characteristic feature (i.e., entropy). (5) Object status analysis: object circumference, area, center of mass, and quantity. Sonar images collected at the Chey-Ding artificial reef site in Kaohsiung City were used as a case study to verify the system and determine the optimum window size. The image characteristic functions include one first-order parameter (mean) and two second-order parameters (entropy and homogeneity). Eight sonar images with 1-8 sets of cubic concrete and cross-type protective artificial reefs were used in this step.
The identification efficiency of the system, in terms of producer's accuracy, is 79.41%. The results show that 16-28 sets of artificial reefs were detected in this case, comparable with the actual count of 17 sets. Based on this investigation, the optimum window size was concluded to be 12×12 pixels with a sliding step of 4 pixels. Imagery collected at the Fang-Liau artificial reef site in Pingtung County was then tested. For practicality, the original imagery (2048×2800 pixels) was divided into 8 consecutive smaller frames (2048×350 pixels). The use of a two-fold classification procedure and a central-line filtering method to reduce the noise caused by slant-range correction was examined; the results showed that the central-line filtering method is applicable. Object status analysis indicated that 156-236 sets of reefs are present. Automatic determination of targets using the entropy characteristic function is feasible: a value larger than 1.45 indicates positive identification of concrete artificial reefs, while an area with a value smaller than 1.35 can be classified as muddy-sand seabed. A value between 1.35 and 1.45 indicates a transition zone where objects of smaller dimensions may exist. To achieve fully automatic operation, the system first identifies the existence of concrete reefs using the characteristic function, and then applies the suture-line filtering method to filter noise from the image, so that all procedures run automatically without human intervention. Keywords: side-scan sonar; characteristic function; grey-level co-occurrence matrix; Bayesian classification; entropy; homogeneity; mean
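The feature-extraction and thresholding steps of such a pipeline can be sketched as follows. The GLCM here uses a single horizontal pixel offset, and only the entropy thresholds (1.35 and 1.45) come from the abstract; everything else is an illustrative simplification:

```python
import math
from collections import Counter

def glcm(image, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset."""
    h, w = len(image), len(image[0])
    counts = Counter()
    for y in range(h - dy):
        for x in range(w - dx):
            counts[(image[y][x], image[y + dy][x + dx])] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def glcm_features(p):
    """The three features named in the abstract: entropy, homogeneity, mean."""
    entropy = -sum(v * math.log(v) for v in p.values())
    homogeneity = sum(v / (1 + abs(i - j)) for (i, j), v in p.items())
    mean = sum(i * v for (i, _), v in p.items())
    return entropy, homogeneity, mean

def label_window(entropy, low=1.35, high=1.45):
    """Decision rule from the abstract: entropy above `high` -> concrete reef,
    below `low` -> muddy-sand seabed, in between -> transition zone."""
    if entropy > high:
        return "concrete reef"
    if entropy < low:
        return "muddy sand"
    return "transition zone"
```

A featureless (flat) window has near-zero GLCM entropy and is labeled seabed, while a highly textured window exceeds the reef threshold.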
6

Automated Identification and Analysis of Stationary Targets on Seafloor with Sidescan Sonar Imagery

Guo, Meng-wei 11 May 2008 (has links)
Underwater stationary targets are normally detected with side-scan sonar, and targets within the sonar imagery are identified primarily through the operator's visual observation. Because of its complexity and poor effectiveness, visual observation has gradually been substituted by numerical analysis procedures and programs. The purpose of this investigation was to develop an automatic image analysis program for detecting and identifying cubic concrete artificial reefs (2 m × 2 m × 2 m) in the south-western coastal area off Taiwan. The major components and methodologies of the program are: (1) Image acquisition: side-scan sonar at 500 kHz with a slant range of 75 m. (2) Feature extraction: grey-level co-occurrence matrix. (3) Feature classification: unsupervised Bayesian classifier. (4) Target identification: cluster analysis. (5) Target property analysis: circumference, area, central coordinates, and quantity of the targets. Program verification and determination of the optimal parameters were conducted with a sonograph (650 × 650 pixels) acquired at the Chey-Ding artificial reef site off Kaohsiung County. The feature functions employed in the program are entropy, homogeneity, and mean. Identification accuracy reached 93% at best, and the number of artificial reefs estimated by the program was between 9 and 20, while the actual number is 15. A realistic evaluation of the program was conducted with a sonograph (2048 × 6050 pixels) acquired at the Fang-Liau artificial reef site off Pyngdong County. In addition to the cubic reefs, targets at this site include cross-shaped artificial reefs with smaller dimensions than the cubic reefs. The sonograph was divided into smaller blocks of 2048 × 550 pixels during evaluation.
The results showed that each block can be evaluated from the value of the seed point obtained by cluster analysis. A seed point between 20.6 and 24.4 indicates that cubic reefs exist; between 15.3 and 17.4, that smaller targets (i.e., crossed reefs) exist but cannot be identified properly; and between 10.1 and 10.9, that no target exists on the seafloor. The number of targets identified was between 122 and 240. According to the results of this investigation, the automatic image analysis program can improve the detection and identification of stationary targets within side-scan sonar imagery.
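The block-level decision reported above reduces to interval tests on the cluster-analysis seed value. A direct transcription of those intervals (with values outside them flagged rather than forced into a class; the function name is ours, not the program's):

```python
def classify_block(seed_point):
    """Map a block's cluster-analysis seed value to a seabed state using
    the intervals reported in the abstract."""
    if 20.6 <= seed_point <= 24.4:
        return "cubic reefs present"
    if 15.3 <= seed_point <= 17.4:
        return "smaller targets (crossed reefs) present"
    if 10.1 <= seed_point <= 10.9:
        return "no targets"
    return "indeterminate"
```

The gaps between the reported intervals mean some seed values fall in no class; how the original program resolved those is not stated in the abstract.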
7

Multi-Bayesian Approach to Stochastic Feature Recognition in the Context of Road Crack Detection and Classification

Steckenrider, John J. 04 December 2017 (has links)
This thesis introduces a multi-Bayesian framework for detection and classification of features in environments abundant with error-inducing noise. The approach takes advantage of Bayesian correction and classification in three distinct stages. The corrective scheme described here extracts useful but highly stochastic features from a data source, whether vision-based or otherwise, to aid in higher-level classification. Unlike many conventional methods, these features’ uncertainties are characterized so that test data can be correctively cast into the feature space with probability distribution functions that can be integrated over class decision boundaries created by a quadratic Bayesian classifier. The proposed approach is specifically formulated for road crack detection and characterization, which is one of the potential applications. For test images assessed with this technique, ground truth was estimated accurately and consistently with effective Bayesian correction, showing a 33% improvement in recall rate over standard classification. Application to road cracks demonstrated successful detection and classification in a practical domain. The proposed approach is extremely effective in characterizing highly probabilistic features in noisy environments when several correlated observations are available either from multiple sensors or from data sequentially obtained by a single sensor. / Master of Science / Humans have an outstanding ability to understand things about the world around them. We learn from our youngest years how to make sense of things and perceive our environment even when it is not easy. To do this, we inherently think in terms of probabilities, updating our belief as we gain new information. The methods introduced here allow an autonomous system to think similarly, by applying a fairly common probabilistic technique to the task of perception and classification. 
In particular, road cracks are observed and classified using these methods, in order to develop an autonomous road condition monitoring system. The results of this research are promising; cracks are identified and correctly categorized with 92% accuracy, and the additional “intelligence” of the system leads to a 33% improvement in road crack assessment. These methods could be applied in a variety of contexts as the leading edge of robotics research seeks to develop more robust and human-like ways of perceiving the world.
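The quadratic Bayesian classifier at the core of the pipeline can be sketched in one dimension: with Gaussian class models of unequal variance, maximizing the log-posterior gives a discriminant that is quadratic in the feature, hence the quadratic decision boundaries. The crack-width numbers below are illustrative, not data from the thesis:

```python
import math

def fit_gaussian(values):
    """Sample mean and (biased) variance of a 1-D feature."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var

def quadratic_discriminant(x, mean, var, prior):
    """log(prior * N(x; mean, var)) -- quadratic in x when variances differ."""
    return math.log(prior) - 0.5 * math.log(2 * math.pi * var) \
           - (x - mean) ** 2 / (2 * var)

def classify(x, classes):
    """classes: {label: (mean, var, prior)}; pick the largest discriminant."""
    return max(classes, key=lambda c: quadratic_discriminant(x, *classes[c]))

# Hypothetical 1-D crack-width feature (mm): two classes with unequal
# variances, exactly the case where the Bayes boundary is quadratic.
classes = {
    "hairline": (0.5, 0.04, 0.5),
    "severe":   (3.0, 1.00, 0.5),
}
```

The thesis additionally integrates full feature-uncertainty distributions over these boundaries rather than evaluating a point estimate, which this sketch omits.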
8

Fine-Grained Bayesian Zero-Shot Object Recognition

Sarkhan Badirli (11820785) 03 January 2022 (has links)
<div>Building machine learning algorithms to recognize objects in real-world tasks is a very challenging problem. As the number of classes grows, it becomes costly and impractical to collect samples for every class to obtain exhaustive training data. This labeled-data bottleneck is most pronounced for fine-grained object classes, some of which may lack any labeled representatives in the training data. A robust algorithm for this realistic scenario must classify samples from well-represented classes while also handling samples of unknown origin. In this thesis, we break this difficult task into more manageable sub-problems and methodically explore novel solutions to each component in sequence.</div><div><br></div><div>We begin with the zero-shot learning (ZSL) scenario, in which classes lacking any labeled images in the training data, i.e., unseen classes, are assumed to have semantic descriptions associated with them. The ZSL paradigm is motivated by analogy to the human learning process: we can recognize new categories just from semantic descriptions of them, without ever seeing instances of these categories. We develop a novel hierarchical Bayesian classifier for the ZSL task. The two-layer architecture of the model is specifically designed to exploit the implicit hierarchy among classes, which is particularly evident in fine-grained datasets. In the proposed method, latent classes define the class hierarchy in the image space, and semantic information is used to build the Bayesian hierarchy around these meta-classes. Our Bayesian model imposes local priors on semantically similar classes that share the same meta-class to realize knowledge transfer. We then derive posterior predictive distributions that reconcile the local and global priors and blend them with the data likelihood for the final likelihood calculation. With its closed-form solution, our two-layer hierarchical classifier proves fast to train and flexible enough to model both fine- and coarse-grained datasets. In particular, on challenging fine-grained datasets the proposed model can leverage the large number of seen classes for better local prior estimation without sacrificing seen-class accuracy. Side information plays a critical role in ZSL, and ZSL models typically rest on the strong assumption that the side information is strongly correlated with image features. Our model uses side information only to build the hierarchy, so no explicit correlation with image features is assumed. This makes the Bayesian model very resilient to different side information sources, as long as they are discriminative enough to define the class hierarchy.</div><div><br></div><div>When dealing with thousands of classes, it becomes very difficult to obtain semantic descriptions for fine-grained classes. For example, in species classification, where classes display very similar morphological traits, it is impractical if not impossible to derive characteristic visual attributes that distinguish thousands of classes. Moreover, it would be unrealistic to assume that an exhaustive list of visual attributes characterizing all object classes, both seen and unseen, can be determined from seen classes alone. We propose DNA as side information to overcome this obstacle and perform fine-grained zero-shot species classification. We demonstrate that 658-base-pair DNA barcodes are sufficient to serve as a robust source of side information for a newly compiled insect dataset with more than a thousand classes. The experiments are further validated on the well-known CUB dataset, on which DNA attributes prove as competitive as word vectors. Our proposed Bayesian classifier delivers state-of-the-art results on both datasets while using DNA as side information.</div><div><br></div><div>The traditional ZSL framework, however, is not well suited to scalable species identification and discovery. For example, insects are one of the largest groups of the animal kingdom, with an estimated 5.5 million species, yet only 20% of them are described. We extend traditional ZSL into a more practical framework in which no explicit side information is available for unseen classes. We transform our Bayesian model to exploit the taxonomical hierarchy of species and perform insect identification at scale. Our approach is the first to combine two data modalities, image and DNA information, to perform insect identification with more than a thousand classes. Our algorithm not only classifies known species with an impressive 97% accuracy but also identifies unknown species and assigns them to their true genus with 81% accuracy.</div><div><br></div><div>Our approach can address major societal issues arising from climate change, such as shifting insect distributions and measuring biodiversity across the world. We believe this work can pave the way for more precise and, more importantly, scalable monitoring of biodiversity, and can become instrumental in offering objective measures of the impact of the recent changes our planet has been going through.</div>
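The two-layer prior structure can be caricatured with a simple shrinkage estimate: a class parameter is pulled toward its meta-class (local) prior, which is itself anchored to a global prior, and an unseen class with no samples falls back on the hierarchy alone. The pseudo-count weights here are arbitrary illustrative choices, far simpler than the closed-form posterior predictive derived in the thesis:

```python
def blended_mean(class_samples, meta_mean, global_mean,
                 k_local=5.0, k_global=2.0):
    """Shrinkage estimate of a class mean blending the data likelihood with a
    local (meta-class) prior that is itself anchored to a global prior.
    A simplified stand-in for the two-layer hierarchical posterior."""
    # Local prior: meta-class mean shrunk toward the global mean.
    local_prior = (k_global * global_mean + k_local * meta_mean) / (k_global + k_local)
    n = len(class_samples)
    if n == 0:
        # Unseen class: knowledge transfer from semantically similar classes.
        return local_prior
    sample_mean = sum(class_samples) / n
    # Seen class: prior and data weighted by their effective sample sizes.
    return (k_local * local_prior + n * sample_mean) / (k_local + n)
```

The key qualitative behavior matches the abstract: unseen classes inherit their estimate from the meta-class they share with seen classes, while well-sampled seen classes are dominated by their own data.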
9

Adaptive Image Quality Improvement with Bayesian Classification for In-line Monitoring

Yan, Shuo 01 August 2008 (has links)
Development of an automated method for classifying digital images using a combination of image quality modification and Bayesian classification is the subject of this thesis. The specific example is classification of images obtained by monitoring molten plastic in an extruder. These images were to be classified into two groups: the “with particle” (WP) group, which showed contaminant particles, and the “without particle” (WO) group, which did not. Previous work effected the classification using only an adaptive Bayesian model. This work combines adaptive image quality modification with the adaptive Bayesian model. The first objective was to develop an off-line automated method for determining how to modify each individual raw image to obtain the quality required for improved classification results. This was done in a novel way by defining image quality in terms of probability using a Bayesian classification model. The Nelder-Mead simplex method was then used to optimize the quality. The result was a “Reference Image Database” which was used as a basis for accomplishing the second objective. The second objective was to develop an in-line method for modifying the quality of new images to improve classification over that which could be obtained previously. Case-Based Reasoning used the Reference Image Database to locate reference images similar to each new image, and the database supplied instructions on how to modify the new image to obtain a better-quality image. Experimental verification of the method used a variety of images from the extruder monitor, including images purposely produced to be of wide diversity. Image quality modification was made adaptive by adding new images to the Reference Image Database. When combined with the adaptive classification previously employed, error rates decreased from about 10% to less than 1% for most images.
For one unusually difficult set of images that exhibited very low local contrast of particles in the image against their background it was necessary to split the Reference Image Database into two parts on the basis of a critical value for local contrast. The end result of this work is a very powerful, flexible and general method for improving classification of digital images that utilizes both image quality modification and classification modeling.
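The off-line optimization step — searching image-modification parameters to maximize a Bayesian quality score — can be sketched with a compact Nelder-Mead simplex minimizer. The two-parameter "quality" objective below is a stand-in quadratic peaked at a hypothetical best (brightness, contrast) setting, not the thesis's actual probability model:

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal Nelder-Mead simplex minimizer (reflection, expansion,
    contraction, shrink) for a small parameter vector."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):                      # try expanding further
            expd = [3 * centroid[i] - 2 * worst[i] for i in range(n)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):             # accept the reflection
            simplex[-1] = refl
        else:                                      # contract toward centroid
            contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                                  # shrink toward best vertex
                simplex = [best] + [[0.5 * (best[i] + p[i]) for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

def neg_quality(params):
    """Hypothetical negated quality score over (brightness, contrast)
    adjustments, best at (0.3, 1.2); minimizing it maximizes quality."""
    b, c = params
    return (b - 0.3) ** 2 + (c - 1.2) ** 2

best = nelder_mead(neg_quality, [0.0, 1.0])
```

Nelder-Mead is a natural fit here because the quality score is only available by evaluating the classifier — no gradients are required.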
