21

Principal features based texture classification using artificial neural networks

Shang, Changjing January 1995 (has links)
No description available.
22

Combining Object and Feature Dynamics in Probabilistic Tracking

Taycher, Leonid, Fisher III, John W., Darrell, Trevor 02 March 2005 (has links)
Objects can exhibit different dynamics at different scales, a property that is often exploited by visual tracking algorithms. A local dynamic model is typically used to extract image features that are then used as inputs to a system for tracking the entire object using a global dynamic model. Approximate local dynamics may be brittle (point trackers drift due to image noise, and adaptive background models adapt to foreground objects that become stationary), but constraints from the global model can make them more robust. We propose a probabilistic framework for incorporating global dynamics knowledge into the local feature extraction processes. A global tracking algorithm can be formulated as a generative model and used to predict feature values that influence the observation process of the feature extractor. We combine such models in a multichain graphical model framework. We show the utility of our framework for improving feature tracking, and thus shape and motion estimates, in a batch factorization algorithm. We also propose an approximate filtering algorithm appropriate for online applications, and demonstrate its application to problems such as background subtraction, structure from motion, and articulated body tracking.
23

Optical Imaging and Computer Vision Technology for Corn Quality Measurement

Fang, Jian 01 December 2011 (has links)
The official U.S. standards for corn have been available for almost one hundred years, and the corn grading system has been gradually updated over that time. In this thesis, we investigated a fast corn grading system that includes a mechanical part and a computer recognition part. The mechanical system delivers the corn kernels onto the display plate. For the computer recognition algorithms, we extracted common features from each corn kernel and classified them to measure grain quality.
24

Automatic Mapping of Off-road Trails and Paths at Fort Riley Installation, Kansas

Oller, Adam 01 May 2012 (has links)
The U.S. Army manages thousands of sites covering millions of acres of land for various military training purposes and activities, and often faces a great challenge in optimizing the use of resources. A typical example is that training activities often create off-road vehicle trails and paths, and deciding how to use those trails and paths while minimizing maintenance cost becomes a problem. Being able to accurately extract and map the trails and paths is critical to advancing the U.S. Army's sustainability practices. The primary objective of this study is to develop a method geared specifically toward the military's need to identify and update off-road vehicle trails and paths for both environmental and economic purposes. The approach was developed using a well-known template matching program, Feature Analyst, to analyze and extract the relevant trails and paths from Fort Riley's designated training areas. A 0.5-meter-resolution false color infrared orthophoto with various spectral transformations/enhancements was used to extract the trails and paths. The optimal feature parameters for the highest accuracy in detecting the trails and paths were also investigated. A modified Heidke skill score was used for accuracy assessment of the outputs against the observed trails and paths. The results showed the method was very promising compared with traditional visual interpretation and hand digitizing. Moreover, suggested practices for extracting trails and paths from remotely sensed images, including image spatial and spectral resolution, image transformations and enhancements, and kernel size, were obtained. In addition, the complexity of the trails and paths and how to improve their extraction in the future are discussed.
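The abstract above mentions a modified Heidke skill score for accuracy assessment but does not give its formula. As a point of reference, a minimal sketch of the standard 2x2 Heidke skill score (hits, false alarms, misses, correct negatives; the thesis's modification is not specified here) could look like this:

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 Heidke skill score: fraction of correct classifications
    beyond what random chance would produce. The thesis uses a modified
    variant whose exact form is not given in the abstract."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    # Number of correct classifications expected by chance alone
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    return (a + d - expected) / (n - expected)
```

A score of 1 indicates perfect agreement with the observed trails, 0 indicates no skill beyond chance, and negative values indicate worse-than-chance output.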
25

A Pattern Recognition Approach to Electromyography Data

Mitzev, Ivan Stefanov 07 August 2010 (has links)
EMG classification is widely used in the electric control of mechanically developed prostheses, robot development, clinical applications, etc. It has been studied for years, but the main goal of this research is to develop an easy-to-implement and fast-to-execute pattern recognition method for classifying signals used in human gait analysis. The method is based on adding two new temporal features (form factor and standard deviation) for EMG signal recognition and using them along with several popular features (area under the curve, waveform length/pathway, and zero crossing rate) to arrive at a low-complexity feature extraction. Results are presented for EMG data, and a comparison with existing methods is made to validate the applicability of the method. It is shown that the best combination in terms of accuracy and time performance is given by spectral and temporal feature extraction along with a neural network (NN) recognition algorithm.
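The temporal features named in the abstract are simple statistics of the signal window. A minimal sketch, assuming common definitions (form factor taken here as RMS divided by mean rectified value, a frequent convention; the thesis's exact definitions may differ):

```python
import math

def emg_features(x):
    """Temporal features of the kind named in the abstract, computed
    over one window of EMG samples."""
    n = len(x)
    mean = sum(x) / n
    rms = math.sqrt(sum(v * v for v in x) / n)
    mav = sum(abs(v) for v in x) / n  # mean absolute (rectified) value
    std = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    # Waveform length: cumulative path length of the signal
    wl = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))
    # Zero crossing rate: sign changes between consecutive samples
    zc = sum(1 for i in range(n - 1) if x[i] * x[i + 1] < 0)
    return {"form_factor": rms / mav, "std": std,
            "waveform_length": wl, "zero_crossings": zc}
```

Such a feature vector, extracted per window, would then be fed to the classifier (e.g. a neural network) in the recognition stage.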
26

CNN MODEL FOR RECOGNITION OF TEXT-BASED CAPTCHAS AND ANALYSIS OF LEARNING BASED ALGORITHMS’ VULNERABILITIES TO VISUAL DISTORTION

Amiri Golilarz, Noorbakhsh 01 May 2023 (has links) (PDF)
Due to rapid progress in deep learning and neural networks, many approaches and state-of-the-art studies have been conducted in these fields, which has also enabled various learning-based attacks that leave websites and portals vulnerable. Such attacks decrease the security of websites and can result in the release of sensitive personal information, so preserving website security is now one of the most challenging tasks. A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a test deployed on many websites to distinguish humans from robots and thereby protect the sites from such attacks. In this dissertation, we propose a CNN-based approach to attack and break text-based CAPTCHAs. The proposed method has been compared with several state-of-the-art approaches in terms of recognition accuracy (RA); based on the results, it can break and recognize CAPTCHAs with high accuracy. Additionally, to examine how these CAPTCHAs can be made harder to break, we applied five types of distortions to them and measured the recognition accuracy in the presence of each. The results, compared with some state-of-the-art approaches, indicate that adversarial noise can make CAPTCHAs much more difficult to break; this analysis can help CAPTCHA developers account for these noises in their designs. This dissertation also presents a hybrid CNN-SVM model for solving text-based CAPTCHAs. The method contains four main steps: segmentation, feature extraction, feature selection, and recognition. For segmentation, we suggest using a histogram and k-means clustering; for feature extraction, we developed a new CNN structure.
The extracted features are passed through the mRMR algorithm to select the most efficient features, which are then fed into an SVM for classification and recognition. The results have been compared with several state-of-the-art methods to show the superiority of the developed approach. In general, this dissertation presents deep learning-based methods that break text-based CAPTCHAs with high accuracy and in a short time. We used Peak Signal-to-Noise Ratio (PSNR), ROC, accuracy, sensitivity, specificity, and precision to evaluate the performance of the different methods; the results indicate the superiority of the developed methods.
27

A methodology for feature based 3D face modelling from photographs

Abson, Karl, Ugail, Hassan, Ipson, Stanley S. January 2008 (has links)
In this paper, a new approach to modelling 3D faces based on 2D images is introduced. Here 3D faces are created using two photographs from which we extract facial features based on image manipulation techniques. Through the image manipulation techniques we extract the crucial feature lines of the face in two views. These are then used in modifying a template base mesh which is created in 3D. This base mesh, which has been designed by keeping facial animation in mind, is then subdivided to provide the level of detail required. The methodology, as it stands, is semi-automatic whereby our goal is to automate this process in order to provide an inexpensive and expedient way of producing realistic face models intended for animation purposes. Thus, we show how image manipulation techniques can be used to create binary images which can in turn be used in manipulating a base mesh that can be adapted to a given facial geometry. In order to explain our approach more clearly we discuss a series of examples where we create 3D facial geometry of individuals given the corresponding image data.
28

Iris recognition based on feature extraction

Rampally, Deepthi January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / D. V. Satish Chandra / Biometric technologies are the foundation of personal identification systems. A biometric system recognizes an individual based on certain characteristics or processes; characteristics used for recognition include features measured from the face, fingerprints, hand geometry, handwriting, iris, retina, veins, signature, and voice. Among these techniques, iris recognition is regarded as the most reliable and accurate biometric recognition system, although the technology of iris coding is still at an early stage. An iris recognition system includes a segmentation stage that localizes the iris region in an eye image and isolates eyelids and eyelashes. Segmentation is achieved using the circular Hough transform to localize the iris and pupil regions, the linear Hough transform to localize the eyelids, and thresholding to detect eyelashes. The segmented iris region is normalized to a rectangular block with fixed polar dimensions using Daugman's rubber sheet model. The work presented in this report involves extraction of iris templates using the algorithms developed by Daugman. Features are then extracted from these templates using the wavelet transform to perform the recognition task. A method of extracting features using cumulative sums is also investigated: iris codes are generated for each cell by computing cumulative sums that describe variations in the grey values of the iris. To determine the performance of the proposed iris recognition systems, the CASIA and UBIRIS.v1 databases of digitized grayscale eye images are used. K-nearest neighbor and Hamming distance classifiers are used to determine the similarity between iris templates. The performance of the proposed methods is evaluated and compared.
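The Hamming distance matching mentioned in the abstract compares two binary iris codes bit by bit. A minimal sketch in the Daugman style, with optional occlusion masks for eyelid/eyelash bits (the bit layout here is illustrative, not the thesis's exact template format):

```python
def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two binary iris codes,
    counting only bit positions valid in both occlusion masks.
    Lower values indicate more similar irises."""
    valid = [i for i in range(len(code_a))
             if (mask_a is None or mask_a[i])
             and (mask_b is None or mask_b[i])]
    disagreements = sum(code_a[i] != code_b[i] for i in valid)
    return disagreements / len(valid)
```

A match is typically declared when the distance falls below a threshold chosen from the database's inter-class and intra-class distance distributions.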
29

Efficient feature detection using OBAloG: optimized box approximation of Laplacian of Gaussian

Jakkula, Vinayak Reddy January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Christopher L. Lewis / This thesis presents a novel approach for detecting robust, scale-invariant interest points in images. The detector accurately and efficiently approximates the Laplacian of Gaussian using an optimal set of weighted box filters that take advantage of integral images to reduce computation. When combined with state-of-the-art descriptors for matching, the algorithm outperforms leading feature tracking algorithms, including SIFT and SURF, in terms of speed and accuracy.
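The efficiency claim above rests on the integral image (summed-area table), which lets any box filter be evaluated in constant time regardless of its size; the optimized box weights themselves are the thesis's contribution and are not reproduced here. A minimal sketch of the underlying mechanism:

```python
def integral_image(img):
    """Summed-area table with a zero border:
    ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0..y1][x0..x1] in O(1): four table lookups,
    independent of the box size."""
    return ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1] - ii[y1 + 1][x0] + ii[y0][x0]
```

A weighted combination of a few such box sums, as in SURF-style detectors, then approximates the Laplacian of Gaussian response at each pixel and scale.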
30

An Architecture for Sensor Data Fusion to Reduce Data Transmission Bandwidth

Lord, Dale, Kosbar, Kurt 10 1900 (has links)
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California / Sensor networks can demand large amounts of bandwidth if raw sensor data is transferred to a central location. Feature recognition and sensor fusion algorithms can reduce this bandwidth. Unfortunately, the designers of the system, having not yet seen the data that will be collected, may not know which algorithms should be used at the time the system is first installed. This paper describes a flexible architecture that allows the deployment of data reduction algorithms throughout the network while the system is in service. The network-of-sensors approach not only allows signal processing to be pushed closer to the sensor, but also helps accommodate extensions to the system in an efficient and structured manner.
