181

The use of mobile phones as service-delivery devices in sign language machine translation system

Ghaziasgar, Mehrdad January 2010 (has links)
Master of Science / This thesis investigates the use of mobile phones as service-delivery devices in a sign language machine translation system. Four sign language visualization methods were evaluated on mobile phones; three of them were synthetic visualization methods. Three factors were considered: the intelligibility of the sign language rendered by each method, the power consumption, and the bandwidth usage associated with each method. The average intelligibility rate was 65%, with some methods achieving rates of up to 92%. The average data size was 162 KB and, on average, power consumption increased to 180% of the idle state across all methods. This research forms part of the Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL) project at the University of the Western Cape and serves as an integration platform for the group's research. To carry out this research, a machine translation system that uses mobile phones as service-delivery devices was developed, along with a 3D avatar for mobile phones. It was concluded that mobile phones are suitable service-delivery platforms for sign language machine translation systems. / South Africa
182

Accurate Joint Detection from Depth Videos towards Pose Analysis

Kong, Longbo 05 1900 (has links)
Joint detection is vital for characterizing human pose and serves as a foundation for a wide range of computer vision applications such as physical training, health care, and entertainment. This dissertation proposes two methods to detect joints in the human body for pose analysis. The first method detects joints by combining a body model with automatic feature-point detection. The human body model maps detected extreme points to the corresponding body parts of the model and locates the implicit joints. Once the implicit joints and extreme points have been located, the dominant joints are detected by a shortest-path-based method. The main contribution of this work is a hybrid framework for detecting joints on the human body that is robust to different body shapes and proportions, pose variations, and occlusions. Another contribution is the idea of using geodesic features of the human body to build a model that guides human pose detection and estimation. The second method first segments the human body into parts and then detects joints by focusing the detection algorithm on each limb. The advantage of segmenting the body first is that it narrows the search area for each joint, so the joint detection method can provide more stable and accurate results.
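As a rough illustration of the shortest-path idea underlying extreme-point detection, the sketch below runs Dijkstra over a toy surface graph and takes the geodesically farthest vertex as a limb tip. The graph, weights, and node labels are invented for illustration; this is not the dissertation's actual pipeline:

```python
import heapq

def geodesic_distances(adj, src):
    # Dijkstra shortest-path distances over a surface graph:
    # adj maps each node to a list of (neighbor, edge_weight) pairs.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy "body" graph: a torso node 0 with two chains standing in for limbs.
adj = {
    0: [(1, 1.0), (3, 1.0)],
    1: [(0, 1.0), (2, 1.0)],
    2: [(1, 1.0)],
    3: [(0, 1.0), (4, 1.0)],
    4: [(3, 1.0)],
}
dist = geodesic_distances(adj, 0)
extreme = max(dist, key=dist.get)  # geodesically farthest node ~ a limb tip
```

On a real depth mesh the same procedure runs over surface-connected pixels, so geodesic distance is unaffected by pose changes that only bend the limbs.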
183

Handwritten signature verification using backpropagation neural network

Tang, Yubo 01 October 2000 (has links)
No description available.
184

An interactive system to enhance social and verbal communication skills of children with autism spectrum disorders

Unknown Date (has links)
Affecting one in every 68 children, Autism Spectrum Disorder (ASD) is one of the fastest growing developmental disabilities. Scientific research has shown that early behavioral intervention can improve learning, communication, and social skills. Similarly, studies have shown that the use of off-the-shelf technology boosts motivation in children diagnosed with ASD while increasing their attention span and ability to interact socially. Embracing perspectives from different fields of study can lead to the development of an effective tool to complement traditional treatment of those with ASD. This thesis documents the re-engineering, extension, and evolution of Ying, an existing web application designed to aid in the learning of autistic children. The original methodology of Ying combines expertise from other research areas including developmental psychology, semantic learning, and computer science. In this work, Ying is modified to incorporate aspects of traditional treatment, such as Applied Behavior Analysis. Using cutting-edge software technology in areas like voice recognition and mobile device applications, this project aspires to use software engineering approaches and audio-visual interaction with the learner to enhance social behavior and reinforce verbal communication skills in children with ASD, while detecting and storing learning patterns for later study. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
185

Applying statistical and syntactic pattern recognition techniques to the detection of fish in digital images

Hill, Evelyn June January 2004 (has links)
This study is an attempt to simulate aspects of human visual perception by automating the detection of specific types of objects in digital images. The success of the methods attempted here was measured by how well the results of experiments corresponded to what a typical human's assessment of the data might be. The subject of the study was images of live fish taken underwater by digital video or digital still cameras. Automating the processing of such data is desirable for efficient stock assessment in fisheries management. In this study, some well-known statistical pattern classification techniques were tested and new syntactical/structural pattern recognition techniques were developed. For the statistical pattern classification tests, the pixels belonging to fish were separated from the background pixels and the EM algorithm for Gaussian mixture models was used to locate clusters of pixels. The means and covariance matrices of the model components were used to indicate the location, size, and shape of the clusters. Because the number of components in the mixture is unknown, the EM algorithm must be run a number of times with different numbers of components and the best model chosen using a model selection criterion. The AIC (Akaike Information Criterion) and the MDL (Minimum Description Length) were tested. The MDL was found to estimate the number of clusters of pixels more accurately than the AIC, which tended to overestimate cluster numbers. In order to reduce problems caused by initialisation of the EM algorithm (i.e. the starting positions and number of mixtures), the Dynamic Cluster Finding (DCF) algorithm was developed, based on the Dog-Rabbit strategy. This algorithm can estimate the locations and numbers of clusters of pixels. The Dog-Rabbit strategy is based on early studies of learning behaviour in neurons.
The main difference between Dog-Rabbit and DCF is that DCF is based on a toroidal topology, which removes the tendency of cluster locators to migrate to the centre of mass of the data set and miss clusters near the edges of the image. In the second approach to the problem, data was extracted from the image using an edge detector. The edges from a reference object were compared with the edges from a new image to determine whether the object occurred in the new image. To compare edges, the edge pixels were first assembled into curves using an UpWrite procedure; the curves were then smoothed by fitting parametric cubic polynomials and converted to arrays of numbers representing the signed curvature of the curves at regular intervals. Sets of curves from different images can be compared by comparing the arrays of signed curvature values, as well as the relative orientations and locations of the curves. Discrepancy values were calculated to indicate how well curves and sets of curves matched the reference object, and the total length of all matched curves was used to indicate what fraction of the reference object was found in the new image. The curve matching procedure gave results which corresponded well with what a human being might observe.
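The EM-with-model-selection procedure for locating pixel clusters can be sketched as follows. This is a minimal illustration using scikit-learn's `GaussianMixture` rather than the thesis's own implementation, with BIC standing in for the MDL criterion (the two penalties coincide up to convention); the synthetic "pixel" clusters are invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated 2-D clusters standing in for fish-pixel coordinates.
X = np.vstack([rng.normal(0.0, 0.5, (200, 2)),
               rng.normal(5.0, 0.5, (200, 2))])

def best_k(X, criterion, k_max=5):
    # Fit mixtures with 1..k_max components and keep the one that
    # minimises the chosen model-selection criterion.
    scores = [criterion(GaussianMixture(n_components=k, n_init=3,
                                        random_state=0).fit(X), X)
              for k in range(1, k_max + 1)]
    return int(np.argmin(scores)) + 1

k_bic = best_k(X, lambda gm, data: gm.bic(data))  # MDL-like penalty
```

The fitted model's means and covariance matrices then give the location, size, and shape of each cluster, exactly as the abstract describes.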
186

Automated image classification via unsupervised feature learning by K-means

Karimy Dehkordy, Hossein 09 July 2015 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Research on image classification has grown rapidly in the field of machine learning. Many methods have already been implemented for image classification; among them, the best results have been reported by neural network-based techniques. One of the most important steps in automated image classification is feature extraction, which includes two parts: feature construction and feature selection. Many methods for feature extraction exist, but the best are related to deep-learning approaches such as network-in-network or deep convolutional network algorithms. Deep learning builds successively higher levels of abstraction, each derived from the previous level, by stacking multiple hidden layers. The two main problems with deep-learning approaches are speed and the number of parameters that must be configured: small changes or poor parameter choices can alter the results completely or even make them worse. Tuning these parameters is usually impractical for users without access to powerful hardware, because one must run the algorithm repeatedly and adjust the parameters according to the results obtained, a process that can be very time consuming. This thesis addresses the speed and configuration issues found with traditional deep-network approaches: some traditional methods of unsupervised learning are used to build an automated image-classification approach that takes less time both to configure and to run.
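The unsupervised K-means feature-learning direction can be sketched in the style of a single-layer pipeline: extract patches, normalize them, learn centroids with K-means, and encode each patch with a soft "triangle" activation. Everything here (patch size, number of centroids, the toy data) is invented for illustration and is not the thesis's exact configuration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
images = rng.random((10, 8, 8))  # toy 8x8 grayscale "images"

def extract_patches(imgs, p=4):
    # Non-overlapping p x p patches, flattened to vectors.
    patches = []
    for im in imgs:
        for i in range(0, im.shape[0] - p + 1, p):
            for j in range(0, im.shape[1] - p + 1, p):
                patches.append(im[i:i + p, j:j + p].ravel())
    return np.array(patches)

P = extract_patches(images)
# Per-patch brightness/contrast normalization before clustering.
P = (P - P.mean(1, keepdims=True)) / (P.std(1, keepdims=True) + 1e-8)
km = KMeans(n_clusters=8, n_init=4, random_state=0).fit(P)

def encode(patch_vecs, centroids):
    # "Triangle" soft assignment: activation = max(0, mean_dist - dist),
    # a sparse alternative to hard cluster assignment.
    d = np.linalg.norm(patch_vecs[:, None, :] - centroids[None], axis=2)
    return np.maximum(0.0, d.mean(1, keepdims=True) - d)

F = encode(P, km.cluster_centers_)  # one feature per centroid per patch
```

Pooling these per-patch activations over image regions would then yield a fixed-length feature vector for a conventional classifier.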
187

Active geometric model : multi-compartment model-based segmentation & registration

Mukherjee, Prateep 26 August 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / We present a novel variational and statistical approach for model-based segmentation. Our model generalizes the Chan-Vese model, proposed for concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, the Multi-Compartment Distance Function (mcdf). Our segmentation framework is two-fold: first, several training samples distributed across various classes are registered onto a common frame of reference; then, a variational method similar to Active Shape Models (ASMs) generates an average shape model, which is used to partition new images. The key advantages of such a framework are: (i) landmark-free automated shape training; and (ii) a strictly shape-constrained model for fitting test data. Our model can naturally deal with shapes of arbitrary dimension and topology (closed/open curves). We term our model the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: morphology estimation of 3D motor neuron compartments, and thickness estimation of Henle's fiber layer in the retina. We also compare the qualitative and quantitative performance of our method with that of several other state-of-the-art segmentation methods.
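For intuition about the Chan-Vese model that this framework generalizes, the sketch below iterates the piecewise-constant two-phase model with the curve-length regularization term omitted, so each step reduces to estimating the two region means and reassigning pixels to the nearer mean. The image and parameters are invented; this is a simplification, not the authors' method:

```python
import numpy as np

def chan_vese_two_phase(img, n_iter=20):
    # Piecewise-constant two-phase Chan-Vese energy, length term omitted:
    # alternate (a) region means c1, c2 given the partition and
    # (b) pixel reassignment: pixel -> region whose mean is closer.
    mask = img > img.mean()  # initial partition
    for _ in range(n_iter):
        c1 = img[mask].mean()
        c2 = img[~mask].mean()
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if (new_mask == mask).all():
            break  # converged
        mask = new_mask
    return mask

# Synthetic image: bright 10x10 square on a dark background plus noise.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
img += np.random.default_rng(2).normal(0.0, 0.05, img.shape)
seg = chan_vese_two_phase(img)
```

The full model adds a curve-length penalty (and, here, the mcdf shape prior), which is what keeps the boundary smooth and shape-constrained on harder images.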
188

A high resolution 3D and color image acquisition system for long and shallow impressions in crime scenes

Egoda Gamage, Ruwan Janapriya January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In crime scene investigations it is necessary to capture images of impression evidence such as tire track or shoe impressions. Currently, such evidence is captured by taking two-dimensional (2D) color photographs or by making a physical cast of the impression in order to capture its three-dimensional (3D) structure. This project aims to build a digitizing device that scans the impression evidence and generates (i) a high-resolution three-dimensional (3D) surface image, and (ii) a co-registered two-dimensional (2D) color image. The method is based on active structured lighting, which extracts 3D shape information from a surface. A prototype device was built that uses an assembly of two line laser lights and a high-definition video camera moved at a precisely controlled, constant speed along a mechanical actuator rail in order to scan the evidence. Prototype software was also developed, implementing the image processing, calibration, and surface depth calculations. The methods developed in this project for extracting the digitized 3D surface shape and 2D color images include (i) a self-contained calibration method that eliminates the need for pre-calibration of the device; (ii) the use of two colored line laser lights projected from two different angles to eliminate problems due to occlusions; and (iii) the extraction of a high-resolution color image of the impression evidence with minimal distortion. The system achieves sub-millimeter accuracy in the depth image together with a high-resolution color image registered to it, and is particularly suitable for capturing long tire track impressions without stitching multiple images.
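A minimal sketch of line-laser triangulation, under a deliberately simplified geometry (downward-looking camera, laser at a fixed angle from vertical, surface height small relative to the camera standoff). All numbers are invented and the prototype's actual calibration and depth model are more involved:

```python
import numpy as np

def depth_from_line_shift(shift_px, standoff_mm, focal_px, laser_angle_rad):
    # Simplified triangulation: a surface raised by height h shifts the
    # laser line laterally by h * tan(angle) in the world, which the
    # camera sees as shift_px = h * tan(angle) * focal_px / standoff_mm.
    # Invert that relation to recover h (in mm) per pixel column.
    shift_px = np.asarray(shift_px, dtype=float)
    return shift_px * standoff_mm / (focal_px * np.tan(laser_angle_rad))

# Flat reference (zero shift) and a 10-pixel shift under assumed optics.
h = depth_from_line_shift([0.0, 10.0], standoff_mm=300.0,
                          focal_px=1500.0, laser_angle_rad=np.pi / 4)
```

Projecting two laser lines from opposite sides, as the prototype does, fills in the columns where one line is occluded by the impression's relief.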
189

A scalable approach to processing adaptive optics optical coherence tomography data from multiple sensors using multiple graphics processing units

Kriske, Jeffery Edward, Jr. 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Adaptive optics optical coherence tomography (AO-OCT) is a non-invasive method of imaging the human retina in vivo. It can be used to visualize microscopic structures, making it incredibly useful for the early detection and diagnosis of retinal disease. The research group at Indiana University has a novel multi-camera AO-OCT system capable of 1 MHz acquisition rates. Until now, no method has existed to process data from such a system quickly and accurately enough on a CPU or a GPU, or to scale automatically and efficiently to multiple GPUs; this has been a barrier to using a MHz AO-OCT system in a clinical environment. A novel approach to processing AO-OCT data from this multi-camera optics system is tested on multiple graphics processing units (GPUs) in parallel with one-, two-, and four-camera combinations. The design and results demonstrate a scalable, reusable, extensible method of computing AO-OCT output. This approach can either achieve real-time results with an AO-OCT system running at 1 MHz acquisition rates or be scaled to a higher-accuracy mode with a fast Fourier transform of 16,384 complex values.
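The core per-A-scan OCT processing chain (background removal, spectral apodization, FFT, magnitude) can be sketched as below. This is a generic single-CPU NumPy illustration on synthetic data, not the scalable multi-GPU implementation described in the thesis:

```python
import numpy as np

def process_ascans(spectra):
    # spectra: (n_ascans, n_samples) raw spectral interferograms.
    dc = spectra.mean(axis=1, keepdims=True)      # per-spectrum DC background
    win = np.hanning(spectra.shape[1])            # spectral apodization window
    z = np.fft.fft((spectra - dc) * win, axis=1)  # spectrum -> depth profile
    return np.abs(z[:, : spectra.shape[1] // 2])  # keep positive depths only

# Synthetic interferogram: one reflector -> one fringe frequency -> one
# peak at the corresponding depth bin (here, bin 100).
n = 1024
k = np.arange(n)
spectra = 1.0 + 0.5 * np.cos(2 * np.pi * 100 * k / n)
spectra = np.tile(spectra, (4, 1))
spectra += 0.01 * np.random.default_rng(3).normal(size=(4, n))
img = process_ascans(spectra)
peak = int(img[0].argmax())
```

In the scaled-up setting, batches of such FFTs are what each GPU computes in parallel, one stream per camera.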
