
An architecture and interaction techniques for handling ambiguity in recognition-based input

Mankoff, Jennifer C. January 2001 (has links)
No description available.

A reexamination of the role of the hippocampus in object-recognition memory using neurotoxic lesions and ischemia in rats

Duva, Christopher Adam 11 1900 (has links)
Paradoxical results on object-recognition delayed nonmatching-to-sample (DNMS) tasks have been found in monkeys and rats that receive either partial, ischemia-induced hippocampal lesions or complete hippocampal ablation. Ischemia results in severe DNMS impairments, which have been attributed to circumscribed CA1 cell loss. However, ablation studies indicate that the hippocampus plays only a minimal role in the performance of the DNMS task. Two hypotheses have been proposed to account for these discrepant findings (Bachevalier & Mishkin, 1989). First, the "hippocampal interference" hypothesis posits that following ischemia, the partially damaged hippocampus may disrupt activity in extrahippocampal structures that are important for object-recognition memory. Second, previously undetected ischemia-induced extrahippocampal damage may be responsible for the DNMS impairments attributed to CA1 cell loss. To test the "hippocampal interference" hypothesis, the effects of partial NMDA-induced lesions of the dorsal hippocampus on DNMS performance were investigated in rats. These lesions damaged much of the same area, the CA1, as did ischemia, but did so without depriving the entire forebrain of oxygen, thereby reducing the possibility of extrahippocampal damage. In Experiment 1, rats were trained on the DNMS task prior to receiving an NMDA lesion. Postoperatively, these rats reacquired the nonmatching rule at a rate equivalent to controls and were unimpaired in performance at delays up to 300 s. In Experiment 2, naive rats were given NMDA lesions and then trained on DNMS. These rats acquired the DNMS rule at a rate equivalent to controls and performed normally at delays up to 300 s. These findings suggest that interference from a partially damaged hippocampus cannot account for the ischemia-induced DNMS impairments, and that they are more likely produced by extrahippocampal neuropathology. In Experiment 3, rats from the previous study were tested on the Morris water-maze.
Compared to sham-lesioned animals, rats with partial lesions of the dorsal hippocampus were impaired in the acquisition of the water-maze task. Thus, subtotal NMDA lesions of the hippocampus impaired spatial memory while leaving nonspatial memory intact. Mumby et al. (1992b) suggested that the ischemia-induced extrahippocampal damage underlying the DNMS deficits is mediated or produced by the postischemic hippocampus. To test this idea, preoperatively trained rats in Experiment 4 were subjected to cerebral ischemia followed within 1 hr by hippocampal aspiration lesions. It was hypothesized that ablation soon after ischemia would block the damage putatively produced by the postischemic hippocampus and thereby prevent the development of postoperative DNMS deficits. Unlike "ischemia-only" rats, the rats with the combined lesion were able to reacquire the nonmatching rule at a normal rate and performed normally at delays up to 300 s. Thus, hippocampectomy soon after ischemia eliminated the pathogenic process that led to ischemia-induced DNMS deficits. Experiment 5 investigated the role of ischemia-induced CA1 cell death as a factor in the production of extrahippocampal neuropathology. Naive rats were given NMDA lesions of the dorsal hippocampus followed 3 weeks later by cerebral ischemia. If the ischemia-induced CA1 neurotoxicity is responsible for producing extrahippocampal damage, then preischemic ablation should attenuate this process and prevent the development of DNMS impairments. This did not occur: rats with the combined lesion were as impaired as the "ischemia-only" rats in the acquisition of the DNMS task. This suggests that the ischemia-induced pathogenic processes that result in extrahippocampal neuropathology comprise more than CA1 neurotoxicity. The findings presented in this thesis are consistent with the idea that ischemia-induced DNMS deficits in rats are the result of extrahippocampal damage mediated or produced by the postischemic hippocampus.
The discussion focuses on three main points: 1) How might the post-ischemic hippocampus be involved in the production of extrahippocampal neuropathology? 2) In what brain region(s) might this damage be occurring? 3) What anatomical, molecular, or functional neuropathology might ischemia produce in extrahippocampal brain regions? The results are also discussed in terms of a specialized role for the hippocampus in mnemonic functions and the recently emphasized importance of the rhinal cortex in object-recognition memory.

Localization of Stroke Using Microwave Technology and Inner Product Subspace Classifier

Prabahar, Jasila January 2014 (has links)
Stroke or "brain attack" occurs when a blood clot carried through the blood vessels from another part of the body blocks a cerebral artery, or when a blood vessel breaks and interrupts blood flow to parts of the brain. Depending on which part of the brain is damaged, the functional abilities controlled by that region are lost. By interpreting the patient's symptoms, it is possible to make a coarse estimate of the location of the stroke, e.g. whether it is in the left or right hemisphere of the brain. The aim of this study was to evaluate whether microwave technology can be used to estimate the location of haemorrhagic stroke. In the first part of the thesis, CT images of the patients for whom microwave measurements were taken are analysed and used as a reference for the location of bleeding in the brain. The X, Y and Z coordinates are calculated from the target slice (where the bleeding is most prominent). Based on the bleeding coordinates, the datasets are divided into classes. Using a supervised learning method, the inner product subspace classifier (ISC) algorithm is trained to classify stroke in the left and right hemispheres, in the anterior and posterior parts of the brain, and in the inferior and superior regions of the brain. The second part of the thesis analyses the classification results in order to identify the patients that were misclassified. The classification results for the location of bleeding were promising, with high sensitivity and specificity as indicated by the area under the ROC curve (AUC). An AUC of 0.86 was obtained for bleedings in the left and right brain, and an AUC of 0.94 for bleedings in the inferior and superior brain. The main constraints were the small size of the dataset and the scarcity of cases with bleeding in the front of the brain, which led to an imbalance between classes.
After analysis, it was found that bleedings close to the skull and a few small bleedings deep inside the brain were misclassified. Many factors may be responsible for misclassification, such as antenna position, head size, and amount of hair. The overall results indicate that SDD using the ISC algorithm has high potential to distinguish bleedings in different locations. It is expected that the results will become more stable with a larger patient dataset for training.
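The abstract does not detail the ISC algorithm itself; the following is a minimal sketch of one common formulation of an inner product subspace classifier, assuming each class subspace is fit by SVD of its training samples (function names and the rank parameter `k` are illustrative, not the thesis's):

```python
import numpy as np

def fit_subspaces(class_samples, k=2):
    """Fit a rank-k orthonormal basis per class via SVD of the
    (features x samples) training matrix."""
    bases = {}
    for label, X in class_samples.items():
        U, _, _ = np.linalg.svd(np.asarray(X, dtype=float).T,
                                full_matrices=False)
        bases[label] = U[:, :k]  # columns span the class subspace
    return bases

def classify(x, bases):
    """Assign x to the class whose subspace captures the most of its
    energy, i.e. the largest norm of the inner products with the basis."""
    x = np.asarray(x, dtype=float)
    scores = {label: np.linalg.norm(U.T @ x) for label, U in bases.items()}
    return max(scores, key=scores.get)
```

In the thesis setting the feature vectors would come from the microwave measurements and the class labels from the CT-derived bleeding coordinates; here the classifier is demonstrated on synthetic vectors only.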

Pen-Chant: Acoustic Emissions of Handwriting and Drawing

Seniuk, Andrew G. 27 September 2009 (has links)
The sounds generated by a writing instrument ("pen-chant") provide a rich and under-utilized source of information for pattern recognition. We examine the feasibility of recognition of handwritten cursive text, exclusively through an analysis of acoustic emissions. We design and implement a family of recognizers using a template matching approach, with templates and similarity measures derived variously from: smoothed amplitude signal with fixed resolution, discrete sequence of magnitudes obtained from peaks in the smoothed amplitude signal, and ordered tree obtained from a scale space signal representation. Test results are presented for recognition of isolated lowercase cursive characters and for whole words. We also present qualitative results for recognizing gestures such as circling, scratch-out, check-marks, and hatching. Our first set of results, using samples provided by the author, yield recognition rates of over 70% (alphabet) and 90% (26 words), with a confidence of 8%, based solely on acoustic emissions. Our second set of results uses data gathered from nine writers. These results demonstrate that acoustic emissions are a rich source of information, usable - on their own or in conjunction with image-based features - to solve pattern recognition problems. In future work, this approach can be applied to writer identification, handwriting and gesture-based computer input technology, emotion recognition, and temporal analysis of sketches. / Thesis (Master, Computing) -- Queen's University, 2009-09-27 08:56:53.895
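The simplest of the recognizers described above, the fixed-resolution smoothed-amplitude variant, can be sketched as follows; the smoothing window, resampling resolution, and normalized-correlation similarity are illustrative choices, not the thesis's exact parameters:

```python
import numpy as np

def amplitude_envelope(signal, win=256):
    """Rectify and smooth the raw audio into a coarse amplitude envelope."""
    mag = np.abs(np.asarray(signal, dtype=float))
    kernel = np.ones(win) / win
    return np.convolve(mag, kernel, mode="same")

def resample(env, n=64):
    """Fix the resolution so envelopes of different lengths are comparable."""
    idx = np.linspace(0, len(env) - 1, n)
    return np.interp(idx, np.arange(len(env)), env)

def match(signal, templates, n=64):
    """Return the template label with the highest normalized correlation
    between fixed-resolution amplitude envelopes."""
    q = resample(amplitude_envelope(signal), n)
    q = (q - q.mean()) / (q.std() + 1e-12)
    best, best_score = None, -np.inf
    for label, t in templates.items():
        r = resample(amplitude_envelope(t), n)
        r = (r - r.mean()) / (r.std() + 1e-12)
        score = float(np.dot(q, r)) / n
        if score > best_score:
            best, best_score = label, score
    return best
```

The z-normalization makes matching invariant to recording gain, which matters when templates and queries come from different pens or microphones.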

Feature extraction and evaluation for cervical cell recognition

Cahn, Robert L. January 1977 (has links)
No description available.

Combined top-down and bottom-up algorithms for using context in text recognition

Bouchard, Diana C. January 1979 (has links)
No description available.

Fuzzy Clustering Analysis

Karim, Ehsanul, Madani, Sri Phani Venkata Siva Krishna, Yun, Feng January 2010 (has links)
The objective of this thesis is to discuss the use of fuzzy logic in pattern recognition. There are different fuzzy approaches to recognizing patterns and structure in data, and the fuzzy approach chosen to process the data depends entirely on the type of data. Pattern recognition, as we know, involves various mathematical transforms that render the pattern or structure with desired properties, such as the identification of a probabilistic model that explains the process generating the data. With this basic school of thought, we plunge into the world of fuzzy logic for the process of pattern recognition. Fuzzy logic, like any other mathematical field, has its own set of principles, types, representations, and uses. Hence our job primarily focuses on exploring the ways in which fuzzy logic is applied to pattern recognition and on interpreting the results. Pattern recognition here is the collection of all approaches that understand, represent, and process data as segments and features by using fuzzy sets. The representation and processing depend on the selected fuzzy technique and on the problem to be solved. In the broadest sense, pattern recognition is any form of information processing for which both the input and output are different kinds of data: medical records, aerial photos, market trends, library catalogs, galactic positions, fingerprints, psychological profiles, cash flows, chemical constituents, demographic features, stock options, military decisions. Most pattern recognition techniques involve treating the data as a variable and applying standard processing techniques to it.
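As a concrete instance of a fuzzy approach to finding structure in data, the standard fuzzy c-means algorithm (a classic method, not necessarily the exact variant used in this thesis) assigns every point a graded membership in each cluster rather than a hard label:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means clustering. m > 1 controls fuzziness: memberships
    are graded degrees in [0, 1] that sum to 1 across clusters."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        # centers are membership-weighted means of the data
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # update memberships from inverse distances to the centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Points near a cluster center get membership near 1 in that cluster; points between clusters get intermediate memberships, which is the key difference from hard (crisp) clustering.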

Artificial training samples for the improvement of pattern recognition systems

Ni, Zhibo., 倪志博. January 2012 (has links)
Pattern recognition is the assignment of some sort of label to a given input value or instance, according to some specific learning algorithm. Recognition performance is directly linked to the quality and size of the training data. However, in many real pattern recognition applications, it is difficult or inconvenient to collect enough samples to train the classifier, as in face recognition or Chinese character recognition. In view of the shortage of training samples, the main objective of our research is to investigate the generation and use of artificial samples for improving recognition performance. Besides enhancing learning, artificial samples are also used in a novel way such that a conventional Chinese character recognizer can read half or combined Chinese character segments. This greatly simplifies the segmentation procedure as well as reducing the error introduced by segmentation. Two novel generation models have been developed to evaluate the effectiveness of supplementing the training with artificial samples. One model generates artificial faces with various facial expressions or lighting conditions by morphing and warping two given sample faces. We tested our face generation model on three popular 2D face databases, which contain both gray scale and color images. Experiments show the generated faces look quite natural and improve the recognition rates by a large margin. The other model uses stroke and radical information to build new Chinese characters. Artificial Chinese characters are produced by Bezier curves passing through some specified points. This model is more flexible in generating artificial handwritten characters than merely distorting genuine samples, offering both stroke-level and radical-level variations. Another feature of this character generation model is that it does not require any real handwritten character sample at hand.
In other words, we can train the conventional character classifier and perform character recognition tasks without collecting handwritten samples. Experimental results have validated its feasibility, and the recognition rate is acceptable. Besides tackling the small sample size problem in face recognition and isolated character recognition, we improve the performance of a bank check legal amount recognizer by proposing character segment recognition and applying Hidden Markov Models (HMMs). It is hoped that this thesis can provide some insights for future research in artificial sample generation, face morphing, Chinese character segmentation, text recognition, and other related issues. / published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
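The character generation idea can be illustrated with a small sketch: a stroke as a cubic Bezier curve through specified control points, with random perturbation of those points standing in for handwriting variation (the parameter values here are illustrative, not the thesis's):

```python
import numpy as np

def bezier(points, n=50):
    """Evaluate a cubic Bezier stroke from four control points,
    returning n sampled (x, y) positions along the curve."""
    p = np.asarray(points, dtype=float)        # shape (4, 2)
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p[0] + 3 * (1 - t) ** 2 * t * p[1]
            + 3 * (1 - t) * t ** 2 * p[2] + t ** 3 * p[3])

def jitter_stroke(points, scale=0.05, seed=None):
    """Perturb the control points with Gaussian noise to synthesize an
    artificial handwriting-like variant of the same stroke."""
    rng = np.random.default_rng(seed)
    p = np.asarray(points, dtype=float)
    return p + rng.normal(0.0, scale, p.shape)
```

Rendering many jittered variants of each stroke, and composing strokes into radicals and radicals into characters, would yield an unlimited supply of synthetic training samples without any real handwriting.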

Unfamiliar facial identity registration and recognition performance enhancement

Adam, Mohamad Z. January 2013 (has links)
The work in this thesis aims at studying the problems related to the robustness of a face recognition system, with specific attention given to the issues of handling image variation complexity and the inherently limited Unique Characteristic Information (UCI) within the scope of an unfamiliar identity recognition environment. These issues are the main themes in developing extraction and classification strategies, and are carried out as two interdependent but related blocks of research work. Naturally, the complexity of the image variation problem is built up from factors including viewing geometry, illumination, occlusion, and other kinds of intrinsic and extrinsic image variation. Ideally, recognition performance increases whenever the variation is reduced and/or the UCI is increased. However, variation reduction on 2D facial images may result in the loss of important clues or UCI data for a particular face; alternatively, increasing the UCI may also increase the image variation. To reduce the loss of information while reducing or compensating for the variation complexity, a hybrid technique is proposed in this thesis. The technique is derived from three conventional approaches to the variation compensation and feature extraction tasks. In this first research block, transformation, modelling, and compensation approaches are combined to deal with the variation complexity. The ultimate aim of this combination is to represent (transformation) the UCI without losing the important features, by modelling, and to discard (compensation) and reduce the level of variation complexity of a given face image. Experimental results have shown that discarding certain obvious variations enhances the desired information rather than losing the UCI of interest. The modelling and compensation stages benefit both variation reduction and UCI enhancement.
Colour, gray-level, and edge image information are used to manipulate the UCI, involving analysis of skin colour, facial texture, and feature measurements respectively. The Derivative Linear Binary Transformation (DLBT) technique is proposed for feature measurement consistency. Prior knowledge of the input image, with its symmetrical properties, informative regions, and the consistency of some features, is fully utilized in preserving the UCI feature information. As a result, the similarity and dissimilarity representations for identity parameters or classes are obtained from the selected UCI representation, which involves derivative feature size and distance measurements, facial texture, and skin colour. These are mainly used to accommodate the strategy of unfamiliar identity classification in the second block of the research work. Since all faces share a similar structure, a classification technique should increase the similarities within a class while increasing the dissimilarity between classes. Furthermore, a smaller class results in less burden on the identification or recognition processes. The collateral classification strategy of identity representation introduced in this thesis manipulates the availability of collateral UCI for classifying the identity parameters of regional appearance, gender, and age classes. In this regard, the registration of collateral UCIs has been designed to collect more identity information. As a result, the performance of unfamiliar identity recognition is upgraded with respect to the special UCI for class recognition, and possibly with a smaller class size. The experiment was done using data from our developed database and an open database comprising three different regional appearances, two different age groups, and two different genders, incorporating pose and illumination image variations.

Image segmentation on the basis of texture and depth

Booth, David M. January 1991 (has links)
No description available.
