11 |
Image processing and the histology of the human lung. Reisch, Michael Lawrence. January 1969 (has links)
No description available.
|
12 |
A computer-based environment for compression experiments with code sounds from the Lexiphone. Martin, Willis Pittman. January 1969 (has links)
The Lexiphone is a reading machine for the blind which makes an optical to auditory transformation from the printed character to a sound code. This thesis is the development of a computer-based environment for studying the code.
Fluctuations in the code signals generated by repeated scanning of the same ink pattern were studied using a Fourier analysis routine. From the Fourier coefficients representing these code signals it was established that the error in mean pitch of the code sound produced for the letter "s" is less than 1%. This error is typical for the alphabet and does not cause the blind user difficulty.
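The pitch-stability measurement described above can be sketched as follows. Since the Lexiphone code signals themselves are not available here, a synthetic tone with a small scan-to-scan frequency drift stands in for repeated scans of the same ink pattern; the drift values and sample rate are illustrative assumptions.

```python
import numpy as np

def mean_pitch(signal, rate):
    """Estimate the dominant pitch of a code signal from its Fourier spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    return freqs[np.argmax(spectrum[1:]) + 1]        # skip the DC term

# Stand-in for repeated scans of one ink pattern: a 440 Hz tone whose
# frequency drifts slightly from scan to scan (hypothetical drift values).
rate = 8000
t = np.arange(0, 1.0, 1.0 / rate)                    # 1 s window -> 1 Hz resolution
drifts = [1.0, 1.001, 0.999, 1.002, 0.998]           # ~0.1-0.2% scan-to-scan variation
pitches = [mean_pitch(np.sin(2 * np.pi * 440 * d * t), rate) for d in drifts]

error = np.ptp(pitches) / np.mean(pitches)           # relative spread in mean pitch
print(f"relative error in mean pitch: {error:.2%}")  # well under 1%
```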
The method of compressing the code signals is explained and studied with the aid of a Hadamard transform routine. This transform permits ready comparison of compressed and uncompressed code signals.
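A minimal sketch of such a Hadamard transform routine, under the assumption that compression discards the smallest-magnitude transform coefficients (the thesis's actual compression rule is not specified in this abstract):

```python
import numpy as np

def hadamard(n):
    """Build an n x n Hadamard matrix by Sylvester's construction
    (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def compress(signal, keep):
    """Keep only the `keep` largest-magnitude Hadamard coefficients
    and reconstruct the signal from them."""
    n = len(signal)
    H = hadamard(n)
    coeffs = H @ signal / n                          # forward transform
    idx = np.argsort(np.abs(coeffs))[:-keep]         # indices of the smallest coefficients
    coeffs[idx] = 0.0
    return H.T @ coeffs                              # inverse transform (H H^T = n I)

signal = np.sin(2 * np.pi * np.arange(64) / 16)      # a toy "code sound" segment
reconstructed = compress(signal, keep=16)            # compressed version
print(np.allclose(compress(signal, keep=64), signal))  # True: keeping all 64 is lossless
```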
The results of direct comparisons between uncompressed and compressed code are disappointing: the two presentations seem approximately equivalent. The reading rate in words per minute for a blind subject trained to read with the uncompressed code was not improved by the compressed code. A previous worker had found that the compressed code for letters was better discriminated and easier to learn. In another experiment reported in the thesis, six sighted subjects were used: three were taught eight four-letter words presented in uncompressed code and the other three were taught the same words in a compressed version of the code. The learning curves for the two groups were approximately the same. Experimental time for subject testing was less than that used by the previous worker, and suggestions are made for further experiments
which may elucidate the problem of reading compressed code. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
|
13 |
Design and construction of an opaque optical contour tracer for character recognition research. Austin, George Marshall. January 1967 (has links)
This thesis describes the design and instrumentation of an opaque contour-tracing scanner for studies in optical character recognition (OCR).
Most previous OCR machines have attempted to recognize characters by mask matching, a technique which requires a large and expensive computer, and which is sensitive to small changes in type font. Contour tracing is a promising new approach to OCR. In contour tracing, the outside of the character is followed, and the resulting horizontal and vertical co-ordinates, X(t) and Y(t), of the scanning spot are processed for recognition. Although much additional research is required on both scanner design and processing algorithms, it is expected that an OCR device which uses a contour-tracing scanner will be significantly less expensive than existing multifont recognition machines.
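The contour-tracing idea can be sketched in software, although the thesis implements it in scanner hardware. The following is a Moore-neighbour boundary trace over a binary image; the stopping rule (return to the start pixel) is a simplification that assumes a single, simply connected character.

```python
import numpy as np

def trace_contour(img):
    """Follow the outside of a dark region clockwise, returning the
    scanning-spot coordinates X(t) and Y(t) as lists."""
    h, w = img.shape
    ys, xs = np.nonzero(img)
    start = (int(xs[0]), int(ys[0]))      # topmost-leftmost dark pixel
    # 8-neighbour offsets, clockwise on screen, starting due east
    dirs = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
    x_t, y_t = [start[0]], [start[1]]
    cur, d = start, 0
    while True:
        for i in range(8):
            k = (d + 6 + i) % 8           # resume the clockwise scan just past the backtrack
            nx, ny = cur[0] + dirs[k][0], cur[1] + dirs[k][1]
            if 0 <= nx < w and 0 <= ny < h and img[ny, nx]:
                cur, d = (nx, ny), k
                break
        if cur == start:                  # back where we began: contour closed
            return x_t, y_t
        x_t.append(cur[0])
        y_t.append(cur[1])

img = np.zeros((4, 4), dtype=bool)
img[1:3, 1:3] = True                      # a 2x2 "character"
x_t, y_t = trace_contour(img)
print(list(zip(x_t, y_t)))                # clockwise: (1,1) (2,1) (2,2) (1,2)
```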
In this thesis, four possible contour-tracing scanners are proposed and evaluated on the basis of cost, complexity and availability of components. The design that was chosen for construction used an X-Y oscilloscope and a photomultiplier as a flying-spot scanner. In instrumenting this design, a digital-to-analogue converter, an up-down counter and many other special purpose logic circuits were designed and constructed.
The scanner successfully contour-traced Letraset characters, typewritten characters and handprinted characters. At the machine's maximum speed, a character is completely traced in approximately 10 msec. Photographs of contour traces and the X(t) and Y(t) waveforms are included in the thesis.
Although the present system will only trace two adjacent characters, proposed modifications to the system would enable an entire line of characters to be contour-traced.
Included in the thesis are recommendations for further research on scanner design. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
|
14 |
An experimental system for recognizing typewritten characters. Fletcher, Thomas Ralph. January 1970 (has links)
An optical character recognition system was developed using the Lexiphone, a direct-translation reading machine for the blind, interfaced to a digital computer. The system was designed to read alphanumeric typewritten characters. In the development stages one serifed type-style was used exclusively; three further serifed type-styles were also used.
Classification was performed by a method referred to as "component deletion". This method extends a nearest-neighbours classification scheme so that feature vectors differing in length by one two-bit segment can be compared, by deleting a segment to make their dimensionality the same. The method also handles vectors of equal length. The component deletion technique gave results similar to exact matching yet required significantly less training.
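A sketch of how such a component-deletion distance might work, under the assumption that the two-bit segments sit at fixed even offsets; this illustrates the idea described in the abstract, not the thesis's exact algorithm.

```python
def distance(a, b):
    """Hamming distance between equal-length feature vectors, extended so a
    vector longer by one two-bit segment can be matched by deleting each
    candidate segment in turn and keeping the best alignment."""
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b))
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    if len(longer) - len(shorter) != 2:          # segments are two bits long
        return float("inf")
    best = float("inf")
    for i in range(0, len(longer) - 1, 2):       # assume segments sit at even offsets
        trial = longer[:i] + longer[i + 2:]      # delete one two-bit segment
        best = min(best, sum(x != y for x, y in zip(trial, shorter)))
    return best

def classify(unknown, prototypes):
    """Nearest-neighbour classification over labelled prototype vectors."""
    return min(prototypes, key=lambda label: distance(unknown, prototypes[label]))

protos = {"a": [0, 1, 1, 0, 1, 0], "b": [1, 1, 0, 0, 1, 1]}
print(classify([0, 1, 1, 0, 1, 0, 0, 1], protos))  # one segment longer, still matches "a"
```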
Using carbon ribbon print, recognition was better than 92% for the four type-styles combined. The use of cloth ribbon for the type-style used in the development lowered recognition by a few percent. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
|
16 |
An evaluation of the perceptron theory. Lumb, Dale Raymond, 1936-. January 1959 (has links)
Call number: LD2668 .T4 1959 L85
|
17 |
Initialising neural networks with prior knowledge. Rountree, Nathan. January 2007 (has links)
This thesis explores the relationship between two classification models: decision trees and multilayer perceptrons.
Decision trees carve up databases into box-shaped regions, and make predictions based on the majority class in each box. They are quick to build and relatively easy to interpret. Multilayer perceptrons (MLPs) are often more accurate than decision trees, because they are able to use soft, curved, arbitrarily oriented decision boundaries. Unfortunately MLPs typically require a great deal of effort to determine a good number and arrangement of neural units, and then require many passes through the database to determine a good set of connection weights. The cost of creating and training an MLP is thus hundreds of times greater than the cost of creating a decision tree, for perhaps only a small gain in accuracy.
The following scheme is proposed for reducing the computational cost of creating and training MLPs. First, build and prune a decision tree to generate prior knowledge of the database. Then, use that knowledge to determine the initial architecture and connection weights of an MLP. Finally, use a training algorithm to refine the knowledge now embedded in the MLP. This scheme has two potential advantages: a suitable neural network architecture is determined very quickly, and training should require far fewer passes through the data.
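The scheme can be illustrated with a small sketch. The encoding below, with one hidden unit per tree split soft-thresholding its feature and one unit per leaf ANDing the split outcomes, is a hypothetical two-hidden-layer simplification; the thesis's algorithms produce four-layer networks from a traversal of the pruned tree, and the tree, sharpness constant, and leaf encoding here are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy decision tree: split i asks "is x[feature] > threshold?", and each
# leaf is addressed by the tuple of split outcomes above it.
splits = [(0, 0.5), (1, 0.3)]
leaves = {(1, 1): 1, (1, 0): 0, (0, 1): 0, (0, 0): 1}   # outcomes -> class

def init_mlp(splits, leaves, sharpness=20.0):
    """Initialise MLP weights from the tree: layer 1 soft-thresholds each
    split; layer 2 has one unit per leaf, ANDing the split outcomes; the
    output weights sum the leaves labelled class 1."""
    n_features = max(f for f, _ in splits) + 1
    W1 = np.zeros((len(splits), n_features))
    b1 = np.zeros(len(splits))
    for i, (f, t) in enumerate(splits):
        W1[i, f] = sharpness                  # steep sigmoid ~ step at the threshold
        b1[i] = -sharpness * t
    W2 = np.zeros((len(leaves), len(splits)))
    b2 = np.zeros(len(leaves))
    w3 = np.zeros(len(leaves))
    for j, (pattern, cls) in enumerate(leaves.items()):
        for i, bit in enumerate(pattern):
            W2[j, i] = sharpness if bit else -sharpness
        b2[j] = -sharpness * (sum(pattern) - 0.5)   # fires only if every bit matches
        w3[j] = 1.0 if cls else 0.0
    return W1, b1, W2, b2, w3

def predict(x, W1, b1, W2, b2, w3):
    h1 = sigmoid(W1 @ x + b1)                 # soft split outcomes
    h2 = sigmoid(W2 @ h1 + b2)                # soft leaf membership
    return int(w3 @ h2 > 0.5)

params = init_mlp(splits, leaves)
print(predict(np.array([0.9, 0.9]), *params))  # both splits true -> leaf (1,1) -> 1
```

Because every weight is a differentiable function of the tree, ordinary backpropagation can then refine the embedded knowledge rather than starting from random weights.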
In this thesis, new algorithms for initialising MLPs from decision trees are developed. The algorithms require just one traversal of a decision tree, and produce four-layer MLPs with the same number of hidden units as there are nodes in the tree. The resulting MLPs can be shown to reach a state more accurate than the decision trees that initialised them, in fewer training epochs than a standard MLP. Employing this approach typically results in MLPs that are just as accurate as standard MLPs, and an order of magnitude cheaper to train.
|
18 |
Tone classification of syllable-segmented Thai speech based on multilayer perceptron. Satravaha, Nuttavudh. January 2002 (has links)
Thesis (Ph. D.)--West Virginia University, 2002. / Title from document title page. Document formatted into pages; contains v, 130 p. : ill. (some col.). Vita. Includes abstract. Includes bibliographical references (p. 107-118).
|
19 |
Soft decision decoding of block codes using multilayer perceptrons. Bartz, Michael. 08 1900 (has links)
No description available.
|
20 |
A digital method for generating a reference point in a fingerprint. Karasik, Richard Paul. January 1969 (has links)
No description available.
|