1

Magnetic stripe reader used to collect computer laboratory statistics

Ramesh, Maganti V. January 1990 (has links)
This thesis is concerned with interfacing a magnetic stripe reader to an AT&T PC 6300 equipped with a 20 MB hard disk and with collecting laboratory usage statistics. Laboratory usage statistics include the name and social security number of the student, along with other necessary details. This system replaces all manual modes of entering data, checks for typographical errors, renames the file containing a particular day's data to a file whose name is the current day's date, and keeps track of the number of students for a particular day. This procedure helps ensure the security of laboratory equipment and can be adapted for each computer laboratory on campus. The program results indicate faster data entry, favorable student response, and an increase in the accuracy of the data recorded. / Department of Computer Science
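For illustration only, a minimal Python sketch of the record-keeping behaviour the abstract describes: validating a swiped record, appending it to a file named after the current date, and keeping a per-day count. The field layout, delimiter, ID check and file-naming scheme are assumptions, not the thesis's actual design.

```python
import re
from datetime import date

def log_swipe(raw_record: str, directory: str = ".") -> int:
    """Append one validated swipe record to today's log file; return today's count."""
    # Assumed record layout: "NAME;IDNUMBER" as read from the stripe.
    name, ssn = (field.strip() for field in raw_record.split(";", 1))
    if not re.fullmatch(r"\d{9}", ssn):
        raise ValueError("malformed ID field in swiped record")
    # File named after the current day's date, as the abstract describes.
    logfile = f"{directory}/{date.today().isoformat()}.log"
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(f"{name},{ssn}\n")
    # Running count of students recorded so far today.
    with open(logfile, encoding="utf-8") as fh:
        return sum(1 for _ in fh)

# Example: log_swipe("DOE JOHN;123456789") returns 1 on the first swipe of the day.
```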
2

Separation and recognition of connected handprinted capital English characters

Ting, Voon-Cheung Roger January 1986 (has links)
The subject of machine recognition of connected characters is investigated. A generic single character recognizer (SCR) assumes there is only one character in the image. The goal of this project is to design a connected character segmentation algorithm (CCSA) without the above assumption. The newly designed CCSA will make use of a readily available SCR. The input image (e.g. a word with touching letters) is first transformed (thinned) into its skeletal form. The CCSA will then extract the image features (nodes and branches) and store them in a hierarchical form. The hierarchy stems from the left-to-right writing order of the English language. The CCSA will first attempt to recognize the first letter. When this is done, the first letter is deleted and the algorithm repeats. After extracting the image features, the CCSA starts to create a set of test images from the beginning of the word (i.e. beginning of the description). Each test image contains one more feature than its predecessor. The number of test images in the set is constrained by a predetermined fixed width or a fixed total number of features. The SCR is then called to examine each test image. The recognizable test image(s) in the set are extracted. Let each recognizable test image be denoted by C₁. For each C₁, a string of letters C₂, C₃, ..., CL is formed. C₂ is the best recognized test image in a set of test images created after the deletion of C₁ from the beginning of the current word. C₃ through CL are created by the same method. All such strings are examined to determine which string contains the best recognized C₁. Experimental results on test images with two characters yield a recognition rate of 72.66%. Examples with more than two characters are also shown. Furthermore, the experimental results suggest that topologically simple test images can be more difficult to recognize than those which are topologically more complex. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
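To make the segmentation loop concrete, here is a minimal sketch of its greedy core, assuming a single-character recognizer with the signature `scr(image) -> (label, score)` and a left-to-right ordered feature list already extracted from the thinned word image. These names and the scoring interface are illustrative assumptions; the thesis's full method scores whole candidate strings C₁...CL rather than committing to the single best first letter, so this is a simplification.

```python
from typing import Callable, List, Tuple

def segment_word(features: List[object],
                 render: Callable[[List[object]], object],
                 scr: Callable[[object], Tuple[str, float]],
                 max_width: int = 6) -> List[str]:
    """Greedily recognize letters from the front of a connected word."""
    letters = []
    remaining = list(features)
    while remaining:
        # Build test images containing 1..max_width leading features,
        # each one feature larger than its predecessor.
        candidates = []
        for k in range(1, min(max_width, len(remaining)) + 1):
            label, score = scr(render(remaining[:k]))
            candidates.append((score, k, label))
        score, k, label = max(candidates)   # best-recognized test image
        letters.append(label)
        remaining = remaining[k:]           # delete the recognized letter and repeat
    return letters
```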
3

The Design of Microcomputer-Based Sound Synthesis Hardware

Hamilton, Richard L. 05 1900 (has links)
Microcomputer-based music synthesis hardware is being developed at North Texas State University (NTSU). The work described in this paper continues this effort to develop hardware designs for inexpensive, but good quality, sound synthesizers. In order to pursue their activities, researchers in computer assisted instruction in music theory, psychoacoustics, and music composition need quality sound sources. The ultimate goal of my research is to develop good quality sound synthesis hardware which can fill these needs economically. This paper explores three topics: 1) how a computer makes music--a short nontechnical description; 2) what has been done previously--a review of the literature; and 3) what factors bear on the quality of microcomputer-based systems, including encoding of musical passages, software development, and hardware design. These topics lead to the discussion of a particular sound synthesizer which the author has designed.
4

Integrated design of the mechanism shape and controller for a displacement-magnifying positioning control mechanism for magnetic recording evaluation equipment

ANDO, Hiroki (安藤 大樹), SAKAI, Takeshi (酒井 猛), OBINATA, Goro (大日方 五郎) 07 1900 (has links)
No description available.
5

Word level training of handwritten word recognition systems

Chen, Wen-Tsong. January 2000 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2000. / Typescript. Vita. Includes bibliographical references (leaves 96-109). Also available on the Internet.
6

More than words: text-to-speech technology as a matter of self-efficacy, self-advocacy, and choice

Parr, Michelann. January 1900 (has links)
Thesis (Ph.D.). / Written for the Dept. of Integrated Studies in Education. Title from title page of PDF (viewed 2009/03/30). Includes bibliographical references.
7

Colour image segmentation using perceptual colour difference saliency algorithm

Bukola, Taiwo Tunmike 23 August 2017 (has links)
Submitted in fulfillment of the requirements for the Master's Degree in Information and Communication Technology, Durban University of Technology, Durban, South Africa, 2017. / Colour image segmentation has been, and remains, an active research topic in areas such as computer vision and image processing because of its wide range of practical applications. This has led to the development of numerous colour image segmentation algorithms for extracting salient objects from colour images. However, because of the diverse imaging conditions in varying application domains, the accuracy and robustness of several state-of-the-art colour image segmentation algorithms still leave room for improvement. This dissertation reports on the development of a new image segmentation algorithm based on perceptual colour difference saliency along with binary morphological operations. The algorithm consists of four essential processing stages: colour image transformation, luminance image enhancement, salient pixel computation and image artefact filtering. The input RGB colour image is first transformed into the CIE L*a*b* colour space to achieve perceptual saliency and obtain the best possible calibration of the transformation model. The luminance channel of the transformed colour image is then enhanced using an adaptive gamma correction function to alleviate the adverse effects of illumination variation and low contrast and to improve image quality. The salient objects in the input colour image are then determined by calculating saliency at each pixel in order to preserve spatial information. The computed saliency map is finally filtered using binary morphological operations to eliminate undesired artefacts that are likely to be present in the colour image. A series of experiments was performed to evaluate the effectiveness of the new perceptual colour difference saliency algorithm for colour image segmentation. This was accomplished by testing the algorithm on a set of one hundred and ninety images drawn from four distinct publicly available benchmark corpora. The accuracy of the developed colour image segmentation algorithm was quantified using four widely used statistical evaluation metrics: precision, F-measure, error and Dice. Promising results were obtained despite the experimental images being selected from four different corpora and under varying imaging conditions. The results demonstrate that the newly developed colour image segmentation algorithm performs consistently and compares favourably with a number of other saliency-based and non-saliency state-of-the-art image segmentation algorithms. / M
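To make the four stages concrete, here is a rough Python/OpenCV sketch of a pipeline of this general shape. The colour-difference-from-mean saliency measure, the fixed gamma value, the Otsu threshold and the 5×5 structuring element are illustrative assumptions, not the dissertation's actual formulas or parameters.

```python
import cv2
import numpy as np

def segment(bgr: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    # 1. Colour image transformation: BGR -> CIE L*a*b* for perceptual uniformity.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, a, b = cv2.split(lab)
    # 2. Luminance image enhancement: simple gamma correction on the L channel.
    L = 255.0 * (L / 255.0) ** gamma
    lab = cv2.merge([L, a, b])
    # 3. Salient pixel computation: Euclidean colour difference of each pixel
    #    from the image's mean L*a*b* colour, rescaled to [0, 255].
    mean = lab.reshape(-1, 3).mean(axis=0)
    saliency = np.linalg.norm(lab - mean, axis=2)
    saliency = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # 4. Image artefact filtering: threshold, then clean up with binary morphology.
    _, mask = cv2.threshold(saliency, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask  # binary segmentation mask (0 / 255)
```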
8

Design of a realtime high speed recognizer for unconstrained handprinted alphanumeric characters

Wong, Ing Hoo January 1985 (has links)
This thesis presents the design of a recognizer for unconstrained handprinted alphanumeric characters. The design is based on a thinning process that is capable of producing thinned images with well defined features that are considered essential for character image description and recognition. By choosing the topological points of the thinned ('line') character image as these desired features, the thinning process achieves not only a high degree of data reduction but also transforms a binary image into a discrete form of line drawing that can be represented by graphs. As a result, powerful graphical analysis techniques can be applied to analyze and classify the image. The image classification is performed in two stages. Firstly, a technique for identifying the topological points in the thinned image is developed. These topological points represent the global features of the image and, because of their invariance to elastic deformations, they are used for image preclassification. Preclassification results in a substantial reduction in the entropy of the input image. The subsequent process can concentrate only on the differentiation of images that are topologically equivalent. In the preclassifier, simple logic operations localized to the immediate neighbourhood of each pixel are used. These operations are also highly independent and easy to implement using VLSI. A graphical technique for image extraction and representation called the chain coded digraph representation is introduced. The technique uses global features such as nodes and the Freeman's chain codes for digital curves as branches. The chain coded digraph contains all the information that is present in the thinned image. This avoids using the image feature extraction approach for image description and data reduction (a difficult process to optimize) without sacrificing speed or complexity. After preclassification, a second stage of the recognition process analyses the chain coded digraph using the concept of attributed relational graph (ARG). ARG representation of the image can be obtained readily through simple transformations or rewriting rules from the chain coded digraph. The ARG representation of an image describes the shape primitives in the image and their relationships. Final classification of the input image can be made by comparing its ARG with the ARGs of known characters. The final classification involves only the comparison of ARGs of a predetermined topology. This information is crucial to the design of a matching algorithm called the reference guided inexact matching procedure, designed for high speed matching of character image ARGs. This graph matching procedure is shown to be much faster than other conventional graph matching procedures. The designed recognizer is implemented in Pascal on the PDP11/23 and VAX 11/750 computers. Tests using Munson's data show a high recognition rate of 91.46%. However, the recognizer is designed with the aim of an eventual implementation using VLSI and also as a basic recognizer for further research in reading machines. Therefore its full potential is yet to be realized. Nevertheless, the experiments with Munson's data illustrate the effectiveness of the design approach and the advantages it offers as a basic system for future research. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
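As an illustration of the preclassification idea (simple logic operations on each pixel's immediate neighbourhood), the following generic sketch labels end points and branch points of a thinned binary image by counting 8-connected neighbours. It is a standard formulation offered for clarity, not the thesis's exact operator set.

```python
import numpy as np

def topological_points(skeleton: np.ndarray):
    """Return (end_points, branch_points) boolean masks for a 0/1 skeleton image."""
    s = (skeleton > 0).astype(np.uint8)
    padded = np.pad(s, 1)
    # Count the 8-connected neighbours of every pixel with shifted sums.
    neighbours = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy, 1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    # A skeleton pixel with exactly one neighbour is an end point;
    # three or more neighbours mark a branch (junction) point.
    end_points = (s == 1) & (neighbours == 1)
    branch_points = (s == 1) & (neighbours >= 3)
    return end_points, branch_points
```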
9

Development of experiments for the digital signal processing teaching laboratory

Jen, Kwang-Suz 13 October 2010 (has links)
Digital Signal Processing (DSP) is a technology-driven field that developed in the mid-1960s, when computers and other digital circuitry became fast enough to process large amounts of data efficiently. Since then, techniques and applications of DSP have been expanding at a tremendous rate. With the development of large-scale integration, the cost and size of digital components are decreasing while their speed is increasing, so the range of applications of DSP techniques continues to grow. Almost all current discussions of speech bandwidth compression systems are directed toward digital implementation, because these are now the most practical. The importance of DSP appears to be increasing with no visible signs of saturation. This thesis provides the description and results of designing laboratory experiments that illustrate basic theory in the field of DSP. All experiments are written for the Texas Instruments TMS32010 digital signal processing microcomputer and are based on software provided by Atlanta Signal Processors, Inc. (ASPI). The use of the 320/pc Algorithm Development Package (ADP) and Digital Filter Design Package (DFDP) developed by ASPI is introduced. Basic concepts such as linear convolution, Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filter design, and the Fast Fourier Transform (FFT) are demonstrated. The IBM PC AT is interfaced with the TMS32010 processor. The experiments and their introductions also serve as a manual for the DSP Laboratory, complementing the introductory signal processing course. / Master of Science
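As a small numerical illustration of two of the concepts the experiments cover, the following Python snippet computes a linear convolution with a 3-tap FIR impulse response directly and again via the FFT (fast convolution). The filter coefficients are arbitrary example values, not taken from the thesis.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])        # input sequence
h = np.array([0.25, 0.5, 0.25])           # 3-tap FIR impulse response

# Direct linear convolution: y[n] = sum_k h[k] * x[n - k]
y_direct = np.convolve(x, h)

# FFT-based convolution: zero-pad both to the full output length, multiply spectra.
n = len(x) + len(h) - 1
y_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

assert np.allclose(y_direct, y_fft)       # both give [0.25, 1.0, 2.0, 3.0, 2.75, 1.0]
print(y_direct)
```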
