21

The moments and distribution for an estimate of the Shannon information measure and its application to ecology

Hutcheson, Kermit January 1969 (has links)
This dissertation deals primarily with the moments and distribution of the Shannon information estimate H̅ = −Σᵢ (nᵢ/N) log(nᵢ/N). Some techniques for obtaining multivariate moments, in particular multinomial moments, are given. The approach used in obtaining the moments of H̅ was through the probability generating function of the multinomial distribution. A series of rather simple mathematical operations produces E(H̅) as an integral and Var(H̅) as a double integral. These integrals are evaluated exactly, thus giving the exact mean and variance of H̅. The mean and variance are also given in series form; the series for the mean of H̅ appears to be divergent. Several charts are given which indicate the percent error incurred when the series are used. A combinatorial approach was used in finding the asymptotic distribution of H̅. The IBM 1130 and the IBM 360 Model 65 were used to do this work. The result is that H̅ is asymptotically normal in the general case and asymptotically chi-square in the equiprobable case. Tables are given for the mean and variance of H̅ in both the general and the equiprobable case. Two methods are given for finding multivariate moments: the Q-Product Method due to Shenton, Bowman, and Reinfelds [36th Session of the International Statistical Institute, 1967] and the Small Sample Method. There is every indication that these methods can be completely automated. A table of the first fourteen binomial moments is given, as is a table of the multinomial moments through order six. / Ph. D.
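The plug-in estimator above is straightforward to compute from multinomial counts. Below is a minimal Python sketch (names and data my own, not Hutcheson's program): it computes H̅ and adds the standard first-order bias correction (k − 1)/(2N), the leading term of the usual series expansion of E(H̅); the dissertation's exact integral evaluations are not reproduced here.

```python
import math

def shannon_estimate(counts):
    """Plug-in estimate H-bar = -sum((n_i/N) * log(n_i/N)) from counts."""
    N = sum(counts)
    return -sum((n / N) * math.log(n / N) for n in counts if n > 0)

def bias_corrected_estimate(counts):
    """H-bar + (k - 1) / (2N), the first-order (Miller-Madow) correction.

    The correction is the leading term of the standard series expansion
    of E(H-bar); Hutcheson evaluates the exact integrals instead.
    """
    k = sum(1 for n in counts if n > 0)  # number of observed categories
    return shannon_estimate(counts) + (k - 1) / (2 * sum(counts))

# Example: species counts from an ecological sample (hypothetical data)
counts = [40, 25, 15, 10, 5, 5]
print(f"H-bar          = {shannon_estimate(counts):.4f} nats")
print(f"bias-corrected = {bias_corrected_estimate(counts):.4f} nats")
```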
22

Reliability of quantum-mechanical communication systems

January 1968 (has links)
Issued also as a Sc.D. thesis in the Dept. of Electrical Engineering, 1968. / Bibliography: p. 103-104.
23

Entropy reduction of English text using variable length grouping

Ast, Vincent Norman 01 July 1972 (has links)
It is known that the entropy of English text can be reduced by arranging the text into groups of two or more letters each; the higher the order of the grouping, the greater the entropy reduction. Using this principle in a computer text-compression system brings about difficulties, however, because the number of entries required in the translation table increases exponentially with group size. This experiment examined the possibility of using a translation table containing only selected entries of all group sizes, with the expectation of obtaining a substantial entropy reduction with a relatively small table. An expression was derived showing that the groups which should be included in the table are not necessarily those that occur frequently, but rather those that occur more frequently than would be expected from random occurrence. This was complicated by the fact that any grouping affects the frequency of occurrence of many other related groups. An algorithm was developed in which the table starts with the regular 26 letters of the alphabet and the space. Entries, which consist of letter groups, complete words, and word groups, are then added one by one based on the selection criterion. After each entry is added, adjustments are made to account for the interaction of the groups. This algorithm was programmed on a computer and run on a text sample of about 7000 words. The results showed that the entropy could easily be reduced to 3 bits per letter with a table of fewer than 200 entries; with about 500 entries the entropy could be reduced to about 2.5 bits per letter. About 60% of the table was composed of letter groups, 42% of single words, and 8% of word groups, which suggests that the extra complications involved in handling word groups may not be worthwhile. A visual examination of the table showed that many entries were strongly oriented to the particular sample. This may or may not be desirable, depending on the intended use of the translating system.
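The selection criterion lends itself to a compact illustration. The Python sketch below (hypothetical names and scoring, not Ast's original program) ranks candidate letter groups by how far their observed counts exceed the counts expected if their letters occurred independently; the thesis's adjustments for group interaction are omitted.

```python
from collections import Counter
import math

def candidate_scores(text, max_len=4):
    """Rank letter groups that occur more often than chance predicts.

    A group's expected count assumes its letters occur independently;
    groups whose observed count exceeds that are scored, weighting by
    the observed count so common, surprising groups rank first.
    """
    N = len(text)
    letter_p = {c: n / N for c, n in Counter(text).items()}
    scores = {}
    for size in range(2, max_len + 1):
        groups = Counter(text[i:i + size] for i in range(N - size + 1))
        for g, observed in groups.items():
            expected = (N - size + 1) * math.prod(letter_p[c] for c in g)
            if observed > expected:  # more frequent than random occurrence
                scores[g] = observed * math.log(observed / expected)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

sample = "the quick brown fox jumps over the lazy dog " * 50
for group, score in candidate_scores(sample)[:5]:
    print(repr(group), round(score, 1))
```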
24

A Common Representation Format for Multimedia Documents

Jeong, Ki Tai 12 1900 (has links)
Multimedia documents are composed of multiple file format combinations, such as image and text, image and sound, or image, text, and sound. The type of multimedia document determines the form of analysis for knowledge architecture design and retrieval methods. Over the last few decades, theories of text analysis have been proposed and applied effectively. In recent years, theories of image and sound analysis have been proposed to work with text retrieval systems and have advanced quickly, due in part to rapid gains in computer processing speed. Retrieval of multimedia documents was formerly divided into the categories of image and text, and image and sound. While the standard retrieval process begins from text only, methods are developing that allow the retrieval process to be accomplished simultaneously using text and image. Although image processing for feature extraction and text processing for term extraction are well understood, there are no prior methods that can combine these two features into a single data structure. This dissertation introduces a common representation format for multimedia documents (CRFMD) composed of both images and text. For image and text analysis, two techniques are used: the Lorenz Information Measurement and the Word Code. A new process named Jeong's Transform is demonstrated for extraction of text and image features, combining the two previous measurements to form a single data structure. Finally, this single data structure is analyzed using multidimensional scaling. This allows multimedia objects to be represented on a two-dimensional graph as vectors, where the distance between vectors represents the magnitude of the difference between multimedia documents. This study shows that image classification on a given test set is dramatically improved when text features are encoded together with image features. This effect appears to hold true even when the available text is diffuse and is not uniform with the image features. This retrieval system works by representing a multimedia document as a single data structure. CRFMD is applicable to other areas of multimedia document retrieval and processing, such as medical image retrieval, World Wide Web searching, and museum collection retrieval.
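As a hedged illustration of the final step only (the Lorenz Information Measurement, Word Code, and Jeong's Transform are not reproduced), the sketch below embeds a precomputed document-dissimilarity matrix in two dimensions with scikit-learn's MDS, so that distance between plotted points mirrors the difference between documents; the matrix values are invented placeholders.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarities between four multimedia documents,
# standing in for distances computed from combined image/text features.
dissimilarity = np.array([
    [0.0, 0.3, 0.8, 0.9],
    [0.3, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.2],
    [0.9, 0.8, 0.2, 0.0],
])

# Embed the documents in the plane; distances between the 2-D points
# approximate the original dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for i, (x, y) in enumerate(coords):
    print(f"document {i}: ({x:.2f}, {y:.2f})")
```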
