
The information rate and other parameters of probabilistic context free grammars and their parsers

Probabilistic context-free languages are defined by giving predetermined probabilities (preprobabilities) for the choices that their grammars make when generating. Chapter 1 shows how to carry out this definition, and how to calculate some parameters of the language, for instance: average length of word, mean square length, digraph probabilities, entropy. Chapter 2 introduces generating functions related to grammars. It uses them to derive a condition for which preprobabilities give rise to well-formed probability spaces. Two functions, the length and entropy generating functions, are studied in detail. They are algebraic functions which can in general only be defined implicitly, but they can be used to give unified explicit methods for calculating all the parameters of Chapter 1 (and more). Chapter 3 defines and shows how to calculate the information rate of a language. As a by-product, McMillan's theorem is extended (for a small class of processes) to an analogue of the Central Limit Theorem. Chapter 4 compares the efficiencies of different parsing algorithms. In a reasonable sense, all deterministic parsers take equal average time to parse, any backtracking parser is slower, but there is no general algorithm for calculating the speed of a backtracking parser.
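To illustrate the length-generating-function idea mentioned above, here is a minimal sketch (not taken from the thesis) for a toy grammar S → aS with preprobability p, S → a with preprobability 1 − p. Its length generating function L(z) = Σ P(length = n) zⁿ satisfies the implicit equation L(z) = p·z·L(z) + (1 − p)·z, which in this simple case can be solved explicitly; the mean word length is then L′(1). The grammar and the value p = 0.6 are assumptions chosen for the example.

```python
# Toy PCFG: S -> a S (prob p) | a (prob 1 - p).
# The length generating function satisfies L(z) = p*z*L(z) + (1-p)*z,
# which solves to the explicit algebraic form below.
p = 0.6

def L(z):
    return (1 - p) * z / (1 - p * z)

# Well-formedness check: total probability L(1) must equal 1
# (the preprobabilities give a proper probability space).
assert abs(L(1.0) - 1.0) < 1e-12

# Mean word length is L'(1); estimate it by a central difference
# and compare with the closed form 1/(1-p) for this grammar.
h = 1e-6
mean_numeric = (L(1 + h) - L(1 - h)) / (2 * h)
mean_exact = 1 / (1 - p)
print(mean_numeric, mean_exact)  # both ≈ 2.5 for p = 0.6
```

For grammars with more than one nonterminal the generating function is defined only implicitly by a system of algebraic equations, but derivatives at z = 1 can still be extracted the same way, which is what makes the method a unified route to the parameters of Chapter 1.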

Identifier oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:595041
Date January 1974
Creators Kestner, Simon
Publisher University of Warwick
Source Sets Ethos UK
Detected Language English
Type Electronic Thesis or Dissertation
Source http://wrap.warwick.ac.uk/72009/
