1 |
Consistency of Probabilistic Context-Free Grammars / Stüber, Torsten, 10 May 2012 (has links) (PDF)
We present an algorithm for deciding whether an arbitrary proper probabilistic context-free grammar is consistent, i.e., whether the probability that a derivation terminates is one. Our procedure has time complexity $\mathcal{O}(n^3)$ in the unit-cost model of computation. Moreover, we develop a novel characterization of consistent probabilistic context-free grammars. A simple corollary of our result is that training methods for probabilistic context-free grammars that are based on maximum-likelihood estimation always yield consistent grammars.
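The thesis's $\mathcal{O}(n^3)$ procedure is not reproduced in the abstract; as a rough illustration of what a consistency check involves, the sketch below applies the classical Booth-Thompson criterion instead: build the expectation matrix of the grammar and compare its spectral radius to 1. The toy grammar and all names are invented for the example.

```python
# Sketch of the classical Booth-Thompson consistency test for a proper PCFG.
# Illustration only, not the O(n^3) procedure of the thesis; the toy grammar
# below is invented for the example.
import numpy as np

# Rules: nonterminal -> list of (probability, right-hand side) pairs,
# where the RHS is a list of symbols; lowercase strings are terminals.
rules = {
    "S": [(0.6, ["S", "S"]), (0.4, ["a"])],   # S -> S S (0.6) | a (0.4)
}
nonterminals = list(rules)
index = {A: i for i, A in enumerate(nonterminals)}

# Expectation matrix: M[i][j] = expected number of occurrences of
# nonterminal j produced by one expansion of nonterminal i.
M = np.zeros((len(nonterminals), len(nonterminals)))
for A, productions in rules.items():
    for p, rhs in productions:
        for symbol in rhs:
            if symbol in index:
                M[index[A], index[symbol]] += p

spectral_radius = max(abs(np.linalg.eigvals(M)))
if spectral_radius < 1:
    print("consistent (spectral radius %.3f < 1)" % spectral_radius)
elif spectral_radius > 1:
    print("inconsistent (spectral radius %.3f > 1)" % spectral_radius)
else:
    print("boundary case (spectral radius 1): needs a finer analysis")
```

For the toy grammar the spectral radius is 1.2, so the grammar is inconsistent: the termination probability is 2/3 rather than one.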
|
2 |
Consistency of Probabilistic Context-Free Grammars / Stüber, Torsten, 10 May 2012 (has links)
We present an algorithm for deciding whether an arbitrary proper probabilistic context-free grammar is consistent, i.e., whether the probability that a derivation terminates is one. Our procedure has time complexity $\mathcal{O}(n^3)$ in the unit-cost model of computation. Moreover, we develop a novel characterization of consistent probabilistic context-free grammars. A simple corollary of our result is that training methods for probabilistic context-free grammars that are based on maximum-likelihood estimation always yield consistent grammars.
|
3 |
A New Constructive Method for the One-Letter Context-Free Grammars / Andrei, Ştefan; Chin, Wei Ngan, 01 1900 (has links)
Constructive methods for obtaining the regular grammar counterparts for some sub-classes of the context-free grammars (cfg) have been investigated by many researchers. An important class of grammars for which this is always possible is the one-letter cfg. We show in this paper a new constructive method for transforming an arbitrary one-letter cfg into an equivalent regular expression of star-height 0 or 1. Our new result is considerably simpler than a previous construction by Leiss, and we also propose a new normal form for regular expressions with a single star occurrence. Through an alphabet factorization theorem, we show how to go beyond the one-letter cfg in a straightforward way. / Singapore-MIT Alliance (SMA)
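The construction itself is not given in the abstract; as a small sanity check of the statement, the sketch below compares a toy one-letter grammar S -> aSa | a against the star-height-1 expression a(aa)* by enumerating derivable string lengths up to a bound. The grammar and the bound are assumptions made for the illustration.

```python
# Toy check (not the paper's construction): the one-letter grammar
#   S -> a S a | a
# should be equivalent to the star-height-1 regular expression a(aa)*.
import re

N = 50  # bound on string length for the comparison

# Fixed-point computation of the lengths derivable from S, up to N.
lengths = set()
changed = True
while changed:
    changed = False
    new = {1} | {l + 2 for l in lengths if l + 2 <= N}   # S -> a  and  S -> a S a
    if not new <= lengths:
        lengths |= new
        changed = True

regex = re.compile(r"a(aa)*\Z")
regex_lengths = {n for n in range(1, N + 1) if regex.match("a" * n)}
assert lengths == regex_lengths   # both are the odd lengths 1, 3, 5, ...
print(sorted(lengths)[:5])        # [1, 3, 5, 7, 9]
```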
|
4 |
Comparison of Description Length for Text Corpus / Huang, Chung-Hsiang, 24 May 2012 (has links)
In this thesis, we compare the description lengths of different grammars and extend earlier work on automatic grammar learning to the production rules of the Stanford parser. In our earlier research, we showed how to minimize the description length of a grammar generated from the Academia Sinica Balanced Corpus (ASBC). Based on the concept of data compression, the encoding method in that work is effective in reducing the description length of a text corpus. We also examined the description lengths of two special cases of context-free grammars: the exhaustive grammar, which derives exactly the distinct sentences of the corpus, and the recursive grammar, which covers all strings. In this thesis, we use the Stanford parser to parse sentences and generate grammar rules, and we compare the description length of the machine-parsed grammar with that of the manually corrected grammar. In one experiment, we use the Stanford parser to parse the ASBC corpus; the description length is 53.0 Mb, and the description length of the rules alone is only 52,683. In the other experiment, we use the Stanford parser to parse the Sinica Treebank and compare the description length of the generated grammar with that of the original. The results show that the description length of the Sinica Treebank grammar is 2.76 Mb, while that of the grammar generated by the Stanford parser is 4.02 Mb.
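The abstract does not spell out the encoding scheme; as a minimal illustration of what a description-length measurement looks like, the sketch below encodes a toy rule set with fixed-length codes over its symbol inventory. The grammar and the encoding are assumptions for the example, not the thesis's method.

```python
# A minimal illustration of measuring the description length of a set of
# CFG rules in bits. The encoding here (fixed-length codes over the symbol
# inventory) is only one common choice and is not the scheme of the thesis.
import math

# Hypothetical toy grammar: each rule is (left-hand side, right-hand side).
rules = [
    ("S",  ["NP", "VP"]),
    ("NP", ["DT", "NN"]),
    ("VP", ["VB", "NP"]),
    ("DT", ["the"]),
    ("NN", ["dog"]),
    ("VB", ["sees"]),
]

# Symbol inventory: every distinct symbol plus an end-of-rule marker.
symbols = {s for lhs, rhs in rules for s in [lhs] + rhs} | {"<end>"}
bits_per_symbol = math.log2(len(symbols))

# Each rule is encoded as its LHS, the RHS symbols, and the end marker.
total_symbols = sum(1 + len(rhs) + 1 for _, rhs in rules)
description_length = total_symbols * bits_per_symbol
print(f"{len(symbols)} symbols, {description_length:.1f} bits")
```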
|
5 |
Learning of Context-Free Grammars From A Corpus / Chen, Tai-Hung, 17 September 2007
none
|
6 |
Constraints for Membership in Formal Languages under Systematic Search and Stochastic Local Search / He, Jun, January 2013
This thesis focuses on constraints for membership in formal languages under both the systematic search and stochastic local search approaches to constraint programming (CP). Such constraints are very useful in CP for three reasons: they provide a powerful tool for user-level extensibility of CP languages; they are very useful for modelling complex work shift regulation constraints, which arise in many shift scheduling problems; and string constraints often arise in the analysis, testing, and verification of string-manipulating programs, where we show that CP solvers with constraints for membership in formal languages are much more suitable than the existing solvers used in such tools. In the stochastic local search approach to CP, we make two contributions. First, we introduce a stochastic method of maintaining violations for the regular constraint and extend it to the automaton constraint with counters. Second, to improve the usage of constraints for which no constant-time algorithm for neighbour evaluation is known, we introduce a framework of solution neighbourhoods and give an efficient algorithm for constructing a solution neighbourhood for the regular constraint. In the systematic search approach to CP, we make two further contributions. First, we show that there may be unwanted consequences when using a propagator that underestimates the cost of a soft constraint, as the propagator may guide the search to incorrect (non-optimal) solutions of an over-constrained problem, and we introduce and compare several propagators that correctly compute the cost of the edit-distance-based soft-regular constraint. Second, we show that the context-free grammar constraint is useful and introduce an improved propagator for it.
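The thesis's violation measure is not detailed in the abstract; one common choice in constraint-based local search, sketched below under that assumption, is the minimum Hamming distance from the current assignment to any word accepted by the constraint's DFA, computable by dynamic programming over the automaton. The DFA and assignment are toy examples.

```python
# Sketch (not the thesis's algorithm): a standard violation measure for a
# regular constraint in local search is the minimum Hamming distance from
# the current assignment to any word accepted by the constraint's DFA,
# computed by a dynamic program in O(n * |Q| * |Sigma|).
import math

def regular_violation(assignment, delta, start, accepting):
    """assignment: list of symbols; delta: dict (state, symbol) -> state."""
    alphabet = {a for (_, a) in delta}
    cost = {start: 0}                       # minimal changes to reach each state
    for x in assignment:
        nxt = {}
        for q, c in cost.items():
            for a in alphabet:
                q2 = delta.get((q, a))
                if q2 is None:
                    continue
                c2 = c + (0 if a == x else 1)
                if c2 < nxt.get(q2, math.inf):
                    nxt[q2] = c2
        cost = nxt
    return min((c for q, c in cost.items() if q in accepting), default=math.inf)

# Toy DFA over {0, 1} accepting words with no two consecutive 1s.
delta = {("ok", 0): "ok", ("ok", 1): "one", ("one", 0): "ok"}
print(regular_violation([1, 1, 0, 1], delta, "ok", {"ok", "one"}))   # 1
```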
|
7 |
A Perception Based Question-Answering Architecture Derived from Computing with Words / Torres Parra, Jimena Cecilia, 01 December 2009
Most search applications in use today employ a keyword-based search mechanism, which has no deductive abilities and is therefore unable to understand the human perceptions underlying a given search. This paper proposes a framework for a fuzzy expert system that provides question-answer support while searching within a specific domain. Developing such a framework requires computing theories that can understand and manipulate the knowledge inherent in natural-language documents. To this end, we employ the recently introduced theory of Computing with Words (CW). CW, introduced by Lotfi Zadeh, signifies a break from the traditional computing model and promises to enable the analysis of natural-language information. In order to provide a bridge between raw natural-language text and CW, the use of Probabilistic Context-Free Grammars (PCFG) is proposed. Together the two theories form the core of the proposed framework, which allows search applications to be constructed with capabilities of deduction and perception analysis using a natural-language interface.
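As a hedged illustration of the PCFG component only (not the proposed fuzzy framework), the sketch below runs a tiny Viterbi CKY parser over a made-up grammar in Chomsky normal form to show how a PCFG assigns a probability to a parse of a sentence.

```python
# Illustration only: a tiny Viterbi CKY parser showing how a PCFG assigns a
# probability to a parse. Grammar and sentence are invented for the example;
# this is not the paper's system.
binary = {                    # A -> B C rules with probability p
    ("S",  ("NP", "VP")): 1.0,
    ("VP", ("V",  "NP")): 1.0,
}
lexical = {                   # A -> word rules with probability p
    ("NP", "users"): 0.5, ("NP", "answers"): 0.5,
    ("V",  "seek"):  1.0,
}

def viterbi_cky(words):
    n = len(words)
    best = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    # Fill length-1 spans from the lexical rules.
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                best[i][i + 1][A] = (p, w)
    # Combine adjacent spans with the binary rules, keeping the best parse.
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for (A, (B, C)), p in binary.items():
                    if B in best[i][j] and C in best[j][k]:
                        prob = p * best[i][j][B][0] * best[j][k][C][0]
                        if prob > best[i][k].get(A, (0,))[0]:
                            best[i][k][A] = (prob, (B, (i, j), C, (j, k)))
    return best[0][n].get("S")

print(viterbi_cky("users seek answers".split()))   # probability 0.25
```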
|
8 |
O vymazávacích pravidlech v řízených gramatikách / On Erasing Rules in Regulated Grammars / Zemek, Petr, January 2010
This work discusses the effect of erasing rules on the generative power of regulated grammars, a major open problem in the theory of regulated rewriting. It studies the possibility of removing erasing rules from regulated grammars by gathering the current results concerning this elimination and by presenting a new condition, called k-limited erasing, under which all erasing rules can always be removed from regularly controlled context-free grammars without affecting their generative power. This result partially solves the above-mentioned problem. Moreover, a new algorithm for eliminating erasing rules from context-free grammars is presented; unlike the standard algorithm used in textbooks, it does not require the so-called epsilon-nonterminals to be determined in advance. In the conclusion, the significance of these results for syntactic analysis is discussed.
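The thesis's new algorithm is not reproduced here; for contrast, the sketch below shows the standard textbook elimination it improves upon, which first computes the nullable (epsilon-deriving) nonterminals and then rewrites every rule. The toy grammar is invented, and the special handling of a nullable start symbol is omitted for brevity.

```python
# Standard textbook elimination of erasing rules (for contrast with the
# thesis's algorithm, which avoids precomputing the epsilon-nonterminals).
from itertools import product

rules = {                       # A -> set of right-hand sides (tuples)
    "S": {("A", "b", "A")},
    "A": {("a",), ()},          # A -> a | epsilon
}

# Step 1: fixed point of the nullable set (the "epsilon-nonterminals").
nullable = set()
changed = True
while changed:
    changed = False
    for A, rhss in rules.items():
        if A not in nullable and any(all(s in nullable for s in rhs) for rhs in rhss):
            nullable.add(A)
            changed = True

# Step 2: re-emit each rule with every combination of nullable occurrences
# kept or dropped, discarding the empty right-hand side.
new_rules = {A: set() for A in rules}
for A, rhss in rules.items():
    for rhs in rhss:
        choices = [((s,), ()) if s in nullable else ((s,),) for s in rhs]
        for combo in product(*choices):
            new = tuple(s for part in combo for s in part)
            if new:
                new_rules[A].add(new)

print(new_rules)   # S -> AbA | Ab | bA | b ;  A -> a
```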
|
9 |
Crossing Dependencies in Persian / Dehdari, Jonathan M., 13 July 2006 (has links) (PDF)
Languages occasionally have syntactic constructions that are difficult, if not impossible, to describe using a context-free grammar. One such construction is a crossing dependency. Crossing dependencies have been well studied for Dutch and Swiss German (Huybregts, 1976; Shieber, 1985), and recently for Tagalog (Maclachlan and Rambow, 2003). In this paper I propose that Persian exhibits crossing dependencies. In this SOV language, a light verb construction in the future tense is interrupted by a future auxiliary verb, which agrees with its subject in person and number. The future auxiliary also splits passive constructions in a similar manner. These forms present interesting challenges for computational models of language. I discuss the implications of this phenomenon for current formal and linguistic theories.
|
10 |
Building software tools for combat modeling and analysis / Chen, Yuanxin, 12 1900
Approved for public release, distribution is unlimited / The focus of this thesis is to leverage the strengths of dynamic program analysis methodologies from software engineering testing and debugging, such as program behavior modeling and event grammars, to automate the building and analysis of combat simulations. An original high-level language, METALS (Meta-Language for Combat Simulations), and its associated parser and C++ code generator were designed to reduce the time and development effort needed to build sophisticated real-world combat simulations. As a demonstration, a C++ simulation of the Navy's current mine avoidance problem in littoral waters was generated from a high-level METALS description. The software tools that were developed allow users to focus their attention and effort on the problem domain while sparing them, to a considerable extent, the rigors of detailed implementation. / Major, Republic of Singapore Navy
|