About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Grammar Rewriting

McAllester, David 01 December 1991 (has links)
We present a term rewriting procedure based on congruence closure that can be used with arbitrary equational theories. This procedure is motivated by the pragmatic need to prove equations in equational theories where confluence cannot be achieved. The procedure uses context-free grammars to represent equivalence classes of terms. The procedure rewrites grammars rather than terms and uses congruence closure to maintain certain congruence properties of the grammar. Grammars provide concise representations of large term sets: infinite term sets can be represented with finite grammars, and exponentially large term sets can be represented with linear-sized grammars.
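The compactness claim can be illustrated with a toy sketch. This is not McAllester's rewriting procedure itself, only the representation idea: a grammar whose nonterminals denote sets of terms, in which a linear chain of rules denotes a doubly-exponential number of distinct terms. The grammar shape and counting function below are illustrative assumptions.

```python
def make_grammar(depth):
    """Toy term grammar: T0 derives {a, b}; Tk derives f(t1, t2)
    with t1, t2 drawn independently from the terms of T(k-1)."""
    g = {"T0": [("a", ()), ("b", ())]}
    for k in range(1, depth + 1):
        g[f"T{k}"] = [("f", (f"T{k-1}", f"T{k-1}"))]
    return g

def count_terms(grammar, nt, memo=None):
    """Number of distinct terms derivable from nonterminal nt."""
    memo = {} if memo is None else memo
    if nt not in memo:
        total = 0
        for head, children in grammar[nt]:
            n = 1
            for child in children:
                n *= count_terms(grammar, child, memo)
            total += n
        memo[nt] = total
    return memo[nt]
```

Here a grammar with six nonterminals denotes 2^32 distinct terms (2, 4, 16, 256, ... terms at successive levels), which is the sense in which grammars give concise representations of exponentially large term sets.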
12

A Mixed-Response Intelligent Tutoring System Based on Learning from Demonstration

Alvarez Xochihua, Omar 2012 May 1900 (has links)
Intelligent Tutoring Systems (ITS) have a significant educational impact on students' learning. However, researchers report that time-intensive interaction between ITS developers and domain experts is needed to gather and represent domain knowledge, and the challenge grows when the target domain is ill-defined. The primary problem lies in relying on traditional approaches for gathering domain and tutoring expertise at design time, and on conventional knowledge-representation methods built for well-defined domains. Following the evolving knowledge-acquisition approaches used in other fields, we replace this restricted view of ITS knowledge learning merely at design time with an incremental approach that continues training the ITS at run time. We investigate gradual knowledge learning through continuous instructor-student demonstrations, and present a Mixed-response Intelligent Tutoring System based on Learning from Demonstration that gathers and represents knowledge at run time. Furthermore, we implement two knowledge-representation methods (Weighted Markov Models and Weighted Context-Free Grammars) and corresponding algorithms for building the domain and tutoring knowledge bases at run time. We use students' solutions to cybersecurity exercises as the primary data source for our initial framework testing. Five experiments were conducted using various granularity levels for data representation, multiple datasets differing in content and size, and multiple experts to evaluate framework performance. Using our WCFG-based knowledge-representation method with a finer data-representation granularity, the implemented framework reached 97% effectiveness in providing correct feedback, and the ITS remained consistent when applied to multiple datasets and experts. Furthermore, instructors needed only 1.4 hours on average to build the knowledge base and the required tutorial actions for each exercise.
Finally, the ITS framework showed suitable and consistent performance when applied to a second domain. These results imply that ITS domain models for ill-defined domains can be constructed gradually, yet generate successful results with minimal effort from instructors and framework developers. We demonstrate that, in addition to effective tutoring performance, an ITS framework can offer scalability in data magnitude, efficiency in reducing the human effort required to build a confident knowledge base, metacognition in inferring its current knowledge, robustness in handling different pedagogical and tutoring criteria, and portability for use in multiple domains.
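As a flavor of the first representation method, here is a minimal sketch of a weighted Markov model trained from demonstration sequences. The action names and the likelihood-based scoring are assumptions for illustration, not the thesis's actual implementation.

```python
from collections import defaultdict

def train_wmm(demonstrations):
    """Estimate transition weights from demonstrated action sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in demonstrations:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {b: c / total for b, c in nxt.items()}
    return model

def score(model, seq, floor=1e-6):
    """Likelihood of a new solution path under the demonstrated transitions;
    unseen transitions get a small floor probability."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= model.get(a, {}).get(b, floor)
    return p
```

A tutor built on such a model could flag a student's step sequence as off-path when its score falls far below the scores of demonstrated solutions (the cybersecurity-flavored action labels in the test below are invented).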
13

Comparison of Description Length for Text Corpus

Huang, Chung-Hsiang 24 May 2012 (has links)
In this thesis, we compare the description lengths of different grammars and extend our research on automatic grammar learning to the grammar productions of the Stanford parser. In our earlier work, we showed how to minimize the description length of a grammar generated from the Academia Sinica Balanced Corpus (ASBC). Based on the concept of data compression, the encoding method in our research is effective in reducing the description length of a text corpus. We further discuss the description lengths of two special cases of context-free grammars: exhaustive and recursive. The exhaustive grammar derives exactly the distinct sentences of the corpus, while the recursive grammar covers all strings. In this thesis, we use the Stanford parser to parse sentences and generate grammar rules, and we compare the description length of the machine-parsed grammar with that of a manually corrected grammar. In one experiment, we use the Stanford parser to parse the ASBC corpus; the description length is 53.0 Mb, while the description length of the rules alone is only 52,683. In the other experiment, we use the Stanford parser to parse the Sinica Treebank and compare the description length of the generated grammar with the original: the grammar of the Sinica Treebank is 2.76 Mb, while the grammar generated by the Stanford parser is 4.02 Mb.
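The thesis's exact encoding is not given in the abstract. As a hedged sketch of the general idea, one common way to measure a grammar's description length is to charge each rule symbol log2(number of distinct symbols) bits; the rule format below is an illustrative assumption.

```python
import math

def grammar_description_length(rules):
    """Bits to write down a rule list, at log2(#distinct symbols) per symbol.

    rules: list of (lhs, rhs_tuple) productions.
    """
    symbols = {s for lhs, rhs in rules for s in (lhs, *rhs)}
    bits_per_symbol = math.log2(len(symbols))
    # one symbol for the rule head plus one per body symbol
    return sum((1 + len(rhs)) * bits_per_symbol for lhs, rhs in rules)
```

For the toy grammar S -> NP VP, NP -> N, VP -> V NP there are 5 distinct symbols and 8 symbol occurrences, so the description length is 8 * log2(5), roughly 18.6 bits; minimizing this quantity over equivalent grammars is the MDL-style objective the abstract refers to.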
14

Learning of Context-Free Grammars From A Corpus

Chen, Tai-Hung 17 September 2007 (has links)
none
15

Constraints for Membership in Formal Languages under Systematic Search and Stochastic Local Search

He, Jun January 2013 (has links)
This thesis focuses on constraints for membership in formal languages under both the systematic search and stochastic local search approaches to constraint programming (CP). Such constraints are very useful in CP for three reasons: they provide a powerful tool for user-level extensibility of CP languages; they are very useful for modelling the complex work-shift regulation constraints that arise in many shift scheduling problems; and string constraints often arise in the analysis, testing, and verification of string-manipulating programs. We show in this thesis that CP solvers with constraints for membership in formal languages are much more suitable than the existing solvers used in tools that must solve string constraints. In the stochastic local search approach to CP, we make two contributions: we introduce a stochastic method of maintaining violations for the regular constraint and extend it to the automaton constraint with counters; and, to improve the usage of constraints for which no constant-time algorithm for neighbour evaluation is known, we introduce a framework of solution neighbourhoods and give an efficient algorithm for constructing a solution neighbourhood for the regular constraint. In the systematic search approach to CP, we make two further contributions: we show that there may be unwanted consequences when using a propagator that underestimates the cost of a soft constraint, as the propagator may guide the search to incorrect (non-optimal) solutions of an over-constrained problem, and we introduce and compare several propagators that correctly compute the cost of the edit-distance-based soft-regular constraint. Finally, we show that the context-free grammar constraint is useful and introduce an improved propagator for it.
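For flavor, here is a sketch of one common violation measure for the regular constraint in local search: the minimum number of variable substitutions needed to turn the current assignment into a word the DFA accepts, computed by dynamic programming over DFA states. The transition-table representation is an assumption, and this is not necessarily the thesis's own violation definition.

```python
def regular_violation(delta, start, accepting, alphabet, word):
    """Min number of character substitutions making word accepted by the DFA.

    delta: dict mapping (state, symbol) -> state.
    """
    INF = float("inf")
    costs = {start: 0}                       # reachable states -> cheapest cost
    for ch in word:
        nxt = {}
        for state, cost in costs.items():
            for sym in alphabet:
                c = cost + (sym != ch)       # pay 1 to substitute ch with sym
                s = delta[(state, sym)]
                if c < nxt.get(s, INF):
                    nxt[s] = c
        costs = nxt
    return min(costs[s] for s in accepting if s in costs)
```

With the parity DFA accepting words over {a, b} containing an even number of a's, the word "ab" has violation 1 (one substitution suffices) while "aab" is already accepted.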
16

Complexities of Order-Related Formal Language Extensions / Komplexiteter hos ordnings-relaterade utökningar av formella språk

Berglund, Martin January 2014 (has links)
The work presented in this thesis discusses various formal-language formalisms that extend classical formalisms like regular expressions and context-free grammars with additional abilities, most relating to order. This is done while focusing on the impact these extensions have on the efficiency of parsing the languages generated. That is, rather than taking a step up on the Chomsky hierarchy to the context-sensitive languages, which makes parsing very difficult, a smaller step is taken, adding some mechanisms which permit interesting spatial (in)dependencies to be modeled. The most immediate example is shuffle formalisms, where existing language formalisms are extended by introducing operators which generate arbitrary interleavings of argument languages. For example, introducing a shuffle operator to the regular expressions does not make it possible to recognize context-free languages like a^n b^n, but it does capture some non-context-free languages, like the language of all strings containing the same number of a's, b's, and c's. The impact these additions have on parsing has many facets. Other than shuffle operators we also consider formalisms enforcing repeating substrings, formalisms moving substrings around, and formalisms that restrict which substrings may be concatenated. The formalisms studied here all have a number of properties in common. They are closely related to existing regular and context-free formalisms. They operate in a step-wise fashion, deriving strings by sequences of rule applications of individually limited power. Each step generates a constant number of symbols and does not modify parts that have already been generated. That is, strings are built in an additive fashion that does not explode in size (in contrast to, e.g., Lindenmayer systems). All languages here have a semi-linear Parikh image. They feature some interesting characteristic involving order or other spatial constraints.
In the shuffle example, multiple derivations are interleaved in a way that each is, in a sense, unaware of. All of the formalisms are intended to be limited enough to make an efficient parsing algorithm a reasonable goal, at least for some cases. This thesis gives intuitive explanations of a number of formalisms fulfilling these requirements, and sketches some results relating to their parsing problems. This should all be viewed as preparation for the more complete results and explanations featured in the papers given in the appendices.
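The shuffle (interleaving) operation on strings can be sketched directly. Real shuffle formalisms lift this to whole languages, but the core operation is the same; this naive enumeration is illustrative only and exponential in the input lengths.

```python
def shuffle(u, v):
    """All interleavings of u and v that preserve each string's own order."""
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})
```

For example, shuffle("ab", "cd") yields the C(4, 2) = 6 interleavings of the two strings; in every one of them, "a" still precedes "b" and "c" still precedes "d", which is exactly the kind of spatial (in)dependency the thesis studies.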
17

A Perception Based Question-Answering Architecture Derived from Computing with Words

Torres Parra, Jimena Cecilia 01 December 2009 (has links)
Most search applications in use today employ keyword-based search mechanisms, which have no deductive abilities and are therefore unable to understand the human perceptions underlying a given search. This paper proposes a framework for a fuzzy expert system for question-answer support while searching within a specific domain. Development of such a framework requires computing theories that can understand and manipulate the knowledge inherent in natural-language documents. To this end, we can now employ the newly introduced theory of Computing with Words (CW). The recent introduction of CW, by Lotfi Zadeh, signifies a break from the traditional computing model and promises to enable analysis of natural-language information. To provide a bridge between raw natural-language text and CW, the use of a Probabilistic Context-Free Grammar (PCFG) is proposed. Together the two theories form the core of the proposed framework, which allows search applications to be constructed with capabilities for deduction and perception analysis through a natural-language interface.
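A PCFG attaches a probability to each production, with the probabilities for each nonterminal summing to 1. The toy grammar below is invented for illustration (it is not from the paper) and shows the generative reading of a PCFG: sampling a string by repeatedly expanding nonterminals according to rule weights.

```python
import random

# Hypothetical toy PCFG: for each nonterminal, rule probabilities sum to 1.
PCFG = {
    "S":  [(1.0, ["NP", "VP"])],
    "NP": [(0.7, ["the", "N"]), (0.3, ["N"])],
    "VP": [(1.0, ["V", "NP"])],
    "N":  [(0.5, ["dog"]), (0.5, ["cat"])],
    "V":  [(1.0, ["sees"])],
}

def generate(symbol="S", rng=random):
    """Sample a word sequence from the PCFG, top-down."""
    if symbol not in PCFG:                   # terminal word
        return [symbol]
    probs, bodies = zip(*PCFG[symbol])
    body = rng.choices(bodies, weights=probs)[0]
    return [word for s in body for word in generate(s, rng)]
```

Run in the other direction (parsing rather than sampling), the same rule weights rank alternative analyses of a sentence, which is the bridge to CW that the paper proposes.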
18

Some aspects of error correction of programming languages

Frentiu, M. January 1976 (has links)
The thesis treats the problem of error correction in a context-free language and the design of an error-correcting parser for the BASIC language. The thesis makes two main contributions. First, it presents the problem of error correction in a context-free language and the existing results in the field. The concept of a context-free language as a model for a programming language, along with the definitions and results used later, is presented or reviewed. A distance between two strings is defined and used to develop a "minimum-distance error-correcting parser". Second, the thesis develops two global error-correcting parsers. The first is a top-down global error-correcting parser, obtained by transforming Unger's top-down parser into an error-correcting one. Then the idea of Graham and Rhodes, of condensing the surrounding context of an error, is extended, and a global simple-precedence error-correcting parser is obtained by analysing the whole context of the error available from the input string. These parsers, together with other known methods, are then used to design and partially implement an error-correcting parser for BASIC.
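The string distance underlying minimum-distance error correction is standardly the Levenshtein edit distance (the thesis's precise definition may differ); a compact dynamic-programming sketch:

```python
def edit_distance(a, b):
    """Minimum insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))           # row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute ca -> cb
        prev = cur
    return prev[-1]
```

A minimum-distance error-correcting parser chooses, among the sentences of the language, one minimizing this distance to the erroneous input, e.g. correcting the misspelled BASIC keyword "PRNT" to "PRINT" at distance 1.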
19

New Results on Context-Free Tree Languages

Osterholzer, Johannes 04 May 2018 (has links)
Context-free tree languages play an important role in algebraic semantics and are applied in mathematical linguistics. In this thesis, we present some new results on context-free tree languages.
20

Musical Phrase Segmentation via Grammatical Induction

Perkins, Reed James 06 April 2022 (has links)
Procedural generation algorithms can infer rules based on a dataset of examples when each example is made up of labeled components. Unfortunately, musical sequences resist potential inclusion in these kinds of datasets because they lack explicit structural semantics. In order to algorithmically transform a musical sequence into a sequence of labeled components, a segmentation process is needed. We outline a solution to the challenge of musical phrase segmentation that uses grammatical induction algorithms, a class of algorithms which infer a context-free grammar from an input sequence. We study five different grammatical induction algorithms on three different datasets, one of which is introduced in this work. Additionally, we test how the performance of each algorithm varies when transforming musical sequences using viewpoint combinations. Our experiments show that the algorithm longestFirst achieves the best F1 scores across all three datasets, and that viewpoint combinations which include the duration viewpoint result in the best performance.
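The longestFirst algorithm itself is not specified in the abstract, but this family of grammatical induction can be sketched as repeatedly replacing the longest repeated (non-overlapping) factor of the sequence with a fresh nonterminal; the token representation and rule naming below are illustrative assumptions.

```python
def longest_repeat(toks):
    """Longest factor (>= 2 tokens) with two non-overlapping occurrences."""
    n = len(toks)
    for length in range(n // 2, 1, -1):
        seen = {}
        for i in range(n - length + 1):
            pat = tuple(toks[i:i + length])
            if pat in seen and i - seen[pat] >= length:
                return pat
            seen.setdefault(pat, i)
    return None

def replace(toks, pat, nt):
    """Replace non-overlapping occurrences of pat (a tuple) with token nt."""
    out, i, length = [], 0, len(pat)
    while i <= len(toks) - length:
        if tuple(toks[i:i + length]) == pat:
            out.append(nt)
            i += length
        else:
            out.append(toks[i])
            i += 1
    out.extend(toks[i:])
    return out

def induce(toks):
    """Greedy longest-repeat-first induction of grammar rules."""
    rules, k = {}, 0
    pat = longest_repeat(toks)
    while pat is not None:
        nt = f"R{k}"
        k += 1
        rules[nt] = list(pat)
        toks = replace(toks, pat, nt)
        pat = longest_repeat(toks)
    return toks, rules
```

On the token sequence a b c a b c a b c this yields the start string R0 R0 R0 with the rule R0 -> a b c; applied to a melody encoded as viewpoint tokens, the induced rule boundaries are candidate phrase boundaries.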
