1051 |
Grapheme-to-phoneme conversion and its application to transliteration. Jiampojamarn, Sittichai. 06 1900.
Grapheme-to-phoneme conversion (G2P) is the task of converting a word, represented by a sequence of graphemes, to its pronunciation, represented by a sequence of phonemes. The G2P task plays a crucial role in speech synthesis systems, and is an important part of other applications, including spelling correction and speech-to-speech machine translation. G2P conversion is a complex task, for which a number of diverse solutions have been proposed. In general, the problem is challenging because the source string does not unambiguously specify the target representation. In addition, the training data include only example word
pairs without the structural information of subword alignments.
In this thesis, I introduce several novel approaches to G2P conversion. My contributions can be categorized into (1) new alignment models and (2) new output generation models. With respect to alignment models, I present techniques including many-to-many alignment, phonetic-based alignment, alignment by integer linear programming, and alignment-by-aggregation. Many-to-many alignment is designed to replace the one-to-one
alignment that has been used almost exclusively in the past. The new many-to-many alignments are more precise and accurate in expressing grapheme-phoneme relationships. The other proposed alignment approaches attempt to advance the training method beyond the use of Expectation-Maximization (EM). With respect to generation models, I first describe a framework for integrating many-to-many alignments and language models for grapheme classification. I then propose joint processing for G2P using online discriminative training. I integrate a generative joint n-gram model into the discriminative framework. Finally, I apply the proposed G2P systems to name transliteration generation and mining tasks. Experiments show that the proposed system achieves state-of-the-art performance in both the G2P and name transliteration tasks.
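The contrast between one-to-one and many-to-many alignment can be made concrete with a small sketch. The following toy dynamic program (with made-up chunk probabilities; not the thesis's actual aligner) scores segmentations in which a grapheme chunk such as "ph" maps to a single phoneme and a silent letter maps to nothing:

```python
from functools import lru_cache

# Toy chunk probabilities (invented for illustration): how likely a
# grapheme chunk is to produce a phoneme chunk.
CHUNK_PROB = {
    ("ph", "F"): 0.9, ("o", "OW"): 0.7,
    ("n", "N"): 0.9, ("e", ""): 0.6,
}

def best_alignment(graphemes, phonemes, max_g=2, max_p=1):
    """Best-scoring many-to-many segmentation of the grapheme string
    against the phoneme sequence, with bounded chunk sizes."""
    @lru_cache(maxsize=None)
    def solve(i, j):
        if i == len(graphemes) and j == len(phonemes):
            return 1.0, ()
        best = (0.0, ())
        for gl in range(1, max_g + 1):
            if i + gl > len(graphemes):
                break
            for pl in range(0, max_p + 1):
                if j + pl > len(phonemes):
                    break
                g = graphemes[i:i + gl]
                p = " ".join(phonemes[j:j + pl])
                prob = CHUNK_PROB.get((g, p), 0.0)
                if prob == 0.0:
                    continue
                sub_prob, sub_align = solve(i + gl, j + pl)
                cand = (prob * sub_prob, ((g, p),) + sub_align)
                if cand[0] > best[0]:
                    best = cand
        return best
    return solve(0, 0)

prob, alignment = best_alignment("phone", ["F", "OW", "N"])
print(alignment)  # (('ph', 'F'), ('o', 'OW'), ('n', 'N'), ('e', ''))
```

A one-to-one aligner would be forced to pair each letter with exactly one phoneme, so "ph" and the silent "e" could not be expressed; the many-to-many segmentation handles both directly.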
|
1052 |
Automatic Text Ontological Representation and Classification via Fundamental to Specific Conceptual Elements (TOR-FUSE). Razavi, Amir Hossein. 16 July 2012.
In this dissertation, we introduce a novel text representation method used mainly for text classification purposes. The presented representation method is initially based on a variety of closeness relationships between pairs of words in text passages within the entire corpus. This representation is then used as the basis for our multi-level lightweight ontological representation method (TOR-FUSE), in which documents are represented based on their contexts and the goal of the learning task. The method is unlike traditional representation methods, in which all documents are represented solely based on their constituent words and are totally isolated from the goal they are represented for. We believe choosing the correct granularity of representation features is an important aspect of text classification. Interpreting data in a more general space, with fewer dimensions, can convey more discriminative knowledge and decrease the level of learning perplexity. The multi-level model allows data interpretation in a more conceptual space, rather than one containing only scattered words occurring in texts. It aims to extract the knowledge tailored to the classification task by automatically creating a lightweight ontological hierarchy of representations. In the last step, we train a tailored ensemble learner over a stack of representations at different conceptual granularities. The final result is a mapping and a weighting of the targeted concept of the original learning task over a stack of representations and the granular conceptual elements of its different levels (a hierarchical mapping instead of a linear mapping over a vector). Finally, the entire algorithm is applied to a variety of general text classification tasks, and the performance is evaluated in comparison with well-known algorithms.
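The intuition that a coarser conceptual level can generalize where raw words cannot may be sketched as follows (a toy illustration with a hand-made concept mapping and a naive centroid classifier, not the TOR-FUSE algorithm itself):

```python
from collections import Counter

# Toy two-level representation: raw words vs. a hand-made "concept"
# level. A naive centroid classifier votes at either level.
CONCEPTS = {"dog": "animal", "cat": "animal", "wolf": "animal",
            "car": "vehicle", "truck": "vehicle", "bus": "vehicle"}

def featurize(doc, level):
    words = doc.lower().split()
    if level == "concept":
        words = [CONCEPTS.get(w, w) for w in words]
    return Counter(words)

def centroid_predict(train, test_doc, level):
    centroids = {}
    for doc, label in train:
        centroids.setdefault(label, Counter()).update(featurize(doc, level))
    feats = featurize(test_doc, level)
    # score a class by feature overlap with its centroid
    return max(centroids,
               key=lambda lab: sum(min(feats[w], centroids[lab][w]) for w in feats))

train = [("the dog barks", "pets"), ("a cat sleeps", "pets"),
         ("the car drives", "traffic"), ("a truck honks", "traffic")]
# "wolf" never occurs in training, but its concept ("animal") does
print(centroid_predict(train, "a wolf howls", "concept"))  # pets
```

At the word level "wolf" carries no signal at all; at the concept level it maps to "animal", which the training data has seen, which is the kind of granularity effect the paragraph above describes.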
|
1053 |
Continuous space models with neural networks in natural language processing. Le, Hai Son. 20 December 2012.
The purpose of language models is, in general, to capture and model the regularities of language, thereby capturing the morphological, syntactic and distributional properties of word sequences in a given language. They play an important role in many successful applications of Natural Language Processing, such as Automatic Speech Recognition, Machine Translation and Information Extraction. The most successful approaches to date are based on the n-gram assumption and on adjusting statistics from the training data with smoothing and back-off techniques, notably the Kneser-Ney technique introduced twenty years ago. In this way, language models predict a word based on its n-1 previous words. In spite of their prevalence, conventional n-gram language models still suffer from several limitations that could intuitively be overcome by consulting human expert knowledge. One critical limitation is that, ignoring all linguistic properties, they treat each word as one discrete symbol with no relation to the others. Another is that, even with a huge amount of data, data sparsity always has an important impact, so the optimal value of n in the n-gram assumption is often 4 or 5, which is insufficient in practice. This kind of model is constructed from the counts of n-grams in the training data, so its pertinence is conditioned only on the characteristics of the training text (its quantity, its representation of the content in terms of theme and date). Recently, one of the most successful attempts to learn word similarities directly is to use distributed word representations in language modeling, where words with similar semantic and syntactic (distributional) properties are expected to be represented as neighbors in a continuous space. These representations and the associated objective function (the likelihood of the training data) are jointly learned using a multi-layer neural network architecture.
In this way, word similarities are learned automatically. This approach has shown significant and consistent improvements when applied to automatic speech recognition and statistical machine translation tasks. A major difficulty with the continuous space neural network approach remains the computational burden, which does not scale well to the massive corpora that are nowadays available. For this reason, the first contribution of this dissertation is the definition of a neural architecture based on a tree representation of the output vocabulary, namely the Structured OUtput Layer (SOUL), which makes these models well suited for large-scale frameworks. The SOUL model combines the neural network approach with the class-based approach. It achieves significant improvements on both state-of-the-art large-scale automatic speech recognition and statistical machine translation tasks. The second contribution is to provide several insightful analyses of their performance, their pros and cons, and the word space representations they induce. Finally, the third contribution is the successful adoption of the continuous space neural network into a machine translation framework. New translation models are proposed and reported to achieve significant improvements over state-of-the-art baseline systems.
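The class-based factorization at the heart of such a structured output layer can be sketched in a few lines. The sketch below is a flat two-level toy with random weights (the actual SOUL model uses a deeper tree over the vocabulary); it shows why the decomposition P(w | h) = P(class(w) | h) * P(w | class(w), h) still yields a proper distribution while replacing one large softmax over V words with two small ones:

```python
import math
import random

random.seed(0)
V, C, H = 10, 2, 4                    # vocab size, classes, hidden size
word2class = [0] * 5 + [1] * 5        # toy word clustering
W_class = [[random.gauss(0, 1) for _ in range(H)] for _ in range(C)]
W_word = [[random.gauss(0, 1) for _ in range(H)] for _ in range(V)]

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def word_prob(w, h):
    # P(w | h) = P(class(w) | h) * P(w | class(w), h)
    c = word2class[w]
    p_class = softmax([dot(row, h) for row in W_class])[c]
    members = [v for v in range(V) if word2class[v] == c]
    p_in = softmax([dot(W_word[v], h) for v in members])
    return p_class * p_in[members.index(w)]

h = [random.gauss(0, 1) for _ in range(H)]
total = sum(word_prob(w, h) for w in range(V))
print(round(total, 6))  # 1.0 -- a proper distribution over the vocabulary
```

Each softmax is over at most max(C, V/C) items instead of V, which is the source of the speed-up that makes such models practical on large vocabularies.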
|
1054 |
Generierung von natürlichsprachlichen Texten aus semantischen Strukturen im Prozeß der maschinellen Übersetzung - Allgemeine Strukturen und Abbildungen (Generation of natural-language texts from semantic structures in the machine translation process: general structures and mappings). Rosenpflanzer, Lutz; Karl, Hans-Ulrich. 14 December 2012.
0 FOREWORD
Machine translation of natural language is dominated by several problems. One always has to deal with very large amounts of data. Even if only a small text is to be translated, the task is embedded in an extensive context: all knowledge about the source and target languages must be available, in as formalized a form as possible. If spoken language is involved, speech recognition and speech synthesis tasks as well as hard real-time requirements are added. The complexity of the problem is, even with modern software development concepts, a challenge not to be underestimated for anyone attempting an implementation.
Approaches that consistently use the working principles and methods of computer science usually present their results only prototypically, for a very small part of the language (a phrase, a sentence, or a few example sentences), and conclude more or less inductively that the developed solution can also be applied successfully to the whole language, if only one has enough "lemmings" who, swarming out in all directions, could quickly and industriously carry out the "still necessary routine work".
|
1055 |
Syntactic and Semantic Analysis and Visualization of Unstructured English Texts. Karmakar, Saurav. 14 December 2011.
People have complex thoughts, and they often express those thoughts in complex sentences in natural language. This complexity may facilitate efficient communication among an audience that shares the same knowledge base; for a different or new audience, however, such compositions become cumbersome to understand and analyze. Analyzing such compositions with syntactic or semantic measures is a challenging job and constitutes the base step of natural language processing.
In this dissertation I explore and propose a number of new techniques to analyze and visualize the syntactic and semantic patterns of unstructured English texts.
The syntactic analysis is done through a proposed visualization technique that categorizes and compares different English compositions based on different reading complexity metrics. For the semantic analysis, I use Latent Semantic Analysis (LSA) to analyze the hidden patterns in complex compositions. I have used this technique to analyze comments from a social visualization web site in order to detect irrelevant ones (e.g., spam). The patterns of collaboration are also studied through statistical analysis.
Word sense disambiguation is used to determine the correct sense of a word in a sentence or composition. Applying a textual similarity measure, built on different word similarity measures and word sense disambiguation, to collaborative text snippets from a social collaborative environment reveals a direction for untangling the complex hidden patterns of collaboration.
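The LSA step can be illustrated with a minimal sketch (toy term counts and invented comment data, not the site's actual corpus): factoring a term-document matrix with SVD places semantically related comments near each other in a low-rank space, so an off-topic, spam-like comment stands apart.

```python
import numpy as np

# Toy term-document count matrix; columns: two genuine comments
# about visualization, one spam comment.
terms = ["graph", "visual", "color", "buy", "cheap", "pills"]
docs = np.array([
    [2, 1, 0],  # graph
    [1, 2, 0],  # visual
    [1, 1, 0],  # color
    [0, 0, 2],  # buy
    [0, 0, 1],  # cheap
    [0, 0, 2],  # pills
], dtype=float)

U, s, Vt = np.linalg.svd(docs, full_matrices=False)
k = 2                                      # keep two latent dimensions
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T     # documents in latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# the two genuine comments are close; the spam comment is not
print(cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2]))  # True
```

On real comment streams the matrix is large and sparse and the counts are typically tf-idf weighted, but the mechanism for surfacing irrelevant comments is the same.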
|
1056 |
Un entorno para la extracción incremental de conocimiento desde texto en lenguaje natural (An environment for incremental knowledge extraction from natural language text). Valencia García, Rafael. 22 April 2005.
The growing need to enrich the Web with large amounts of ontologies that capture domain knowledge has generated numerous studies and lines of research on methodologies capable of overcoming the bottleneck posed by the manual construction of ontologies. This need has led to the definition of a new research area, called Ontology Learning, which seeks semiautomatic methods for building ontologies. The solution we propose in this work is based on the development of a new environment for incremental knowledge extraction from natural language texts. An ontological engineering perspective has been adopted, so that the acquired knowledge is represented by means of ontologies. This work contributes a new method for the semiautomatic construction of ontologies from natural language texts that is not only focused on obtaining hierarchies of concepts, but also takes into account a broad set of semantic relations between concepts.
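One classic building block for this kind of ontology learning (a common technique in the literature, not necessarily the method developed in this thesis) is the extraction of hyponym-hypernym pairs with lexico-syntactic patterns of the form "X such as A, B and C":

```python
import re

# Hearst-style "X such as A, B and C" pattern; the naive plural
# stripping ("languages" -> "language") is only for illustration.
PATTERN = re.compile(
    r"(\w+?)s?\s+such\s+as\s+"
    r"((?:\w+)(?:\s*,\s*\w+)*(?:\s*,?\s*(?:and|or)\s+\w+)?)",
    re.IGNORECASE)

def hyponym_pairs(text):
    """Return (hyponym, hypernym) pairs found in the text."""
    pairs = []
    for hyper, group in PATTERN.findall(text):
        for hypo in re.split(r"\s*,\s*|\s+(?:and|or)\s+", group):
            pairs.append((hypo.lower(), hyper.lower()))
    return pairs

text = "Languages such as Spanish, Catalan and English are studied."
print(hyponym_pairs(text))
# [('spanish', 'language'), ('catalan', 'language'), ('english', 'language')]
```

Pairs mined this way can seed the concept hierarchy, which richer semantic relations are then layered on top of.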
|
1057 |
Unsupervised learning of relation detection patterns. Gonzàlez Pellicer, Edgar. 01 June 2012.
Information extraction is the area of natural language processing whose goal is to obtain structured data from the relevant information contained in textual fragments.
Information extraction requires a significant amount of linguistic knowledge. The specificity of this knowledge is a drawback for the portability of systems, as a change of language, domain or style comes at a cost in human effort. For decades, machine learning techniques have been applied to overcome this portability bottleneck, progressively reducing the amount of human supervision involved. However, as large document collections become increasingly available, completely unsupervised approaches become necessary in order to exploit the knowledge they contain.
The proposal of this thesis is to incorporate clustering techniques into pattern learning for information extraction, in order to further reduce the elements of supervision involved in the process. In particular, the work focuses on the problem of relation detection. Achieving this goal has required, first, considering the different strategies in which this combination could be carried out; second, developing or adapting clustering algorithms suited to our needs; and third, devising pattern learning procedures that incorporate clustering information.
By the end of this thesis, we had developed and implemented an approach to learning relation detection patterns which, using clustering techniques and minimal human supervision, is competitive with and even outperforms comparable state-of-the-art approaches.
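The core intuition can be shown with a deliberately naive sketch (invented contexts and a single-pass word-overlap heuristic; the thesis develops and adapts proper clustering algorithms): the textual contexts linking entity pairs are grouped without supervision, and each resulting cluster can seed a relation detection pattern.

```python
# Invented inter-entity contexts, e.g. the text between "<PERSON>"
# and "<LOCATION>" mentions in a corpus.
contexts = [
    "was born in", "was born at", "is the capital of",
    "is capital of", "works for", "is employed by",
]

def jaccard(a, b):
    """Word-set Jaccard similarity between two context strings."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def cluster(items, threshold=0.4):
    """Greedy single-pass clustering: join the first cluster whose
    representative is similar enough, else start a new cluster."""
    clusters = []
    for item in items:
        for c in clusters:
            if jaccard(item, c[0]) >= threshold:
                c.append(item)
                break
        else:
            clusters.append([item])
    return clusters

result = cluster(contexts)
print(result)
# [['was born in', 'was born at'], ['is the capital of', 'is capital of'],
#  ['works for'], ['is employed by']]
```

Note the limitation this exposes: "works for" and "is employed by" express the same relation but share no words, so surface overlap cannot merge them; this is exactly the kind of case where the more careful clustering developed in the thesis is needed.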
|
1058 |
Concept Mining: A Conceptual Understanding based Approach. Shehata, Shady. January 2009.
Due to the rapid daily growth of information, there is a considerable need to extract and discover valuable knowledge from data sources such as the World Wide Web. Most common techniques in text mining are based on the statistical analysis of a term, either a word or a phrase. These techniques consider documents as bags of words and pay no attention to the meaning of the document content. In addition, statistical analysis of term frequency captures the importance of a term within a document only. However, two terms can have the same frequency in their documents while one term contributes more to the meaning of its sentences than the other. Therefore, there is an intensive need for a model that captures the meaning of linguistic utterances in a formal structure. The underlying model should indicate terms that capture the semantics of the text. In this case, the model can capture terms that present the concepts of a sentence, which leads to discovering the topic of the document.
A new concept-based model is introduced that analyzes terms on the sentence, document and corpus levels, rather than the traditional analysis of the document only. The concept-based model can effectively discriminate between terms that are unimportant to the sentence semantics and terms which hold the concepts that represent the sentence meaning.
The proposed model consists of a concept-based statistical analyzer, a conceptual ontological graph representation, a concept extractor and a concept-based similarity measure. A term which contributes to the sentence semantics is assigned two different weights, one by the concept-based statistical analyzer and one by the conceptual ontological graph representation. These two weights are combined into a new weight. The concepts with the maximum combined weights are selected by the concept extractor. The similarity between documents is calculated with a new concept-based similarity measure, which takes full advantage of the concept analysis measures on the sentence, document and corpus levels.
Large sets of experiments using the proposed concept-based model on different datasets in text clustering, categorization and retrieval are conducted. The experiments provide an extensive comparison between traditional weighting and the concept-based weighting obtained by the concept-based model. Experimental results in text clustering, categorization and retrieval demonstrate a substantial improvement in quality using: (1) concept-based term frequency (tf), (2) conceptual term frequency (ctf), (3) the concept-based statistical analyzer, (4) the conceptual ontological graph, and (5) the concept-based combined model.
In text clustering, the evaluation of results relies on two quality measures: the F-measure and the entropy. In text categorization, it relies on three quality measures: the micro-averaged F1, the macro-averaged F1 and the error rate. In text retrieval, it relies on three quality measures: precision at 10 documents retrieved (P(10)), the preference measure (bpref), and mean uninterpolated average precision (MAP). All of these quality measures improve when the newly developed concept-based model is used to enhance the quality of text clustering, categorization and retrieval.
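A toy rendering of the central weighting idea (invented sentences, and a crude "first noun" stand-in for the model's actual semantic analysis) shows how a sentence-level conceptual count can separate two terms that have identical document frequency:

```python
# Three invented sentences forming one document.
doc = [
    "satellites orbit the earth",
    "satellites relay communication signals",
    "the weather changes daily",
]

def weights(sentences):
    """Combine document-level term frequency (tf) with a toy
    sentence-level conceptual count (ctf): here, how often a term
    appears in sentence-initial "head" position -- a crude stand-in
    for real semantic-role analysis."""
    tf, ctf = {}, {}
    total = 0
    for s in sentences:
        words = s.split()
        for w in words:
            tf[w] = tf.get(w, 0) + 1
            total += 1
        head = words[1] if words[0] == "the" else words[0]
        ctf[head] = ctf.get(head, 0) + 1
    return {w: (tf[w] / total) * (1 + ctf.get(w, 0)) for w in tf}

w = weights(doc)
# "satellites" and "the" both occur twice, but "satellites" heads two
# sentences, so its combined weight is boosted
print(round(w["satellites"], 3), round(w["the"], 3))  # 0.5 0.167
```

Plain tf cannot tell these two terms apart; any weighting that consults the sentence level, as the concept-based model does, can.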
|
1060 |
Consistency of Probabilistic Context-Free Grammars. Stüber, Torsten. 10 May 2012.
We present an algorithm for deciding whether an arbitrary proper probabilistic context-free grammar is consistent, i.e., whether the probability that a derivation terminates is one. Our procedure has time complexity $\mathcal{O}(n^3)$ in the unit-cost model of computation. Moreover, we develop a novel characterization of consistent probabilistic context-free grammars. A simple corollary of our result is that training methods for probabilistic context-free grammars that are based on maximum-likelihood estimation always yield consistent grammars.
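The notion of consistency can be made concrete on the smallest interesting grammar (a worked example, not the paper's decision procedure): with rules S -> S S (probability p) and S -> a (probability 1-p), the termination probability q is the least fixed point of q = p*q^2 + (1-p), which equals 1 exactly when p <= 1/2.

```python
def termination_prob(p, iters=10000):
    """Least fixed point of q = p*q^2 + (1-p), computed by fixed-point
    iteration from q = 0 (monotone convergence to the smallest root)."""
    q = 0.0
    for _ in range(iters):
        q = p * q * q + (1 - p)
    return q

print(round(termination_prob(0.4), 4))  # 1.0  -> grammar is consistent
print(round(termination_prob(0.8), 4))  # 0.25 -> grammar is inconsistent
```

Solving the quadratic confirms the iteration: the roots are 1 and (1-p)/p, and the least of the two is the termination probability, so the grammar is consistent precisely when p <= 1/2.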
|