1.
Flexible semantic matching of rich knowledge structures
Yeh, Peter Zei-Chan, 28 August 2008 (has links)
Abstract not available.
2.
Geometric and topological approaches to semantic text retrieval. / CUHK electronic theses & dissertations collection
January 2007 (has links)
With the vast amount of textual information available today, the task of designing effective and efficient retrieval methods becomes ever more important and complex. The Basic Vector Space Model (BVSM) is well known in information retrieval but, because it is based on literal term matching, it cannot retrieve all relevant documents. The Generalized Vector Space Model (GVSM) and Latent Semantic Indexing (LSI) are two well-known semantic retrieval methods, both of which assume some underlying latent semantic structure in the dataset. However, their assumptions about where that semantic structure is located are rather strong. Moreover, the performance of LSI can differ greatly across datasets, and the questions of which characteristics of a dataset contribute to this difference, and why, have not been fully understood. The present thesis focuses on providing answers to these two questions.

In the first part of this thesis, we present a new understanding of the latent semantic space of a dataset from the dual perspective, which relaxes the assumed conditions above and leads naturally to a unified kernel function for a class of vector space models. New semantic analysis methods based on the unified kernel function are developed, combining the advantages of LSI and GVSM. We also show that the new methods are stable with respect to the choice of rank: even if the selected rank is quite far from the optimal one, retrieval performance does not degrade much. The experimental results of our methods on standard test sets are promising.

In the second part of this thesis, we propose that the mathematical structure of simplexes can be attached to a term-document matrix in the vector space model (VSM) for information retrieval. The Q-analysis devised by R. H. Atkin may then be applied to analyse the topological structure of the simplexes and their corresponding dataset. Experimental results of this analysis reveal a correlation between the effectiveness of LSI and the topological structure of the dataset. Using the information obtained from the topological analysis, we develop a new query expansion method; experimental results show that it can enhance the performance of the VSM on datasets over which LSI is not effective. Finally, the notion of homology is introduced into the topological analysis of datasets, and its possible relation to word sense disambiguation is studied through a simple example.

Li, Dandan. "August 2007." Adviser: Chung-Ping Kwong. Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1108. Includes bibliographical references (p. 118-120). Abstract in English and Chinese.
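For illustration only (not code from the thesis), the following minimal Python sketch shows the rank-k LSI projection of a term-document matrix that the abstract's discussion of rank choice refers to; the toy matrix, vocabulary, and all names are assumptions.

```python
# Minimal sketch of LSI retrieval via a truncated SVD (illustrative only).
import numpy as np

# Toy term-document matrix A (terms x documents), e.g. raw term counts.
A = np.array([
    [2, 0, 1, 0],   # "semantic"
    [1, 1, 0, 0],   # "retrieval"
    [0, 2, 0, 1],   # "kernel"
    [0, 0, 1, 2],   # "topology"
], dtype=float)

k = 2  # chosen rank; the thesis argues retrieval quality is stable in this choice
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

# Documents represented in the k-dimensional latent space.
docs_k = np.diag(sk) @ Vtk

def query_similarity(q):
    """Fold a query term vector into the latent space and score documents
    by cosine similarity."""
    q_k = np.diag(1.0 / sk) @ Uk.T @ q          # project query to rank-k space
    sims = docs_k.T @ q_k
    sims /= (np.linalg.norm(docs_k, axis=0) * np.linalg.norm(q_k) + 1e-12)
    return sims

q = np.array([1, 0, 0, 0], dtype=float)         # query containing only "semantic"
print(query_similarity(q))                       # one score per document
```

The stability result described above would mean that the scores returned by query_similarity change little when k is moved away from its optimal value.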
3.
Aktionsart coercion
Barber, Nicole, January 2008 (has links)
This study aimed to investigate English Aktionsart coercion, particularly novel coercion, through corpus-based research. Novel coercions are those that need some contextual support in order to be interpretable or grammatical. Due to the nature of the data, a necessary part of the study was the design of a program to help in the process of tagging corpora for Aktionsart.

This thesis starts with a discussion of five commonly accepted Aktionsarten: state, activity, achievement, accomplishment, and semelfactive. One significant contribution of the thesis is that it offers a comprehensive review and discussion of the various theories that have been proposed to account for Aktionsart or aspectual coercion, as no such synthesis is available in the literature. The thesis then reviews many of the more prominent works in the area of Aktionsart coercion, including Moens and Steedman (1988), Pustejovsky (1995), and De Swart (1998). I also present a few theories drawn from less prominent studies by authors who have different or interesting views on the topic, such as Bickel (1997), Krifka (1998), and Xiao and McEnery (2004).

In order to study the Aktionsart coercion of verbs in large corpora, examples of Aktionsart coercion needed to be collected. I aimed to design a computer program that could ideally perform a large portion of this task automatically. I present the methods I used in designing the program, as well as the process involved in using it to collect data. The major steps in my research were tagging the corpora, counting coercion frequency by type, and selecting representative examples of different types of coercion for analysis and discussion.

All of the examples collected from the corpora, both by my Aktionsart-tagging program and manually, were conventional coercions, so there was no opportunity to analyse novel coercions. I nevertheless discuss the examples of conventional coercion gathered from the corpus analysis, with particular reference to Moens and Steedman's (1988) theory. Three dominant types of coercion were identified in the data: from activities into accomplishments, activities into states, and accomplishments into states. Coercion took place in two main ways: from activity to accomplishment through the addition of an endpoint, and from various Aktionsarten into state by coercing the event into being a property of someone or something. Many of the Aktionsart coercion theories are supported at least in part by the data found in natural language. One of the most prominent coercions that is underrepresented in the data is from achievement to accomplishment through the addition of a preparatory process. I conclude that while there are reasons for analysing Aktionsart at the verb phrase or sentence level, this does not mean the possibility of analysis at the lexical level should be ignored.
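For illustration (a hypothetical sketch, not the thesis's tagging program), the rules below encode the two dominant coercion routes identified in the data; the lexicon and trigger flags are assumptions.

```python
# Hypothetical sketch of the coercion rules described above: a verb's lexical
# Aktionsart is shifted by contextual triggers such as a bounding adverbial
# (endpoint) or a property-denoting (habitual/generic) context.
LEXICAL_AKTIONSART = {"run": "activity", "notice": "achievement",
                      "build": "accomplishment", "know": "state"}

def coerce(verb, has_endpoint=False, property_context=False):
    kind = LEXICAL_AKTIONSART.get(verb, "activity")
    if has_endpoint and kind == "activity":
        kind = "accomplishment"      # "run a mile in ten minutes": endpoint added
    if property_context and kind in ("activity", "accomplishment"):
        kind = "state"               # "John runs" (habitually): event as property
    return kind

print(coerce("run", has_endpoint=True))        # accomplishment
print(coerce("build", property_context=True))  # state
```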
4.
Coping with uncertainty: noun phrase interpretation and early semantic analysis
Mellish, Christopher Stuart, January 1981 (has links)
A computer program which can "understand" natural language texts must have both syntactic knowledge about the language concerned and semantic knowledge of how what is written relates to its internal representation of the world. It has been a matter of some controversy how these sources of information can best be integrated to translate from an input text to a formal meaning representation. The controversy has largely concerned the question of what degree of syntactic analysis must be performed before any semantic analysis can take place. An extreme position in this debate is that a syntactic parse tree for a complete sentence must be produced before any investigation of that sentence's meaning is appropriate. This position has been criticised by those who see understanding as a process that takes place gradually as the text is read, rather than in sudden bursts of activity at the ends of sentences. These people advocate a model where semantic analysis can operate on fragments of text before the global syntactic structure is determined - a strategy which we will call early semantic analysis.

In this thesis, we investigate the implications of early semantic analysis in the interpretation of noun phrases. One possible approach is to say that a noun phrase is a self-contained unit and can be fully interpreted by the time it has been read, so that it can always be determined what objects a noun phrase refers to without consulting much more than the structure of the phrase itself. This approach was taken in part by Winograd [Winograd 72], who saw the constraint that a noun phrase have a referent as a valuable aid in resolving local syntactic ambiguity. Unfortunately, Winograd's work has been criticised by Ritchie, because it is not always possible to determine what a noun phrase refers to purely on the basis of local information. In this thesis, we go further and claim that, because the meaning of a noun phrase can be affected by so many factors outside the phrase itself, it makes no sense to talk about "the referent" as a function of a noun phrase. Instead, the notion of "referent" is something defined by global issues of structure and consistency.

Having rejected one approach to the early semantic analysis of noun phrases, we go on to develop an alternative, which we call incremental evaluation. The basic idea is that a noun phrase does provide some information about what it refers to. It should be possible to represent this partial information and gradually refine it as relevant implications of the context are followed up. Moreover, the partial information should be available to an inference system, which, amongst other things, can detect the absence of a referent and so provide the advantages of Winograd's system. In our system, noun phrase interpretation does take place locally, but it does not finish there: the determination of the meaning of a noun phrase is spread over the subsequent analysis of how it contributes to the meaning of the text as a whole.
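A minimal sketch (not Mellish's program) of the incremental evaluation idea: partial constraints from the noun phrase and from later context successively narrow a candidate set, and an empty set signals the absence of a referent. The toy world model below is invented.

```python
# Incremental refinement of a noun phrase's candidate referents.
world = [
    {"id": "b1", "type": "block", "colour": "red",   "on": "table"},
    {"id": "b2", "type": "block", "colour": "green", "on": "b1"},
    {"id": "p1", "type": "pyramid", "colour": "red", "on": "table"},
]

def refine(candidates, predicate):
    """Apply one newly read constraint to the current candidate set."""
    return [e for e in candidates if predicate(e)]

# "the red block ..."
cands = refine(world, lambda e: e["type"] == "block")
cands = refine(cands, lambda e: e["colour"] == "red")
# "... on the table" -- later context narrows the set further
cands = refine(cands, lambda e: e["on"] == "table")

if not cands:
    print("no referent: reinterpret the phrase")   # the cue Winograd exploited
else:
    print([e["id"] for e in cands])                # ['b1']
```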
5.
Semantic Similarity of Spatial Scenes
Nedas, Konstantinos A., January 2006 (has links) (PDF)
No description available.
6.
Using semantic knowledge to improve compression on log files
Otten, Frederick John, 19 November 2008 (has links)
With the move towards global and multinational companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Networks consist of a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. Each of these devices produces log files which need to be analysed and monitored to provide network security and satisfy regulations.

Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data for archival purposes after the log files have been rotated. However, many other compression programs exist, each with its own advantages and disadvantages: each uses a different amount of memory and takes different compression and decompression times to achieve a different compression ratio. System log files also contain redundancy which is not necessarily exploited by standard compression programs. Log messages usually follow a similar format with a defined syntax; not all ASCII characters are used, and the messages contain certain "phrases" which are often repeated.

This thesis investigates the use of compression as a means of data reduction and how the use of semantic knowledge can improve data compression, also applying the results to different scenarios that can occur in a distributed computing environment. It presents the results of a series of tests performed on different log files. It also examines the semantic knowledge which exists in maillog files and how it can be exploited to improve compression. The results from a series of text preprocessors which exploit this knowledge are presented and evaluated. These preprocessors include one which replaces the timestamps and IP addresses with their binary equivalents and one which replaces words from a dictionary with unused ASCII characters.

In this thesis, data compression is shown to be an effective method of data reduction, producing up to 98 percent reduction in file size on a corpus of log files. The use of preprocessors which exploit semantic knowledge results in up to 56 percent improvement in overall compression time and up to 32 percent reduction in compressed size.
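A minimal sketch of the first preprocessing idea described above, assuming syslog-style lines; it is not the thesis's code, and the regexes and hex encoding are illustrative simplifications.

```python
# Replace timestamps and IPv4 addresses with compact binary equivalents
# before handing the lines to a general-purpose compressor such as gzip.
import re
import socket
import struct
import time

IP_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")
TS_RE = re.compile(r"^([A-Z][a-z]{2}\s+\d+\s\d{2}:\d{2}:\d{2})")

def pack_ip(match):
    # 4 raw bytes instead of up to 15 ASCII characters
    return socket.inet_aton(match.group(1)).hex()  # hex keeps the line printable here

def pack_timestamp(match, year=2008):
    # syslog timestamps omit the year, so one is assumed for the sketch
    t = time.strptime(f"{match.group(1)} {year}", "%b %d %H:%M:%S %Y")
    return struct.pack(">I", int(time.mktime(t))).hex()  # 4-byte epoch seconds

def preprocess(line):
    line = TS_RE.sub(pack_timestamp, line)
    return IP_RE.sub(pack_ip, line)

print(preprocess("Nov 19 14:02:11 host sshd[42]: connection from 192.168.0.7"))
```

A standard compressor would then be run over the preprocessed output; the gains reported above come from combining such preprocessing with programs like gzip and bzip2.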
7.
Semantic annotation of Chinese texts with message structures based on HowNet
Wong, Ping-wai (黃炳蔚), January 2007 (has links)
Thesis (Doctor of Philosophy), Humanities.
8.
Extracting causation knowledge from natural language texts.
January 2002 (has links)
Chan Ki, Cecia. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 95-99). Abstracts in English and Chinese.

Table of contents:
1 Introduction (p.1)
  1.1 Our Contributions (p.4)
  1.2 Thesis Organization (p.5)
2 Related Work (p.6)
  2.1 Using Knowledge-based Inferences (p.7)
  2.2 Using Linguistic Techniques (p.8)
    2.2.1 Using Linguistic Clues (p.8)
    2.2.2 Using Graphical Patterns (p.9)
    2.2.3 Using Lexicon-syntactic Patterns of Causative Verbs (p.10)
    2.2.4 Comparisons with Our Approach (p.10)
  2.3 Discovery of Extraction Patterns for Extracting Relations (p.11)
    2.3.1 Snowball system (p.12)
    2.3.2 DIRT system (p.12)
    2.3.3 Comparisons with Our Approach (p.13)
3 Semantic Expectation-based Knowledge Extraction (p.14)
  3.1 Semantic Expectations (p.14)
  3.2 Semantic Template (p.16)
    3.2.1 Causation Semantic Template (p.16)
  3.3 Sentence Templates (p.17)
  3.4 Consequence and Reason Templates (p.22)
  3.5 Causation Knowledge Extraction Framework (p.25)
    3.5.1 Template Design (p.25)
    3.5.2 Sentence Screening (p.27)
    3.5.3 Semantic Processing (p.28)
4 Using Thesaurus and Pattern Discovery for SEKE (p.33)
  4.1 Using a Thesaurus (p.34)
  4.2 Pattern Discovery (p.37)
    4.2.1 Use of Semantic Expectation-based Knowledge Extraction (p.37)
    4.2.2 Use of Part of Speech Information (p.39)
    4.2.3 Pattern Representation (p.39)
    4.2.4 Constructing the Patterns (p.40)
    4.2.5 Merging the Patterns (p.43)
  4.3 Pattern Matching (p.44)
    4.3.1 Matching Score (p.46)
    4.3.2 Support of Patterns (p.48)
    4.3.3 Relevancy of Sentence Templates (p.48)
  4.4 Applying the Newly Discovered Patterns (p.49)
5 Applying SEKE on Hong Kong Stock Market Domain (p.52)
  5.1 Template Design (p.53)
    5.1.1 Semantic Templates (p.53)
    5.1.2 Sentence Templates (p.53)
    5.1.3 Consequence and Reason Templates (p.55)
  5.2 Pattern Discovery (p.58)
    5.2.1 Support of Patterns (p.58)
    5.2.2 Relevancy of Sentence Templates (p.58)
  5.3 Causation Knowledge Extraction Result (p.58)
    5.3.1 Evaluation Approach (p.61)
    5.3.2 Parameter Investigations (p.61)
    5.3.3 Experimental Results (p.65)
    5.3.4 Knowledge Discovered (p.68)
    5.3.5 Parameter Effect (p.75)
6 Applying SEKE on Global Warming Domain (p.80)
  6.1 Template Design (p.80)
    6.1.1 Semantic Templates (p.81)
    6.1.2 Sentence Templates (p.81)
    6.1.3 Consequence and Reason Templates (p.83)
  6.2 Pattern Discovery (p.85)
    6.2.1 Support of Patterns (p.85)
    6.2.2 Relevancy of Sentence Templates (p.85)
  6.3 Global Warming Domain Result (p.85)
    6.3.1 Evaluation Approach (p.85)
    6.3.2 Experimental Results (p.88)
    6.3.3 Knowledge Discovered (p.89)
7 Conclusions and Future Directions (p.92)
  7.1 Conclusions (p.92)
  7.2 Future Directions (p.93)
Bibliography (p.95)
Appendix A: Penn Treebank Part of Speech Tags (p.100)
9.
Grammar-Based Semantic Parsing Into Graph Representations
Bauer, Daniel, January 2017 (has links)
Directed graphs are an intuitive and versatile representation of natural language meaning because they can capture relationships between instances of events and entities, including cases where entities play multiple roles. Yet, there are few approaches in natural language processing that use graph manipulation techniques for semantic parsing. This dissertation studies graph-based representations of natural language meaning, discusses a formal-grammar based approach to the semantic construction of graph representations, and develops methods for open-domain semantic parsing into such representations. To perform string-to-graph translation I use synchronous hyperedge replacement grammars (SHRG). The thesis studies this grammar formalism from a formal, linguistic, and algorithmic perspective. It proposes a new lexicalized variant of this formalism (LSHRG), which is inspired by tree insertion grammar and provides a clean syntax/semantics interface. The thesis develops a new method for automatically extracting SHRG and LSHRG grammars from annotated “graph banks”, which uses existing syntactic derivations to structure the extracted grammar. It also discusses a new method for semantic parsing with large, automatically extracted grammars, that translates syntactic derivations into derivations of the synchronous grammar, as well as initial work on parse reranking and selection using a graph model. I evaluate this work on the Abstract Meaning Representation (AMR) dataset. The results show that the grammar-based approach to semantic analysis shows promise as a technique for semantic parsing and that string-to-graph grammars can be induced efficiently. Taken together, the thesis lays the foundation for future work on graph methods in natural language semantics.
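As a toy illustration of the grammar formalism named above (not Bauer's system), the sketch below performs one hyperedge-replacement step: a nonterminal edge in a partial meaning graph is replaced by a rule's right-hand-side fragment, fusing the fragment's external nodes with the edge's attachment nodes. The node and concept names are invented, AMR-flavoured examples.

```python
# One hyperedge replacement step on a graph encoded as a list of
# (label, node-list) edges.
def apply_rule(graph, edge_index, rhs_edges, rhs_external):
    """Replace the nonterminal edge at edge_index with rhs_edges, fusing
    rhs_external nodes to the edge's attachment nodes and renaming the
    fragment's internal nodes so they stay fresh in the host graph."""
    label, attach = graph[edge_index]
    fuse = dict(zip(rhs_external, attach))     # external node -> host node
    fresh = {}
    new_edges = []
    for lab, nodes in rhs_edges:
        mapped = [fuse[n] if n in fuse else fresh.setdefault(n, f"{n}@{edge_index}")
                  for n in nodes]
        new_edges.append((lab, mapped))
    return graph[:edge_index] + new_edges + graph[edge_index + 1:]

# Start graph: one nonterminal edge S attached to node e (an event instance).
g = [("S", ["e"])]
# Rule S(x) -> instance:want-01(x), ARG0(x, y), instance:boy(y)
g = apply_rule(g, 0,
               rhs_edges=[("instance:want-01", ["x"]),
                          ("ARG0", ["x", "y"]),
                          ("instance:boy", ["y"])],
               rhs_external=["x"])
print(g)   # the boy-wants fragment, with y renamed to a fresh host node
```

In the synchronous (SHRG) setting, each such graph rule is paired with a string rule, so a string derivation drives a sequence of these replacement steps.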
10.
Using web texts for word sense disambiguation
Wang, Yuanyong, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2007 (has links)
In all natural languages, ambiguity is a universal phenomenon. A word that has multiple meanings depending on its context is called an ambiguous word, and the process of determining the correct meaning of a word (formally, its word sense) in a given context is word sense disambiguation (WSD). WSD is one of the most fundamental problems in natural language processing. If properly addressed, it could lead to revolutionary advances in many other technologies, such as text search engines, automatic text summarization and classification, automatic lexicon construction, machine translation, and automatic learning agents.

One difficulty that has always confronted WSD researchers is the lack of high-quality sense-specific information. For example, if the word "power" immediately precedes the word "plant", it strongly constrains the meaning of "plant" to be "an industrial facility". If "power" is replaced by the phrase "root of a", then the sense of "plant" is dictated to be "an organism" of the kingdom Plantae. It is obvious that manually building a comprehensive sense-specific information base for each sense of each word is impractical. Researchers have also tried to extract such information from large dictionaries as well as manually sense-tagged corpora. Most of the dictionaries used for WSD were not built for this purpose and have many inherited peculiarities; manual tagging is slow and costly, while automatic tagging has not delivered reliable performance. Furthermore, it is often the case that for a randomly chosen word (to be disambiguated), the sense-specific context corpora that can be collected from dictionaries are not large enough. Therefore, manually building sense-specific information bases and extracting such information from dictionaries are not effective approaches for obtaining sense-specific information.

Web text, due to its vast quantity and wide diversity, is an ideal source for extracting large quantities of sense-specific information. In this thesis, the impact of Web texts on various aspects of WSD is investigated. New measures and models are proposed to tame the enormous amount of Web text for the purpose of WSD. They are formally evaluated by testing their disambiguation performance on about 70 ambiguous nouns. The results are very encouraging and help reveal the great potential of using Web texts for WSD. The results are published in three papers at the Australian national and international level (Wang & Hoffmann, 2004, 2005, 2006) [42][43][44].
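As a hedged illustration of the idea (not the thesis's actual measures or models), the sketch below scores each sense of an ambiguous word by the overlap between its web-derived signature and the local context, in the style of Lesk; the signatures here are invented rather than harvested.

```python
# Sense signatures as bags of context words, e.g. aggregated from web snippets.
from collections import Counter

SIGNATURES = {
    "plant/factory":  Counter("power industrial facility station output".split()),
    "plant/organism": Counter("root leaf grow soil kingdom species".split()),
}

def disambiguate(context_words):
    """Pick the sense whose signature shares the most words with the context."""
    def overlap(sig):
        return sum(1 for w in set(context_words) if sig[w] > 0)
    return max(SIGNATURES, key=lambda s: overlap(SIGNATURES[s]))

print(disambiguate("the root of a plant needs soil".split()))    # plant/organism
print(disambiguate("the power plant increased output".split()))  # plant/factory
```

The appeal of web text, as the abstract argues, is that such signatures can be collected in quantity for every sense, rather than hand-built or scraped from dictionaries.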