111

Design, construction and evaluation of a Greek grammar checker (Σχεδιασμός, κατασκευή και αξιολόγηση ελληνικού γραμματικού διορθωτή)

Γάκης, Παναγιώτης 07 May 2015 (has links)
The aim of this doctoral thesis is the design and implementation of an easy-to-use electronic tool (a grammar checker) that performs morphological and syntactic analysis of words, phrases and sentences in order to correct grammatical and stylistic errors. The basis for handling these issues is the set of rules of the Grammar (an adaptation of Manolis Triantafyllidis's Concise Modern Greek Grammar, the Μικρή Νεοελληνική Γραμματική), which has been the official codified grammar of Modern Greek since 1976; the few divergences introduced by the new school grammar for the fifth and sixth grades of primary school were not taken into account. Given the absence of such a tool for Greek, development rests first on the detailed recording, analysis and standardization of the errors of written language, and then on the selection of software able to describe grammatical errors formally. The thesis reports statistics relating the errors to the writer's gender, to the text type in which they occur, and to their recognition by students. It presents the implementation formalism used (Mnemosyne) and the particularities of Greek that complicate its computational processing; the formalism has already been used for the recognition of multi-word terms and for building electronic grammars aimed at automatic information extraction. In this way all users of the language, and not only native speakers, can better understand both how the various parts of the language system function and how its mechanisms operate during linguistic analysis. The main areas of grammatical error addressed by the checker are: 1) accentuation and punctuation, 2) the final -ν, 3) stylistic issues (verb forms with parallel variants, inflected forms), 4) the established spelling of Modern Greek words and phrases (fixed expressions, learned forms), 5) inflection (incorrect noun or verb forms produced through ignorance or confusion), 6) vocabulary (conceptual confusion, Greek renderings of foreign words, redundancy, use of the wrong word or phrase), 7) orthographic confusion (homophones), 8) agreement (disagreement among elements of the noun or verb phrase), 9) syntax (verb complementation), and 10) errors that require more specialized handling during spelling correction.
The basis for the implementation is the Neurolingo Lexicon, an electronic morphological lexicon built on a five-level model, with at least 90,000 lemmas producing about 1,200,000 inflected forms. Each form carries: a) orthographic information (the correct spelling of the inflected form), b) morphemic information (the prefixes, stems, suffixes and endings that compose it), c) morphosyntactic information (part of speech, gender, case, person, etc.), d) stylistic information (colloquial, learned, etc.) and e) terminological information (whether the form belongs to a specialized vocabulary). This lexicon is the cornerstone on which the grammar checker is built; its value and role are self-evident, since morphology is the first level of language to be examined and the syntactic level rests on, and depends on, the morphology of words. A major problem was lexical ambiguity, a product of the rich morphology of Greek, so a tagger was designed on purely linguistic criteria for the cases where lexical ambiguity obstructed the detection of errors in the use of the language. Texts that had already been corrected by a human were submitted to the grammar checker. In a very large proportion of cases the checker approaches the human correction, the only divergence being errors of textual cohesion and, by extension, errors of meaning.
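As a rough illustration of the kind of rule such a checker encodes, the sketch below implements a simplified version of the final -ν rule for a handful of candidate words, assuming whitespace tokenization and a hand-picked word list. It is not the thesis's Mnemosyne formalism or its actual rule set; the example sentence and the suggested corrections are illustrative only.

```python
import unicodedata

VOWELS = set("αεηιουω")                          # compared after stripping accents
KEEP_SINGLE = {"κ", "π", "τ", "ξ", "ψ"}          # consonants before which the -ν is kept
KEEP_CLUSTERS = {"γκ", "μπ", "ντ", "τσ", "τζ"}   # clusters before which the -ν is kept
CANDIDATES = {"την", "στην", "αυτην", "δεν", "μην"}  # accent-stripped forms the rule covers

def strip_accents(s: str) -> str:
    # Remove diacritics so accented and unaccented vowels compare equal.
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if unicodedata.category(c) != "Mn")

def keeps_final_n(next_word: str) -> bool:
    w = strip_accents(next_word.lower())
    return w[:1] in VOWELS or w[:1] in KEEP_SINGLE or w[:2] in KEEP_CLUSTERS

def final_n_errors(sentence: str):
    """Yield (word, next_word, suggestion) where the final -ν should be dropped."""
    tokens = sentence.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        if strip_accents(cur.lower()) in CANDIDATES and not keeps_final_n(nxt):
            yield cur, nxt, cur[:-1]

for err in final_n_errors("Έδωσα την δουλειά στην Μαρία"):
    print(err)   # ('την', 'δουλειά', 'τη') and ('στην', 'Μαρία', 'στη')
```

A real checker would of course work over the morphological lexicon and tagger described above rather than a raw token stream, so that ambiguity (for example between the article and the pronoun) is resolved before any rule fires.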
112

Generalized Probabilistic Topic and Syntax Models for Natural Language Processing

Darling, William Michael 14 September 2012 (has links)
This thesis proposes a generalized probabilistic approach to modelling document collections along the combined axes of both semantics and syntax. Probabilistic topic (or semantic) models view documents as random mixtures of unobserved latent topics which are themselves represented as probabilistic distributions over words. They have grown immensely in popularity since the introduction of the original topic model, Latent Dirichlet Allocation (LDA), in 2004, and have seen successes in computational linguistics, bioinformatics, political science, and many other fields. Furthermore, the modular nature of topic models allows them to be extended and adapted to specific tasks with relative ease. Despite the recorded successes, however, there remains a gap in combining axes of information from different sources and in developing models that are as useful as possible for specific applications, particularly in Natural Language Processing (NLP). The main contributions of this thesis are two-fold. First, we present generalized probabilistic models (both parametric and nonparametric) that are semantically and syntactically coherent and contain many simpler probabilistic models as special cases. Our models are consistent along both axes of word information in that an LDA-like component sorts words that are semantically related into distinct topics and a Hidden Markov Model (HMM)-like component determines the syntactic parts-of-speech of words so that we can group words that are both semantically and syntactically affiliated in an unsupervised manner, leading to such groups as verbs about health care and nouns about sports. Second, we apply our generalized probabilistic models to two NLP tasks. Specifically, we present new approaches to automatic text summarization and unsupervised part-of-speech (POS) tagging using our models and report results commensurate with the state-of-the-art in these two sub-fields. Our successes demonstrate the general applicability of our modelling techniques to important areas in computational linguistics and NLP.
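For intuition, the sketch below samples from the generative story of a composite topic-and-syntax model in the spirit the abstract describes: an HMM chooses a syntactic class for each word position, and one designated class emits words from document-specific topics while the others emit from class-specific distributions. The sizes, hyperparameters and the single-semantic-state design are assumptions for illustration, not the thesis's actual (parametric or nonparametric) models.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, S = 1000, 20, 8            # vocabulary size, number of topics, number of HMM states
alpha, beta_, gamma = 0.1, 0.01, 0.1

phi_topics = rng.dirichlet([beta_] * V, size=K)   # topic -> word distributions (LDA-like part)
phi_states = rng.dirichlet([beta_] * V, size=S)   # syntactic state -> word distributions (HMM part)
trans = rng.dirichlet([gamma] * S, size=S)        # HMM transition matrix over syntactic classes

def generate_document(n_words: int):
    theta = rng.dirichlet([alpha] * K)            # this document's topic proportions
    words, state = [], 0
    for _ in range(n_words):
        state = rng.choice(S, p=trans[state])     # pick a syntactic class for this position
        if state == 0:                            # designated "semantic" class: emit from a topic
            z = rng.choice(K, p=theta)
            w = rng.choice(V, p=phi_topics[z])
        else:                                     # function-word-like class: emit from the state
            w = rng.choice(V, p=phi_states[state])
        words.append(int(w))
    return words

print(generate_document(15))
```

Inference in the thesis runs in the opposite direction, recovering topics and syntactic classes from observed text, but the generative story above is what ties the two axes of word information together.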
113

The Effect of Natural Language Processing in Bioinspired Design

Burns, Madison Suzann 1987- 14 March 2013 (has links)
Bioinspired design methods are a new and evolving collection of techniques used to extract biological principles from nature to solve engineering problems. The application of bioinspired design methods is typically confined to existing problems encountered in new product design or redesign. A primary goal of this research is to apply existing bioinspired design methods to a complex engineering problem in order to examine the versatility of the methods in solving new problems. Here, current bioinspired design methods are applied to seek a biologically inspired solution to geoengineering. Bioinspired solutions developed in the case study include droplet density shields, phosphorescent mineral injection, and reflective orbiting satellites. The success of the methods in the case study indicates that bioinspired design methods have the potential to solve new problems and provide a platform of innovation for old problems. A secondary goal of this research is to help engineers use bioinspired design methods more efficiently, reducing post-processing time and eliminating the need for extensive knowledge of biological terminology by applying natural language processing techniques. Using the complex problem of geoengineering, a hypothesis is developed that asserts the usefulness of nouns in creating higher-quality solutions. A distinction is made between two types of nouns in a sentence, primary and spatial, and the hypothesis is refined to state that primary nouns are the most influential part of speech in providing biological inspiration for high-quality ideas. Through three design experiments, the author determines that engineers are more likely to develop a higher-quality solution using the primary noun in a given passage of biological text. The identification of primary nouns through part-of-speech tagging will provide engineers with an analogous biological system without extensive analysis of the results. The use of noun identification to improve the efficiency of bioinspired design method applications is a new concept and is the primary contribution of this research.
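A minimal sketch of the noun-identification step is shown below, assuming NLTK's off-the-shelf tokenizer and tagger and treating the first noun of a passage as the "primary" noun. The thesis's actual primary/spatial distinction is more nuanced than this heuristic, and the example passage is hypothetical.

```python
import nltk

# Resource names vary a little across NLTK versions; downloading extras is harmless.
for res in ("punkt", "punkt_tab", "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(res, quiet=True)

def primary_noun(passage: str):
    """Crude heuristic: return the first noun the POS tagger finds in the passage."""
    tags = nltk.pos_tag(nltk.word_tokenize(passage))
    nouns = [word for word, tag in tags if tag.startswith("NN")]
    return nouns[0] if nouns else None

passage = ("The abalone shell resists fracture because brittle mineral "
           "platelets are layered within a soft protein matrix.")
print(primary_noun(passage))   # -> 'abalone' (or another early noun, depending on the tagger)
```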
114

Efficient prediction of relational structure and its application to natural language processing

Riedel, Sebastian January 2009 (has links)
Many tasks in Natural Language Processing (NLP) require us to predict a relational structure over entities. For example, in Semantic Role Labelling we try to predict the 'semantic role' relation between a predicate verb and its argument constituents. Often NLP tasks not only involve related entities but also relations that are stochastically correlated. For instance, in Semantic Role Labelling the roles of different constituents are correlated: we cannot assign the agent role to one constituent if we have already assigned this role to another. Statistical Relational Learning (also known as First Order Probabilistic Logic) allows us to capture the aforementioned nature of NLP tasks because it is based on the notions of entities, relations and stochastic correlations between relations. It is therefore often straightforward to formulate an NLP task using a First Order probabilistic language such as Markov Logic. However, the generality of this approach comes at a price: the process of finding the relational structure with highest probability, also known as maximum a posteriori (MAP) inference, is often inefficient, if not intractable. In this work we seek to improve the efficiency of MAP inference for Statistical Relational Learning. We propose a meta-algorithm, namely Cutting Plane Inference (CPI), that iteratively solves small subproblems of the original problem using any existing MAP technique and inspects parts of the problem that are not yet included in the current subproblem but could potentially lead to an improved solution. Our hypothesis is that this algorithm can dramatically improve the efficiency of existing methods while remaining at least as accurate. We frame the algorithm in Markov Logic, a language that combines First Order Logic and Markov Networks. Our hypothesis is evaluated using two tasks: Semantic Role Labelling and Entity Resolution. It is shown that the proposed algorithm improves the efficiency of two existing methods by two orders of magnitude and leads an approximate method to more probable solutions. We also show that CPI, at convergence, is guaranteed to be at least as accurate as the method used within its inner loop. Another core contribution of this work is a theoretical and empirical analysis of the boundary conditions of Cutting Plane Inference. We describe cases when Cutting Plane Inference will definitely be difficult (because it instantiates large networks or needs many iterations) and when it will be easy (because it instantiates small networks and needs only a few iterations).
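The sketch below shows the cutting-plane meta-loop in its most generic form: solve a small subproblem, find ground formulas that the current solution violates (the separation step), add them, and re-solve. The function names and signatures are illustrative assumptions; the actual Markov Logic machinery (grounding, formula weights, the inner MAP solver) is abstracted away behind the callbacks.

```python
def cutting_plane_inference(ground_formulas, violated, map_solve, max_iter=100):
    """
    Generic cutting-plane meta-loop (a sketch, not the thesis's Markov Logic engine).

    ground_formulas : iterable of candidate constraints/factors for the full problem
    violated(y, f)  : returns True if candidate solution y violates formula f
    map_solve(F)    : any existing MAP technique applied to the partial problem F
    """
    active = set()                        # start from an empty (or seeded) subproblem
    y = map_solve(active)
    for _ in range(max_iter):
        new = {f for f in ground_formulas
               if f not in active and violated(y, f)}   # separation step
        if not new:                       # nothing violated: y also solves the full problem
            return y
        active |= new                     # grow the subproblem only where it matters
        y = map_solve(active)             # re-solve the enlarged subproblem
    return y
```

The efficiency argument in the abstract corresponds to the hope that `active` stays much smaller than the full set of ground formulas and that only a few iterations are needed.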
115

Determining non-urgent emergency room use factors from primary care data and natural language processing: a proof of concept

St-Maurice, Justin 28 March 2012 (has links)
The objective of this study was to discover biopsychosocial concepts from primary care that were statistically related to inappropriate emergency room use by using natural language processing tools. De-identified free text was extracted from a clinic in Guelph, Ontario and analyzed with MetaMap and GATE. Over 10 million concepts were extracted from 13,836 patient records. There were 77 codes that fell within the biopsychosocial realm, were highly statistically significant (p < 0.001) and had an odds ratio (OR) > 2.0. Thematically, these codes involved mental health and pain-related biopsychosocial concepts. Consistent with other literature, pain and mental health problems are seen to be important factors in inappropriate emergency room use. Despite sources of error in the NLP procedure, the study demonstrates the feasibility of combining natural language processing and primary care data to analyze the issue of inappropriate emergency room use. This technique could be used to analyze other, more complex problems. / Graduate
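As an illustration of the kind of association test behind the reported thresholds (p < 0.001, OR > 2.0), the sketch below computes an odds ratio and an exact p-value for one extracted concept from a 2x2 contingency table. The concept name and all counts are invented for the example and do not come from the study.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table for one extracted concept (say, "chronic pain"):
# rows: concept present / absent in a patient's primary-care notes
# cols: patient did / did not make non-urgent emergency room visits
table = [[40, 60],     # concept present
         [55, 445]]    # concept absent

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3g}")
# A code would clear the study's bar only if, for its real counts,
# both p < 0.001 and OR > 2.0 held.
```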
116

Semantic Analysis of Wikipedia's Linked Data Graph for Entity Detection and Topic Identification Applications

AlemZadeh, Milad January 2012 (has links)
The Semantic Web and Linked Data community is now making the future of the Web a reality. The standards and technologies defined in this field have opened a strong pathway towards a new era of knowledge management and representation for the computing world. The data structures and the semantic formats introduced by the Semantic Web standards offer a platform for all the data and knowledge providers in the world to present their information in a free, publicly available, semantically tagged, inter-linked, and machine-readable structure. As a result, the adoption of the Semantic Web standards by data providers creates numerous opportunities for the development of new applications that were not possible, or at best hardly achievable, with the current state of the Web, which mostly consists of unstructured or semi-structured data with minimal semantic metadata attached, tailored mainly for human readability. This dissertation introduces a framework for the effective analysis of Semantic Web data towards the development of solutions for a series of related applications. To build such a framework, Wikipedia is chosen as the main knowledge resource, largely because it is the central dataset of the Linked Data community. In this work, Wikipedia and its Semantic Web version, DBpedia, are used to create a semantic graph which constitutes the knowledge base and the back-end foundation of the framework. The semantic graph introduced in this research consists of two main concepts: entities and topics. The entities act as the knowledge items, while the topics create the class hierarchy of the knowledge items. Therefore, by assigning entities to various topics, the semantic graph presents all the knowledge items in a categorized hierarchy ready for further processing. Furthermore, this dissertation introduces various analysis algorithms over entity and topic graphs which can be used in a variety of applications, especially in the natural language understanding and knowledge management fields. After explaining the details of the analysis algorithms, a number of possible applications are presented and potential solutions to these applications are provided. The main themes of these applications are entity detection, topic identification, and context acquisition. To demonstrate the efficiency of the framework algorithms, some of the applications are developed and comprehensively studied by providing detailed experimental results which are compared with appropriate benchmarks. These results show how the framework can be used in different configurations and how different parameters affect the performance of the algorithms.
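The sketch below shows one minimal way such an entity-topic graph could be represented and queried: entities are assigned to topics, topics form a parent hierarchy, and every entity can therefore be reached from any ancestor topic. The topic and entity names are hypothetical placeholders rather than actual Wikipedia/DBpedia categories, and the traversal is deliberately naive compared with the dissertation's analysis algorithms.

```python
from collections import defaultdict

# Hypothetical topic hierarchy (child -> parent) and entity-to-topic assignments.
topic_parent = {
    "Machine_learning": "Computer_science",
    "Computer_science": "Science",
    "Science": None,
}
entity_topics = {
    "Latent_Dirichlet_allocation": {"Machine_learning"},
    "Alan_Turing": {"Computer_science"},
}

# Inverted index: topic -> entities directly assigned to it.
topic_entities = defaultdict(set)
for entity, topics in entity_topics.items():
    for topic in topics:
        topic_entities[topic].add(entity)

def ancestors(topic):
    """All topics on the path from `topic` up to the root, inclusive."""
    out = set()
    while topic is not None:
        out.add(topic)
        topic = topic_parent[topic]
    return out

def entities_under(topic):
    """Entities assigned to `topic` or to any topic beneath it (naive scan)."""
    return {e for t, ents in topic_entities.items() if topic in ancestors(t) for e in ents}

print(sorted(ancestors("Machine_learning")))   # ['Computer_science', 'Machine_learning', 'Science']
print(sorted(entities_under("Science")))       # ['Alan_Turing', 'Latent_Dirichlet_allocation']
```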
118

The use of systems engineering principles for the integration of existing models and simulations

Luff, Robert January 2017 (has links)
With the rise in computational power, the prospect of simulating a complex engineering system with a high degree of accuracy and in a meaningful way is becoming a real possibility. Modelling and simulation have become ubiquitous throughout the engineering life cycle; as a consequence, there are many thousands of existing models and simulations that are potential candidates for integration. This work is concerned with ascertaining whether systems engineering principles are of use in supporting virtual testing, from the initial desire to test, through designing experiments, specifying simulations, selecting models and simulations, integrating component parts and verifying that the work is as specified, to validating that any outcomes are meaningful. A novel representation of a systems engineering framework is proposed and forms the basis for the methods that were developed. It takes the core systems engineering principles and expresses them in a form that can be implemented in a variety of ways. An end-to-end process for virtual testing with the potential to use existing models and simulations is proposed; it provides structure and order to the testing task. A key part of the proposed process is the recognition that the requirements of models and simulations differ from those of the system being designed, and hence a modelling-and-simulation-specific requirements writing guide is produced. The automation of any engineering task has the potential to reduce the time to market of the final product; for this reason, the potential of natural language processing technology to speed up the proposed processes was investigated. Two case studies were selected to test and demonstrate the potential of the novel approach, the first being an investigation into material selection for a squash ball, and the second being automotive in nature, concerned with combining steering and braking systems. The processes and methods indicated their potential value, especially in the automotive case study, where inconsistencies were identified that could otherwise have affected the successful integration. This capability, combined with the verification stages, improves confidence in any model and simulation integration. The NLP proof-of-concept software also demonstrated that such technology has value in the automation of integration. With further testing and development there is the possibility of creating a software package to guide engineers through the difficult task of virtual testing. Such a tool would have the potential to drastically reduce the time to market of complex products.
119

Semantic Text Matching Using Convolutional Neural Networks

Wang, Run Fen January 2018 (has links)
Semantic text matching is a fundamental task for many applications in Natural Language Processing (NLP). Traditional methods using term frequency-inverse document frequency (TF-IDF) to match exact words in documents have one strong drawback: TF-IDF is unable to capture semantic relations between closely related words, which leads to disappointing matching results. Neural networks have recently been used for various applications in NLP and have achieved state-of-the-art performance on many tasks. Recurrent Neural Networks (RNNs) have been tested on text classification and text matching but did not produce any remarkable results, since RNNs work more effectively on short texts than on long documents. In this paper, Convolutional Neural Networks (CNNs) are applied to match texts in a semantic aspect. The model uses word-embedding representations of two texts as inputs to the CNN construction to extract the semantic features between the two texts and gives a score as output expressing how certain the CNN model is that they match. The results show that after some tuning of the parameters the CNN model could produce accuracy, precision, recall and F1-scores all over 80%. This is a great improvement over the previous TF-IDF results, and further improvements could be made by using dynamic word vectors, better pre-processing of the data, generating larger and more feature-rich data sets, and further tuning of the parameters.
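A minimal Keras sketch of the kind of two-branch CNN matcher described above: a shared embedding-plus-convolution encoder processes each text and a small dense head scores the pair. The vocabulary size, sequence length, filter settings and the concatenation-based head are assumptions for illustration, not the thesis's exact construction.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, MAXLEN, EMB = 20000, 200, 100     # illustrative sizes, not the thesis's settings

def text_encoder():
    """Shared CNN branch: token ids -> embeddings -> 1D convolution -> max pooling."""
    inp = layers.Input(shape=(MAXLEN,), dtype="int32")
    x = layers.Embedding(VOCAB, EMB)(inp)
    x = layers.Conv1D(128, kernel_size=3, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    return Model(inp, x)

encoder = text_encoder()                 # the same weights encode both texts
text_a = layers.Input(shape=(MAXLEN,), dtype="int32")
text_b = layers.Input(shape=(MAXLEN,), dtype="int32")
merged = layers.concatenate([encoder(text_a), encoder(text_b)])
merged = layers.Dense(64, activation="relu")(merged)
score = layers.Dense(1, activation="sigmoid")(merged)   # probability the two texts match

matcher = Model([text_a, text_b], score)
matcher.compile(optimizer="adam", loss="binary_crossentropy",
                metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
matcher.summary()
```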
120

Deep learning for medical report texts

Nelsson, Mikael January 2018 (has links)
Data within the medical sector is often stored as free-text entries. This is especially true for report texts, which are written after an examination. To automatically gather data from these texts, they need to be analyzed and classified to show what findings the examinations produced. This thesis compares three state-of-the-art deep learning approaches to classifying short medical report texts. This is done for two types of examinations, so the concept of transfer learning plays a role in the evaluation. An optimal model should learn concepts that are applicable to more than one type of examination, since we can expect the texts to be similar. The two data sets from the examinations are also of different sizes, and both have an uneven distribution across the target classes. One of the models is based on techniques traditionally used for language processing with deep learning. The two other models are based on techniques usually used for image recognition and classification. The latter models prove to be the best across the different metrics, not least with respect to transfer learning, as they improve the results when learning from both types of examinations. This becomes especially apparent for the least frequent class in the smaller data set, as none of the models correctly predicts this class without using transfer learning.
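A brief sketch of the transfer-learning setup the abstract alludes to, assuming padded token-id inputs and Keras: a shared feature extractor is trained on the larger examination type and then reused, optionally frozen, for the smaller one. All names, shapes and class counts are hypothetical, and the fit calls are commented out because no data is defined here.

```python
from tensorflow.keras import layers, models

VOCAB, N_CLASSES_A, N_CLASSES_B = 20000, 5, 4   # hypothetical sizes

def base_text_model():
    """Shared feature extractor reused across examination types."""
    return models.Sequential([
        layers.Embedding(VOCAB, 64),
        layers.Conv1D(128, 3, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
    ])

base = base_text_model()

# 1) Train a classifier for the larger examination type A (x_a, y_a assumed available).
model_a = models.Sequential([base, layers.Dense(N_CLASSES_A, activation="softmax")])
model_a.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
# model_a.fit(x_a, y_a, epochs=5)

# 2) Transfer: reuse the learned feature extractor for the smaller type B,
#    optionally freezing it so the scarce, imbalanced B data only fits a new head.
base.trainable = False
model_b = models.Sequential([base, layers.Dense(N_CLASSES_B, activation="softmax")])
model_b.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
# model_b.fit(x_b, y_b, epochs=5, class_weight={...})  # class weights can help rare classes
```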
