Neurale netwerke as moontlike woordafkappingstegniek vir Afrikaans / Neural networks as a possible hyphenation technique for Afrikaans

Fick, Machteld 09 1900
Text in Afrikaans / Summaries in Afrikaans and English / In Afrikaans, as in Dutch and German, compound words are written as one word. New words are therefore continually created by simply joining existing words. This complicates word hyphenation during typesetting by computer, because the source of reference changes all the time. Several hyphenation algorithms and techniques exist, but the results are unsatisfactory. Afrikaans words with correct syllabification were extracted from the electronic version of the Handwoordeboek van die Afrikaanse Taal (HAT). A neural network (feedforward backpropagation) was trained with about 5 000 of these words. The neural network was refined by heuristically finding a suitable training algorithm and transfer function for the problem, and by determining the optimal number of hidden layers and the number of neurons in each layer. Tested on 5 000 words not in the training data, the network classified 97,56% of possible positions correctly as either valid or invalid hyphenation points. Furthermore, 510 words from magazine articles were tested with the neural network, and 98,75% of possible positions were classified correctly. / Computing / M.Sc. (Operasionele Navorsing)
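The abstract reduces hyphenation to a binary decision per candidate split point. Below is a minimal sketch of that setup, using scikit-learn's MLPClassifier as a stand-in for the feedforward backpropagation network; the letter-window encoding, window size, toy words and layer size are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

ALPHABET = "abcdefghijklmnopqrstuvwxyz-"   # '-' pads word boundaries

def encode_position(word, i, window=2):
    """One-hot encode `window` letters on each side of split point i."""
    padded = "-" * window + word + "-" * window
    context = padded[i : i + 2 * window]   # left and right context of the split
    vec = np.zeros(2 * window * len(ALPHABET))
    for j, ch in enumerate(context):
        vec[j * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return vec

# Hypothetical training data: (word, valid split offsets), e.g. net-wer-ke
examples = [("netwerke", {3, 6}), ("woorde", {4}), ("tegniek", {3})]

X, y = [], []
for word, splits in examples:
    for i in range(1, len(word)):          # every internal position is a candidate
        X.append(encode_position(word, i))
        y.append(1 if i in splits else 0)  # 1 = valid hyphenation point

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))
print(clf.predict([encode_position("netwerke", 3)]))  # expect 1: "net|werke"
```

In the thesis's setting, roughly 5 000 HAT-derived words would take the place of the toy examples, with the training algorithm, transfer function and hidden-layer sizes tuned as described.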

A decision support system for the reading of ancient documents

Roued-Cunliffe, Henriette January 2011
The research presented in this thesis is based in the Humanities discipline of Ancient History. It begins by attempting to understand the interpretation process involved in reading ancient documents and how this process can be aided by computer systems such as Decision Support Systems (DSS). The thesis strikes a balance between using IT tools to aid Humanities research and the understanding that Humanities research must involve human beings. It does not attempt to develop a system that can automate the reading of ancient documents. Instead, it seeks to demonstrate and develop tools that can support this process in five areas: remembering complex reasoning, searching huge datasets, international collaboration, publishing editions, and image enhancement. The research contains a large practical element involving the development of a DSS prototype, which is used to illustrate how a DSS, by remembering complex reasoning, can aid the interpretation process that reading ancient documents entails. It is based on the idea that interpretation moves through a network of interpretation: a recursive process in which scholars move between reading levels such as ‘these strokes look like the letter c’ or ‘these five letters must be the word primo’. Furthermore, the thesis demonstrates how technologies such as Web Services and XML can make a DSS even more powerful, through the development of the APPELLO word search Web Service. Finally, the conclusion suggests a future development of a working DSS that incorporates the idea of a layer-based system and focuses strongly on user interaction.
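The "network of interpretation" is described as a recursive structure linking readings at different levels. The sketch below shows one way such a structure might be recorded so that a DSS can "remember complex reasoning"; the class names and fields are assumptions for illustration, not the prototype's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    """One node in a 'network of interpretation': a reading at some level."""
    level: str                     # e.g. "stroke", "letter", "word"
    reading: str                   # e.g. "c", "primo"
    reason: str                    # the scholar's recorded argument
    supports: list = field(default_factory=list)  # lower-level readings it rests on

# A word-level reading resting on letter-level readings, each with its rationale
letters = [Interpretation("letter", ch, "strokes consistent with " + ch)
           for ch in "primo"]
word = Interpretation("word", "primo",
                      "five letters read together fit a known Latin word",
                      supports=letters)

def trace(node, depth=0):
    """Walk back down the network, replaying the recorded reasoning."""
    print("  " * depth + f"[{node.level}] {node.reading!r}: {node.reason}")
    for child in node.supports:
        trace(child, depth + 1)

trace(word)
```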

Αυτόματη μάθηση συντακτικών εξαρτήσεων και ανάπτυξη γραμματικών της ελληνικής γλώσσας / Learning of syntactic dependencies and development of modern Greek grammars

Κερμανίδου, Κάτια Λήδα 25 June 2007
The thesis aims, firstly, at the acquisition of syntactic information (detection of verb complements, acquisition of verb subcategorization frames (SFs), detection of the boundaries and the semantic type of clauses) automatically from Modern Greek and English text corpora, using various state-of-the-art and novel machine learning techniques, and, secondly, at the theoretical description of Greek syntax through formal grammatical theories like Unification Grammar and Head-driven Phrase Structure Grammar. The thesis is based on the following novel axes: 1. Corpus pre-processing is limited to minimal linguistic resources, to ensure the portability of the presented methodologies to languages poorly equipped with resources. 2. Due to the low pre-processing level, a significant amount of noise appears in the data; this is dealt with by One-sided Sampling, which detects and removes examples that do not contribute to the learning process, leaving a cleaner data set and significantly improved learning performance. 3. The usefulness of the acquired information is demonstrated: incorporating complement information improves the performance of the SF acquisition process, and incorporating the acquired SF lexicon into a shallow syntactic parser increases the parser's performance. 4. The methods are applied to both Modern Greek and English, showing their portability across languages and allowing an interesting rough comparison between the two. The results are very satisfactory: comparable to, and in some cases better than, approaches utilizing sophisticated pre-processing resources.
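Axis 2 relies on One-sided Sampling to discard unhelpful training examples. Assuming this refers to one-sided selection in the style of Kubat and Matwin, where majority-class members of Tomek links (mutual nearest neighbours with opposite labels) are removed as noise or borderline cases, a self-contained sketch:

```python
import numpy as np

def tomek_links(X, y):
    """Indices of Tomek links: mutual nearest neighbours with opposite labels."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                    # nearest neighbour of each point
    links = set()
    for i, j in enumerate(nn):
        if nn[j] == i and y[i] != y[j]:      # mutual neighbours, different class
            links.add(i); links.add(j)
    return links

def one_sided_sample(X, y, majority=0):
    """Drop majority-class members of Tomek links; keep all minority examples."""
    drop = {i for i in tomek_links(X, y) if y[i] == majority}
    keep = [i for i in range(len(y)) if i not in drop]
    return X[keep], y[keep]

# toy data: a majority-class point sitting suspiciously on the minority side
X = np.array([[0.0, 0], [0.1, 0], [1.0, 0], [1.1, 0], [0.95, 0.05]])
y = np.array([0, 0, 1, 1, 0])                # last example looks like noise
Xc, yc = one_sided_sample(X, y)
print(len(y), "->", len(yc))                 # the noisy example is removed
```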

A corpus driven computational intelligence framework for deception detection in financial text

Minhas, Saliha Z. January 2016
Financial fraud rampages onwards, seemingly uncontained; the annual cost of fraud in the UK is estimated to be as high as £193bn a year [1]. From a data-science perspective, and hitherto less explored, this thesis demonstrates how linguistic features can drive data mining algorithms to aid in unravelling fraud. To this end, the spotlight is turned on Financial Statement Fraud (FSF), known to be the costliest type of fraud [2]. A new corpus of 6.3 million words is composed of 102 annual reports/10-K filings (narrative sections) from firms formally indicted for FSF, juxtaposed with 306 non-fraud firms of similar size and industrial grouping. Differently from other similar studies, this thesis takes a wide-angled view and extracts a range of features of different categories from the corpus. These linguistic correlates of deception are uncovered using a variety of techniques and tools. Corpus linguistics methodology is applied to extract keywords and to examine linguistic structure. N-grams are extracted to draw out collocations. Readability measurement in financial text is advanced through the extraction of new indices that probe the text at a deeper level. Cognitive and perceptual processes are also picked out. Tone, intention and liquidity are gauged using customised word lists. Linguistic ratios are derived from grammatical constructs and word categories. An attempt is also made to determine ‘what’ was said as opposed to ‘how’. Further, a new module is developed to condense synonyms into concepts. Lastly, frequency counts of keywords unearthed in a previous content-analysis study of financial narrative are also used. These features then drive machine-learning-based classification and clustering algorithms to determine whether they aid in discriminating fraud from non-fraud firms; the battery of models built typically exceeds 70% classification accuracy. The whole process is amalgamated into a framework. Driven by empirical data, it demonstrates in a practical way how linguistic analysis can aid fraud detection, and it constitutes a unique contribution to deception detection studies.
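As a concrete illustration of the pipeline's final step, the sketch below derives a few simple linguistic ratios and word-list counts from raw text and feeds them to a classifier. The word lists, feature choices and toy labels are invented stand-ins for the customised lists and feature categories described above, not the thesis's actual resources.

```python
import re
from sklearn.ensemble import RandomForestClassifier

# Illustrative word lists (stand-ins for the customised lists in the thesis)
NEGATIVE_TONE = {"loss", "decline", "adverse", "litigation"}
UNCERTAINTY = {"may", "approximately", "believe", "possibly"}

def features(text):
    """Map a narrative to a small vector of linguistic ratios."""
    words = re.findall(r"[a-z']+", text.lower())
    sents = max(1, len(re.findall(r"[.!?]", text)))
    n = max(1, len(words))
    return [
        len(words) / sents,                          # average sentence length
        len(set(words)) / n,                         # type-token ratio
        sum(w in NEGATIVE_TONE for w in words) / n,  # tone word-list density
        sum(w in UNCERTAINTY for w in words) / n,    # uncertainty density
    ]

# toy corpus: 1 = fraud narrative, 0 = non-fraud (labels are illustrative)
docs = [
    ("We may possibly face litigation and adverse decline.", 1),
    ("Revenue grew steadily. Operations performed as planned.", 0),
    ("Approximately half the loss is believed to be adverse.", 1),
    ("The company reports stable results. Margins improved.", 0),
]
X = [features(t) for t, _ in docs]
y = [label for _, label in docs]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features("Adverse litigation may possibly cause loss.")]))
```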

'n Masjienleerbenadering tot woordafbreking in Afrikaans / A machine learning approach to hyphenation in Afrikaans

Fick, Machteld 06 1900
Text in Afrikaans / The aim of this study was to determine the level of success achievable with a purely pattern-based approach to hyphenation in Afrikaans. The machine learning techniques investigated were artificial neural networks, decision trees and the TeX algorithm, since these can be trained with patterns of letters from word lists to perform syllabification and decompounding. A lexicon of Afrikaans words was extracted from a corpus of electronic text. To obtain lists for syllabification and decompounding, words in the lexicon were syllabified and compound words were decomposed into their constituent parts. From each list of ±183 000 words, ±10 000 words were reserved as testing data and the rest was used as training data. A recursive algorithm for decompounding was developed. In this algorithm, all words matching a reference list (the lexicon) are extracted by string matching from the beginning and end of words; splitting points are then determined from the lengths of the reassembled words. The algorithm was expanded by addressing the shortcomings of this basic procedure. Artificial neural networks and decision trees were trained, and variations of both were examined to find optimal syllabification and decompounding models. Patterns for the TeX algorithm were generated with the program OPatGen. Testing showed that the TeX algorithm performed best on both syllabification and decompounding, with 99,56% and 99,12% accuracy respectively; it can therefore be used for hyphenation in Afrikaans with little risk of hyphenation errors in printed text. The performance of the artificial neural network was lower but still acceptable, with 98,82% and 98,42% accuracy for syllabification and decompounding respectively. The decision tree, with accuracy of 97,91% on syllabification and 90,71% on decompounding, was found to be too risky for either task. A combined algorithm was developed in which words are first decompounded with the TeX algorithm and then syllabified with both the TeX algorithm and the neural network, combining the results. This algorithm reduced the number of errors made by the TeX algorithm by 1,3%, but missed more hyphens. Testing the algorithm on Afrikaans publications showed the risk of hyphenation errors to be ±0,02% for text with an average of ten words per line. / Decision Sciences / D. Phil. (Operational Research)

Generating and simplifying sentences / Génération et simplification des phrases

Narayan, Shashi 07 November 2014
Depending on the input representation, this dissertation investigates two classes of problem: meaning representation (MR)-to-text generation and text-to-text generation. In the first class (MR-to-text generation, "Generating Sentences"), we investigate how to make symbolic grammar-based surface realisation robust and efficient. We propose an efficient approach to surface realisation using an FB-LTAG grammar and taking shallow dependency trees as input. Our algorithm combines techniques and ideas from the head-driven and lexicalist approaches. In addition, the input structure is used to filter the initial search space, via a concept called local polarity filtering, and to parallelise processing. To further improve robustness, we propose two error mining algorithms: the first mines dependency trees rather than sequential data; the second structures the output of error mining into a tree, representing errors in a more meaningful way. We show that our realisers, together with these error mining algorithms, improve on both efficiency and coverage by a wide margin. In the second class (text-to-text generation, "Simplifying Sentences"), we argue for using deep semantic representations (as opposed to syntax- or SMT-based approaches) to improve the sentence simplification task. We use Discourse Representation Structures for the deep semantic representation of the input. We propose two methods: a supervised hybrid approach to simplification combining deep semantics and SMT (with state-of-the-art results), and an unsupervised approach based on the comparable Wikipedia corpus (with results competitive with state-of-the-art systems).
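Error mining in this tradition scores linguistic forms by how strongly they are associated with inputs the realiser fails on. A toy, sequence-level sketch under that assumption (the thesis's contribution, mining dependency subtrees and structuring the output as a tree, is deliberately omitted here):

```python
from collections import Counter

def suspicion(failed, succeeded):
    """Score each token by its failure rate: fail_count / total_count.

    A toy, flat-sequence version of error mining; the thesis extends
    the idea to dependency subtrees.
    """
    fail = Counter(tok for sent in failed for tok in set(sent.split()))
    ok = Counter(tok for sent in succeeded for tok in set(sent.split()))
    return {t: fail[t] / (fail[t] + ok[t]) for t in fail}

failed = ["the gizmo unk broke", "unk in the garden"]      # realiser failed
succeeded = ["the garden is green", "the gizmo is small"]  # realiser succeeded
scores = suspicion(failed, succeeded)
for tok, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{tok:8s} {s:.2f}")  # 'unk' ranks among the most suspicious forms
```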