1

Arabic language processing for text classification : contributions to Arabic root extraction techniques, building an Arabic corpus, and to Arabic text classification techniques

Al-Nashashibi, May Yacoub Adib January 2012 (has links)
Internet-based resources for Arabic-speaking users are growing in significance, depth and breadth faster than ever, and this growth calls for up-to-date mechanisms for the computational processing of Arabic text. Arabic is a complex language, so the automatic techniques available for it, such as root extraction methods and text classification, require in-depth investigation and improvement, as does the development of labeled text collections, whether single- or multi-labeled. This thesis proposes new ideas and methods for improving the automatic processing of Arabic texts. Since any automatic processing technique needs data in order to be applied and critically assessed, the thesis also develops a labeled Arabic corpus. The work is organized in three parts: (1) Arabic corpus development; (2) proposing, improving and implementing root extraction techniques; and (3) proposing and investigating the effect of different pre-processing choices on single-label text classification for Arabic.
The thesis first develops an Arabic corpus that is used here for testing root extraction methods as well as single-label text classification techniques. It enhances a rule-based root extraction method by handling irregular cases, which appear in about 34% of texts; proposes and implements two expanded algorithms together with an adjustment to a weight-based method; adds the irregular-case handling algorithm to all of these methods; and compares the performance of the proposed methods with the original ones. It also develops a root extraction system that handles foreign Arabized words by constructing a list of about 7,000 foreign words. The technique that extracts the correct stems and roots most accurately, an enhanced rule-based method, is carried forward to the third part. Finally, the thesis proposes and implements a variant term frequency-inverse document frequency (TF-IDF) weighting method and investigates how different feature choices in document representation (words, stems or roots, each with or without their respective phrases) affect single-label text classification performance, applying forty-seven classifiers to all proposed representations and comparing their results.
One challenge for researchers in Arabic text processing is that root extraction techniques reported in the literature are either inaccessible or time-consuming to reproduce, while labeled benchmark Arabic text corpora are not fully available online; in addition, few machine learning techniques have so far been investigated for Arabic using the usual pre-processing steps before classification. These challenges are addressed here by developing a new labeled Arabic text corpus for wider application of computational techniques. The results show that the proposed algorithm for handling irregular Arabic words improved the performance of all implemented root extraction techniques; its performance is evaluated in terms of accuracy improvement and execution time, and its running time is found empirically to be linear for document lengths below about 8,000. The rule-based technique improves the most among the implemented root extraction methods when the irregular-case handling algorithm is included. The thesis confirms that representing documents by roots or stems instead of words significantly improves single-label classification performance for most of the classifiers used, whereas extending these representations with their respective phrases yields no significant improvement. Several classifiers, such as the ripple-down rule classifier, had not previously been tested on Arabic. Comparing the classifiers' performance shows that the Bayesian network classifier is significantly the best in terms of accuracy, training time and root mean square error across all proposed and implemented representations. / Petra University, Amman (Jordan)
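For illustration, the representation-and-classification comparison summarized in this abstract can be sketched with generic off-the-shelf components (Python/scikit-learn below). This is not the thesis's pipeline: standard TF-IDF replaces its variant weighting, a placeholder stem() stands in for its enhanced rule-based root extractor, two common classifiers stand in for the forty-seven compared, and corpus_texts/corpus_labels are assumed names for the labeled documents.

```python
# A generic sketch of the comparison described above: represent documents by
# raw words versus (placeholder) stems, weight features with standard TF-IDF
# (not the thesis's variant weighting), and compare two off-the-shelf
# classifiers by cross-validated accuracy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def stem(token):
    # Placeholder for an Arabic stemmer / root extractor; the thesis's
    # enhanced rule-based extractor is not reproduced here.
    return token

def evaluate(docs, labels, use_stems):
    # Rebuild each document from either its raw tokens or their stems.
    texts = [" ".join(stem(t) if use_stems else t for t in d.split()) for d in docs]
    for name, clf in [("naive Bayes", MultinomialNB()), ("linear SVM", LinearSVC())]:
        pipe = make_pipeline(TfidfVectorizer(), clf)   # TF-IDF fitted per fold
        scores = cross_val_score(pipe, texts, labels, cv=5, scoring="accuracy")
        rep = "stems" if use_stems else "words"
        print(f"{rep} / {name}: mean accuracy {scores.mean():.3f}")

# Hypothetical usage, assuming corpus_texts and corpus_labels exist:
# evaluate(corpus_texts, corpus_labels, use_stems=False)
# evaluate(corpus_texts, corpus_labels, use_stems=True)
```

Running the loop once with use_stems=False and once with use_stems=True mirrors the words-versus-stems/roots comparison reported in the abstract.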
2

Désambiguïsation de l’arabe écrit et interprétation sémantique / Word sense disambiguation of written Arabic and semantic interpretation

Gzawi, Mahmoud 17 January 2019 (has links)
This thesis lies at the intersection of linguistic research and natural language processing. The two fields meet in the construction of text processing tools and of industrial applications that integrate solutions for disambiguating and interpreting language. A difficult and rarely addressed task arose in the work of the company Techlimed: the automatic analysis of texts written in Arabic. New resources have emerged, such as language lexicons and semantic networks, which make it possible to build formal grammars for this task. An important piece of metadata for text analysis is knowing "what is being said, and what does it mean?". The field of computational linguistics offers very diverse, and often partial, methods for enabling a computer to answer such questions. The main purpose of this thesis is to introduce and apply descriptive grammar rules within the formal languages specific to computational language processing. Beyond building a system for processing and interpreting Arabic texts based on computational modeling, our interest lies in evaluating the linguistic phenomena reported in the literature and the methods for formalizing them in computer science.
In all cases, our research was tested and validated in a rigorous experimental framework around several formalisms and software tools. Our experiments on the a priori contribution of a syntactico-semantic grammar showed a substantial reduction in linguistic ambiguity when using a finite-state grammar written in Java and a generative-transformational grammar written in Prolog, both integrating morphological, syntactic and semantic components. Carrying out this study required building text processing and information retrieval tools; these were developed by us and are available as open source. Applying this work at large scale was found to depend on having rich and comprehensive semantic resources, so our work was redirected towards producing such resources, in terms of information retrieval and knowledge extraction. The tests carried out for this new direction support further research and experimentation.
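As a rough illustration of the grammar- and rule-driven disambiguation idea described in this abstract, the toy Python sketch below resolves the ambiguous Arabic form "ذهب" ("went" as a verb, "gold" as a noun) from its immediate context. The mini-lexicon and the two rules are invented for the example; the thesis's finite-state grammar in Java and generative-transformational grammar in Prolog are not reproduced here.

```python
# A toy, invented example of context-driven sense disambiguation for the
# ambiguous Arabic form "ذهب" ("went" as a verb, "gold" as a noun). It only
# illustrates the idea of rule/grammar-based disambiguation and does not
# reproduce the thesis's grammars or lexicon.

# Invented mini-lexicon mapping a few context words to coarse categories.
CONTEXT_CATEGORY = {
    "الرجل": "NOUN",   # "the man"
    "إلى": "PREP",     # "to"
    "سوار": "NOUN",    # "bracelet"
    "من": "PREP",      # "from / made of"
}

def disambiguate_dhahaba(tokens, i):
    """Choose a sense for tokens[i] == 'ذهب' from its neighbours.

    Rule 1: followed by the preposition 'إلى'            -> verb 'went'.
    Rule 2: preceded by the preposition 'من' ('made of') -> noun 'gold'.
    Otherwise the form is left ambiguous.
    """
    nxt = tokens[i + 1] if i + 1 < len(tokens) else None
    prv = tokens[i - 1] if i > 0 else None
    if nxt is not None and CONTEXT_CATEGORY.get(nxt) == "PREP" and nxt == "إلى":
        return "VERB: went"
    if prv is not None and CONTEXT_CATEGORY.get(prv) == "PREP" and prv == "من":
        return "NOUN: gold"
    return "AMBIGUOUS"

print(disambiguate_dhahaba(["الرجل", "ذهب", "إلى", "السوق"], 1))  # -> VERB: went
print(disambiguate_dhahaba(["سوار", "من", "ذهب"], 2))             # -> NOUN: gold
```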
