21

A model of neural sequence detectors for sentence processing

Schmidle, Wolfgang January 2007
No description available.
22

An object-oriented architecture for the multilingual generation of instructions : supporting knowledge re-use and user task performance

Miliaev, Nestor Yurievich January 2003
No description available.
23

An exploration of the rise and development of Seventh-Day Adventist spirituality : with special reference to the charismatic guidance of Ellen G. White, 1844-1915

Szalos-Farkas, Zoltan January 2004
The fundamental question examined in this research is, ‘What was the Adventist spirituality like that matured from 1844 to 1915?’ In answering this question, the current work identifies and gives a documented description and analysis of the crucial features most specific to Seventh-day Adventism and determinative of its spirituality. Underlying this enquiry into Adventism’s spiritual identity is a correlated quest at the heart of the thesis: the role which Ellen G. White (1827-1915) and her charismatic ministry played in shaping Seventh-day Adventist spirituality, primarily within its American socio-cultural context. With regard to this form of piety, the study documents that it is recognisable by a set of distinctive and interrelated features. These characterised the personal and communal spirituality of those who perceived themselves to live within what they believed to be an unprecedented era of human history and of the history of salvation: the very Time of the End. Having identified this Adventist perception of history, the research yields further evidence to substantiate the following conclusions. In Adventism one is faced with a form of Protestant apocalyptic piety of the modern age, identifiable by five characteristics: 1) a collective consciousness of being the End-Time Remnant; 2) a sense of eschatological crisis; 3) a historicist biblical hermeneutic; 4) an apocalyptic gospel; and 5) a set of three institutions - publishing, health, and education - to promote a specifically ‘Adventist’ lifestyle. This lifestyle has been found to be a representative mode of witnessing to the Adventist faith, combining healthy living with six days of diligent work followed by work-free observance of, and liturgical celebration on, the seventh-day Sabbath (Saturday).
From 1844 to the present time, Adventists have pursued their spirituality as an act of obedience to the End-Time will and purposes of God. The research documents that such an understanding of spirituality turned the apocalyptic corpus of the Bible into the prime source of Adventist piety and devotion.
24

The electrification of language : computer-assisted language analysis of the construction of Michael Faraday's ideas

Smith, David W. January 1998
No description available.
25

Statistical models for unsupervised learning of morphology and POS tagging

Can, Burcu January 2011
This thesis concentrates on two fields of natural language processing. Its main contribution is in the field of morphology learning. Morphology is the study of how words are formed by combining language constituents called morphemes, and morphology learning is the process of analysing words by splitting them into these constituents. In this thesis, morphology is learned mainly by paradigmatic approaches, in which words are analysed in groups called paradigms. Paradigms are morphological structures capable of generating various word forms. We propose approaches for capturing paradigms to perform morphological segmentation. One of the proposed approaches captures paradigms within a hierarchical tree structure; the hierarchy covers a wide range of paradigms by spotting morphological similarities. The second scope of the thesis is part-of-speech (POS) tagging. Parts of speech are linguistic categories which group words with similar syntactic features, e.g. noun, adjective and verb. We investigate how to exploit POS tags to learn morphology and propose a model to capture paradigms through syntactic categories. When syntactic categories are provided, the proposed system captures paradigms well. We then extend this approach to the case where no syntactic categories are provided: we propose a joint model in which POS tags and morphology are learned simultaneously. Our results show that such a joint model of morphology and POS tagging is feasible. We also study morpheme labelling, for which we propose a clustering algorithm that groups morphemes showing similar features and can capture morphemes having similar meanings.
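The paradigm idea can be illustrated with a toy heuristic: collect every candidate stem/suffix split and group stems that share the same suffix set. This is only a hypothetical sketch (the function name, the minimum-stem-length cutoff and the frequency threshold are illustrative assumptions), not the probabilistic models developed in the thesis:

```python
from collections import defaultdict

def find_paradigms(words, min_stem_len=3):
    """Group words into candidate paradigms: a stem plus the set of
    suffixes it combines with. A toy heuristic for illustration only."""
    suffixes = defaultdict(set)
    for w in words:
        # enumerate every split of the word into stem + suffix
        for i in range(min_stem_len, len(w) + 1):
            suffixes[w[:i]].add(w[i:])
    # keep stems that take at least two distinct suffixes,
    # then group stems sharing the same suffix set into one paradigm
    paradigms = defaultdict(list)
    for stem, sufs in suffixes.items():
        if len(sufs) >= 2:
            paradigms[frozenset(sufs)].append(stem)
    return paradigms

for sufs, stems in find_paradigms(
        ["walk", "walks", "walked", "talk", "talks", "talked"]).items():
    print(sorted(stems), sorted(sufs))
```

On this toy input the stems `walk` and `talk` end up in one paradigm because both combine with the suffix set {"", "s", "ed"}, which is the kind of shared structure a paradigmatic learner exploits.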
26

Natural language descriptions for video streams

Khan, Muhammad Usman Ghani January 2012
This thesis is concerned with the automatic generation of natural language descriptions that can be used for video indexing, retrieval and summarization. It is a step beyond keyword-based tagging, as it captures relations between the keywords associated with a video, thus clarifying the context between them. Initially, we prepare hand annotations consisting of descriptions for video segments crafted from a TREC Video dataset; analysis of this data gives insight into human interests in video content. For machine-generated descriptions, conventional image processing techniques are applied to extract high-level features (HLFs) from individual video frames, and a natural language description is then produced from these HLFs. Although the feature extraction processes are error-prone at various levels, we explore approaches to combine their outputs into coherent descriptions. For scalability, the application of the framework to several different video genres is also discussed. For complete video sequences, a scheme is presented that generates coherent and compact descriptions of video streams by exploiting spatial relations between HLFs and temporal relations between individual frames. Measuring the overlap between machine-generated and human-annotated descriptions shows that the machine-generated descriptions capture context information and accord with how humans watch video. Further, a task-based evaluation shows an improvement in a video identification task compared to keywords alone. Finally, the application of the generated natural language descriptions to video scene classification is discussed.
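As a rough illustration of producing a sentence from detected HLFs, a minimal template-based generator might look like the following. The feature names (`actor`, `action`, `scene`) and the template are assumptions made for this sketch, not the thesis's actual pipeline:

```python
def describe_frame(hlfs):
    """Turn a dict of high-level features detected in a video frame into
    a simple English sentence via a fixed template. Illustrative only:
    the real system combines erroneous detectors across many frames."""
    actor = hlfs.get("actor", "someone")
    action = hlfs.get("action", "is present")
    scene = hlfs.get("scene")
    sentence = f"{actor.capitalize()} {action}"
    if scene:
        sentence += f" in a {scene}"
    return sentence + "."

print(describe_frame({"actor": "a man", "action": "is walking", "scene": "park"}))
```

Even this trivial template shows the advantage over bare keywords: "a man is walking in a park" encodes who does what and where, which the keyword set {man, walking, park} alone does not.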
27

A corpus based approach to generalising a chatbot system

Abu Shawar, Bayan Aref January 2005
Chatbot tools are computer programs which interact with users in natural language. Most developers build their systems aiming to fool users into believing they are talking with real humans, and up to now most chatbots have served mainly to amuse users through chatting with a robot. However, the knowledge bases of almost all chatbots are edited manually, which restricts them to specific languages and domains. This thesis shows that chatbot technology can be used in many ways beyond entertainment: as a tool to learn or study a new language, to access an information system, to visualise the contents of a corpus, or to answer questions in a specific domain. Instead of being restricted to a specific domain or written language, a chatbot can be trained with any text in any language. Some of the differences between real human conversations and human-chatbot dialogues are presented. A Java program has been developed to read text from a machine-readable corpus and convert it to the ALICE chatbot format language (AIML). The program was built to be general: generality here implies no restriction to a specific language, domain or structure. Different languages were tested: English, Arabic, Afrikaans, French and Spanish. Different corpus structures were also used: dialogue, monologue and structured text.
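The corpus-to-AIML idea can be sketched as pairing each utterance with the turn that follows it, so the earlier turn becomes a `<pattern>` and the reply its `<template>`. The helper below is a hypothetical Python illustration of that mapping, not the thesis's Java program, and it ignores the normalisation a real converter would need:

```python
import xml.etree.ElementTree as ET

def dialogue_to_aiml(turns):
    """Convert a list of dialogue turns into AIML categories: each turn
    becomes a pattern whose template is the turn that follows it."""
    aiml = ET.Element("aiml", version="1.0")
    for prompt, reply in zip(turns, turns[1:]):
        cat = ET.SubElement(aiml, "category")
        # AIML patterns are conventionally upper-case
        ET.SubElement(cat, "pattern").text = prompt.upper()
        ET.SubElement(cat, "template").text = reply
    return ET.tostring(aiml, encoding="unicode")

print(dialogue_to_aiml(["Hello", "Hi there", "How are you"]))
```

Because the rule is purely structural (turn n answers turn n-1), nothing in it depends on the language or domain of the corpus, which is the generality the thesis argues for.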
28

Automatic extraction of text segments, detection of roles, topics and polarities / Extraction automatique de segments textuels, détection de rôles, de sujets et de polarités

Lavalley, Rémi 09 July 2012
In this thesis, we present new methods for extracting word chains (text segments) associated with categories (topics, speaker roles, opinions). We first propose a method based on a collocation metric, applied separately to the documents linked to a given category, which iteratively yields chains characteristic of that category. These chains are then used to improve the performance of text-categorisation systems, or for knowledge extraction (bringing out textual elements such as expressions used by a certain type of speaker, or sub-topics linked to the category). We then propose a second method for finding, in an opinion corpus, n-grams expressing judgements on predefined topics, which allows us to extract text segments representing the expression of an opinion on one of the target topics. These methods are validated by a number of experiments carried out in different contexts: blog posts, manual transcriptions of spontaneous speech, reviews of cultural products, and EDF customer-satisfaction surveys, in French or in English.
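A common collocation metric of the kind the abstract alludes to is pointwise mutual information (PMI) over adjacent word pairs. The thesis does not specify its exact metric, so the following is only an assumed illustration of the general technique:

```python
import math
from collections import Counter

def collocations(tokens, min_count=2):
    """Score adjacent word pairs by PMI: log of the ratio between the
    observed bigram probability and the product of the unigram
    probabilities. High PMI means the words co-occur more than chance."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:  # ignore rare pairs; PMI is unstable for them
            continue
        pmi = math.log((c / (n - 1)) /
                       ((unigrams[w1] / n) * (unigrams[w2] / n)))
        scores[(w1, w2)] = pmi
    return sorted(scores.items(), key=lambda kv: -kv[1])

tokens = "the new york times said new york is big".split()
print(collocations(tokens))
```

Applying such a scorer separately to the documents of each category, then iterating to grow pairs into longer chains, is the rough shape of the first method described above.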
29

Automatic detection of screams and shouts in the metro / Détection automatique de cris dans le métro

Laffitte, Pierre 13 December 2017
This study proposes a security/surveillance system capable of automatically recognizing and detecting screams and shouts in a metro, based on the theory of classification through statistical modeling. Using a database recorded from enactments of violent scenes inside a Paris metro running its course, we estimated statistical models from three different neural network architectures (DNN, CNN and RNN/LSTM). The models were first trained to recognize three categories of sounds (shouts, speech and background noise), then with additional categories describing the movement of the train (to bring in contextual information), treating the data either as isolated sound events or as a continuous audio stream. The results show that the most effective model is the RNN/LSTM, which best takes into account the temporal structure of sound events. Classification scores for the three categories shout, speech and background were satisfying regardless of the rest of the acoustic environment, and adding contextual information improved recognition rates. We conclude that the lack of data is a major limiting factor, which could be mitigated by transfer learning, which consists in using more complex networks pre-trained on different data, or by data augmentation techniques, which increase the size of the database by creating synthetic data from existing examples.
30

Algorithm engineering : string processing

Berry, Thomas January 2002
The string matching problem has attracted a great deal of interest throughout the history of computer science and is crucial to the computing industry. The theoretical community in computer science has developed a rich literature on the design and analysis of string matching algorithms. To date, most of this work has been based on asymptotic analysis, which rarely tells us how an algorithm will perform in practice; considerable experimentation and fine-tuning is typically required to get the most out of a theoretical idea. In this thesis, promising string matching algorithms discovered by the theoretical community are implemented, tested and refined to the point where they can be usefully applied in practice. We prove that the time complexity of the new algorithms is linear in the average case, and we compare the new algorithms with existing ones experimentally. In the course of this work we present the following contributions:
• We implemented the existing one-dimensional string matching algorithms for English texts, identified the best two from the experimental results, and combined them into a new algorithm.
• We developed a new two-dimensional string matching algorithm which uses the structure of the pattern to reduce the number of comparisons required to search for it.
• We described a method for efficiently storing text. Although this reduces the storage space, it is not a compression method in the usual sense: the aim is to improve both the space and the time taken by a string matching algorithm. Our new algorithm searches for patterns in the efficiently stored text without decompressing it.
• We illustrated that pre-processing the text can improve the speed of string matching when searching for a large number of patterns in a given text.
• We proposed a hardware solution for searching in an efficiently stored DNA text.
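As an example of the kind of practical, fine-tuned algorithm such a thesis benchmarks, the classic Boyer-Moore-Horspool matcher is sketched below. It stands in for, rather than reproduces, the thesis's own algorithms: its worst case is quadratic, but its bad-character shift table makes it fast on average for natural-language text:

```python
def horspool(text, pattern):
    """Boyer-Moore-Horspool search: return the index of the first
    occurrence of pattern in text, or -1 if absent."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # shift table: for each char of the pattern (except the last),
    # the distance from its last occurrence to the pattern's end
    shift = {}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        # jump by the shift of the character aligned with the pattern's
        # last position; a full m if that character is not in the pattern
        i += shift.get(text[i + m - 1], m)
    return -1

print(horspool("abracadabra", "cad"))
```

The skip step is what asymptotic analysis undersells and experimentation reveals: on typical English text most windows advance by nearly the full pattern length, so the observed running time is sublinear in practice.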
