  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
341

Translation of ASP into ASP.NET

Vilímek, Jan January 2007 (has links)
The goal of this dissertation is to implement an application that converts ASP pages to ASPX. The source ASP pages are written in VBScript; the target language for the ASPX pages is C#. The application itself is developed on the .NET platform. The conversion process should be fully automatic, with no need for a programmer to alter the converted files by hand. The first part of the dissertation introduces the problem area and surveys existing solutions. The next part presents the analysis and design of the application itself. The main part of the dissertation deals with the conversion of the VBScript grammar, the problems that arise during conversion, and their solutions.
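The rule-driven flavor of such a source-to-source conversion can be illustrated with a minimal, hypothetical sketch. The rewrite table below is invented for illustration and covers only a few trivial VBScript constructs; the actual thesis application performs a full grammar-based translation, not line-by-line regex rewriting:

```python
import re

# A few illustrative VBScript -> C# rewrite rules (hypothetical and far from
# complete; a real converter parses the VBScript grammar instead).
RULES = [
    (re.compile(r"\bDim\s+(\w+)\b"), r"var \1;"),                    # Dim x -> var x;
    (re.compile(r"\bResponse\.Write\s*\((.*)\)"), r"Response.Write(\1);"),
    (re.compile(r"&"), "+"),                                         # VBScript string concat
    (re.compile(r"'\s*(.*)$"), r"// \1"),                            # ' comment -> // comment
]

def convert_line(vbscript_line: str) -> str:
    """Apply each rewrite rule in order to a single VBScript line."""
    out = vbscript_line
    for pattern, replacement in RULES:
        out = pattern.sub(replacement, out)
    return out

print(convert_line("Dim name"))               # -> var name;
print(convert_line("Response.Write(a & b)"))  # -> Response.Write(a + b);
```

Note that naive textual rules like these break down quickly (e.g. `&` inside string literals), which is exactly why the thesis works at the level of the VBScript grammar.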
342

A Novel Methodology for Timely Brain Formations of 3D Spatial Information with Application to Visually Impaired Navigation

Manganas, Spyridon 06 September 2019 (has links)
No description available.
343

Automated Grammatical Tagging of Language Samples from Children with and without Language Impairment

Millet, Deborah 01 January 2003 (has links) (PDF)
Grammatical classification ("tagging") of words in language samples is a component of syntactic analysis for both clinical and research purposes. Previous studies have shown that probability-based software can tag samples from adults and typically developing children with high (about 95%) accuracy. The present study found that similar accuracy can be obtained when tagging samples from school-aged children with and without language impairment, provided the software uses tri-gram rather than bi-gram probabilities and is trained on large corpora to obtain reliable probability estimates.
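The bi-gram versus tri-gram distinction comes down to how much tag history conditions the next tag: a tri-gram model estimates P(tag | two preceding tags) instead of P(tag | one preceding tag). A minimal sketch with invented toy counts (real taggers are trained on large annotated corpora):

```python
from collections import defaultdict

# Toy tagged corpus (invented for illustration).
tagged_corpus = [["DET", "NOUN", "VERB"], ["DET", "ADJ", "NOUN"], ["DET", "NOUN", "VERB"]]

trigram_counts = defaultdict(int)
bigram_counts = defaultdict(int)

for tags in tagged_corpus:
    padded = ["<s>", "<s>"] + tags          # pad so every tag has two predecessors
    for i in range(2, len(padded)):
        trigram_counts[(padded[i - 2], padded[i - 1], padded[i])] += 1
        bigram_counts[(padded[i - 2], padded[i - 1])] += 1

def trigram_prob(t1, t2, t3):
    """P(t3 | t1, t2) estimated from raw counts (no smoothing)."""
    if bigram_counts[(t1, t2)] == 0:
        return 0.0
    return trigram_counts[(t1, t2, t3)] / bigram_counts[(t1, t2)]

print(trigram_prob("<s>", "DET", "NOUN"))   # 2/3 in this toy corpus
```

The extra tag of context is what lets a tri-gram tagger disambiguate sequences that a bi-gram model conflates, at the cost of needing more training data — hence the study's emphasis on large training corpora.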
344

Conversational ergonyms in the modern urban onomastic space (evaluation and functioning based on experimental data)

Темникова, Е. И., Temnikova, E. I. January 2021 (has links)
This master's thesis is devoted to the study of conversational ergonyms with respect to their functioning in the modern urban environment. The first chapter describes the functions of ergonyms and the ways they are formed. Special attention is paid to the lexico-syntactic method of formation, which underlies conversational ergonyms; the specific features of colloquial speech, whose units form the basis of these names, are also considered. The second chapter presents a multidimensional classification of conversational ergonyms, conducted on different grounds, based on a set of 192 names collected from various cities of Russia, Ukraine, Belarus, and Kazakhstan. The third chapter analyzes the results of a survey conducted by the method of linguistic interviewing (13,730 responses). The survey was conducted to assess the pragmatic potential of conversational ergonyms and to identify informal variants of their use in speech. The study identifies techniques for creating unofficial ergonyms and notes the broad potential for creating derivative names.
345

Syntactic inductive biases for deep learning methods

Shen, Yikang 08 1900 (has links)
The debate between connectionism and symbolism is one of the major forces driving the development of Artificial Intelligence. Deep learning and theoretical linguistics are the most representative fields of study for the two schools, respectively. While the deep learning method has made impressive breakthroughs and has become the major reason behind the recent AI prosperity in industry and academia, linguistics and symbolism still hold some important ground, including reasoning, interpretability, and reliability. In this thesis, we try to build a connection between the two schools by introducing syntactic inductive biases for deep learning models. We propose two families of inductive biases, one for constituency structure and another for dependency structure. The constituency inductive bias encourages deep learning models to use different units (or neurons) to separately process long-term and short-term information. This separation provides a way for deep learning models to build latent hierarchical representations from sequential inputs, such that a higher-level representation is composed of, and can be decomposed into, a series of lower-level representations. For example, without knowing the ground-truth structure, our proposed model learns to process logical expressions by composing representations of variables and operators into representations of expressions according to their syntactic structure. On the other hand, the dependency inductive bias encourages models to find latent relations between entities in the input sequence. For natural language, these latent relations are usually modeled as a directed dependency graph, where a word has exactly one parent node and zero or more child nodes. After applying this constraint to a transformer-like model, we find that the model is capable of inducing directed graphs that are close to human expert annotations, and that it also outperforms the standard transformer model on different tasks. We believe these experimental results demonstrate an interesting alternative for the future development of deep learning models.
346

"Aber immer alle sagen das" The Status of V3 in German: Use, Processing, and Syntactic Representation

Bunk, Oliver 11 November 2020 (has links)
German is usually considered to follow a strict V2 constraint: exactly one constituent must precede the finite verb in declarative main clauses. The literature offers many examples of sentences with two preverbal constituents, which violate the V2 constraint; according to the literature, these configurations are ungrammatical: (1) *Gestern Johann hat getanzt. (Roberts & Roussou 2002:137) However, the evaluation in (1) is not based on empirical evidence but on introspection, and thus might not reflect linguistic reality. Data from actual language use show that German speakers do use such sentences: (2) Aber immer alle sagen das. [BSa-OB, #16] The dissertation explores the status of these V3 declaratives in German, with 'status' comprising three complementary perspectives on language: language use, acceptability, and processing. To this end, I analyze data from three studies: a corpus study, an acceptability judgment study, and a reading time study. Based on the empirical evidence, I discuss existing generative analyses of V3 and develop an alternative analysis within a construction-based approach. The dissertation shows that including patterns from non-standard language yields valuable insights into the architecture of language. In particular, psycholinguistic data are essential as an empirical basis for understanding and modeling mental linguistic processes. The analyses show that such an approach is possible in the field of syntactic variation, and indeed necessary in order to challenge and further develop existing grammatical theories and our understanding of grammar. Most grammatical models rely heavily on standard language and therefore capture only a snippet of linguistic reality. Taking empirical evidence into account, however, V3 sentences turn out to form an integral part of German grammar.
347

Bean Soup Translation: Flexible, Linguistically-motivated Syntax for Machine Translation

Mehay, Dennis Nolan 30 August 2012 (has links)
No description available.
348

DEEP LEARNING BASED METHODS FOR AUTOMATIC EXTRACTION OF SYNTACTIC PATTERNS AND THEIR APPLICATION FOR KNOWLEDGE DISCOVERY

Mdahsanul Kabir (16501281) 03 January 2024 (has links)
Semantic pairs, which consist of related entities or concepts, serve as the foundation for comprehending the meaning of language in both written and spoken forms. These pairs enable us to grasp the nuances of relationships between words, phrases, or ideas, forming the basis for more advanced language tasks like entity recognition, sentiment analysis, machine translation, and question answering. They allow us to infer causality, identify hierarchies, and connect ideas within a text, ultimately enhancing the depth and accuracy of automated language processing.

Nevertheless, extracting semantic pairs from sentences poses a significant challenge, which makes syntactic dependency patterns (SDPs) relevant: semantic relationships tend to adhere to distinct SDPs when connecting pairs of entities. Recognizing this fact underscores the importance of extracting these SDPs, particularly for specific semantic relationships like hyponym-hypernym, meronym-holonym, and cause-effect associations. The automated extraction of such SDPs carries substantial advantages for various downstream applications, including entity extraction, ontology development, and question answering. Unfortunately, this pivotal facet of pattern extraction has remained relatively overlooked by researchers in natural language processing (NLP) and information retrieval.

To address this gap, I introduce an attention-based supervised deep learning model, ASPER, designed to extract SDPs that denote semantic relationships between entities within a given sentential context. I rigorously evaluate ASPER across three distinct semantic relations: hyponym-hypernym, cause-effect, and meronym-holonym, utilizing six datasets. My experimental findings demonstrate ASPER's ability to automatically identify an array of SDPs that mirror the presence of these semantic relationships within sentences, outperforming existing pattern extraction methods by a substantial margin.

Second, I use the SDPs to extract semantic pairs from sentences, choosing to extract cause-effect entities from medical literature. This task is instrumental in compiling various causality relationships, such as those between diseases and symptoms, medications and side effects, and genes and diseases. Existing solutions excel in sentences where cause and effect phrases are straightforward, such as named entities, single-word nouns, or short noun phrases. However, in the complex landscape of medical literature, cause and effect expressions often extend over several words, stumping existing methods and resulting in incomplete extractions that provide low-quality, non-informative, and at times conflicting information. To overcome this challenge, I introduce PatternCausality, an innovative unsupervised method for extracting cause and effect phrases, tailored explicitly for medical literature. PatternCausality employs a set of cause-effect dependency patterns as templates to identify the key terms within cause and effect phrases, then utilizes a novel phrase extraction technique to produce comprehensive and meaningful cause and effect expressions from sentences. Experiments on a dataset constructed from PubMed articles reveal that PatternCausality significantly outperforms existing methods, achieving an order-of-magnitude improvement in the F-score metric over the best-performing alternatives. I also develop several PatternCausality variants that utilize diverse phrase extraction methods, all of which surpass existing approaches. PatternCausality and its variants likewise show notable performance improvements in extracting cause and effect entities on a domain-neutral benchmark dataset, wherein cause and effect entities are confined to single-word nouns or noun phrases of one to two words.

Nevertheless, PatternCausality operates within an unsupervised framework and relies heavily on SDPs, motivating me to explore a supervised approach. Although SDPs play a pivotal role in semantic relation extraction, pattern-based methodologies remain unsupervised, and the multitude of potential patterns within a language can be overwhelming. Furthermore, patterns do not consistently capture the broader context of a sentence, leading to the extraction of false-positive semantic pairs. As an illustration, consider the hyponym-hypernym pattern "the w of u", which correctly extracts a semantic pair from a sentence like "the village of Aasu" but fails to do so for the phrase "the moment of impact". The root cause of this limitation lies in the pattern's inability to capture the nuanced meaning of words and phrases in a sentence and their contextual significance. These observations spurred my exploration of a third model, DepBERT, a dependency-aware supervised transformer model. DepBERT's primary contribution lies in introducing the underlying dependency structure of sentences into a language model with the aim of enhancing token classification performance. To achieve this, I first reframe the task of semantic pair extraction as a token classification problem. The DepBERT model can harness both the tree-like structure of dependency patterns and the masked language architecture of transformers, marking a significant milestone, as most large language models (LLMs) predominantly focus on semantics and word co-occurrence while neglecting the crucial role of dependency architecture.

In summary, my overarching contributions in this thesis are threefold. First, I validate the significance of the dependency architecture within various components of sentences and publish SDPs that incorporate these dependency relationships. Second, I employ these SDPs in a practical medical domain to extract vital cause-effect pairs from sentences. Finally, I integrate dependency relations into a deep learning model, enhancing the understanding of language and the extraction of valuable semantic associations.
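The dependency patterns at the heart of this work are paths through a sentence's dependency tree. A minimal sketch of path extraction over a hand-coded parse (real systems use a trained dependency parser; the sentence and head indices here are supplied by hand for illustration, using the thesis's own "the village of Aasu" example):

```python
# Hand-coded dependency parse of "the village of Aasu":
# each token maps to the index of its head (-1 marks the root).
tokens = ["the", "village", "of", "Aasu"]
heads = [1, -1, 1, 2]  # "the"->"village", "village"=root, "of"->"village", "Aasu"->"of"

def path_to_root(i):
    """Indices from token i up to the root, inclusive."""
    path = [i]
    while heads[path[-1]] != -1:
        path.append(heads[path[-1]])
    return path

def dependency_path(i, j):
    """Shortest dependency path between tokens i and j, as token strings."""
    up_i, up_j = path_to_root(i), path_to_root(j)
    common = next(n for n in up_i if n in up_j)        # lowest common ancestor
    left = up_i[: up_i.index(common) + 1]              # i up to the ancestor
    right = list(reversed(up_j[: up_j.index(common)])) # ancestor down to j
    return [tokens[n] for n in left + right]

print(dependency_path(1, 3))  # ['village', 'of', 'Aasu']
```

Generalizing such paths (e.g. replacing the entity tokens with slots w and u) yields patterns like "the w of u" — and, as the discussion above notes, a surface pattern alone cannot tell the hyponymic reading from "the moment of impact", which is precisely the gap a context-aware model is meant to close.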
349

Using sentence transcription testing as a way to test the interference effects and dynamics of verbal working memory

Bou Aram, Sinal January 2021 (has links)
The aim of this study was to examine the validity and feasibility of sentence transcription testing (STT) for examining the interplay between verbal working memory and central processing. The general area of interest is to understand working memory as a dynamic system that involves the management and integration of information from several temporal distances. Due to the worldwide conditions at the time this study was conducted (2020), the testing was online and computerized, which severely limited the controllability of the procedure, leading to a high number of exclusions and dubious results. The testing of 17 subjects (9 females, 8 males, mean age 30.5, SD = 9.5) yielded mixed results, excluding gender, impulsivity, and age as likely factors behind the variance. Following these results, a post hoc analysis was added to assess whether transcription data are valid as a tool for observing the effects of interference on memory recall and the task at hand. This analysis revealed patterns that reinforce the view of language processing as a multimodal task. The types of errors seem to follow tendencies of primacy, recency, and availability, as well as proactive and retroactive interference. These tendencies of memory recall seem to work in unison with, or be a manifestation of, syntactic, lexical, and presumably semantic processing, and can be used to measure individual differences in language processing and the tendency to linguistically "fill in the gaps". The variation seen within the sample makes transcription testing appealing for further studies. The main variance within the sample can be described as replacing words with other previously attended-to information, and/or forgetting words during transcription. These tendencies might reveal properties of the interaction between executive function (EF) and verbal working memory (V-WM) as a source of individual differences. However, more validation studies are proposed for weeding out factors that might skew the results in this type of testing and modelling.
350

The Shona subject relation

Mhute, Isaac 23 September 2011 (has links)
This study delves into the syntactic notion of the subject relation in Shona with the aim of characterizing and defining it. This is done by analysing data collected from two Shona-speaking provinces in Zimbabwe, namely Harare and Masvingo. The data collection procedures involved tape-recording oral interviews as well as selective listening to different speeches. The data were then analysed using the projection principle, the noun phrase movement transformational rule, and the selectional principles established for the subject relation in other well-researched natural languages. The research found that no single rule can determine the subject of every possible Shona sentence; one has to make use of all seven selectional principles established in well-researched natural languages. The research assessed the applicability of the selectional rules in different sentences, and the rules were then ranked according to their reliability in determining the subjects of various Shona sentences. It also came to light that the Shona subject relation has a number of sub-categories as a result of the various selectional rules involved in determining them; these were likewise ranked in a hierarchy of importance as they apply in the language. For instance, while some are assigned to their host words at the deep-structure (underlying) level of syntax, others are assigned at the surface-structure level and can be shifted easily. It also emerged that the freedom of the subject relation in the language varies with the sub-category of the relation, and that in Shona both noun phrases (NPs) and non-NPs are assigned the subject role. / African Languages / D. Litt. et Phil. (African Languages)
