461

Detection of Naming Convention Violations in Process Models for Different Languages

Leopold, Henrik, Eid-Sabbagh, Rami-Habib, Mendling, Jan, Guerreiro Azevedo, Leonardo, Baião, Fernanda Araujo 12 1900 (has links) (PDF)
Companies increasingly use business process modeling for documenting and redesigning their operations. However, due to the size of such modeling initiatives, they often struggle with the quality assurance of their model collections. While many model properties can already be checked automatically, there is a notable gap of techniques for checking linguistic aspects such as naming conventions of process model elements. In this paper, we address this problem by introducing an automatic technique for detecting violations of naming conventions. This technique is based on text corpora and independent of linguistic resources such as WordNet. Therefore, it can be easily adapted to the broad set of languages for which corpora exist. We demonstrate the applicability of the technique by analyzing nine process model collections from practice, including over 27,000 labels and covering three different languages. The results of the evaluation show that our technique yields stable results and can reliably deal with ambiguous cases. In this way, this paper provides an important contribution to the field of automated quality assurance of conceptual models.
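To make the idea of corpus-based convention checking concrete, the following minimal Python sketch (not the authors' actual technique) assumes the convention to enforce is verb-object labelling and that a part-of-speech frequency table derived from a corpus is available; the table, the threshold and the function names are invented for illustration.

```python
# Toy sketch of corpus-based naming-convention checking for process model labels.
# Assumption: the convention to enforce is verb-object style ("Create invoice"),
# and a word-frequency table from a POS-tagged corpus tells us how often each
# word occurs as a verb vs. as a noun. The counts below are invented placeholders.

CORPUS_POS_COUNTS = {
    "create":   {"VERB": 9500, "NOUN": 40},
    "creation": {"VERB": 5,    "NOUN": 7200},
    "invoice":  {"VERB": 300,  "NOUN": 8100},
    "check":    {"VERB": 6400, "NOUN": 2100},
    "order":    {"VERB": 1800, "NOUN": 9400},
}

def verb_score(word):
    """Probability-like score that `word` is used as a verb in the corpus."""
    counts = CORPUS_POS_COUNTS.get(word.lower(), {"VERB": 1, "NOUN": 1})
    return counts["VERB"] / (counts["VERB"] + counts["NOUN"])

def violates_verb_object_style(label, threshold=0.5):
    """Flag a label whose first word is unlikely to be an imperative verb."""
    first_word = label.split()[0]
    return verb_score(first_word) < threshold

for label in ["Create invoice", "Invoice creation", "Check order"]:
    status = "violation" if violates_verb_object_style(label) else "ok"
    print(f"{label!r}: {status}")
```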
462

Génération modulaire de grammaires formelles / Modular generation of formal grammars

Petitjean, Simon 11 December 2014 (has links)
Les travaux présentés dans cette thèse visent à faciliter le développement de ressources pour le traitement automatique des langues. Les ressources de ce type prennent des formes très diverses, en raison de l’existence de différents niveaux d’étude de la langue (syntaxe, morphologie, sémantique,. . . ) et de différents formalismes proposés pour la description des langues à chacun de ces niveaux. Les formalismes faisant intervenir différents types de structures, un unique langage de description n’est pas suffisant : il est nécessaire pour chaque formalisme de créer un langage dédié (ou DSL), et d’implémenter un nouvel outil utilisant ce langage, ce qui est une tâche longue et complexe. Pour cette raison, nous proposons dans cette thèse une méthode pour assembler modulairement, et adapter, des cadres de développement spécifiques à des tâches de génération de ressources langagières. Les cadres de développement créés sont construits autour des concepts fondamentaux de l’approche XMG (eXtensible MetaGrammar), à savoir disposer d’un langage de description permettant la définition modulaire d’abstractions sur des structures linguistiques, ainsi que leur combinaison non-déterministe (c’est à dire au moyen des opérateurs logiques de conjonction et disjonction). La méthode se base sur l’assemblage d’un langage de description à partir de briques réutilisables, et d’après un fichier unique de spécification. L’intégralité de la chaîne de traitement pour le DSL ainsi défini est assemblée automatiquement d’après cette même spécification. Nous avons dans un premier temps validé cette approche en recréant l’outil XMG à partir de briques élémentaires. Des collaborations avec des linguistes nous ont également amené à assembler des compilateurs permettant la description de la morphologie de l’Ikota (langue bantoue) et de la sémantique (au moyen de la théorie des frames). / The work presented in this thesis aims at facilitating the development of resources for natural language processing. Resources of this type take very diverse forms, because of the existence of several levels of linguistic description (syntax, morphology, semantics, . . . ) and of several formalisms proposed for the description of natural languages at each of these levels. Since the formalisms involve different types of structures, a single description language is not enough: it is necessary to create a domain-specific language (or DSL) for every formalism, and to implement a new tool which uses this language, which is a long and complex task. For this reason, we propose in this thesis a method to assemble, in a modular way, and adapt development frameworks specific to linguistic resource generation tasks. The frameworks assembled with our method are built around the fundamental concepts of the XMG (eXtensible MetaGrammar) approach, namely a description language allowing the modular definition of abstractions over linguistic structures, together with their non-deterministic combination (that is, by means of the logical operators of conjunction and disjunction). The method is based on the assembly of a description language from reusable bricks, according to a single specification file. The entire processing chain for the DSL defined in this way is assembled automatically from that same specification. We first validated this approach by recreating the XMG tool from elementary bricks. Collaborations with linguists also led us to assemble compilers for describing the morphology of Ikota (a Bantu language) and semantics (by means of frame theory).
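As a rough illustration of the two combination operators mentioned above, the Python sketch below (an assumption-laden toy, not XMG or the generated DSLs) models an abstraction as a list of alternative feature structures and combines abstractions by conjunction (pairwise unification) and disjunction (union of alternatives); the example "bricks" are hypothetical.

```python
# Minimal sketch (not XMG itself) of the two combination operators the abstract
# mentions: conjunction and disjunction over abstractions of linguistic structures.
# An abstraction is modelled here as a list of alternative feature structures (dicts).

def unify(fs1, fs2):
    """Merge two feature structures; fail (return None) on conflicting values."""
    merged = dict(fs1)
    for key, value in fs2.items():
        if key in merged and merged[key] != value:
            return None
        merged[key] = value
    return merged

def conjunction(abs1, abs2):
    """Every consistent combination of one alternative from each abstraction."""
    return [fs for a in abs1 for b in abs2 if (fs := unify(a, b)) is not None]

def disjunction(abs1, abs2):
    """All alternatives from either abstraction."""
    return abs1 + abs2

# Hypothetical "bricks": a transitive frame and two subject realisations.
transitive = [{"subj": "NP", "obj": "NP"}]
subject = disjunction([{"subj": "NP", "subj_case": "nom"}],
                      [{"subj": "CP"}])
print(conjunction(transitive, subject))
# -> [{'subj': 'NP', 'obj': 'NP', 'subj_case': 'nom'}]  (the CP alternative clashes on 'subj')
```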
463

A verb learning model driven by syntactic constructions / Um modelo de aquisição de verbos guiado por construções sintáticas

Machado, Mario Lúcio Mesquita January 2008 (has links)
Desde a segunda metade do último século, as teorias cognitivas têm trazido algumas visões interessantes em relação ao aprendizado de linguagem. A aplicação destas teorias em modelos computacionais tem duplo benefício: por um lado, implementações computacionais podem ser usadas como uma forma de validação destas teorias; por outro lado, modelos computacionais podem alcançar uma performance melhorada a partir da adoção de estratégias de aprendizado cognitivamente plausíveis. Estruturas sintáticas são ditas fornecer uma pista importante para a aquisição do significado de verbos. Ainda, para um subconjunto particular de verbos muito frequentes e gerais - os assim-chamados light verbs - há uma forte ligação entre as estruturas sintáticas nas quais eles aparecem e seus significados. Neste trabalho, empregamos um modelo computacional para investigar estas propostas, em particular, considerando a tarefa de aquisição como um mapeamento entre um verbo desconhecido e referentes prototípicos para eventos verbais, com base na estrutura sintática na qual o verbo aparece. Os experimentos conduzidos ressaltaram alguns requerimentos para um aprendizado bem-sucedido, em termos de níveis de informação disponível para o aprendiz e da estratégia de aprendizado adotada. / Cognitive theories have, since the second half of the last century, brought some interesting views about language learning. The application of these theories to computational models has a double benefit: on the one hand, computational implementations can be used as a form of validation of these theories; on the other hand, computational models can achieve improved performance by adopting cognitively plausible learning strategies. Syntactic structures are said to provide an important cue for the acquisition of verb meaning. Moreover, for a particular subset of very frequent and general verbs – the so-called light verbs – there is a strong link between the syntactic structures in which they appear and their meanings. In this work, we used a computational model to further investigate these proposals, in particular looking at the acquisition task as a mapping between an unknown verb and prototypical referents for verbal events, on the basis of the syntactic structure in which the verb appears. The experiments conducted highlighted some requirements for successful learning, both in terms of the levels of information available to the learner and the learning strategies adopted.
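A toy Python sketch of the frame-to-referent mapping described above might look as follows; the syntactic frames, event types and counts are invented placeholders, and the actual model in the thesis is certainly richer than this frequency table.

```python
# Toy sketch of the frame-to-referent mapping idea: the learner accumulates
# counts of which event types occur with which syntactic frames, and uses the
# frame alone to guess a prototypical referent for an unknown verb.
# The training observations below are invented placeholders.
from collections import defaultdict, Counter

frame_to_events = defaultdict(Counter)

def observe(frame, event):
    frame_to_events[frame][event] += 1

# Exposure: (syntactic frame, event type) pairs seen with known verbs.
observe("NP V NP", "causative_action")
observe("NP V NP", "causative_action")
observe("NP V",    "activity")
observe("NP V NP to NP", "transfer")

def guess_event(frame):
    """Prototypical event referent for an unknown verb heard in `frame`."""
    if frame not in frame_to_events:
        return None
    return frame_to_events[frame].most_common(1)[0][0]

print(guess_event("NP V NP to NP"))  # -> 'transfer' (a give-like light-verb reading)
```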
464

Extração multilíngue de termos multipalavra em corpora comparáveis / Multilingual extraction of multiword terms from comparable corpora

Prestes, Kassius Vargas January 2015 (has links)
Este trabalho investiga técnicas de extração de termos multipalavra a partir de corpora comparáveis, que são conjuntos de textos em duas (ou mais) línguas sobre o mesmo domínio. A extração de termos, especialmente termos multipalavra é muito importante para auxiliar a criação de terminologias, ontologias e o aperfeiçoamento de tradutores automáticos. Neste trabalho utilizamos um corpus comparável português/inglês e queremos encontrar termos e seus equivalentes em ambas as línguas. Para isso começamos com a extração dos termos separadamente em cada língua, utilizando padrões morfossintáticos para identificar os n-gramas (sequências de n palavras) mais prováveis de serem termos importantes para o domínio. A partir dos termos de cada língua, utilizamos o contexto, isto é, as palavras que ocorrem no entorno dos termos para comparar os termos das diferentes línguas e encontrar os equivalentes bilíngues. Tínhamos como objetivos principais neste trabalho fazer a identificação monolíngue de termos, aplicar as técnicas de alinhamento para o português e avaliar os diferentes parâmetros de tamanho e tipo (PoS utilizados) de janela para a extração de contexto. Esse é o primeiro trabalho a aplicar essa metodologia para o Português e apesar da falta de alguns recursos léxicos e computacionais (como dicionários bilíngues e parsers) para essa língua, conseguimos alcançar resultados comparáveis com o estado da arte para trabalhos em Francês/Inglês. / This work investigates techniques for multiword term extraction from comparable corpora, which are sets of texts in two (or more) languages about the same topic. Term extraction, especially of multiword terms, is very important to support the creation of terminologies and ontologies and the improvement of machine translation. In this work we use a comparable Portuguese/English corpus and want to find terms and their equivalents in both languages. To do this we start with separate term extraction for each language, using morphosyntactic patterns to identify the n-grams (sequences of n words) most likely to be important terms of the domain. From the terms of each language, we use their context, i.e., the words that occur around the term, to compare the terms of the different languages and to find the bilingual equivalents. Our main goals in this work were to identify monolingual terms, to apply the alignment techniques to Portuguese, and to evaluate different window sizes and types (PoS used) for context extraction. This is the first work to apply this methodology to Portuguese and, in spite of the lack of some lexical and computational resources (such as bilingual dictionaries and parsers) for this language, we achieved results comparable to the state of the art for French/English.
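The following Python sketch illustrates, under strong simplifications, the two steps described above: candidate extraction with part-of-speech patterns and bilingual comparison of context vectors through a seed dictionary. The POS lexicon, patterns, seed dictionary and example sentences are invented, and the real work uses full taggers and much larger corpora.

```python
# Toy sketch of (1) monolingual extraction of candidate multiword terms with
# part-of-speech patterns, and (2) comparison of context vectors across languages
# through a seed bilingual dictionary. All resources here are invented placeholders.
import math
from collections import Counter

POS = {"heart": "N", "failure": "N", "chronic": "ADJ", "the": "DET",
       "patients": "N", "with": "P", "insuficiência": "N", "cardíaca": "ADJ"}
PATTERNS = {("N", "N"), ("ADJ", "N"), ("N", "ADJ")}  # N-ADJ for Portuguese word order

def candidate_bigrams(tokens):
    """Bigrams whose POS sequence matches one of the term patterns."""
    return [(a, b) for a, b in zip(tokens, tokens[1:])
            if (POS.get(a, "X"), POS.get(b, "X")) in PATTERNS]

def context_vector(term, tokens, window=3):
    """Counts of words occurring within `window` positions of the term's first word."""
    vec = Counter()
    for i, tok in enumerate(tokens):
        if tok == term[0]:
            vec.update(tokens[max(0, i - window): i + window + 1])
    return vec

def cosine(v1, v2):
    dot = sum(v1[w] * v2[w] for w in v1)
    norm = math.sqrt(sum(x * x for x in v1.values())) * math.sqrt(sum(x * x for x in v2.values()))
    return dot / norm if norm else 0.0

SEED_DICT = {"patients": "pacientes", "with": "com"}  # translate EN context into PT

en = "the patients with chronic heart failure".split()
pt = "os pacientes com insuficiência cardíaca crônica".split()
en_term, pt_term = ("heart", "failure"), ("insuficiência", "cardíaca")
en_ctx = Counter({SEED_DICT[w]: c for w, c in context_vector(en_term, en).items() if w in SEED_DICT})
pt_ctx = context_vector(pt_term, pt)
print(candidate_bigrams(en), cosine(en_ctx, pt_ctx))
```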
465

Identificação e tratamento de expressões multipalavras aplicado à recuperação de informação / Identification and treatment of multiword expressions applied to information retrieval

Acosta, Otavio Costa January 2011 (has links)
A vasta utilização de Expressões Multipalavras em textos de linguagem natural requer atenção para um estudo aprofundado neste assunto, para que posteriormente seja possível a manipulação e o tratamento, de forma robusta, deste tipo de expressão. Uma Expressão Multipalavra costuma transmitir precisamente conceitos e ideias que geralmente não podem ser expressos por apenas uma palavra e estima-se que sua frequência, em um léxico de um falante nativo, seja semelhante à quantidade de palavras simples. A maioria das aplicações reais simplesmente ignora ou lista possíveis termos compostos, porém os identifica e trata seus itens lexicais individualmente e não como uma unidade de conceito. Para o sucesso de uma aplicação de Processamento de Linguagem Natural, que envolva processamento semântico, é necessário um tratamento diferenciado para essas expressões. Com o devido tratamento, é investigada a hipótese das Expressões Multipalavras possibilitarem uma melhora nos resultados de uma aplicação, tal como os sistemas de Recuperação de Informação. Os objetivos desse trabalho estão voltados ao estudo de técnicas de descoberta automática de Expressões Multipalavras, permitindo a criação de dicionários, para fins de indexação, em um mecanismo de Recuperação de Informação. Resultados experimentais apontaram melhorias na recuperação de documentos relevantes, ao identificar Expressões Multipalavras e tratá-las como uma unidade de indexação única. / The widespread use of Multiword Expressions (MWEs) in natural language texts requires detailed study to support the robust manipulation and processing of this kind of expression. An MWE typically conveys concepts and ideas that usually cannot be expressed by a single word, and it is estimated that the number of MWEs in the lexicon of a native speaker is similar to the number of single words. Most real applications simply ignore them or create a list of compounds, identifying and treating them as isolated lexical items rather than as a single unit. For the success of a Natural Language Processing (NLP) application involving semantic processing, adequate treatment of these expressions is required. In this work we investigate the hypothesis that appropriate identification of Multiword Expressions provides better results in an application such as Information Retrieval (IR). The objectives of this work are to compare MWE extraction techniques for creating MWE dictionaries to be used for indexing purposes in IR. Experimental results show qualitative improvements in the retrieval of relevant documents when MWEs are identified and treated as a single indexing unit.
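A minimal sketch of the indexing idea, assuming a hand-crafted MWE dictionary rather than the automatically extracted ones evaluated in the work, could look like this in Python:

```python
# Minimal sketch: multiword expressions from a (here hand-crafted) dictionary are
# merged into single indexing units before an inverted index is built, so
# "information retrieval" is matched as one concept rather than as the independent
# words "information" and "retrieval".
from collections import defaultdict

MWE_DICTIONARY = {("information", "retrieval"), ("natural", "language", "processing")}
MAX_MWE_LEN = max(len(m) for m in MWE_DICTIONARY)

def tokenize_with_mwes(text):
    """Greedy longest-match tokenization that glues known MWEs with underscores."""
    words, tokens, i = text.lower().split(), [], 0
    while i < len(words):
        for n in range(MAX_MWE_LEN, 1, -1):
            if tuple(words[i:i + n]) in MWE_DICTIONARY:
                tokens.append("_".join(words[i:i + n]))
                i += n
                break
        else:
            tokens.append(words[i])
            i += 1
    return tokens

def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in tokenize_with_mwes(text):
            index[token].add(doc_id)
    return index

docs = {1: "information retrieval with multiword units",
        2: "retrieval of information about natural language processing"}
index = build_index(docs)
print(index["information_retrieval"])  # -> {1}; doc 2 does not contain the MWE
```

The design choice this toy makes visible is the one the abstract argues for: once the MWE becomes a single token, document 2 is no longer a spurious match for the compound concept.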
466

Conditional Neural Networks for Speech and Language Processing

Sun, Pengfei 01 August 2017 (has links)
Neural-network-based deep learning methods have achieved significant success in several real-world tasks, from machine translation to web recommendation, and are also greatly improving computer vision and natural language processing. Compared with conventional machine learning techniques, neural-network-based deep learning does not require careful engineering and considerable domain expertise to design a feature extractor that transforms the raw data into a suitable internal representation. Its efficacy at learning multiple levels of representation and features ensures that this type of approach can process high-dimensional data. It integrates feature representation, learning and recognition into a single framework, which allows learning to start at one level (i.e., beginning with the raw input) and end at a higher, slightly more abstract level. By simply stacking enough such transformations, very complex functions can be obtained. In general, high-level feature representations facilitate the discrimination of patterns and can additionally reduce the impact of irrelevant variations. However, previous studies indicate that deep composition of networks causes the error gradients to vanish during training. To overcome this weakness, several techniques have been developed, for instance dropout, stochastic gradient descent and residual network structures. In this study, we incorporate latent information into different network structures (e.g., restricted Boltzmann machines, recursive neural networks, and long short-term memory). The conditional latent information reflects high-dimensional correlations present in the data structure, and a typical network structure may not learn this kind of feature because of limitations of the initial design (i.e., the network size and parameters). Similarly to residual nets, the conditional neural networks jointly learn global and local features, and the specifically designed network structure helps to incorporate the modulation derived from the probability distribution. The proposed models have been tested on different datasets: for instance, the conditional RBM has been applied to detect speech components, and a language-model-based gated RBM has been used to recognize speech-related EEG patterns. The conditional RNN has been tested on both general natural language modeling and medical note prediction tasks. The results indicate that by introducing conditional branches into conventional network structures, the latent features can be learned both globally and locally.
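As a rough numpy sketch of what "conditional" modulation of a recurrent unit can mean (an assumption on our part, not the exact architecture used in the study), a conditioning vector can gate the hidden-state update of an otherwise ordinary RNN cell:

```python
# Rough numpy sketch of a "conditional" recurrent unit: a conditioning vector c
# (standing in for the latent/contextual information the thesis describes) gates
# the hidden-state update of an otherwise ordinary RNN cell. Dimensions,
# initialisation and the exact gating form are assumptions, not the thesis's model.
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim, cond_dim = 4, 8, 3

W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_ch = rng.normal(scale=0.1, size=(hidden_dim, cond_dim))   # condition -> gate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conditional_rnn_step(x_t, h_prev, c):
    """One step: candidate update modulated elementwise by a condition-driven gate."""
    candidate = np.tanh(W_xh @ x_t + W_hh @ h_prev)
    gate = sigmoid(W_ch @ c)                 # how much the condition lets through
    return gate * candidate + (1.0 - gate) * h_prev

h = np.zeros(hidden_dim)
condition = rng.normal(size=cond_dim)        # e.g. speaker, topic or EEG context
for x_t in rng.normal(size=(5, input_dim)):  # a short input sequence
    h = conditional_rnn_step(x_t, h, condition)
print(h.shape)  # (8,)
```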
467

Automated Risk Management Framework with Application to Big Maritime Data

Teske, Alexander 13 December 2018 (has links)
Risk management is an essential tool for ensuring the safety and timeliness of maritime operations and transportation. Some of the many risk factors that can compromise the smooth operation of maritime activities include harsh weather and pirate activity. However, identifying and quantifying the extent of these risk factors for a particular vessel is not a trivial process. One challenge is that processing the vast amounts of automatic identification system (AIS) messages generated by the ships requires significant computational resources. Another is that the risk management process partially relies on human expertise, which can be time-consuming and error-prone. In this thesis, an existing Risk Management Framework (RMF) is augmented to address these issues. A parallel/distributed version of the RMF is developed to efficiently process large volumes of AIS data and assess the risk levels of the corresponding vessels in near-real-time. A genetic fuzzy system is added to the RMF's Risk Assessment module in order to automatically learn the fuzzy rule base governing the risk assessment process, thereby reducing the reliance on human domain experts. A new weather risk feature is proposed, and an existing regional hostility feature is extended to automatically learn about pirate activity by ingesting unstructured news articles and incident reports. Finally, a geovisualization tool is developed to display the position and risk levels of ships at sea. Together, these contributions pave the way towards truly automatic risk management, a crucial component of modern maritime solutions. The outcomes of this thesis will contribute to enhancing Larus Technologies' Total::Insight, a risk-aware decision support system successfully deployed in maritime scenarios.
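For illustration only, a genetic fuzzy system of the kind mentioned above ultimately evaluates fuzzy rules over crisp inputs; the Python sketch below shows such an evaluation for a hypothetical weather-risk feature, with invented membership functions and rules rather than the RMF's learned rule base.

```python
# Illustrative sketch (not Larus Technologies' actual RMF): crisp sensor readings
# are fuzzified, a small hand-written rule base is fired, and a single risk score
# is produced by a weighted average. Membership functions and rules are invented.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_wind(knots):
    return {"calm": triangular(knots, -10, 0, 20),
            "strong": triangular(knots, 10, 30, 50),
            "storm": triangular(knots, 35, 60, 90)}

def fuzzify_wave(metres):
    return {"low": triangular(metres, -2, 0, 3),
            "high": triangular(metres, 2, 6, 12)}

RISK_LEVELS = {"low": 0.2, "medium": 0.5, "high": 0.9}

def assess_weather_risk(wind_knots, wave_m):
    wind, wave = fuzzify_wind(wind_knots), fuzzify_wave(wave_m)
    # Rule strengths: AND is min, OR is max (a conventional fuzzy-logic choice).
    rules = {
        "low": min(wind["calm"], wave["low"]),
        "medium": max(wind["strong"], wave["high"]),
        "high": min(wind["storm"], wave["high"]),
    }
    total = sum(rules.values()) or 1.0
    return sum(RISK_LEVELS[level] * strength for level, strength in rules.items()) / total

print(round(assess_weather_risk(wind_knots=42, wave_m=5), 3))
```

In the genetic fuzzy setting described in the thesis, it is the rule base and membership parameters (hard-coded here) that would be evolved automatically rather than written by a domain expert.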
468

Menings- och dokumentklassificering för identifiering av meningar / Sentence and document classification for identification of sentences

Paulson, Jörgen, Huynh, Peter January 2018 (has links)
This thesis examines how well sentence classification and document classification techniques work for selecting sentences that contain the variables used in experiments described in medical documents. For sentence classification, state machines and keywords are used; for document classification, linear SVM and Random forest are used. The text features selected are LIX (readability index) and word count. The text features are taken from an existing data set created by Abrahamsson (T.B.D) from articles collected for this study. This data set is then used for document classification. For the document classification techniques, the ability examined is that of distinguishing between documents of the following types: scientific articles with experiments, scientific articles without experiments, scientific articles with meta-analyses, and documents that are not scientific articles. These documents are processed with sentence classification to examine how well it finds sentences containing definitions of variables. The results of the experiment indicated that the sentence classification techniques were not suitable for this purpose due to low precision. For document classification, Random forest was best suited, but it had trouble distinguishing between different types of scientific articles.
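A small Python sketch of the document-classification side, assuming scikit-learn and invented example documents (the study used the Abrahamsson data set and also compared a linear SVM), might compute LIX and word count per document and feed them to a Random forest:

```python
# Sketch of the feature extraction and document classification described above:
# LIX readability and word count are computed per document and fed to a Random
# forest (scikit-learn). The example documents and labels are invented.
import re
from sklearn.ensemble import RandomForestClassifier

def lix(text):
    """LIX readability: words/sentences + 100 * long words (>6 chars) / words."""
    words = re.findall(r"\w+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    long_words = sum(1 for w in words if len(w) > 6)
    return len(words) / sentences + 100.0 * long_words / max(1, len(words))

def features(text):
    words = re.findall(r"\w+", text)
    return [lix(text), len(words)]

train_docs = [
    ("We measured blood pressure in 40 participants. The variable was recorded daily.", "experiment"),
    ("This review summarises previous findings. No new data were collected.", "no_experiment"),
]
X = [features(text) for text, _ in train_docs]
y = [label for _, label in train_docs]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features("Participants were randomised. Heart rate was the measured variable.")]))
```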
469

Linking clinical records to the biomedical literature

Alnazzawi, Noha Abdulkareem D. January 2016 (has links)
Narrative information in Electronic Health Records (EHRs) contains a wealth of clinical information about treatments, diagnosis, medication and family history. In addition, the scientific literature represents a rich source of information that summarises the latest results and new research findings relevant to different diseases. These two textual sources often contain different types of valuable phenotypic information that may be complementary to each other. Combining details from each source thus has the potential to be useful in uncovering new disease-phenotypic associations. In turn, these associations can help to identify patients with high risk factors, and they can be useful in developing solutions to control the causes responsible for the development of different diseases. However, clinicians at the point of care have limited time to review the large volume of potentially useful information that is locked away in unstructured text format. This in turn limits the utility of this “raw” information to clinical practitioners and computerised applications. Accordingly, the provision of automated and efficient means to extract, combine and present phenotype information that may be scattered amongst a large number of different textual sources in an easily digestible format is a prerequisite to the effective use and comprehensive understanding of details contained within both the records and the literature. The development of such facilities can in turn help in deriving information about disease correlations and supporting clinical decisions. This thesis is the first comprehensive study focussing on extracting and integrating phenotypic information from two different biomedical sources using Text Mining (TM) techniques. In this research, we describe our work on (1) extracting phenotypic information from both EHRs and the biomedical literature; (2) extracting the relations between phenotypic information and distilling them from EHRs using an event-based approach; and (3) using normalisation methods to link the phenotypic information found in EHRs with associated mentions found in the literature as a first step towards the automatic integration of information from these heterogeneous sources.
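As a minimal illustration of the normalisation step (a simplification; the thesis uses proper text-mining machinery rather than a hand-written synonym table), phenotype mentions from the two sources can be mapped to a shared canonical form and then linked:

```python
# Minimal sketch of the normalisation step: phenotype mentions extracted from
# clinical records and from the literature are mapped to a shared canonical form
# (here via lowercasing, punctuation stripping and a small synonym table) so that
# mentions of the same concept can be linked across sources. The synonym table and
# mentions are invented placeholders, not the thesis's resources.
import re

SYNONYMS = {
    "high blood pressure": "hypertension",
    "htn": "hypertension",
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
}

def normalise(mention):
    cleaned = re.sub(r"[^\w\s]", " ", mention.lower())
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    return SYNONYMS.get(cleaned, cleaned)

def link(ehr_mentions, literature_mentions):
    """Pairs of (EHR mention, literature mention) that normalise to the same concept."""
    lit_by_concept = {}
    for m in literature_mentions:
        lit_by_concept.setdefault(normalise(m), []).append(m)
    return [(e, l) for e in ehr_mentions for l in lit_by_concept.get(normalise(e), [])]

ehr = ["HTN", "heart attack."]
literature = ["hypertension", "myocardial infarction", "stroke"]
print(link(ehr, literature))
# -> [('HTN', 'hypertension'), ('heart attack.', 'myocardial infarction')]
```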
