1

Interrogation des bases de données XML probabilistes / Querying probabilistic XML

Souihli, Asma 21 September 2012 (has links)
Probabilistic XML is a probabilistic model for uncertain tree-structured data, with applications to data integration, information extraction, and uncertain version control. We explore in this dissertation efficient algorithms for evaluating tree-pattern queries with joins over probabilistic XML or, more specifically, for approximating the probability of each item of a query result. The approach relies on, first, extracting the query lineage over the probabilistic XML document and, second, looking for an optimal strategy to approximate the probability of the propositional lineage formula. ProApproX is the probabilistic query manager for probabilistic XML presented in this thesis. The system allows users to query uncertain tree-structured data in the form of probabilistic XML documents. It integrates a query engine that searches for an optimal strategy to evaluate the probability of the query lineage. ProApproX relies on a query-optimizer-like approach: exploring different evaluation plans for different parts of the formula and predicting the cost of each plan, using a cost model for the various evaluation algorithms. We demonstrate the efficiency of this approach on datasets used in previous work on probabilistic XML querying, as well as on synthetic data. An early version of the system was demonstrated at the ACM SIGMOD 2011 conference. First steps towards the new query solution were discussed in an EDBT/ICDT PhD Workshop paper (2011). A fully redesigned version that implements the techniques presented in this thesis was published as a demonstration at CIKM 2012. Our contributions are also part of an IEEE ICDE
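As a rough illustration of the kind of computation the abstract describes (not the ProApproX implementation itself), the sketch below evaluates the probability of a toy propositional lineage formula in two ways — exact world enumeration and Monte Carlo sampling — and picks a plan with a naive cost estimate. The formula, probabilities, and cost model are invented for the example.

```python
import itertools
import random

# Hypothetical lineage: a DNF formula over independent Boolean events,
# where each clause is a conjunction of event ids (e.g. tree-pattern matches).
lineage = [("e1", "e2"), ("e1", "e3"), ("e4",)]          # (e1 and e2) or (e1 and e3) or e4
prob = {"e1": 0.8, "e2": 0.5, "e3": 0.4, "e4": 0.1}      # event probabilities

def exact_probability(clauses, prob):
    """Exact evaluation by enumerating all possible worlds (exponential in #events)."""
    events = sorted({e for c in clauses for e in c})
    total = 0.0
    for world in itertools.product([False, True], repeat=len(events)):
        truth = dict(zip(events, world))
        if any(all(truth[e] for e in c) for c in clauses):
            weight = 1.0
            for e in events:
                weight *= prob[e] if truth[e] else 1.0 - prob[e]
            total += weight
    return total

def monte_carlo_probability(clauses, prob, samples=100_000):
    """Additive approximation by sampling possible worlds."""
    events = {e for c in clauses for e in c}
    hits = 0
    for _ in range(samples):
        truth = {e: random.random() < prob[e] for e in events}
        if any(all(truth[e] for e in c) for c in clauses):
            hits += 1
    return hits / samples

# A toy "optimizer": choose the evaluation plan with the smaller estimated cost.
n_events = len({e for c in lineage for e in c})
exact_cost = 2 ** n_events * len(lineage)       # worlds x clause checks
sampling_cost = 100_000 * len(lineage)          # samples x clause checks
plan = exact_probability if exact_cost <= sampling_cost else monte_carlo_probability
print(plan.__name__, plan(lineage, prob))
```

A real system would estimate costs per sub-formula and mix algorithms, but the plan-selection idea is the same.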
2

Uma proposta de arquitetura para o protocolo NETCONF sobre SOAP / An architecture proposal for the NETCONF protocol over SOAP

Lacerda, Fabrizzio Cabral de 30 August 2007 (has links)
Advisor: Mauricio Ferreira Magalhães / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Network management is divided into five functional areas: Fault, Configuration, Accounting, Performance and Security. The configuration area is responsible for the operation and maintenance of the network, tracking the configuration changes made on each network device. The main management tools available, CLI and SNMP, do not meet the configuration requirements of current networks. New Web technologies are becoming widespread in network management, most notably the XML language and the HTTP protocol. A new configuration-management protocol named NETCONF has been defined within the IETF in order to apply these new technologies to network configuration. This work studies the NETCONF protocol, highlighting its advantages and limitations. It also proposes an implementation architecture for NETCONF based on SOAP as the transport protocol, over HTTP or HTTPS. To validate this architecture, we present the implementation of a prototype fully adherent to the NETCONF proposal, for which a data model was specified for configuring VLANs on switches from different manufacturers / Master's / Computer Engineering / Master in Electrical Engineering
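To make the transport mapping concrete, the fragment below builds a NETCONF <get-config> RPC wrapped in a SOAP 1.2 envelope and posts it over HTTPS, roughly in the spirit of the NETCONF-over-SOAP mapping. The endpoint URL and credentials are placeholders, and this is only a sketch of the message framing, not the prototype developed in the dissertation.

```python
import requests  # assumed available; any HTTP client would do

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"  # SOAP 1.2

# A NETCONF <get-config> RPC asking for the running datastore,
# wrapped in a SOAP envelope as in the NETCONF-over-SOAP transport mapping.
envelope = f"""<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="{SOAP_NS}">
  <soapenv:Body>
    <rpc xmlns="{NETCONF_NS}" message-id="101">
      <get-config>
        <source><running/></source>
      </get-config>
    </rpc>
  </soapenv:Body>
</soapenv:Envelope>"""

# Placeholder endpoint and credentials -- purely illustrative.
response = requests.post(
    "https://device.example.com/netconf",
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
    auth=("admin", "secret"),
    timeout=30,
)
print(response.status_code)
print(response.text)  # SOAP envelope carrying the <rpc-reply>
```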
3

Multilingual Zero-Shot and Few-Shot Causality Detection

Reimann, Sebastian Michael January 2021 (has links)
Relations that hold between causes and their effects are fundamental for a wide range of sectors. Automatically finding sentences that express such relations may, for example, be of great interest to the economy or to political institutions. However, for many languages other than English, the lack of training resources for this task needs to be dealt with. In recent years, large pretrained transformer-based architectures have proven very effective for tasks involving cross-lingual transfer, such as cross-lingual natural language inference, as well as multilingual named entity recognition, POS-tagging and dependency parsing, which hints at similar potential for causality detection. In this thesis, we define causality detection as a binary labelling problem and use cross-lingual transfer to alleviate data scarcity for German and Swedish, using three different classifiers that rely either on multilingual sentence embeddings obtained from a pretrained encoder or on pretrained multilingual language models. The source language in most of our experiments is English; for Swedish, however, we also use a small German training set and a combination of English and German training data. We try out zero-shot transfer as well as using limited amounts of target-language data, either as a development set or as additional training data in a few-shot setting. In the latter scenario, we explore the impact of varying training-data sizes. Moreover, data scarcity in our situation also makes it necessary to work with data from different annotation projects, and we explore how much this affects our results. For German as a target language, our zero-shot results expectedly fall short of monolingual experiments, but F1-macro scores between 60 and 65 in cases where annotation did not differ drastically still signal that at least some knowledge was transferred. Introducing only small amounts of target-language data already yielded notable improvements, and with the full German training set of about 3,000 sentences combined with the most suitable English dataset, the performance for German in some scenarios almost matches the state of the art for monolingual experiments on English. The best zero-shot performance on the Swedish data even outperformed the scores achieved for German. However, due to problems with the additional Swedish training data, we were not able to improve upon the zero-shot performance in a few-shot setting in the same way as for German.
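A minimal sketch of the kind of classifier described here — fine-tuning a pretrained multilingual transformer as a binary causality detector and applying it zero-shot to another language — using the Hugging Face transformers library. The model name, toy training pairs, and hyperparameters are placeholders, not the thesis's actual setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder training data: English sentences labelled 1 (causal) or 0 (non-causal).
texts = ["The flooding was caused by heavy rainfall.",
         "The meeting takes place on Monday."]
labels = torch.tensor([1, 0])

model_name = "xlm-roberta-base"  # any multilingual encoder could be substituted
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # toy loop; real training would batch and shuffle a full dataset
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**inputs, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot transfer: apply the English-trained classifier to a German sentence.
model.eval()
with torch.no_grad():
    batch = tokenizer(["Der Stau entstand durch einen Unfall."], return_tensors="pt")
    pred = model(**batch).logits.argmax(dim=-1).item()
print("causal" if pred == 1 else "non-causal")
```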
4

BERTie Bott’s Every Flavor Labels: A Tasty Guide to Developing a Semantic Role Labeling Model for Galician

Bruton, Micaella January 2023 (has links)
For the vast majority of languages, Natural Language Processing (NLP) tools are either absent entirely or leave much to be desired in their final performance. One such low-resource language is Galician, despite its nearly 4 million speakers. In an effort to expand the available NLP resources, this project sought to construct a dataset for Semantic Role Labeling (SRL) and produce a baseline for future research to use in comparisons. SRL is a task which has shown success in improving the final output of various NLP systems, including machine translation and other interactive language models. This project achieved that goal and produced 24 SRL models and two SRL datasets: one Galician and one Spanish. mBERT and XLM-R were chosen as the baseline architectures; additional models were first pre-trained on the SRL task in a language other than the target language to measure the effects of transfer learning. Scores are reported on a scale of 0.0-1.0. The best-performing Galician SRL model achieved an F1 score of 0.74, introducing a baseline for future Galician SRL systems. The best-performing Spanish SRL model achieved an F1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025. A pre-processing method, verbal indexing, was also introduced, which increased performance in the SRL parsing of highly complex sentences; its effects were amplified when the model was both pre-trained and fine-tuned on datasets using the method, but still visible when it was used only during fine-tuning.
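The abstract does not spell out the verbal-indexing preprocessing in detail; one plausible reading — marking the target predicate with special tokens so the encoder knows which verb's arguments to label — might look like the sketch below. The marker token, function name, and example sentence are invented for illustration and are not the thesis's exact procedure.

```python
def verbal_index(tokens, verb_position, marker="[V]"):
    """Wrap the target predicate in marker tokens before SRL tagging.

    Illustrative reconstruction only: the idea is to make the predicate explicit
    in the input so a fine-tuned encoder labels arguments relative to that verb.
    """
    marked = list(tokens)
    marked.insert(verb_position + 1, marker)  # close marker after the verb
    marked.insert(verb_position, marker)      # open marker before the verb
    return marked

# One training instance per predicate in the sentence.
sentence = ["O", "neno", "leu", "o", "libro", "na", "escola"]  # Galician: "The boy read the book at school"
print(verbal_index(sentence, 2))
# ['O', 'neno', '[V]', 'leu', '[V]', 'o', 'libro', 'na', 'escola']
```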
5

Correlation-based Cross-layer Communication in Wireless Sensor Networks

Vuran, Mehmet Can 09 July 2007 (has links)
Wireless sensor networks (WSN) are event-based systems that rely on the collective effort of densely deployed sensor nodes continuously observing a physical phenomenon. The spatio-temporal correlation between sensor observations and the advantages of cross-layer design are significant and unique to the design of WSN. Due to the high density of the network topology, sensor observations are highly correlated in the space domain. Furthermore, the nature of the energy-radiating physical phenomenon creates temporal correlation between consecutive observations of a sensor node. This unique characteristic of WSN can be exploited through a cross-layer design of communication functionalities to improve the energy efficiency of the network. In this thesis, several key elements are investigated to capture and exploit the correlation in WSN for the realization of advanced, efficient communication protocols. A theoretical framework is developed to capture the spatial and temporal correlations in WSN and to enable the development of efficient communication protocols. Based on this framework, the spatial Correlation-based Collaborative Medium Access Control (CC-MAC) protocol is described, which exploits the spatial correlation in WSN in order to achieve efficient medium access. Furthermore, the cross-layer module (XLM), which merges common protocol-layer functionalities into a single cross-layer module for resource-constrained sensor nodes, is developed. The cross-layer analysis of error control in WSN is then presented to enable a comprehensive comparison of error control schemes for WSN. Finally, the cross-layer packet size optimization framework is described.
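As a rough illustration of the spatial-correlation idea behind such protocols (not CC-MAC itself), the sketch below uses a simple power-exponential correlation model to decide which nodes are correlated enough that only one representative per neighbourhood needs to transmit. The correlation parameters, threshold, and field size are arbitrary.

```python
import math
import random

def spatial_correlation(d, theta1=50.0, theta2=1.0):
    """Power-exponential correlation between two observations a distance d apart."""
    return math.exp(-((d / theta1) ** theta2))

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Randomly placed sensor nodes in a 100 m x 100 m field.
random.seed(0)
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]

# Greedily pick "representative" nodes: a node stays silent if it is highly
# correlated with a representative that already reports the event.
CORR_THRESHOLD = 0.8   # arbitrary: observations above this are treated as redundant
representatives = []
for node in nodes:
    if all(spatial_correlation(distance(node, rep)) < CORR_THRESHOLD
           for rep in representatives):
        representatives.append(node)

print(f"{len(representatives)} of {len(nodes)} nodes transmit; the rest are redundant")
```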
6

Large-Context Question Answering with Cross-Lingual Transfer

Sagen, Markus January 2021 (has links)
Models based on the transformer architecture have become among the most prominent for solving a multitude of natural language processing (NLP) tasks since the architecture's introduction in 2017. However, much research related to the transformer model has focused primarily on achieving high performance, and many problems remain unsolved. Two of the most prominent are the lack of high-performing non-English pre-trained models and the limited number of words most trained models can incorporate as context. Solving these problems would make NLP models more suitable for real-world applications, improving information retrieval, reading comprehension, and more. Previous research on incorporating long context has focused on English language models. This thesis investigates cross-lingual transferability between languages when training for long context in English only. Training long-context models only in English could make long context more accessible for low-resource languages, such as Swedish, since such data is hard to find in most languages and costly to train on for each language. This could become an efficient method for creating long-context models in other languages without needing such data in every language or pre-training from scratch. We extend the models' context using the training scheme of the Longformer architecture and fine-tune on a question-answering task in several languages. Our evaluation could not satisfactorily confirm or deny whether transferring long-context capability is possible for low-resource languages. We believe that using datasets that require long-context reasoning, such as a multilingual TriviaQA dataset, could demonstrate the validity of our hypothesis.
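A minimal sketch of the inference side of such a system — a Longformer-style encoder answering an extractive question over a long context — using the transformers library. The checkpoint is the public English base model, so the question-answering head here is untrained and the output is only meant to show the API shape, not the multilingual models trained in the thesis.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Public English checkpoint with a 4096-token window; the thesis instead extends
# multilingual models to long context with the same training scheme.
model_name = "allenai/longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)  # QA head is untrained here

question = "Which architecture extends the context window?"
context = ("The Longformer architecture combines sliding-window and global attention "
           "so that documents of thousands of tokens can be encoded at once. ") * 50  # long input

inputs = tokenizer(question, context, truncation=True, max_length=4096, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely answer span from the start/end logits.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0][start:end + 1])
print(answer)
```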
