  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Finding Provenance Data in Social Media

January 2011 (has links)
Abstract: Determining the provenance of a statement that appears in social media is a significant challenge. Provenance describes the origin, custody, and ownership of something. Most statements in social media are not published with corresponding provenance data. However, the same characteristics that make the social media environment challenging, including the massive amount of available data, the large number of users, and the highly dynamic environment, offer unique and untapped opportunities for solving the provenance problem for social media. Current approaches to tracking provenance data do not scale to online social media, and this gap in provenance methodologies and technologies presents exciting research opportunities. The guiding vision is to use social media information itself to realize a useful amount of provenance data for information in social media. This departs from traditional approaches to data provenance, which rely on a central store of provenance information. The contemporary online social media environment is an enormous and constantly updated "central store" that can be mined for provenance information not readily available to the average social media user. This research introduces an approach, and builds a foundation, aimed at realizing a provenance data capability for social media users that is not accessible today. / Dissertation/Thesis / Ph.D. Computer Science 2011
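The idea of mining social media itself for provenance can be sketched as tracing a statement back through repost relationships to its earliest reachable source. A minimal illustration, with invented post data and field layout (this is not the dissertation's method, only a sketch of the general idea):

```python
# Hypothetical sketch: recover a provenance path for a statement by
# walking repost links back to the earliest known origin.
# Each post maps post_id -> (author, timestamp, reposted_from or None).
posts = {
    "p1": ("alice", 100, None),   # original statement
    "p2": ("bob",   110, "p1"),
    "p3": ("carol", 120, "p2"),
    "p4": ("dave",  130, "p2"),
}

def provenance_path(post_id, posts):
    """Follow repost links upward and return the chain, origin first."""
    path = []
    while post_id is not None:
        author, ts, parent = posts[post_id]
        path.append((post_id, author, ts))
        post_id = parent
    return list(reversed(path))

print(provenance_path("p3", posts))
# -> [('p1', 'alice', 100), ('p2', 'bob', 110), ('p3', 'carol', 120)]
```

Real platforms rarely expose such clean repost links, which is exactly the gap the dissertation targets: reconstructing paths like this from indirect signals.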
2

PROV-Process: proveniência de dados aplicada a processos de desenvolvimento de software / [PROV-Process: data provenance applied to software development processes]

Dalpra, Humberto Luiz de Oliveira 23 August 2016 (has links)
The software development process can be defined as a set of activities, methods, practices, and transformations used to develop and maintain software and its associated products. A simplified description of this process is called a process model, which defines the activities for developing the software, the product specifications of each activity, and the roles of the people involved. Executing these processes generates important data about the process itself, and proper analysis of the history of these data can uncover information that contributes to understanding the whole process and, consequently, to improving it. The word provenance refers to the origin or source of a particular object. In computational terms, provenance is a historical record of the derivation of data that can assist in understanding the current data and/or record. This dissertation proposes an architecture that, through the use of data provenance models combined with an ontological model and data mining techniques, aims to identify improvements in software development processes and present them to the project manager through a tool. 
This tool imports process execution data into a relational database modeled according to a data provenance specification. The data are then loaded into an ontology model and into a data mining file. From there, the data are processed by an inference engine over the ontological model, and also by an algorithm that integrates classification and association rules in the data mining step. The result of this analysis points to potential improvements in the software development process. The proposed architecture builds on related work selected through a systematic review.
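The core idea of a relational store shaped by a provenance model can be sketched with `sqlite3`. The schema and sample data below are illustrative, not the dissertation's actual schema: activities use entities, and entities are generated by activities, so a join can trace which activity (and agent) produced an artifact a later activity consumed.

```python
import sqlite3

# Hypothetical provenance-shaped relational schema (illustrative names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE activity (id TEXT PRIMARY KEY, name TEXT, agent TEXT);
CREATE TABLE entity   (id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE used     (activity TEXT, entity TEXT);
CREATE TABLE was_generated_by (entity TEXT, activity TEXT);
""")

# Sample process-execution data: dev1 implements a feature from a spec,
# dev2 reviews the resulting patch.
conn.executemany("INSERT INTO activity VALUES (?,?,?)", [
    ("a1", "implement-feature", "dev1"),
    ("a2", "code-review",       "dev2"),
])
conn.executemany("INSERT INTO entity VALUES (?,?)", [
    ("e1", "spec.doc"), ("e2", "feature.patch"),
])
conn.executemany("INSERT INTO used VALUES (?,?)", [("a1", "e1"), ("a2", "e2")])
conn.executemany("INSERT INTO was_generated_by VALUES (?,?)", [("e2", "a1")])

# Which activity (and agent) produced the artifact the review consumed?
row = conn.execute("""
    SELECT a.name, a.agent
    FROM used u
    JOIN was_generated_by g ON g.entity = u.entity
    JOIN activity a ON a.id = g.activity
    WHERE u.activity = 'a2'
""").fetchone()
print(row)  # -> ('implement-feature', 'dev1')
```

Queries like this one are the raw material that the ontology inference and mining steps described above would then analyze for improvement opportunities.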
3

Contributions à la modélisation et la conception des systèmes de gestion de provenance à large échelle / [Contributions to the modelling and conception of large-scale provenance management systems]

Sakka, Mohamed Amin 28 September 2012 (has links)
Provenance is key metadata for assessing the trustworthiness of electronic documents: it helps establish the quality and reliability of their content and whether they meet business, technical, and legal requirements. With the maturation of service-oriented technologies and Cloud computing, more and more data is exchanged electronically, and dematerialization has become a key lever for cost reduction and efficiency. Although most applications that exchange and process documents on the Web or in the Cloud are becoming provenance-aware, they produce heterogeneous, decentralized, and non-interoperable provenance data, while most Provenance Management Systems (PMSs) are dedicated either to a specific application (workflow, database, ...) or to a specific data type. These systems were not designed to support provenance across distributed and heterogeneous sources, so end users face different provenance models and different query languages. For these reasons, modeling, collecting, and querying provenance across heterogeneous distributed sources is a challenging task today, as is designing scalable PMSs that provide these features. In the first part of this thesis, we focus on provenance modeling. We present a new provenance modeling approach based on semantic Web technologies that allows provenance data to be imported from heterogeneous sources and semantically enriched into a high-level representation. It provides syntactic interoperability between those sources through a minimal domain model (MDM), which guarantees a minimal common level of interpretation for any provenance source and can be specialized into richer domain models, enabling high-level, business-specific views of the same provenance data while preserving semantic interoperability. Our modeling approach also supports semantic correlation between different provenance sources and allows the use of a high-level semantic query language. In the second part of the thesis, we focus on the design, implementation, and scalability of provenance management systems. Based on our modeling approach, we first propose a centralized logical architecture for PMSs, independent of implementation and deployment technology choices, which serves as a reference architecture. 
Then, to preserve the autonomy and distribution of provenance sources, we present a mediator-based architecture in which the mediator has a global view of all sources and possesses query distribution and processing capabilities. The third part of the thesis validates these proposals. The modeling approach was validated in an industrial document-archival context at Novapost, a company offering SaaS services for archiving documents with probative value. We also performed a non-functional validation of the architecture's scalability, based on two implementations of our PMS over different storage technologies: the first uses an RDF triple store (Sesame), and the second a NoSQL DBMS coupled with the map/reduce parallel model (CouchDB). Load tests on Novapost provenance data showed the limits of Sesame for both storing and querying large amounts of provenance data with SPARQL, whereas the CouchDB version, combined with a map/reduce-based query language, scaled linearly as servers were added.
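The mediator pattern described above can be sketched in a few lines: a component with a global view fans a provenance query out to autonomous sources and merges the per-source answers. The sources, record format, and class names here are invented for illustration; a real PMS mediator would also rewrite queries per source and reconcile models.

```python
# Hypothetical sketch of a mediator over autonomous provenance sources.
# Each source holds provenance records for documents in its own silo.
SOURCE_A = [{"doc": "d1", "event": "signed",   "by": "alice"}]
SOURCE_B = [{"doc": "d1", "event": "archived", "by": "saas-store"},
            {"doc": "d2", "event": "archived", "by": "saas-store"}]

class Mediator:
    """Global view over all sources; distributes queries and merges results."""
    def __init__(self, sources):
        self.sources = sources

    def query(self, doc):
        merged = []
        for source in self.sources:       # distribute the query...
            merged.extend(r for r in source if r["doc"] == doc)
        return merged                     # ...and merge the answers

pms = Mediator([SOURCE_A, SOURCE_B])
print(pms.query("d1"))  # both the signature and the archival event
```

The payoff of this design, as the thesis argues, is that each source keeps its autonomy while users see one unified provenance record.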
4

Um framework para análise e visualização de dados de proveniência / [A framework for the analysis and visualization of provenance data]

Oliveira, Weiner Esmério Batista de 01 September 2017 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Provenance is recognized today as a central challenge for establishing reliability and providing security in computational systems. In scientific workflows, provenance is considered essential to support the reproducibility of experiments, the interpretation of results, and the diagnosis of problems. These benefits can also be applied in other contexts, such as software processes. However, understanding and using provenance well requires efficient and friendly mechanisms. Research on software visualization, ontologies, and complex networks can help here by generating new insights into the data and strategic information for decision making. This dissertation presents a framework named Visionary that assists in understanding and using provenance data through software visualization techniques, ontologies, and complex network analysis. The framework captures provenance data and generates new information using ontologies and analysis of the provenance graph; the visualization then presents and highlights the inferences and the results of that analysis. Visionary is a context-free framework that can be adapted to any system that uses the PROV provenance model. To evaluate the proposal, an experimental study was carried out; it found evidence that the framework assists in understanding and analyzing provenance data, supporting decision making.
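The graph-analysis step described above can be illustrated with a toy example: treat PROV-style records as a graph and compute a simple complex-network measure (node degree) to find hubs worth highlighting in a visualization. The records and labels are invented; this is a sketch of the general technique, not Visionary's implementation.

```python
from collections import Counter

# Hypothetical PROV-style provenance records as (source, relation, target).
prov_edges = [
    ("report",  "wasGeneratedBy",    "analyze"),
    ("analyze", "used",              "dataset"),
    ("clean",   "used",              "rawdata"),
    ("dataset", "wasGeneratedBy",    "clean"),
    ("analyze", "wasAssociatedWith", "researcher"),
]

# Degree of each node in the provenance graph (relations ignored here).
degree = Counter()
for src, _rel, dst in prov_edges:
    degree[src] += 1
    degree[dst] += 1

# The highest-degree node is a candidate hub to emphasize visually.
hub, hub_degree = degree.most_common(1)[0]
print(hub, hub_degree)  # -> analyze 3
```

Richer centrality measures (betweenness, PageRank) follow the same pattern: derive a graph from the provenance records, score nodes, and feed the scores to the visualization layer.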
5

Apoiando a composição de serviços em um ecossistema de software científico / [Supporting service composition in a scientific software ecosystem]

Marques, Phillipe Israel 23 August 2017 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The area of e-Science involves performing complex scientific experiments, usually supported by workflows. These experiments generally use distributed data and resources and can be supported by a scientific software ecosystem platform. In this context, different web services must be composable, reusable, and interoperable on the platform in order to handle the complexity of the experiments. However, composing services on an ecosystem platform is a complex activity that requires computational support, considering above all the functional and non-functional requirements of those services. The goal of this work is therefore to present a mechanism that supports service composition in the context of a scientific software ecosystem. To this end, the mechanism is associated with the service creation process of the scientific software ecosystem platform. It provides visualization elements to represent functional-dependency and interoperability relationships between services, and it uses scientific social network analysis to identify potential collaborators; the identified researchers can then interact through the visualizations, in a shared workspace, to evaluate the compositions. The platform, named E-SECO, supports the different phases of the scientific experiment life cycle. 
Through this mechanism, scientists interact and analyze the relationships between services in the resulting compositions, considering above all functional-dependency metrics and interoperability between services hosted on different instances of the platform. To evaluate the mechanism's support for service composition, case studies were carried out on the E-SECO platform.
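Functional-dependency relationships between services, as discussed above, naturally form a directed graph, and a valid composition must respect it. A minimal sketch using the standard library's `graphlib` (Python 3.9+); the service names and dependencies are invented for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical functional dependencies: service -> services it requires.
deps = {
    "visualize": {"analyze"},
    "analyze":   {"ingest", "clean"},
    "clean":     {"ingest"},
    "ingest":    set(),
}

# A composition order that respects every dependency edge.
order = list(TopologicalSorter(deps).static_order())
print(order)  # e.g. ['ingest', 'clean', 'analyze', 'visualize']

def is_valid_composition(order, deps):
    """Check that each service appears only after all its dependencies."""
    seen = set()
    for svc in order:
        if not deps[svc] <= seen:
            return False
        seen.add(svc)
    return True
```

A cycle in `deps` would make `static_order()` raise `CycleError`, which is itself a useful signal: the proposed composition cannot be scheduled at all.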
