531 |
Supporting data quality assessment in eScience: a provenance based approach / Apoio à avaliação da qualidade de dados em eScience: uma abordagem baseada em proveniência
Gonzales Malaverri, Joana Esther, 1981- 05 June 2013 (has links)
Orientador: Claudia Maria Bauzer Medeiros / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-23T01:02:06Z (GMT). No. of bitstreams: 1
GonzalesMalaverri_JoanaEsther_D.pdf: 4107657 bytes, checksum: f285cdfecf84c5d5cc51db0249035297 (MD5)
Previous issue date: 2013 / Resumo: Qualidade dos dados é um problema recorrente em todos os domínios da ciência. Os experimentos analisam e manipulam uma grande quantidade de conjuntos de dados gerando novos dados para serem (re)utilizados por outros experimentos. A base para a obtenção de bons resultados científicos está fortemente associada ao grau de qualidade de tais dados. No entanto, os dados utilizados nos experimentos são manipulados por uma diversa variedade de usuários, os quais visam interesses diferentes de pesquisa, utilizando seus próprios vocabulários, metodologias de trabalho, modelos, e necessidades de amostragem. Considerando este cenário, um desafio em ciência da computação é oferecer soluções que auxiliem aos cientistas na avaliação da qualidade dos seus dados. Diferentes esforços têm sido propostos abordando a avaliação de qualidade. Alguns trabalhos salientam que os atributos de proveniência dos dados poderiam ser utilizados para avaliar qualidade. No entanto, a maioria destas iniciativas aborda a avaliação de um atributo de qualidade específico, frequentemente focando em valores atômicos de dados. Isto reduz a aplicabilidade destas abordagens. Apesar destes esforços, há uma necessidade de novas soluções que os cientistas possam adotar para avaliar o quão bons seus dados são. Nesta pesquisa de doutorado, apresentamos uma abordagem para lidar com este problema, a qual explora a noção de proveniência de dados. Ao contrário de outras abordagens, nossa proposta combina os atributos de qualidade especificados dentro de um contexto pelos especialistas e os metadados que descrevem a proveniência de um conjunto de dados.
As principais contribuições deste trabalho são: (i) a especificação de um framework que aproveita a proveniência dos dados para obter informação de qualidade, (ii) uma metodologia associada a este framework que descreve os procedimentos para apoiar a avaliação da qualidade, (iii) a proposta de dois modelos diferentes de proveniência que possibilitem a captura das informações de proveniência, para cenários fixos e extensíveis, e (iv) a validação dos itens (i) a (iii), com suas discussões via estudos de caso em agricultura e biodiversidade / Abstract: Data quality is a recurrent concern in all scientific domains. Experiments analyze and manipulate several kinds of datasets, and generate data to be (re)used by other experiments. The basis for obtaining good scientific results is highly associated with the degree of quality of such datasets. However, data involved with the experiments are manipulated by a wide range of users, with distinct research interests, using their own vocabularies, work methodologies, models, and sampling needs. Given this scenario, a challenge in computer science is to come up with solutions that help scientists to assess the quality of their data. Different efforts have been proposed addressing the estimation of quality. Some of these efforts outline that data provenance attributes should be used to evaluate quality. However, most of these initiatives address the evaluation of a specific quality attribute, frequently focusing on atomic data values, thereby reducing the applicability of these approaches. Taking this scenario into account, there is a need for new solutions that scientists can adopt to assess how good their data are. In this PhD research, we present an approach to attack this problem based on the notion of data provenance. Unlike other similar approaches, our proposal combines quality attributes specified within a context by specialists and metadata on the provenance of a data set. 
The main contributions of this work are: (i) the specification of a framework that takes advantage of data provenance to derive quality information; (ii) a methodology associated with this framework that outlines the procedures to support the assessment of quality; (iii) the proposal of two different provenance models to capture provenance information, for fixed and extensible scenarios; and (iv) validation of items (i) through (iii), with their discussion via case studies in agriculture and biodiversity / Doutorado / Ciência da Computação / Doutora em Ciência da Computação
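The combination the abstract describes (context-specific quality attributes weighted by specialists and evaluated against provenance metadata) can be sketched as follows. This is a minimal illustration, not the thesis's actual framework: the attribute names, weights, scoring rules, and provenance fields are all invented for the example.

```python
# Hypothetical sketch: derive a quality score for a dataset by scoring its
# provenance metadata against context-specific quality attributes whose
# weights would be set by domain specialists. All names and rules here are
# illustrative assumptions.

def quality_score(provenance, weights):
    """Weighted average of per-attribute scores derived from provenance."""
    total_weight = sum(weights.values())
    score = 0.0
    for attr, weight in weights.items():
        score += weight * PROVENANCE_SCORERS[attr](provenance)
    return score / total_weight

# Each scorer inspects provenance metadata and returns a value in [0, 1].
PROVENANCE_SCORERS = {
    # Timeliness: newer data scores higher (illustrative 10-year window).
    "timeliness": lambda p: max(0.0, 1.0 - (2024 - p["year"]) / 10),
    # Reliability: fraction of processing steps run by a trusted agent.
    "reliability": lambda p: sum(s["trusted"] for s in p["steps"]) / len(p["steps"]),
}

if __name__ == "__main__":
    prov = {
        "year": 2022,
        "steps": [{"trusted": True}, {"trusted": True}, {"trusted": False}],
    }
    # Context weights would come from the specialists' quality specification.
    print(round(quality_score(prov, {"timeliness": 0.4, "reliability": 0.6}), 3))
```

In this sketch the provenance model is a flat dictionary; the thesis proposes richer models for fixed and extensible scenarios.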
|
532 |
Arcabouço para anotação de componentes de imagem / A framework for semantic annotation of image components
Muraro, Émerson, 1986- 21 August 2018 (has links)
Orientador: Ricardo da Silva Torres / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-21T20:19:19Z (GMT). No. of bitstreams: 1
Muraro_Emerson_M.pdf: 4254243 bytes, checksum: dd239dd897e8a66aa289cbf5b61988d8 (MD5)
Previous issue date: 2012 / Resumo: Com a disseminação de dispositivos mais baratos para sua aquisição, armazenamento e disponibilização, imagens vêm sendo utilizadas em várias aplicações (tais como comerciais, científicas, e pessoais). O uso de imagens nessas aplicações tem motivado a criação de objetos digitais heterogêneos. Imagens não são usadas isoladamente e podem compor outros objetos digitais. Esses novos objetos digitais são conhecidos como Objetos Complexos. Esta dissertação apresenta um arcabouço para anotação semântica automática de componentes de imagem, visando o seu uso na construção de objetos complexos. Esta proposta utiliza diversas formas de busca para encontrar termos para anotação: ontologias, busca por palavras-chaves e por conteúdo visual. Os termos encontrados são ponderados por pesos que definem sua importância, e são combinados por técnicas de fusão de dados em uma única lista de sugestões. As principais contribuições deste trabalho são: especificação do processo de anotação semântica automática de componentes de imagem, que considera o conteúdo visual da imagem, palavras-chaves definidas, ontologias e possíveis combinações envolvendo estas alternativas e especificação e implementação parcial de um arcabouço para anotação de objetos complexos de imagens encapsulados em componentes / Abstract: Due to the dissemination of low-cost devices for acquisition, storage, and sharing, images have been used in several applications (e.g., commercial, scientific, and personal). The use of images in those applications has motivated the creation of heterogeneous digital objects. Images are no longer used in isolation and are used to compose other digital objects, named Complex Objects. In this work, we present a new framework for automatic semantic annotation of image components, aiming at supporting their use in the construction of complex objects.
Our proposal uses several approaches for defining appropriate terms to be used in the annotation process: ontologies, textual terms, and image content descriptions. Found terms are weighted according to their importance, and are combined using data fusion techniques. The main contributions of this work are: the specification of an automatic semantic annotation process for image components, that takes into account image visual properties, defined textual terms, ontologies, and their combination, and the specification and partial implementation of an infrastructure for annotating image complex objects encapsulated in components / Mestrado / Ciência da Computação / Mestre em Ciência da Computação
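The fusion step the abstract describes (terms from several sources, weighted by importance and merged into a single suggestion list) can be sketched with a CombSUM-style merge. The source weights and scores below are illustrative assumptions, not values from the dissertation.

```python
from collections import defaultdict

# Illustrative sketch of the data-fusion step: each source (ontology lookup,
# keyword search, visual-content search) returns candidate annotation terms
# with scores; a weighted CombSUM merge produces one ranked suggestion list.

def fuse_suggestions(sources):
    """sources: list of (weight, {term: score}); returns terms ranked by fused score."""
    fused = defaultdict(float)
    for weight, terms in sources:
        for term, score in terms.items():
            fused[term] += weight * score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    ontology = (0.5, {"bird": 0.9, "animal": 0.8})
    keywords = (0.3, {"bird": 0.7, "sky": 0.6})
    visual = (0.2, {"sky": 0.9, "bird": 0.4})
    ranking = fuse_suggestions([ontology, keywords, visual])
    print(ranking[0][0])  # top suggested annotation term
```

A term suggested by several sources accumulates evidence and rises in the combined list, which is the intuition behind using fusion rather than a single search strategy.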
|
533 |
CCS - Collect, Convert and Send: Designing and implementing a system for data portability and media migration to mobile devices
Gustafsson, Jonas, Alserin, Fredrik January 2006 (has links)
In this thesis we identify the desired features and functionalities for implementing a system capable of acting as an information bridge, delivering content available on the “wired” Internet to mobile devices. We also explore how to design and build such a system based on the specifications of parts of the MUSIS project. The development of the MUSIS system is used as the basis for the work described in this thesis, and the experiences from those efforts inform the design of a system with a stronger focus on data portability and media migration. During the development of the MUSIS platform, problems related to system upgrading, i.e. adding new ad hoc functionalities, were discovered. Because a user-centred design approach was taken, addressing this was essential in the project. To solve some of these issues, we propose a new component-based system with a high level of scalability and re-usability. We name this system Collect, Convert and Send (CCS). The system should be seen as a base that can serve as a core system for different projects where interoperability of content between different platforms, devices or systems is important. The implementation of the system is based on the use cases and on theoretical aspects and ideas related to component software, interoperability, media migration and metadata in a Web service context. The results of our efforts give some indication that the use of component software provides a foundation for a service-oriented architecture.
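The component-based structure behind CCS can be sketched as three interchangeable stages behind small interfaces, so that each stage can be swapped without touching the others. The class names and data shapes below are assumptions for illustration, not the thesis's actual components.

```python
# Minimal sketch of the Collect-Convert-Send idea: three replaceable
# components wired into a pipeline. Names and data shapes are illustrative.

class Collector:
    def collect(self):
        # A real collector would fetch web content; here, fixed sample data.
        return {"title": "News item", "body": "Some web content to migrate"}

class Converter:
    def convert(self, item):
        # Adapt content for a mobile device (e.g., truncate for small screens).
        return {"title": item["title"], "body": item["body"][:20]}

class Sender:
    def send(self, item):
        # A real sender would push to a device; here we just return a receipt.
        return f"sent: {item['title']}"

def run_pipeline(collector, converter, sender):
    return sender.send(converter.convert(collector.collect()))

if __name__ == "__main__":
    print(run_pipeline(Collector(), Converter(), Sender()))
```

Because the pipeline only depends on the three small interfaces, a project could substitute, say, an RSS collector or an MMS sender without changing the other components, which is the re-usability argument the thesis makes.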
|
534 |
Wickrpedia: Integrering av sociala tjänster
Ekström, Johan January 2006 (has links)
The web has evolved much through the years. From being a place where author and reader were clearly distinguished, it now invites everyone to take part in the development of both content and technology. Social services are central in what is called Web 2.0. Wikis, blogs and folksonomies are all examples of how the users and their communities are key to the development of services. Collaborative writing, tags and APIs are central. Social services are given an extra dimension through integration. The purpose of this study was to investigate whether it was possible to integrate an encyclopedia with a photo-sharing service. The issue was whether it was possible to find images relevant to the articles they were connected to. The method for examining the issue was to create a service whose functionality was investigated through user tests. Wickrpedia was created, which is an integration of Wikipedia and Flickr. Wikipedia is an encyclopedia in the shape of a wiki, while Flickr is used to store, organize and share photos. The result shows that the images added something to the encyclopedia; it became more entertaining and pleasant and the users’ knowledge was increased. The relevance of the images was good. The service can and should be improved. The conclusion is still that the service worked well and was seen as an improvement by the users. / Webben har förändrats mycket de senaste åren. Från att tidigare haft en tydlig uppdelning mellan läsare och författare inbjuds nu alla att delta i utvecklingen av både innehåll och teknik. Sociala tjänster är det centrala i det som benämns Web 2.0. Wikis, bloggar och folksonomies är alla exempel på hur användarna och deras gemenskap är nyckeln till utveckling av tjänster. Kollaborativt skrivande, taggar och API:er är centrala. Sociala tjänster får en ytterligare dimension genom integrering. Denna studies syfte var att utreda hur det gick att integrera ett uppslagsverk med en fotodelningstjänst.
Frågan är om det gick att göra på ett sådant sätt att bilderna hade relevans för de artiklar de kopplades till. Metoden för att utreda frågan var att skapa en tjänst vars funktion undersöktes med hjälp av användartester. Wickrpedia skapades, vilket är en integrering av Wikipedia och Flickr. Wikipedia är en encyklopedi i form av en wiki, medan Flickr används för att förvara, organisera och dela med sig av bilder. Resultatet visar att bilderna tillförde något till uppslagsverket; det blev roligare och trevligare och användarna fick en ökad kunskap. Relevansen hos bilderna var god. Tjänsten har brister, och den går att vidareutveckla. Slutsatsen var ändå att tjänsten fungerade och var en förbättring för användarna.
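The core relevance question the study tests (do the retrieved photos match the article they are attached to?) can be sketched as tag matching. This stand-in scores candidate photos by tag overlap with an article's keywords; the real service queries the Wikipedia and Flickr APIs, and the keywords and tags below are invented.

```python
# Hedged sketch of article-to-photo matching: rank candidate photos by the
# Jaccard overlap between the article's keywords and each photo's tags.
# Sample data is illustrative; the actual service uses live Flickr results.

def relevance(article_keywords, photo_tags):
    """Jaccard overlap between article keywords and a photo's tags."""
    a, b = set(article_keywords), set(photo_tags)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_photo(article_keywords, photos):
    return max(photos, key=lambda p: relevance(article_keywords, p["tags"]))

if __name__ == "__main__":
    keywords = ["stockholm", "sweden", "city"]
    photos = [
        {"id": 1, "tags": ["stockholm", "winter"]},
        {"id": 2, "tags": ["stockholm", "sweden", "city", "harbour"]},
    ]
    print(best_photo(keywords, photos)["id"])
```

The user tests in the study essentially evaluate how often this kind of metadata-driven match produces an image a reader judges relevant.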
|
535 |
Contextual image browsing in connection with music listening - matching music with specific images
Saha, Jonas January 2007 (has links)
This thesis discusses the possibility of combining music and images through the use of metadata. Test subjects from different usability tests say they are interested in seeing images of the band or artist they are listening to. Lyrics matching the current song are also something they would like to see. As a result, a cellphone application was created with Flash Lite; it shows that it is possible to listen to music and automatically retrieve images from Flickr and lyrics from Lyrictracker that match the music, and display them on a cellphone.
|
536 |
Découverte des relations dans les réseaux sociaux / Relationship discovery in social networks
Raad, Elie 22 December 2011 (has links)
Les réseaux sociaux occupent une place de plus en plus importante dans notre vie quotidienne et représentent une part considérable des activités sur le web. Ce succès s’explique par la diversité des services/fonctionnalités de chaque site (partage des données souvent multimédias, tagging, blogging, suggestion de contacts, etc.) incitant les utilisateurs à s’inscrire sur différents sites et ainsi à créer plusieurs réseaux sociaux pour diverses raisons (professionnelle, privée, etc.). Cependant, les outils et les sites existants proposent des fonctionnalités limitées pour identifier et organiser les types de relations ne permettant pas de, entre autres, garantir la confidentialité des utilisateurs et fournir un partage plus fin des données. Particulièrement, aucun site actuel ne propose une solution permettant d’identifier automatiquement les types de relations en tenant compte de toutes les données personnelles et/ou celles publiées. Dans cette étude, nous proposons une nouvelle approche permettant d’identifier les types de relations à travers un ou plusieurs réseaux sociaux. Notre approche est basée sur un framework orienté utilisateur qui utilise plusieurs attributs du profil utilisateur (nom, age, adresse, photos, etc.). Pour cela, nous utilisons des règles qui s’appliquent à deux niveaux de granularité : 1) au sein d’un même réseau social pour déterminer les relations sociales (collègues, parents, amis, etc.) en exploitant principalement les caractéristiques des photos et leurs métadonnées, et, 2) à travers différents réseaux sociaux pour déterminer les utilisateurs co-référents (même personne sur plusieurs réseaux sociaux) en étant capable de considérer tous les attributs du profil auxquels des poids sont associés selon le profil de l’utilisateur et le contenu du réseau social. À chaque niveau de granularité, nous appliquons des règles de base et des règles dérivées pour identifier différents types de relations.
Nous mettons en avant deux méthodologies distinctes pour générer les règles de base. Pour les relations sociales, les règles de base sont créées à partir d’un jeu de données de photos créées en utilisant le crowdsourcing. Pour les relations de co-référents, en utilisant tous les attributs, les règles de base sont générées à partir des paires de profils ayant des identifiants de mêmes valeurs. Quant aux règles dérivées, nous utilisons une technique de fouille de données qui prend en compte le contexte de chaque utilisateur en identifiant les règles de base fréquemment utilisées. Nous présentons notre prototype, intitulé RelTypeFinder, que nous avons implémenté afin de valider notre approche. Ce prototype permet de découvrir différents types de relations, générer des jeux de données synthétiques, collecter des données du web, et de générer les règles d’extraction. Nous décrivons les expérimentations que nous avons menées sur des jeux de données réelles et synthétiques. Les résultats montrent l’efficacité de notre approche à découvrir les types de relations. / In recent years, social network sites exploded in popularity and became an important part of the online activities on the web. This success is related to the various services/functionalities provided by each site (ranging from media sharing, tagging, blogging, and mainly to online social networking) pushing users to subscribe to several sites and consequently to create several social networks for different purposes and contexts (professional, private, etc.). Nevertheless, current tools and sites provide limited functionalities to organize and identify relationship types within and across social networks, which is required in several scenarios such as enforcing users’ privacy and enhancing targeted social content sharing. Particularly, none of the existing social network sites provides a way to automatically identify relationship types while considering users’ personal information and published data.
In this work, we propose a new approach to identify relationship types among users within either a single or several social networks. We provide a user-oriented framework able to consider several features and shared data available in user profiles (e.g., name, age, interests, photos, etc.). This framework is built on a rule-based approach that operates at two levels of granularity: 1) within a single social network to discover social relationships (i.e., colleagues, relatives, friends, etc.) by exploiting mainly photos’ features and their embedded metadata, and 2) across different social networks to discover co-referent relationships (same real-world persons) by considering all profiles’ attributes weighted by the user profile and social network contents. At each level of granularity, we generate a set of basic and derived rules that are both used to discover relationship types. To generate basic rules, we propose two distinct methodologies. On one hand, social relationship basic rules are generated from a photo dataset constructed using crowdsourcing. On the other hand, using all weighted attributes, co-referent relationship basic rules are generated from the available pairs of profiles having the same unique identifier(s) attribute(s) values. To generate the derived rules, we use a mining technique that takes into account the context of users, namely by identifying frequently used valid basic rules for each user. We present our prototype, called RelTypeFinder, implemented to validate our approach. It makes it possible to discover different relationship types, generate synthetic datasets, collect web data and photos, and generate mining rules. We also describe the sets of experiments conducted on real-world and synthetic datasets. The evaluation results demonstrate the efficiency of the proposed relationship discovery approach.
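The two levels of granularity in the abstract can be sketched with toy rules: one photo-metadata rule for social relationships within a network, and one weighted-attribute comparison for co-reference across networks. The rules, weights, and threshold below are invented for illustration and are not the thesis's actual rule set.

```python
# Illustrative sketch of the two rule levels: (1) a basic social-relationship
# rule driven by photo metadata, and (2) a co-reference rule comparing
# weighted profile attributes across networks. All parameters are assumptions.

def photo_rule(photo):
    """Basic rule: photos tagged 'work' suggest colleagues, else friends."""
    return "colleague" if "work" in photo["tags"] else "friend"

def coreferent(profile_a, profile_b, weights, threshold=0.7):
    """Do two profiles from different networks denote the same person?

    Sums the weights of attributes whose values agree, normalized by the
    total weight, and compares against a decision threshold.
    """
    score = sum(w for attr, w in weights.items()
                if profile_a.get(attr) == profile_b.get(attr))
    return score / sum(weights.values()) >= threshold

if __name__ == "__main__":
    print(photo_rule({"tags": ["work", "office"]}))
    a = {"name": "Elie", "age": 30, "city": "Dijon"}
    b = {"name": "Elie", "age": 30, "city": "Paris"}
    print(coreferent(a, b, {"name": 2, "age": 1, "city": 1}))
```

In the thesis, the attribute weights are not fixed by hand as here but depend on the user profile and the content of each social network, and derived rules are mined from frequently used basic rules.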
|
537 |
Automated adaptation of Electronic Health Records for secondary use in oncology / Adaptation automatique des données de prises en charge hospitalières pour une utilisation secondaire en cancérologie
Jouhet, Vianney 16 December 2016 (has links)
Avec la montée en charge de l’informatisation des systèmes d’information hospitaliers, une quantité croissante de données est produite tout au long de la prise en charge des patients. L’utilisation secondaire de ces données constitue un enjeu essentiel pour la recherche ou l’évaluation en santé. Dans le cadre de cette thèse, nous discutons les verrous liés à la représentation et à la sémantique des données, qui limitent leur utilisation secondaire en cancérologie. Nous proposons des méthodes basées sur des ontologies pour l’intégration sémantique des données de diagnostics. En effet, ces données sont représentées par des terminologies hétérogènes. Nous étendons les modèles obtenus pour la représentation de la maladie tumorale, et les liens qui existent avec les diagnostics. Enfin, nous proposons une architecture combinant entrepôts de données, registres de métadonnées et web sémantique. L’architecture proposée permet l’intégration syntaxique et sémantique d’un grand nombre d’observations. Par ailleurs, l’intégration de données et de connaissances (sous la forme d’ontologies) a été utilisée pour construire un algorithme d’identification de la maladie tumorale en fonction des diagnostics présents dans les données de prise en charge. Cet algorithme basé sur les classes de l’ontologie est indépendant des données effectivement enregistrées. Ainsi, il fait abstraction du caractère hétérogène des données diagnostiques initialement disponibles. L’approche basée sur une ontologie pour l’identification de la maladie tumorale, permet une adaptation rapide des règles d’agrégation en fonction des besoins spécifiques d’identification. Ainsi, plusieurs versions du modèle d’identification peuvent être utilisées avec des granularités différentes. / With the increasing adoption of Electronic Health Records (EHR), the amount of data produced at the patient bedside is rapidly increasing. 
Secondary use is thereby an important field to investigate in order to facilitate research and evaluation. In this work we discuss issues related to data representation and semantics within EHRs that need to be addressed in order to facilitate the secondary use of structured data in oncology. We propose and evaluate ontology-based methods for integrating heterogeneous diagnosis terminologies in oncology. We then extend the obtained model to enable the representation of the tumoral disease and its links with diagnoses as recorded in EHRs. We then propose and implement a complete architecture combining a clinical data warehouse, a metadata registry, and Semantic Web technologies and standards. This architecture enables the syntactic and semantic integration of a broad range of hospital information system observations. Our approach links data with external knowledge (an ontology) in order to provide a knowledge resource for an algorithm that identifies the tumoral disease based on the diagnoses recorded within EHRs. As it is based on the ontology classes, the identification algorithm uses an integrated view of diagnoses, avoiding semantic heterogeneity. The proposed architecture, leading to an algorithm built on top of an ontology, offers a flexible solution: adapting the ontology, for instance by modifying its granularity, provides a way of adapting the aggregation to specific identification needs.
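The key idea of writing the identification rule against ontology classes rather than recorded codes can be sketched as follows. The code-to-class mappings and class names below are invented for illustration; the real system uses full terminologies and a formal ontology.

```python
# Minimal sketch of ontology-driven disease identification: heterogeneous
# diagnosis codes are mapped to ontology classes, and the identification rule
# tests class membership only, so it never sees the source terminology.
# Mappings and class names are illustrative assumptions.

CODE_TO_CLASS = {
    ("ICD-10", "C50.9"): "MalignantBreastTumor",
    ("ICD-O", "8500/3"): "MalignantBreastTumor",
    ("ICD-10", "I10"): "Hypertension",
}

# Toy class hierarchy: child class -> parent class.
SUBCLASS_OF = {"MalignantBreastTumor": "MalignantTumor"}

def is_a(cls, ancestor):
    """Walk up the class hierarchy looking for the ancestor class."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

def has_tumoral_disease(diagnoses):
    """Rule written against ontology classes, independent of recorded codes."""
    return any(is_a(CODE_TO_CLASS.get(d), "MalignantTumor") for d in diagnoses)

if __name__ == "__main__":
    ehr = [("ICD-10", "I10"), ("ICD-O", "8500/3")]
    print(has_tumoral_disease(ehr))
```

Changing the granularity of identification then amounts to changing which ontology class the rule tests, without touching the recorded EHR data, which is the flexibility the abstract claims.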
|
538 |
Contributions à une nouvelle approche de Recherche d'Information basée sur la métaphore de l'impédance et illustrée sur le domaine de la santé / Contributions to a new information retrieving approach based on the impedance metaphor and illustrated on the health domain
Guemeida, Abdelbasset 16 October 2009 (has links)
Les récentes évolutions dans les technologies de l’information et de la communication, avec le développement de l’Internet, conduisent à l’explosion des volumes des sources de données. Des nouveaux besoins en recherche d’information émergent pour traiter l’information en relation aux contextes d’utilisation, augmenter la pertinence des réponses et l’usabilité des résultats produits, ainsi que les possibles corrélations entre sources de données, en rendant transparentes leurs hétérogénéités. Les travaux de recherche présentés dans ce mémoire apportent des contributions à la conception d’une Nouvelle Approche de Recherche d’Information (NARI) pour la prise de décision. NARI vise à opérer sur des grandes masses de données cataloguées, hétérogènes, qui peuvent être géo référencées. Elle est basée sur des exigences préliminaires de qualité (standardisation, réglementations), exprimées par les utilisateurs, représentées et gérées à l’aide des métadonnées. Ces exigences conduisent à pallier le manque de données ou leur insuffisante qualité, pour produire une information de qualité suffisante par rapport aux besoins décisionnels. En utilisant la perspective des utilisateurs, on identifie et/ou on prépare des sources de données, avant de procéder à l’étape d’intégration des contenus. L’originalité de NARI réside dans la métaphore de l’écart d’impédance (phénomène classique lorsque on cherche à connecter deux systèmes physiques hétérogènes). Cette métaphore, dont R. Jeansoulin est à l’origine, ainsi que l’attention portée au cadre réglementaire, en guident la conception. NARI est structurée par la dimension géographique (prise en compte de divers niveaux de territoires, corrélations entre plusieurs thématiques) : des techniques d’analyse spatiale supportent des tâches de la recherche d’information, réalisées souvent implicitement par les décideurs. 
Elle s’appuie sur des techniques d’intégration de données (médiation, entrepôts de données), des langages de représentation des connaissances et des technologies et outils relevant du Web sémantique, pour supporter la montée en charge, la généralisation et la robustesse théorique de l’approche. NARI est illustrée sur des exemples relevant de la santé / The recent developments in information and communication technologies, along with the growth of the Internet, have led to an explosion of data source volumes. This has created new needs in information retrieval: to treat information in relation to its usage context, to increase the relevance of answers and the usability of results, and to exploit the potential correlations between data sources, which can be done by making their heterogeneities and distribution transparent. Our contributions consist in designing NARI (a New Approach to Information Retrieval) for decision-making. NARI is designed to operate on large amounts of catalogued, heterogeneous data that can be geo-referenced. It is based on preliminary quality requirements (standardization, regulations) expressed by users, which are represented and managed using metadata. These requirements make it possible to compensate for missing data or insufficient data quality, in order to produce information of sufficient quality with respect to decision-making needs. Using the users’ perspective, we identify and/or prepare the data sources before proceeding to the content integration step. NARI’s originality lies in the metaphor of the impedance mismatch (a classical phenomenon when connecting two heterogeneous physical systems), due to R. Jeansoulin. This metaphor, as well as the attention paid to the regulatory framework (standardization), guides the design of NARI. The geographical dimension structures NARI, taking into account various territorial levels and correlations between several themes. Thus, it takes advantage of spatial analysis techniques to support information retrieval tasks often performed implicitly by decision makers.
NARI is based on data integration techniques (mediation, data warehouses), knowledge representation languages, and a set of Semantic Web technologies and tools, adapted to support the scalability, generalization, and theoretical robustness of the approach. NARI is illustrated on examples relevant to the health domain.
|
539 |
Méthode d'indexation qualitative : application à un plan de veille relatif aux thérapies émergentes contre la maladie d'Alzheimer / Qualitative indexing process : applied to build a search strategy plan about stand out topics on Alzheimer's disease therapyVaugeois-Sellier, Nathalie 03 December 2009 (has links)
Dans le contexte de recherche et développement d’un nouveau traitement thérapeutique, le chercheur veut surveiller ses thématiques de recherche pour actualiser ses connaissances. Il a besoin d’accéder à l’information qui lui est utile directement sur son ordinateur. La prise en compte de la complexité d’un système biologique, révèle la très grande difficulté à traduire de façon linguistique toute une réflexion hypothétique. Nous proposons dans ce travail, un procédé détaché du système de langue. Pour ce faire, nous présentons une méthodologie basée sur une indexation qualitative en utilisant un filtrage personnalisé. L’index n’est plus d’ordre linguistique mais de type « liaisons de connaissances ». Cette méthode d’indexation qualitative appliquée à « l’information retrieval » contraste avec l’indexation documentaire et l’utilisation d’un thésaurus tel que le MeSH lorsqu’il s’agit d’exprimer une requête complexe. Le choix du sujet d’expérimentation sur la base de données Medline via PubMed constitue une démonstration de la complexité d’expression d’une problématique de recherche. Le thème principal est un traitement possible de la maladie d’Alzheimer. Cette expérience permet de mettre en avant des documents contenus dans Medline qui ne répondent pas ou peu à une indexation en mots-clés. Les résultats obtenus suggèrent qu’une « indexation en connaissances » améliore significativement la recherche d’information dans Medline par rapport à une simple recherche sur Google pratiquée habituellement par le chercheur. Assimilable à une veille scientifique, cette méthodologie ouvre une nouvelle collaboration entre professionnels de l’information et chercheurs / In the context of research and development for a new therapeutic treatment, the researcher seeks to monitor relevant research topics in order to update field-specific knowledge. Direct computer access to relevant information is required. 
The complexity of biological systems makes it very difficult to translate a hypothetical line of reasoning into linguistic form. In this study, we propose a process detached from the system of language. To do this, we present a methodology based on qualitative indexing using personalized filtering. The index is no longer of a linguistic nature but a sort of “connection of knowledge”. This method of qualitative indexing applied to information retrieval contrasts with document indexing and the use of thesauruses such as MeSH when it comes to formulating a complex request. The choice of the experimentation subject, on the Medline database via PubMed, demonstrates the complexity of formulating a research problem. The main theme is a possible treatment of Alzheimer's disease. This experiment highlights documents contained in Medline that are poorly covered, or not covered at all, by keyword indexing. The results obtained suggest that knowledge-based indexing significantly improves information retrieval in Medline compared to the plain Google searches habitually carried out by the researcher. Comparable to scientific monitoring, this methodology opens new collaboration possibilities between information professionals and researchers.
|
540 |
Democratização da informação a partir do uso de repositórios digitais institucionais : da comunicação científica às informações tecnológicas de patentes
Brandão, Felipe Grando January 2016 (has links)
O presente estudo aborda a produção, a comunicação e o uso da informação científica e tecnológica no contexto dos repositórios digitais institucionais de universidades brasileiras, bem como a disseminação e o uso das informações contidas em documentos de patente. Verifica-se que o uso dessas informações ainda é incipiente no Brasil, mesmo nas universidades, e considera-se que um meio de promover esse tema é explorar os serviços prestados pelos repositórios na divulgação da propriedade intelectual gerada nessas instituições. Para tanto, tem-se como objetivo geral estudar a democratização do acesso à informação a partir dos repositórios digitais institucionais, considerando seus elementos aderentes e seu uso para a comunicação das informações tecnológicas de patentes. Trata-se de uma pesquisa exploratória e interpretativa, dividida em quatro etapas qualitativas: pesquisa do referencial bibliográfico; identificação dos campos de metadados sobre patentes nos repositórios; verificação da existência de depósitos de patentes no Instituto Nacional da Propriedade Industrial de titularidade das universidades pesquisadas; comparações e análises. Identificou-se que não é uma prática corrente a disponibilização das informações dos documentos de patente nos repositórios, bem como se constata uma baixa padronização em relação aos metadados utilizados ou aos valores a estes atribuídos. Propõe-se um conjunto de metadados para a descrição dos documentos de patente e promove-se uma discussão crítica a respeito dos temas abordados. / The present study deals with the production, communication and use of scientific and technological information in the context of institutional digital repositories of Brazilian universities, as well as the dissemination and use of the information contained in patent documents.
It is verified that the use of this information is still incipient in Brazil, even in universities, and a means to promote this theme is to explore the services provided by repositories in disseminating the intellectual property generated in these institutions. For this purpose, the general objective is to study the democratization of access to information through institutional digital repositories, considering their adherent elements and their use for the communication of technological patent information. This is an exploratory and interpretative research, divided into four qualitative stages: research of the bibliographic references; identification of patent metadata fields in repositories; verification of the existence of patent filings at the National Institute of Industrial Property owned by the researched universities; comparisons and analyses. It was identified that making patent document information available in repositories is not current practice, and that standardization is low with respect to the metadata used and the values assigned to them. A set of metadata is proposed for the description of patent documents, and a critical discussion of the topics covered is promoted. / El presente estudio aborda la producción, la comunicación y el uso de la información científica y tecnológica en el contexto de los repositorios digitales institucionales de universidades brasileñas, así como la diseminación y el uso de la información contenida en documentos de patente. Se verifica que el uso de esas informaciones aún es incipiente en Brasil, incluso en las universidades, y se considera que un medio de promover ese tema es explorar los servicios prestados por los repositorios en la divulgación de la propiedad intelectual generada en esas instituciones.
Para ello, se tiene como objetivo general estudiar la democratización del acceso a la información a partir de los repositorios digitales institucionales, considerando sus elementos adherentes y su uso para la comunicación de las informaciones tecnológicas de patentes. Se trata de una investigación exploratoria e interpretativa, dividida en cuatro etapas cualitativas: investigación del referencial bibliográfico; identificación de los campos de metadatos sobre patentes en los repositorios; verificación de la existencia de depósitos de patentes en el Instituto Nacional de la Propiedad Industrial de titularidad de las universidades investigadas; comparaciones y análisis. Se identificó que no es una práctica corriente la disponibilización de las informaciones de los documentos de patente en los repositorios, así como se constata una baja estandarización en relación a los metadatos utilizados o a los valores a éstos asignados. Se propone un conjunto de metadatos para la descripción de los documentos de patente y se promueve una discusión crítica sobre los temas abordados.
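The proposed metadata set for describing patent documents can be sketched as Dublin Core-style fields extended with patent-specific ones, plus a simple completeness check of the kind a repository could run on ingestion. The field names below are illustrative assumptions, not the set the study actually proposes.

```python
# Hedged sketch of a patent metadata record in an institutional repository:
# Dublin Core-style fields extended with patent-specific ones, and a check
# for missing required fields. Field names are illustrative, not the study's
# proposed set.

REQUIRED_FIELDS = {"dc.title", "dc.creator", "dc.date",
                   "patent.number", "patent.office", "patent.ipc"}

def validate_record(record):
    """Return the required metadata fields missing from a record, sorted."""
    return sorted(REQUIRED_FIELDS - record.keys())

if __name__ == "__main__":
    record = {
        "dc.title": "Process for producing a composite material",
        "dc.creator": "Universidade X",           # assumed sample values
        "dc.date": "2016-01-05",
        "patent.number": "BR102016000000",
        "patent.office": "INPI",
        "patent.ipc": "C07D 213/00",
    }
    print(validate_record(record))  # empty list when the record is complete
```

A shared required-field set like this is one way to address the low standardization the study observes across repositories, since every patent record would then expose the same searchable fields.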
|