21 |
Vers une nouvelle architecture de l'information historique : L'impact du Web sémantique sur l'organisation du Répertoire du patrimoine culturel du Québec / Towards a new architecture of historical information: The impact of the Semantic Web on the organization of the Répertoire du patrimoine culturel du Québec
Michon, Philippe, January 2016 (has links)
The Plan culturel numérique du Québec (PCNQ) stresses how important it is for Quebec's cultural sector, in which historians are closely involved, to take an interest in the possibilities of the Semantic Web. With this in mind, this thesis examines the advantages and drawbacks of bringing the Semantic Web and history together. On one side is a new configuration of the Web in the form of linked data that seeks to establish itself in practice; on the other, a discipline that aims to understand and preserve past events. Uniting the two requires interdisciplinary collaboration between programmers, information science professionals and historians. Given this interdisciplinary work, what are the stakes, and what is the historian's role, in developing a semantic platform for Quebec heritage? To answer this question, the thesis explains the close ties between the historical discipline and linked data. After defining a set of foundational concepts such as the Resource Description Framework (RDF), the Uniform Resource Identifier (URI), authority files and ontologies, it links a corpus of persons from the Répertoire du patrimoine culturel du Québec (RPCQ) to DBpedia, a major player in the Semantic Web. This demonstration shows how Quebec heritage fits into the linked data cloud. Two findings emerge from the experiment, both underscoring the importance of historians' involvement in a semantic structure: Quebec has no authority over its own data, and at present only the broad strokes of Quebec's history are traced, without its particularities.
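The kind of linkage the thesis performs between the RPCQ and DBpedia can be sketched with an owl:sameAs assertion, the conventional way of tying a local record into the linked data cloud. The Python sketch below is purely illustrative, not the thesis's actual code: the RPCQ record URI is hypothetical, and rdflib is assumed as the RDF library.

```python
# A minimal sketch, assuming rdflib; the RPCQ record URI is hypothetical.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, OWL, RDF

g = Graph()

# Hypothetical identifier for a person record in the RPCQ.
rpcq_person = URIRef("http://www.patrimoine-culturel.gouv.qc.ca/rpcq/personne/12345")
# The corresponding resource in DBpedia.
dbpedia_person = URIRef("http://dbpedia.org/resource/Louis-Joseph_Papineau")

g.add((rpcq_person, RDF.type, FOAF.Person))
g.add((rpcq_person, FOAF.name, Literal("Louis-Joseph Papineau")))
# owl:sameAs ties the local heritage record to the global linked data cloud.
g.add((rpcq_person, OWL.sameAs, dbpedia_person))

print(g.serialize(format="turtle"))
```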
|
22 |
Méthodologie et composants pour la mise en oeuvre de workflows scientifiques / Methodology and components for scientific workflow building
Lin, Yuan, 07 December 2011 (has links)
For many years, the life and environmental sciences (biology, natural risks, remote sensing, etc.) have been accumulating observational data and developing a great variety of processing applications. Scientists in these domains must ground their reasoning in experimental validation, which requires putting together processing chains (experimental protocols) of varying complexity. The concept of "workflow" was introduced in a general form and later refined into the "scientific workflow". Current systems, however, remain difficult to grasp for scientists whose concerns do not lie directly in computer science engineering. The approach followed here, in terms of methodology and components, proposes a solution to this problem. The initial hypothesis rests on the user's view of the work in three stages:
1) The planning stage, which consists of defining an abstract domain model of a workflow;
2) The intermediate stage, which makes the previously defined abstract model concrete by locating the various existing resources within what we call the work context. The definition, verification and validation of concrete models rely on expert knowledge and on the compatibility of the model's elements;
3) The dynamic stage, which consists of executing the validated concrete model with an execution engine.
The thesis focuses mainly on the problems raised in the first two stages (planning and intermediate). Starting from an analysis of existing work, we set out the various building blocks: a workflow meta-model and language, the work context, the resource graph, and the handling of incompatible compositions. The approach was validated in several target domains: biology, natural risks and remote sensing. A prototype was developed that provides the following functionalities: design and saving of abstract processing chains, description and location of resources, and verification of the validity of concrete chains.
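The three-stage view can be made concrete with a small sketch: an abstract chain is defined first (planning), each task is then matched against resources available in a work context (intermediate), and only a fully resolved chain would be handed to an execution engine (dynamic). This Python sketch is a loose illustration of the idea, not the thesis's model; all names and the compatibility rule are simplifications.

```python
# A minimal sketch of the three-stage view; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AbstractTask:
    name: str
    input_type: str   # data type the task expects
    output_type: str  # data type the task produces

@dataclass
class Resource:
    name: str
    input_type: str
    output_type: str

@dataclass
class WorkContext:
    resources: list = field(default_factory=list)

    def locate(self, task: AbstractTask):
        """Intermediate stage: find a concrete resource compatible with the task."""
        for r in self.resources:
            if (r.input_type, r.output_type) == (task.input_type, task.output_type):
                return r
        return None

# Planning stage: an abstract two-step chain (bioinformatics-flavoured example).
chain = [AbstractTask("align", "fasta", "alignment"),
         AbstractTask("tree", "alignment", "newick")]

ctx = WorkContext([Resource("clustalw", "fasta", "alignment"),
                   Resource("phyml", "alignment", "newick")])

# Verification: the chain is concrete (executable) only if every task resolves.
concrete = [ctx.locate(t) for t in chain]
print("valid" if all(concrete) else "incompatible")
```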
|
23 |
Vizualizace RDF dat ve webových prohlížečích / RDF Data Visualization in Web Browsers
Škrobánek, Kristián, January 2021 (has links)
This diploma thesis focuses on the visualization of graph database data stored in RDF format. Standard visualization of RDF data in tables does not offer a sufficiently usable view for the user. One goal of this work is to display RDF data as an interactive graph, an ideal form of presentation in terms of clarity and information value: the graph gives a good view not only of the data itself but also of the relationships between the data. Another goal is to test the ability of browsers to visualize large amounts of data.
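One way to bridge RDF storage and a browser-side graph view, along the lines explored here, is to flatten triples into a generic node/edge structure that a JavaScript renderer can consume. The Python sketch below is an assumption-laden illustration: rdflib is assumed for parsing, and the input file name is invented.

```python
# A minimal sketch, assuming rdflib; "data.ttl" is a hypothetical input file.
import json
from rdflib import Graph

g = Graph()
g.parse("data.ttl", format="turtle")

nodes, edges = {}, []
for s, p, o in g:
    # Every subject and object becomes a node; the predicate labels the edge.
    for term in (s, o):
        nodes.setdefault(str(term), {"id": str(term)})
    edges.append({"source": str(s), "target": str(o), "label": str(p)})

# A browser visualization (e.g. D3 or vis.js) would consume this JSON.
print(json.dumps({"nodes": list(nodes.values()), "edges": edges}, indent=2))
```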
|
24 |
Das neue Zusammenrücken von Formal- und Sacherschließung: FRBR, RDA, GND / The new convergence of descriptive and subject cataloguing: FRBR, RDA, GND
Wiesenmüller, Heidrun, 24 January 2011 (has links)
While the Anglo-American tradition treats descriptive and subject cataloguing as belonging together, German librarianship maintains a clear separation between the two areas, usually in staffing as well. Recent developments in international standardization could prompt a rethink here: the theoretical model "Functional Requirements for Bibliographic Records" (FRBR) has ushered in a new view of the so-called "bibliographic universe", one that also encompasses subject cataloguing. "Resource Description and Access" (RDA), the successor to AACR2, likewise no longer sees itself merely as a code for descriptive cataloguing. Very concrete steps toward closer integration come from the "Gemeinsame Normdatei" (GND, Integrated Authority File) project, which merges the subject headings authority file (SWD), the personal names authority file (PND) and the corporate bodies authority file (GKD). The talk examines these developments in more detail and reflects on the opportunities and problems they entail.
|
25 |
Inhaltserschließung – Neues in der DNB / Subject indexing – news from the DNB
Bee, Guido, 31 January 2011 (has links)
1. GND
2. RDA
3. CrissCross
4. MACS
5. Petrus
|
26 |
Linked Data and Libraries : How the Switch to Linked Data Has Affected Work Practices at the National Library of Sweden
Unterstrasser, Julia, January 2023 (has links)
This thesis explores how library practice has been impacted by linked (open) data. For libraries, adopting linked data principles means moving away from the long-established reality of MARC formats and opening up their information resources to the internet. While the transformation of library systems to linked data is often described as the necessary next step for the library community, promising enormous benefits, the transformation process itself is a challenging one. This thesis employs an interview study at the National Library of Sweden, the first national library worldwide to adopt linked data as its core data model, to provide deeper insights into how linked data is affecting the current work practices of library professionals from their own perspectives. The findings suggest that linked data is significantly impacting library practice in a multitude of ways, fundamentally changing knowledge and information organization in the digital age. While linked data is still in the beginning stages of its implementation in the library community as a whole, the interviewed library professionals are confident about the benefits the transformation will eventually bring. Although there are still many challenges and obstacles to tackle, there is a strong belief that the advertised promises of linked data will come true in time. Furthermore, the results of the study suggest that linked data is only one part of a paradigm-shifting change currently happening in the knowledge and information organization community, accompanied by many other developments that, as a whole, are fundamentally changing how information is organized, managed, shared and even perceived in today's digital information environment of the internet.
|
27 |
An approach to automate the adaptor software generation for tool integration in Application/ Product Lifecycle Management tool chains.
Singh, Shikhar, January 2016 (has links)
An emerging problem in organisations is that a large number of tools store data and must communicate with each other frequently throughout the process of application or product development, yet no means of communication exists without the intervention of a central entity (usually a server) or without storing the schemas in a central repository. Accessing data across tools and linking them is difficult and resource intensive. As part of the thesis, we develop a piece of software (also referred to as the 'adaptor' in the thesis) which, when implemented in lifecycle management systems, integrates data seamlessly. This eliminates the need to store database schemas in a central repository and makes the process of accessing data within tools less resource intensive. The adaptor acts as a wrapper around the tools and allows them to communicate directly with each other and exchange data. When the developed adaptor is used to communicate data between tools, the data in the relational databases is first converted into RDF format and is then sent or received; RDF therefore forms the crucial underlying concept on which the software is based. The Resource Description Framework (RDF) provides data integration irrespective of the underlying schemas by treating data as resources and representing them as URIs. RDF is a data model used for the exchange and communication of data on the Internet, and it can be applied to other real-world problems such as tool integration and the automation of communication between relational databases. However, developing this adaptor for every tool requires understanding the individual schema and structure of each tool's database, which in turn demands considerable effort from the adaptor's developer. The main aim of the thesis is therefore to automate the development of such adaptors. With this automation, no one needs to manually assess a database and then develop an adaptor specific to it. Such adaptors and concepts can be used to implement similar solutions in other organisations facing similar problems. The output of the thesis is thus an approach which automates the process of generating these adaptors.
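The conversion idea at the heart of such an adaptor can be illustrated with a direct mapping from relational rows to RDF triples: each row becomes a resource, each column a predicate. The following Python sketch is a minimal illustration, not the thesis's implementation; the table, base URI and sample data are all hypothetical, and rdflib stands in for whatever RDF library an adaptor would use.

```python
# A minimal sketch of a direct row-to-RDF mapping; names are illustrative.
import sqlite3
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

BASE = Namespace("http://example.org/tooldata/")  # hypothetical base URI

def table_to_rdf(conn: sqlite3.Connection, table: str) -> Graph:
    g = Graph()
    cur = conn.execute(f"SELECT * FROM {table}")
    columns = [d[0] for d in cur.description]
    for i, row in enumerate(cur):
        subject = BASE[f"{table}/{i}"]            # one resource per row
        g.add((subject, RDF.type, BASE[table]))   # type it by table name
        for col, value in zip(columns, row):
            if value is not None:
                g.add((subject, BASE[col], Literal(value)))  # one triple per cell
    return g

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requirement (title TEXT, status TEXT)")
conn.execute("INSERT INTO requirement VALUES ('login page', 'open')")
print(table_to_rdf(conn, "requirement").serialize(format="turtle"))
```

Automating adaptor generation then amounts to deriving such a mapping from the database's own introspection data rather than hand-writing it per tool.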
|
28 |
Integrating XML and RDF concepts to achieve automation within a tactical knowledge management environment
McCarty, George E., Jr., 03 1900 (has links)
Approved for public release, distribution is unlimited / Since the advent of naval warfare, Tactical Knowledge Management (KM) has been critical to the success of the On Scene Commander. Today's tactical knowledge manager typically operates in a high-stress environment with a multitude of knowledge sources, including detailed sensor deployment plans, rules-of-engagement contingencies, and weapon delivery assignments. However, the WarFighter has placed heavy reliance on delivering this data with traditional messaging processes while focusing on information organization vice knowledge management. This information-oriented paradigm perpetuates data overload through the manual intervention of human resources. Focusing on the data-archiving aspect of information management overlooks the advantages of computational processing while delaying the empowerment of the processor as an automated decision-making tool. Resource Description Framework (RDF) and XML offer the potential of increased machine reasoning within a KM design, allowing the WarFighter to migrate from dependency on manual information systems to a more computationally intensive Knowledge Management environment. However, the unique environment of a tactical platform requires innovative solutions that automate the existing naval message architecture while improving the knowledge management process. This thesis captures the key aspects of building a prototype Knowledge Management Model and provides an implementation example for evaluation. The model developed for this analysis was instantiated to evaluate the use of RDF and XML technologies in the Knowledge Management domain. The goals for the prototype included: 1. Processing the required technical links in RDF/XML to feed the KM model from multiple information sources. 2. Experimenting with the visualization of Knowledge Management processing vice traditional Information Resource Display techniques. Working with the prototype KM Model demonstrated the flexibility of processing all information data in an XML context. Furthermore, the RDF attribute format provided a convenient structure for automated decision making based on multiple information sources. Additional research utilizing RDF/XML technologies will eventually enable the WarFighter to effectively make decisions in a Knowledge Management environment. / Civilian, SPAWAR System Center San Diego
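The automation idea, RDF-structured message content driving a machine decision, can be sketched as follows. The message vocabulary, contact data and threshold rule below are invented for illustration and are not the thesis's actual naval message format; rdflib is assumed for parsing the RDF/XML.

```python
# A minimal sketch: message content arrives as RDF/XML and a rule fires on
# the attribute values. Vocabulary and threshold are hypothetical.
from rdflib import Graph, Namespace

MSG = Namespace("http://example.org/tactical#")

rdf_xml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:msg="http://example.org/tactical#">
  <rdf:Description rdf:about="http://example.org/contact/42">
    <msg:bearing>045</msg:bearing>
    <msg:rangeNm>12</msg:rangeNm>
  </rdf:Description>
</rdf:RDF>"""

g = Graph()
g.parse(data=rdf_xml, format="xml")

# Automated decision rule: flag any contact closer than 15 nm.
for contact in g.subjects(predicate=MSG.rangeNm):
    range_nm = float(g.value(contact, MSG.rangeNm))
    if range_nm < 15:
        bearing = g.value(contact, MSG.bearing)
        print(f"ALERT {contact}: contact at {range_nm} nm, bearing {bearing}")
```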
|
29 |
Resource Description and Access (RDA): continuity in an ever-fluxing information age with reference to tertiary institutions in the Western Cape
van Rensburg, Rachel Janse, January 2018 (has links)
Magister Library and Information Studies - MLIS / Although Resource Description and Access (RDA) has been discussed extensively amongst the ranks of cataloguers internationally, no research on the perceptions of South African cataloguers was available at the time of this research.

The aim of this study was to determine how well RDA was faring during the study's timeframe and to give a detailed description of cataloguer perceptions within a higher education setting in South Africa; furthermore, to determine whether the implementation of RDA has overcome most of the limitations that AACR2 had within a digital environment, to identify advantages and/or perceived limitations of RDA, and to assist cataloguers in adopting and implementing the new standard effectively.

The study employed a qualitative research design assisted by a phenomenological philosophy to gain insight into how cataloguers experienced the implementation and adoption of RDA, by means of two concurrent web-based questionnaires.

The study concluded that higher education cataloguing professionals in the Western Cape were decidedly positive towards the new cataloguing standard. Although there were some initial reservations, they were overcome to such an extent that ultimately no real limitations were identified, and RDA has indeed overcome most of the limitations displayed by AACR2. Many advantages of RDA were identified, and participants expressed excitement about its future capabilities as it continues toward a linked-data milieu, making library metadata more easily available.
|
30 |
科技政策網站內容分析之研究 / A study of content analysis for science and technology policy websites
賴昌彥, Lai, Chang-Yen, Unknown Date (has links)
With the explosive growth of World Wide Web (WWW) applications, the Internet is awash with information resources of every kind, and how to manage and retrieve these data effectively has become one of the central issues in information management today. The most common tool for discovering information is the search engine, which matches a query string against an index table to find relevant web documents and return the results. Because the descriptive information about web pages is insufficient, however, search engines return large numbers of irrelevant results and waste a great deal of the user's time.

To address this problem from the information retrieval perspective, this study proposes an architecture that uses text mining techniques to analyze the actual content of web pages, converts that content into dimensional descriptions, and stores it in a multidimensional database, offering a reference architecture for improving current information retrieval.

From the information description perspective, this study proposes using RDF (Resource Description Framework) to describe web page metadata. Describing web resources in this common data format provides a cross-domain standard for using and expressing information and eases communication among Web applications, with the aim of remedying the shortcomings of current descriptions of Internet resources and substantially improving search quality.
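The metadata-description idea can be illustrated with Dublin Core terms expressed in RDF, a common choice for web resource metadata. In this hedged Python sketch the page URL and property values are invented, and rdflib is assumed.

```python
# A minimal sketch of RDF page metadata; URL and values are hypothetical.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
page = URIRef("http://example.org/policy/whitepaper.html")

g.add((page, DCTERMS.title, Literal("National S&T Policy White Paper")))
g.add((page, DCTERMS.creator, Literal("Ministry of Science and Technology")))
g.add((page, DCTERMS.subject, Literal("science and technology policy")))
g.add((page, DCTERMS.language, Literal("zh-TW")))

# A search application can query well-defined properties like dcterms:subject
# instead of matching raw strings against an index table.
print(g.serialize(format="turtle"))
```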
|