161

High-Performance Persistent Identification for Research Data Management

Berber, Fatih 07 September 2018 (has links)
No description available.
162

Einsatz von Graphdatenbanken für das Produktdatenmanagement im Kontext von Industrie 4.0 / Use of graph databases for product data management in the context of Industrie 4.0

Sauer, Christopher, Schleich, Benjamin, Wartzack, Sandro 03 January 2020 (has links)
In the course of the digital transformation driven by Industrie 4.0, a multitude of new data sources is emerging that must be taken into account in product data management. One example is Industrie 4.0 data collected, for instance, by sensors on the shop floor. These data sources are characterized by increasing heterogeneity: the data can no longer be captured in a single table. Examples include images from optical part inspection, or the inspection code itself. This situation leads to the creation of many isolated new silos in which data must be processed separately from the PDM system and is stored cut off from the other silos. In addition, a multitude of new authoring systems (inspection software, customer management, requirements management) produces a growing volume of data that can no longer be sensibly captured in classical table-based, purely relational database systems. In purely relational systems, retrieving information often requires complicated queries that access several different tables within the database and assemble the relevant information from them. The larger these databases become, and the more information has to be connected relationally, the more expert knowledge of the particular database system is needed. Purely relational (SQL-based) systems thus forfeit much of the advantage of their logical structure. To address these problems, new approaches from the field of Linked Data can be applied. Linked Data uses and passes on not only the raw data but also descriptive and linking information with which to interpret it. This added information makes it possible, as a first step, to connect heterogeneous product and process data, i.e. data from a wide range of sources such as design, simulation, and quality assurance. This linking yields a richer form of representation that contains not only the raw data but also their meaningful connections, and is thus semantically richer. The resulting networked database can be implemented, for example, with a graph-oriented database (graph database). This paper examines to what extent such modelling is possible with currently available graph database solutions. Starting from an example with a simplified product and process data model from sheet-bulk metal forming, a general method is presented by which an SQL-based database system can be migrated to a graph database. Using this method, it is shown how existing solutions can in part coexist with novel Linked Data databases, so that they can be migrated to a graph database step by step. The results of the paper are, on the one hand, a general procedure model for introducing graph databases and, on the other, statements about the usability of the presented solution for product and process data management. [... from the introduction]
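The abstract describes a general method for migrating an SQL-based system to a graph database. As a rough illustration of the idea — not the thesis's actual method — the following sketch reads a hypothetical relational parts table and emits Cypher statements (the query language of Neo4j-style graph databases). All table, label, and relationship names here are invented:

```python
import sqlite3

# Hypothetical relational schema: parts(id, name) and a join table
# part_links(parent_id, child_id) expressing assembly structure.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parts (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE part_links (parent_id INTEGER, child_id INTEGER);
    INSERT INTO parts VALUES (1, 'gear'), (2, 'shaft'), (3, 'assembly');
    INSERT INTO part_links VALUES (3, 1), (3, 2);
""")

# Rows become nodes; foreign-key pairs become explicit edges.
statements = []
for pid, name in conn.execute("SELECT id, name FROM parts"):
    statements.append(f"CREATE (:Part {{id: {pid}, name: '{name}'}})")
for parent, child in conn.execute("SELECT parent_id, child_id FROM part_links"):
    statements.append(
        f"MATCH (p:Part {{id: {parent}}}), (c:Part {{id: {child}}}) "
        f"CREATE (p)-[:CONTAINS]->(c)"
    )

for s in statements:
    print(s)
```

The point of the migration is visible in the output: what the relational model expresses implicitly through join tables becomes an explicit, queryable relationship in the graph.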
163

Master Data Management-studie om nästa entitet och leverantör för Scania / Master Data Management study about the next entity and supplier for Scania

Oldelius, David, Pham, Douglas January 2018 (has links)
Large enterprises have many departments whose information must be managed. Master Data Management (MDM) is an information-management approach for handling information from different sources, and an MDM implementation proceeds one entity at a time. The problem addressed in this work is to recommend the next entity to include in the MDM implementation at Scania, and which provider fits that implementation. The entity recommendation is prepared from material provided by Scania and from interviews with Scania employees; the provider recommendation is prepared from material from the providers and from interviews with them. The recommended entity is the product as an individual, because information in that area needs improved management and the entity is close to the core business. Orchestra Networks is the recommended supplier because they are a leader among MDM providers, are specialised in the area, and are strong in product information.
164

Clinicians' demands on monitoring support in an Intensive Care Unit : A pilot study at Capio S:t Görans Hospital / Sjukvårdspersonals krav på övervakningssupport på en intensivvårdsavdelning : Förstudie på Capio S:t Görans Sjukhus

Callerström, Emma January 2017 (has links)
Patients treated at intensive care units (ICUs) have failure in one or more organs and require appropriate monitoring and treatment in order to maintain a meaningful life. Today, ICU clinicians manage a large amount of data generated by monitoring devices. Monitoring parameters can either be noted down manually on a monitoring sheet or, for some parameters, transferred automatically to storage. In both cases the information is stored with the aim of supporting clinicians throughout intensive care and being easily accessible. Patient data management systems (PDMSs) help ICUs retrieve and integrate data. Before procuring a new configuration of a patient data system, the ICU needs to analyse carefully which data it wants to register. This pilot study provides knowledge of how monitoring is performed in an intensive care unit at an emergency hospital in Stockholm.
The aim of this thesis project was to collect data about what clinicians require and what equipment they use today for monitoring. Requirements elicitation is a technique for collecting requirements; the methods used to collect data were active observations and qualitative interviews. Patterns were found in what assistant nurses, nurses, and physicians require of systems that support clinicians with monitoring parameters. Assistant nurses would like to be relieved of manual note-taking, yet they also question the need for automated data collection, since they are present at the bedside observing the patient. Nurses describe a demanding burden of care and want no additional activities that increase it. Physicians require support for seeing how an intervention leads to a certain result for individual patients. The results also show that information about decision support exists, but no better way to apply it than those used today. Clinicians state a need to be able to evaluate clinical work with the help of monitoring parameters. The results identify the areas in which clinicians' needs are not sufficiently supported by existing tools.
In conclusion, the results show that demands on monitoring support differ depending on the clinicians' profession and experience. Monitoring at the ICU is performed by observing individual patients, parameters from medical devices, results from medical tests, and physical examinations. Clinicians weigh information from all of these sources, and want them supported accordingly, before committing to actions that result in a certain treatment, diagnosis, and/or care.
165

Real-time data and BIM: automated protocol for management and visualisation of data in real time : A case study in the "Teaching House" of the KTH campus / Realtidsdata och automatiserade BIM processer för hantering och visualisering av data i realtid : En fallstudie i "Undervisningshuset" KTH campus

Digregorio, Giuseppe January 2020 (has links)
Nowadays, BIM and real-time data are becoming a central topic for the AECO (Architecture, Engineering, Construction and Operations) industry; they represent powerful new tools for the design and management of facilities. Building monitoring and real-time data can offer a solution to many important challenges, such as energy efficiency, indoor climate quality, and cost management. Although the importance of data for a correct use of BIM technology and its potential is clear, complete workflows for managing data from the input phase to the output phase are not common in the literature. The scope of this study is to design a protocol for entering, managing, and exporting real-time data using Revit and Dynamo, in which the customers have a central role during the input phase, together with a dedicated mode for data display that includes both a desktop version and an augmented-reality version for a more immersive experience. To show the real potential of the project, the protocol was used to calculate thermal comfort parameters for the "Teaching House" on the KTH campus. All data entered by students into an online form, via QR code, were fed into Dynamo to calculate the desired parameter values, which were then stored in a database for further analysis — all automatically.
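The pipeline the abstract describes (form input → computation → database storage) can be illustrated with a small stand-alone sketch. The thesis implements this in Revit/Dynamo; the version below is a hypothetical plain-Python stand-in, with an intentionally crude comfort band in place of the real thermal-comfort calculation, and all field names invented:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical form response: sensor readings plus the occupant's vote,
# as might arrive from the QR-code survey described in the abstract.
response = {"room": "U1-101", "air_temp_c": 22.4,
            "rel_humidity_pct": 38.0, "comfort_vote": 1}  # vote on a -3..+3 scale

def within_comfort_band(temp_c, rh_pct):
    """Crude comfort check against a fixed band -- a stand-in for the
    thermal-comfort calculation the thesis performs in Dynamo."""
    return 20.0 <= temp_c <= 26.0 and 30.0 <= rh_pct <= 60.0

# Store the reading and the derived value for later analysis.
db = sqlite3.connect("comfort.db")
db.execute("""CREATE TABLE IF NOT EXISTS comfort_log
              (ts TEXT, room TEXT, air_temp_c REAL,
               rel_humidity_pct REAL, comfort_vote INTEGER, in_band INTEGER)""")
db.execute("INSERT INTO comfort_log VALUES (?, ?, ?, ?, ?, ?)",
           (datetime.now(timezone.utc).isoformat(), response["room"],
            response["air_temp_c"], response["rel_humidity_pct"],
            response["comfort_vote"],
            int(within_comfort_band(response["air_temp_c"],
                                    response["rel_humidity_pct"]))))
db.commit()
```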
166

Ein längeres Leben für Deine Daten! / Let your data live longer!

Schäfer, Felix 20 April 2016 (has links) (PDF)
Data life cycle and research data management plans are just two of many key terms used in the current discussion about digital research data. But what do they mean — on the one hand for an individual scholar, and on the other for a digital infrastructure like IANUS? The presentation tries to explain some of these terms and shows how IANUS deals with them in order to enhance the reusability of unique data. It starts with an overview of the different disciplines, research methods, and types of data that together characterise modern research on ancient cultures. Digital data is produced in nearly all scientific processes and has gained a dominant role, as the stakeholder analysis and the evaluation of test data collections done by IANUS in 2013 clearly demonstrate. Nevertheless, despite their high relevance, digital files and folders are at risk with regard to their accessibility and reusability in the near and distant future. Not only do storage devices, software applications, and file formats become slowly but steadily obsolete; the information needed to understand all the produced bits and bytes intellectually (i.e. the metadata) is also lost over the years. Pressing questions therefore concern how we can prevent — or at least reduce — a foreseeable loss of digital information, and what we will do with all the results that do not find their way into publications. As the discipline-specific national centre for research data in archaeology and ancient studies, IANUS tries to answer these questions and to establish different services in this context. The slides give an overview of the centre's structure, its state of development, and its planned targets. The primary service (scheduled for autumn 2016) will be the long-term preservation, curation, and publication of digital research data to ensure its reusability, and will be open to any person and institution. One already existing offer is the "IT-Empfehlungen für den nachhaltigen Umgang mit digitalen Daten in den Altertumswissenschaften", which provides information and advice about data management, file formats, and project documentation. It also offers instructions on how to deposit data collections for archiving and dissemination. External experts are cordially invited to contribute and write missing recommendations as new authors.
167

Distributed data management with a declarative rule-based language Webdamlog / Gestion des données distribuées avec le langage de règles Webdamlog

Antoine, Emilien 05 December 2013 (has links)
Our goal is to enable a Web user to easily specify distributed data management tasks in place, i.e. without centralizing the data at a single provider. Our system is therefore not a replacement for Facebook, or any centralized system, but an alternative that allows users to launch their own peers on their own machines, processing their own local personal data and possibly collaborating with Web services. We introduce Webdamlog, a datalog-style language for managing distributed data and knowledge. The language extends datalog in a number of ways, notably with a novel feature, delegation, allowing peers to exchange not only facts but also rules. We present a user study that demonstrates the usability of the language. We describe a Webdamlog engine that extends a distributed datalog engine, namely Bud, with support for delegation and for a number of other novelties of Webdamlog, such as the possibility of having variables denoting peers or relations. We mention novel optimization techniques, notably one based on the provenance of facts and rules. We exhibit experiments demonstrating that the rich features of Webdamlog can be supported at reasonable cost and that the engine scales to large volumes of data. Finally, we discuss the implementation of a Webdamlog peer system that provides an environment for the engine. In particular, a peer supports wrappers to exchange Webdamlog data with non-Webdamlog peers. We illustrate these peers with a picture management application that we used for demonstration purposes.
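Delegation — exchanging rules rather than only facts — is the abstract's key idea. A toy sketch of the mechanism, in no way the actual Webdamlog engine or its syntax, might look like this (all class and predicate names invented):

```python
# Two "peers" hold facts and single-body datalog-style rules; delegation is
# modelled by one peer installing a rule at another peer, which then derives
# new facts locally by naive bottom-up evaluation.
class Peer:
    def __init__(self, name):
        self.name = name
        self.facts = set()   # tuples like ("photo", "bob", "party.jpg")
        self.rules = []      # (head_pred, body_pred): head(X...) <- body(X...)

    def delegate(self, other, rule):
        """Send a rule to another peer -- peers exchange rules, not just facts."""
        other.rules.append(rule)

    def derive(self):
        changed = True
        while changed:       # iterate to fixpoint
            changed = False
            for head, body in self.rules:
                for fact in list(self.facts):
                    if fact[0] == body:
                        new = (head,) + fact[1:]
                        if new not in self.facts:
                            self.facts.add(new)
                            changed = True

alice, bob = Peer("alice"), Peer("bob")
bob.facts.add(("photo", "bob", "party.jpg"))
# alice delegates the rule album(O, F) <- photo(O, F), to be evaluated at bob
alice.delegate(bob, ("album", "photo"))
bob.derive()
print(bob.facts)  # now also contains ("album", "bob", "party.jpg")
```

The essential point is that alice never sees bob's facts: she ships the rule, and derivation happens where the data lives.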
168

Automated Discovery, Binding, and Integration Of GIS Web Services

Shulman, Lev 18 May 2007 (has links)
The last decade has seen steady growth in the use of Web Service technology. While Web Services have become significant in a number of IT domains, such as eCommerce, digital libraries, data feeds, and geographical information systems, common portals or registries of Web Services require manual publishing for indexing. Manually compiled registries of Web Services have proven useful but often miss a considerable number of Web Services published and available on the Web. We propose a system capable of finding, binding, and integrating Web Services into an index in an automated manner. Using a combination of guided search and web-crawling techniques, the system finds a large number of Web Service providers, which are then bound and aggregated into a single portal available for public use. Results show that this approach succeeds in discovering a considerable number of Web Services in the GIS (Geographical Information Systems) domain and demonstrates improvements over existing methods of Web Service discovery.
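The binding step — deciding whether a crawled URL actually hosts a live GIS service — can be sketched as follows. The abstract does not specify service types or probing logic; this hypothetical example assumes OGC WMS endpoints and probes them with a GetCapabilities request:

```python
import urllib.request
from urllib.error import URLError

# Hypothetical candidate endpoints, e.g. collected by guided search or crawling.
candidates = [
    "http://example.org/geoserver/wms",
    "http://example.org/cgi-bin/mapserv",
]

def is_wms_service(base_url, timeout=10):
    """Probe a URL with a WMS GetCapabilities request and check whether the
    response looks like a capabilities document."""
    probe = base_url + "?SERVICE=WMS&REQUEST=GetCapabilities"
    try:
        with urllib.request.urlopen(probe, timeout=timeout) as resp:
            body = resp.read(65536).decode("utf-8", errors="replace")
    except (URLError, ValueError, OSError):
        return False
    # Root elements of WMS 1.3.0 and 1.1.1 capabilities documents.
    return "WMS_Capabilities" in body or "WMT_MS_Capabilities" in body

registry = [url for url in candidates if is_wms_service(url)]
print(registry)
```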
169

Evaluation of an Experimental Data Management System for Program Data at the College Level

Nair, Hema 29 July 2016 (has links)
An experimental data management system has been designed, developed, and implemented in this dissertation. The system satisfies the requirements specifications of the Department of Curriculum and Instruction in the School of Education. The university in this study has installed several learning management and assessment systems, such as Banner®, Canvas®, TracDat®, and Taskstream® (the university's name is omitted for anonymity). These systems individually do not perform the data analysis and data management necessary to generate appropriate reports. The system developed in this study can generate more metrics and quantitative measures for reporting purposes within a shorter time, and these metrics provide credible evidence for accreditation. Leadership is concerned with improving the effectiveness, efficiency, accountability, and performance of educational programs. The continuity, sustainability, and financial support of programs depend on demonstrating evidence that they are effective and efficient, that they meet their objectives, and that they contribute to the mission and vision of the educational institution; leadership has to employ all means at its disposal to collect such evidence. The data management system provides comprehensive data analysis that leadership can use as evidence to accomplish its goals. The pilot system developed in this research is web-based and platform independent. It leverages the power of Java® at the front end and combines it with the reliability and stability of Oracle® as the back-end database. It has been tested on-site by members of the departmental faculty and one administrator from the Dean's Office in the School of Education. This research is a mixed-methods study with quasi-experimental treatment: a single-case experimental study with no control group and a convenience sample. The results indicate that the system is highly usable for assessment work, and the data analysis results it generates are actionable; they help identify gaps in student performance and in curriculum and instruction practices. In the future, the system developed in this dissertation can be extended to other departments in the School of Education. Some implications are provided in the concluding chapter of this dissertation.
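The kind of program-level metric such a system reports can be illustrated with a small sketch. The dissertation's system pairs a Java front end with an Oracle database; the stand-in below uses Python and sqlite3 so it runs on its own, and the table, scale, and target level are all invented for illustration:

```python
import sqlite3

# Hypothetical assessment table: rubric_scores(student_id, outcome, score),
# scores on a 1-4 scale (sqlite3 stands in for the Oracle back end).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE rubric_scores (student_id TEXT, outcome TEXT, score INTEGER);
    INSERT INTO rubric_scores VALUES
        ('s1', 'outcome_1', 3), ('s2', 'outcome_1', 2),
        ('s3', 'outcome_1', 4), ('s1', 'outcome_2', 4);
""")

# One metric of the kind accreditation reports ask for: the share of
# students scoring at or above a target level, per learning outcome.
TARGET = 3
rows = db.execute("""
    SELECT outcome,
           100.0 * SUM(score >= ?) / COUNT(*) AS pct_at_target
    FROM rubric_scores GROUP BY outcome
""", (TARGET,)).fetchall()

for outcome, pct in rows:
    print(f"{outcome}: {pct:.1f}% of students at or above target")
```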
170

Ontologie naturalisée et ingénierie des connaissances / Naturalized ontology and Knowledge Engineering

Zarebski, David 15 November 2018 (has links)
"What do I need to know about something, minimally, in order to know it?" It is no wonder that such a general, hard-to-grasp, riddle-like question remained the exclusive domain of a single discipline for millennia: Philosophy. In this context, stating criteria that distinguish the primitive components of reality — the so-called "furniture of the world" — together with their relations amounts to producing an Ontology. This work investigates the curious, seemingly innocuous historical turn constituted by the emergence of this kind of question in two related fields: Artificial Intelligence and Knowledge Engineering. We show that the way these disciplines apply an ontological methodology to cognition or knowledge representation is not a mere analogy, but raises a set of relevant questions and challenges from both an applied and a speculative point of view. More specifically, we suggest that some of the technical answers to the problem of Big Data — i.e. the multiplication and diversification of online data — provide a new and unexpected entry point into many traditionally philosophical issues concerning the place of language and common-sense reasoning in thought, or the existence of a structure of reality independent of the human mind.
