51 |
The Road to a Nationwide Electronic Health Record System: Data Interoperability and Regulatory Landscape
Huang, Jiawei 01 January 2019 (has links)
This paper seeks to break down how a large-scale Electronic Health Record (EHR) system could improve quality of care and reduce monetary waste in the healthcare system, and it further explores issues surrounding regulations on data exchange and data interoperability. Due to the massive size of healthcare data, the exponential increase in the speed of data generation through innovative technologies, and the complexity of healthcare data types, the widespread adoption of a large-scale EHR system has hit barriers. Much of the available data is unstructured or confined within a single healthcare provider's systems. To fully utilize all the data available, methods for making data interoperable must be developed, and regulations for data exchange that protect and support patients must be put in place. By addressing both data exchange and interoperability, we seek to break down the constraints and issues that EHR systems still face and to gain an understanding of the regulatory landscape.
|
52 |
Data Exchange and Query Language between XML Documents and Relational Databases
王瑞娟 Unknown Date (has links)
With the growing popularity of the World Wide Web (WWW), more and more data is presented and accessed directly on the Web. Unlike the structured data stored in traditional relational database management systems (RDBMS), a huge amount of data is now published directly as HTML (Hypertext Markup Language) pages, whose tags serve primarily to describe how a data item is displayed. For representing data and interchanging it between multiple sources on the Web, XML (Extensible Markup Language) is fast emerging as the dominant standard. Like HTML, XML is a subset of SGML, but whereas HTML tags describe presentation, XML tags describe the data itself; data defined this way can travel across the Internet and be reused between organizations, which has made XML the leading solution for data exchange and translation between multiple sources. XML also raises a problem, however: how to integrate XML documents with data stored in traditional RDBMS, so that data from the two kinds of sources can interoperate and heterogeneous data effectively becomes homogeneous. The objective is bi-directional communication between RDBMS data and XML documents. In this research, we develop a translation model between RDBMS data and XML documents, in order to exchange and reuse data between the different sources.
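A minimal sketch of the kind of relational-to-XML translation described here, using only Python's standard library; the table, column, and element names are invented for illustration and are not the thesis's actual mapping.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical relational source: a tiny in-memory "customers" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, city TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Alice", "Taipei"), (2, "Bob", "Kaohsiung")])

def rows_to_xml(cursor, table):
    """Export every row of a table as one XML element per row,
    with one child element per column (a naive canonical mapping)."""
    cursor.execute(f"SELECT * FROM {table}")
    columns = [d[0] for d in cursor.description]
    root = ET.Element(table)
    for row in cursor.fetchall():
        record = ET.SubElement(root, "row")
        for col, val in zip(columns, row):
            ET.SubElement(record, col).text = str(val)
    return root

def xml_to_rows(root):
    """Reverse direction: recover column -> value mappings from the XML,
    ready to be re-inserted into a relational table."""
    return [{child.tag: child.text for child in record} for record in root]

doc = rows_to_xml(conn.cursor(), "customers")
print(ET.tostring(doc, encoding="unicode"))
print(xml_to_rows(doc))
```

A real translation model must also carry schema information (keys, types, nesting), which is precisely where the bi-directional mapping becomes non-trivial.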
|
53 |
Interoperability of Digital Rights Management Systems via the Exchange of XML-based Rights Expressions
Guth, Susanne 02 1900 (has links) (PDF)
The dissertation deals with the cutting-edge subject of electronic contracts, which have the potential to automate the processing and control of access rights for (electronic) goods. It presents the design and implementation of a rights expression exchange framework. The framework enables digital rights management (DRM) systems to exchange electronic contracts with each other and thus provides compatibility between DRM systems. The electronic contracts, formulated in a standardized rights expression language, serve as the exchange format between different DRM systems. The dissertation introduces a methodology for the standardized composition, exchange, and processing of electronic contracts, i.e. rights expressions. (author's abstract)
|
54 |
Inter-Area Data Exchange Performance Evaluation and Complete Network Model Improvement
Su, Chun-Lien 20 June 2001
A power system is typically one small part of a larger interconnected network and is affected, to varying degrees, by contingencies external to itself as well as by the reaction of the external network to its own contingencies. The accuracy of a complete interconnected network model therefore affects the results of many transmission-level analyses. In an interconnected power system, real-time network security and power transfer capability analyses require a "real-time" complete network base case solution. To accurately assess system security and inter-area transfer capability, it is highly desirable to use all available information from all areas. With the advent of communications among operations control center computers, real-time telemetered data can be exchanged for complete network modeling. Measurement time skew must be considered in complete network modeling when combining wide-area data received via a data communication network.
In this dissertation, several suggestions aimed at improving complete network modeling are offered. A discrete event simulation technique is used to assess the performance of a data exchange scheme that uses an Internet interface to the SCADA system. A performance model of data exchange over the Internet is established, and a quantitative analysis of the data exchange delay is presented. With prediction mechanisms, the effect of time skew in the data interchanged among utilities can be minimized; consequently, state estimation (SE) can provide accurate real-time complete network models of the interconnected network for security and available transfer capability analyses.
In order to accommodate the effects of the randomly varying arrival of measurement data, and to set up a base case for more accurate analyses of network security and transfer capability, an implementation of a stochastic Extended Kalman Filter (EKF) algorithm is proposed to provide optimal estimates of interconnected network states for systems in which some or all measurements are delayed. To obtain an accurate state estimate of a complete network, it is essential to be able to detect bad data in the model. An efficient information debugging methodology based on the stochastic EKF algorithm is used for the detection, diagnosis, and elimination of bad data.
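A minimal sketch of the delayed-measurement idea: a generic scalar Kalman-style filter (not the dissertation's stochastic EKF) that buffers measurements per time step, so telemetry arriving with time skew can be slotted into the step it belongs to and the history replayed to the present. The random-walk model and all numbers are illustrative assumptions.

```python
import numpy as np

F, Q, H, R = 1.0, 0.01, 1.0, 0.1      # assumed scalar system/measurement model

class DelayTolerantFilter:
    """Buffers measurements per time step so a measurement arriving k steps
    late can be fused at its true step, then replays the filter forward."""

    def __init__(self, x0, p0):
        self.x0, self.p0 = x0, p0
        self.meas = []                 # meas[t] = measurements taken at step t

    def advance(self, z=None):
        self.meas.append([] if z is None else [z])

    def late_measurement(self, z, delay):
        self.meas[len(self.meas) - 1 - delay].append(z)

    def estimate(self):
        x, p = self.x0, self.p0
        for zs in self.meas:
            x, p = F * x, F * p * F + Q                # predict one step
            for z in zs:                               # fuse this step's data
                k = p * H / (H * p * H + R)
                x, p = x + k * (z - H * x), (1 - k * H) * p
        return x, p

kf = DelayTolerantFilter(x0=0.0, p0=1.0)
for z in np.random.normal(1.0, 0.3, size=10):
    kf.advance(float(z))
kf.late_measurement(z=1.2, delay=3)    # time-skewed telemetry joins its step
print(kf.estimate())
```

Replaying the whole history is the simplest correct strategy; a production estimator would instead augment the state or re-run only from the insertion point.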
|
55 |
Ontology-based approach to enable feature interoperability between CAD systems
Tessier, Sean Michael 23 May 2011
Data interoperability between computer-aided design (CAD) systems remains a major obstacle to information integration and exchange in a collaborative engineering environment. Standards for CAD data exchange have remained largely restricted to geometric representations, causing the design intent portrayed through construction history, features, parameters, and constraints to be discarded in the exchange process. In this thesis, an ontology-based framework is proposed to allow the full exchange of semantic feature data. A hybrid ontology approach is taken, in which a shared base ontology conveys the concepts common to different CAD systems, while local ontologies represent the feature libraries of individual CAD systems as combinations of these shared concepts. A three-branch CAD feature model is constructed to reduce ambiguity in the construction of local ontology feature data. Boundary representation (B-Rep) data corresponding to the output of each feature operation is incorporated into the feature data to enhance data exchange.
The Web Ontology Language (OWL) is used to construct the shared base ontology and a small feature library, which allows existing ontology reasoning tools to infer new relationships and information between heterogeneous data. A combination of OWL and SWRL (Semantic Web Rule Language) rules is developed to allow a feature from an arbitrary source system, expressed via the shared base ontology, to be automatically classified and translated into the target system. These rules relate input parameters and reference types to expected B-Rep objects, allowing classification even when feature definitions vary or when little is known about the source system. In cases where the source system is well known, this approach also permits direct translation rules to be implemented. With such a flexible framework, a neutral feature exchange format could be developed.
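A toy illustration of the classification step, with plain-Python predicates standing in for the OWL/SWRL rules and the reasoner; all concept names, parameters, and B-Rep face kinds below are invented for the example.

```python
# Each "rule" maps observable evidence about a source feature (its input
# parameters and the B-Rep objects its operation produced) to a concept in
# the shared base ontology -- mimicking what the SWRL rules express.

RULES = {
    # concept: (required parameters, required B-Rep face kinds)
    "shared:Hole":   ({"diameter", "depth"}, {"cylindrical_face"}),
    "shared:Pocket": ({"depth"},             {"planar_face"}),
    "shared:Fillet": ({"radius"},            {"toroidal_face"}),
}

def classify(parameters, brep_faces):
    """Return every shared-ontology concept whose rule is satisfied by the
    evidence, even when the source system's own feature name is unknown."""
    return [concept for concept, (params, faces) in RULES.items()
            if params <= parameters and faces <= brep_faces]

# A feature exported from an arbitrary source CAD system, described only in
# the shared base ontology's vocabulary -- no system-specific label needed.
print(classify(parameters={"diameter", "depth", "position"},
               brep_faces={"cylindrical_face", "planar_face"}))
# -> ['shared:Hole', 'shared:Pocket']
```

The multiple matches in the output show why B-Rep evidence matters: it narrows the candidate concepts when parameter sets alone are ambiguous.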
|
56 |
Round-trip engineering concept for hierarchical UML models in AUTOSAR-based safety projects
Pathni, Charu 09 November 2015 (PDF)
The product development process begins at a very abstract level of understanding the requirements; at the end of each phase, the data is passed on to the next phase for further development, until finally a product is made. This thesis deals specifically with the data exchange process in software development. The problem lies in handling data with respect to redundancy and versioning, and in the fact that once data has been passed on to the next stage, no evident mechanism exists to exchange it in the reverse direction. The results found during this thesis discuss a solution to the problem: bringing all the data to the same level in terms of its format. With this concept in place, the data can be used according to the requirements at hand. This research addresses data consistency and data verification for data that is used during development and merged from various sources. The formulated concept can be extended to a wide variety of applications in the development process; wherever the process involves the exchange of data, scalability and generalization are the foundational concepts on which it rests.
|
57 |
Approximation of OLAP queries on data warehouses
Cao, Phuong Thao 20 June 2013 (PDF)
We study approximate answers to OLAP queries on data warehouses. We consider relative answers to OLAP queries on a schema, viewed as distributions under the L1 distance, and approximate the answers without storing the entire data warehouse. We first introduce three specific methods: uniform sampling, measure-based sampling, and the statistical model. We also introduce an edit distance between data warehouses, with edit operations adapted to data warehouses. Then, in the setting of OLAP data exchange, we study how to sample each source and combine the samples to approximate any OLAP query. We next consider a streaming context, in which a data warehouse is built from the streams of different sources; we show a lower bound on the size of the memory necessary to approximate queries, and approximate OLAP queries with a finite memory in this setting. We also describe a method to discover statistical dependencies, a new notion we introduce, by searching for them with decision trees. We apply the methods to two data warehouses: the first simulates sensor data providing weather parameters over time and location from different sources; the second is a collection of RSS feeds from web sites on the Internet.
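A small sketch of the uniform-sampling idea in Python: approximate the relative answer of a GROUP-BY aggregate from a sample, and compare it with the exact answer under the L1 distance. The schema and data are invented for the example.

```python
import random
from collections import Counter

# Invented fact table: (city, measure) rows of a weather-style warehouse.
random.seed(0)
cities = ["Paris", "Lyon", "Nice"]
facts = [(random.choice(cities), random.uniform(0, 30))
         for _ in range(100_000)]

def relative_answer(rows):
    """OLAP query SUM(measure) GROUP BY city, normalized to a distribution."""
    sums = Counter()
    for city, measure in rows:
        sums[city] += measure
    total = sum(sums.values())
    return {c: sums[c] / total for c in cities}

exact = relative_answer(facts)
sample = random.sample(facts, 1_000)       # uniform sample: 1% of the rows
approx = relative_answer(sample)

l1 = sum(abs(exact[c] - approx[c]) for c in cities)
print(exact)
print(approx)
print(f"L1 distance = {l1:.4f}")
```

The small L1 distance obtained from 1% of the rows is the point of the approach: the relative answer is recovered without storing the warehouse.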
|
58 |
Adaptability of multimedia streams applied to medical telediagnosis
Muthada Pottayya, Ronnie 08 December 2017 (has links)
In the medical field, most facilities (hospitals, clinics, ...) use distributed applications in the context of telemedicine. As information security is mandatory in these facilities, the applications must be able to cross security gateways (Web proxies, firewalls, ...). The User Datagram Protocol (UDP), which is classically recommended for videoconferencing and other real-time traffic, does not cross firewalls or proxies unless explicitly configured fixed ports are declared, and such fixed ports are considered a security breach. In this thesis, we propose a novel platform called VAGABOND (Video Adaptation framework, crossing security GAteways, Based ON transcoDing), which works, in a very efficient and original way, over TCP (Transmission Control Protocol). VAGABOND is composed of Adaptation Proxies (APs), designed to take into account the videoconferencing preferences of medical experts, device heterogeneity, and dynamic variations in network bandwidth; it is able to adapt itself at both the user and network levels. The cumulative binomial probability law and Bayesian inference on a binomial proportion are used to trigger user profile adaptations. The aim is to be more tolerant of severe network bandwidth variations: with this finer precision, a user profile adaptation is triggered only when severe network congestion arises.
However, as TCP is a reliable, connection-oriented transport protocol, we needed to design and employ new intelligent adaptation strategies alongside data transmission in order to cope with latency issues and socket timeouts.
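A small sketch of the triggering idea: treat recent transmissions as Bernoulli trials and adapt the user profile only when the posterior probability that the congestion rate exceeds a threshold is high. The Beta-Binomial model, the prior, and all thresholds below are illustrative assumptions, not VAGABOND's actual parameters.

```python
from math import exp, lgamma, log

def beta_posterior_tail(congested, clean, threshold,
                        alpha=1.0, beta=1.0, steps=10_000):
    """P(congestion rate > threshold) under a Beta(alpha + congested,
    beta + clean) posterior, via midpoint integration (no SciPy needed)."""
    a, b = alpha + congested, beta + clean
    log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)   # -log B(a, b)
    total, dx = 0.0, (1.0 - threshold) / steps
    for i in range(steps):
        x = threshold + (i + 0.5) * dx
        total += exp(log_norm + (a - 1) * log(x) + (b - 1) * log(1 - x)) * dx
    return total

# 18 of the last 50 transmissions hit congestion; adapt the user profile only
# if we are more than 95% sure the true congestion rate exceeds 25%.
tail = beta_posterior_tail(congested=18, clean=32, threshold=0.25)
print(f"P(rate > 0.25) = {tail:.3f}")
if tail > 0.95:
    print("severe congestion detected: downgrade the user profile")
```

Requiring high posterior confidence before adapting is what makes the scheme tolerant of brief bandwidth dips: isolated congested transmissions move the posterior too little to cross the trigger.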
|
59 |
An analysis of a data grid approach for spatial data infrastructures
Coetzee, Serena Martha 27 September 2009 (has links)
The concept of grid computing has permeated all areas of distributed computing, changing the way in which distributed systems are designed, developed and implemented. At the same time 'geobrowsers', such as Google Earth, NASA World Wind and Virtual Earth, along with in-vehicle navigation, handheld GPS devices and maps on mobile phones, have made interactive maps and geographic information an everyday experience. Behind these maps lies a wealth of spatial data that is collated from a vast number of different sources. A spatial data infrastructure (SDI) aims to make spatial data from multiple sources available to as wide an audience as possible. Current research indicates that, for a number of reasons, data sharing in these SDIs is still not common. This dissertation presents an analysis of the data grid approach for SDIs. Starting off, two imaginary scenarios spell out for the first time how data grids can be applied to enable the sharing of address data in an SDI. The work in this dissertation spans two disciplines: Computer Science (CS) and Geographic Information Science (GISc). A study of related work reveals that the data grid approach in SDIs is both a novel application for data grids (CS) and a novel technology in SDI environments (GISc), and this dissertation advances mutual understanding between the two disciplines. A novel evaluation framework for national address databases in an SDI is used to evaluate existing information federation models against the data grid approach. This evaluation, as well as an analysis of address data in an SDI, confirms that there are similarities between the data grid approach and the requirement for consolidated address data in an SDI. The evaluation further shows that where a large number of organizations are involved, such as for a national address database, and where no single organization is tasked with the management of a national address database, the data grid is an attractive alternative to other models. The Compartimos (Spanish for 'we share') reference model was developed to identify the components, with their capabilities and relationships, that are required to grid-enable address data sharing in an SDI. The definition of an address in the broader sense (i.e. not only for postal delivery), the notion of an address as a reference, and the definition of an addressing system and its comparison to a spatial reference system all contribute towards an understanding of what an address is. A novel address data model shows that it is possible to design a data model for the sharing and exchange of address data, despite diverse addressing systems and without impacting on, or interfering with, local laws for address allocation. The analysis in this dissertation confirms the need for standardization of domain-specific geographic information, such as address data, and its associated services in order to integrate data from distributed heterogeneous sources. In conclusion, results are presented and recommendations for future work, drawn from the experience of the work in this dissertation, are made. / Thesis (PhD)--University of Pretoria, 2009. / Computer Science / unrestricted
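A minimal sketch of what an addressing-system-agnostic data model might look like, expressed as Python dataclasses; the component types and sample addresses are invented for illustration and are not the dissertation's actual model.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AddressingSystem:
    """Analogous to a spatial reference system: names the scheme that gives
    address components their meaning (identifiers below are invented)."""
    identifier: str                    # e.g. "ZA:street-address"
    component_types: tuple             # components the scheme defines

@dataclass
class Address:
    system: AddressingSystem
    components: dict = field(default_factory=dict)  # component type -> value

    def is_exchangeable(self) -> bool:
        # An address can be shared only if every component is defined by its
        # addressing system -- local allocation rules stay local.
        return set(self.components) <= set(self.system.component_types)

street = AddressingSystem("ZA:street-address",
                          ("street_number", "street_name", "locality", "code"))
intersection = AddressingSystem("ZA:intersection",
                                ("street_name_1", "street_name_2", "locality"))

a = Address(street, {"street_number": "221", "street_name": "Main Rd",
                     "locality": "Pretoria", "code": "0002"})
b = Address(intersection, {"street_name_1": "Church St",
                           "street_name_2": "Main Rd", "locality": "Pretoria"})
print(a.is_exchangeable(), b.is_exchangeable())  # both valid, different schemes
```

Keeping the addressing system explicit, rather than hard-coding one national format, is what lets addresses from diverse schemes coexist in a shared infrastructure.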
|
60 |
Computer-Aided Simulation Interface – Optimized System Development in Mechatronics
Zekeyo, S. Jewoh, Nezhat, S., Schropp, C., Miller, S. 08 June 2017
Connecting the design discipline with the simulation discipline for multibody simulations makes it possible to shape products in the design environment and then dimension them in the simulation environment under the desired influences. Cross-discipline communication between the corresponding software products is usually indirect, manual, and lossy with respect to data integrity. There is no direct connection between the various CAD systems and MATLAB, which is why SANEON GmbH, together with the Institute of Flight System Dynamics of the Technical University of Munich, has developed an interface for automatic, bidirectional, integrity-preserving exchange between the systems. A multibody simulation model can thus be built fully automatically in MATLAB and Simscape Multibody (formerly SimMechanics) on the basis of a CAD model. In addition, data can be sent back to the CAD environment, so that data is exchanged bidirectionally. The resulting unique selling point of our innovation, fully automated and bidirectional exchange, is a novelty on the market.
With CASIN (Computer Aided Simulation INterface) we provide a novel platform that allows the cross-domain transfer of CAD data between the design and simulation environments. Part and assembly information can be accessed directly in MATLAB at the push of a button, and user-defined CAD parameters can be modified from within MATLAB. This creates the basis for an iterative data exchange between the disciplines.
|