  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

資料交換機制之研究─以我國動植物防檢局為例 / A Study of Data Exchange Mechanisms─The Case of BAPHIQ in Taiwan

葉耿志, Yeh, Ken-Chih Unknown Date (has links)
Since Taiwan's accession to the World Trade Organization, imports and exports of animals, plants, and their products have grown rapidly, and with them the risk of foreign animal and plant diseases entering along with the goods. The Bureau of Animal and Plant Health Inspection and Quarantine (BAPHIQ) of the Council of Agriculture therefore plays a critical role: it must exchange data with Taiwan's major trading partners, yet it currently does so through traditional channels such as telephone, fax, and paper documents. With the spread and popularity of the Internet, transmitting this data online has become increasingly feasible, which raises the question of how best to do so. In recent years the government has been vigorously promoting its "electronic government" policy, and BAPHIQ is no exception: it is updating its electronic quarantine certification system year by year, and as electronization within and across the related agencies is completed, the possibility of exchanging inspection and quarantine data with foreign counterpart agencies over the Internet grows accordingly. This study examines the data exchange approaches currently used abroad: EDI over closed (private) networks and, over open networks, XML-based point-to-point exchange and electronic hubs (e-hubs). Drawing on these transmission models and on data exchange experience from electronic commerce, the study designs and implements a common data exchange mechanism, IQDE-Hub (Inspection and Quarantine Data Exchange Hub), which addresses point-to-point data exchange, data transfer between heterogeneous systems, and the security of data in transit, so that countries can exchange inspection and quarantine data electronically instead of relying on manual channels such as telephone, fax, and mail.
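A minimal sketch of the kind of XML message such a mechanism might exchange, serialized on the sending side and parsed back on the receiving side. The element names (Certificate, Exporter, Commodity, Quantity) and the sample values are hypothetical illustrations; the thesis's actual IQDE-Hub schema is not reproduced here.

```python
# Hypothetical phytosanitary-certificate message: build it as XML,
# send it as text, and parse it back into a plain dict.
import xml.etree.ElementTree as ET

def build_certificate(cert_id, exporter, commodity, quantity):
    """Serialize one certificate as an XML string (sender side)."""
    root = ET.Element("Certificate", id=cert_id)
    ET.SubElement(root, "Exporter").text = exporter
    ET.SubElement(root, "Commodity").text = commodity
    ET.SubElement(root, "Quantity").text = str(quantity)
    return ET.tostring(root, encoding="unicode")

def parse_certificate(xml_text):
    """Parse the message back into a dict (receiver side)."""
    root = ET.fromstring(xml_text)
    return {
        "id": root.get("id"),
        "exporter": root.findtext("Exporter"),
        "commodity": root.findtext("Commodity"),
        "quantity": int(root.findtext("Quantity")),
    }

msg = build_certificate("TW-2003-0001", "BAPHIQ", "Mangoes", 500)
record = parse_certificate(msg)
```

Because the payload is self-describing text, the same message can cross heterogeneous systems, which is the point of using XML over the older telephone/fax/paper channels.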
62

An XML-based Multidimensional Data Exchange Study / 以XML為基礎之多維度資料交換之研究

王容, Wang, Jung Unknown Date (has links)
Motivated by globalization and Internet-speed competition, enterprises today often distribute their departments across regions, or merge and ally with companies located elsewhere, to improve their competitiveness and responsiveness. As a result, a geographically distributed enterprise typically operates a number of separate data warehouse systems. To support distributed decision-making, the data in these different warehouses must be exchanged and integrated, which calls for an open, vendor-independent, and efficient standard for transferring multidimensional data between data warehouses over the Internet. Current solutions for cross-warehouse data exchange, however, are limited to record-by-record conversion or plain-text file transfer, approaches that are neither systematic nor efficient. This research studies the issues of multidimensional data exchange and develops an XML-based Multidimensional Data Exchange Model. It also proposes a generic-construct-based approach that yields a single standard exchange format and enables systematic many-to-many mapping between distributed data warehouses. Based on the transformation model developed between the multidimensional data model and the XML data model, and supported by the multidimensional metadata management functions proposed in this research, a general-purpose, XML-based multidimensional data exchange process over the Web can be carried out efficiently and with good quality. A prototype system implementing XML-based multidimensional data exchange demonstrates that the proposed model is feasible and that, with the help of metadata, the exchange process becomes more systematic and efficient.
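A minimal sketch of the generic-construct idea: every fact row of a cube is serialized with uniform, self-describing Dim/Measure elements, so one exchange format serves any cube regardless of its dimensionality. The tag names and sample data are illustrative assumptions, not the thesis's actual exchange format.

```python
# Round-trip a tiny cube (dimension coordinates + one measure) through
# a generic XML representation.
import xml.etree.ElementTree as ET

def cube_to_xml(dims, measure_name, rows):
    """Serialize (coords, value) rows as uniform <Fact> elements."""
    root = ET.Element("Cube")
    for coords, value in rows:
        fact = ET.SubElement(root, "Fact")
        for name, member in zip(dims, coords):
            ET.SubElement(fact, "Dim", name=name).text = member
        ET.SubElement(fact, "Measure", name=measure_name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def xml_to_rows(xml_text):
    """Recover the (coords, value) rows on the receiving warehouse."""
    root = ET.fromstring(xml_text)
    rows = []
    for fact in root.findall("Fact"):
        coords = tuple(d.text for d in fact.findall("Dim"))
        rows.append((coords, float(fact.findtext("Measure"))))
    return rows

rows = [(("2002", "Taipei"), 120.0), (("2002", "Kaohsiung"), 80.0)]
xml_text = cube_to_xml(["Year", "City"], "Sales", rows)
round_tripped = xml_to_rows(xml_text)
```

Because the structure is generic rather than cube-specific, N warehouses need only one mapping each to the common format instead of N×N pairwise converters.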
63

Approximation of OLAP queries on data warehouses / Approximation aux requêtes OLAP sur les entrepôts de données

Cao, Phuong Thao 20 June 2013 (has links)
We study approximate answers to OLAP queries on data warehouses. We view the relative answers to OLAP queries on a schema as distributions under the L1 distance, and approximate the answers without storing the entire data warehouse. We first present three specific methods: uniform sampling, measure-based sampling, and a statistical model. We also introduce an edit distance between data warehouses, with edit operations adapted to data warehouses. Then, in the setting of OLAP data exchange, we study how to sample each source and combine the samples to approximate any OLAP query. We next consider a streaming context, where a data warehouse is built from the streams of different sources; we show a lower bound on the size of the memory necessary to approximate queries, and in this setting we approximate OLAP queries with a finite memory. We also describe a method to discover statistical dependencies, a new notion we introduce, searching for them with decision trees. We apply these methods to two data warehouses: the first simulates sensor data, providing weather parameters over time and location from different sources; the second is a collection of RSS feeds from web sites.
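A minimal sketch of the uniform-sampling idea on a toy fact table: answer a COUNT GROUP BY query from a random sample, normalize both answers into distributions, and compare them with the L1 distance. The data, sampling rate, and seed are illustrative assumptions, not values from the thesis.

```python
# Approximate a GROUP BY distribution from a uniform sample and
# measure the approximation error with the L1 distance.
import random
from collections import Counter

def group_by_count(facts):
    """Exact relative answer: normalized COUNT(*) per group key."""
    counts = Counter(key for key, _ in facts)
    total = sum(counts.values())
    if total == 0:
        return {}
    return {k: v / total for k, v in counts.items()}

def approx_group_by_count(facts, rate, seed=0):
    """Same answer computed on a uniform sample of the fact table."""
    rng = random.Random(seed)
    sample = [f for f in facts if rng.random() < rate]
    return group_by_count(sample)

def l1_distance(p, q):
    """L1 distance between two distributions over group keys."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

facts = [("north", 1)] * 600 + [("south", 1)] * 400
exact = group_by_count(facts)
# Built from roughly 10% of the rows, without storing the warehouse.
approx = approx_group_by_count(facts, rate=0.1)
error = l1_distance(exact, approx)
```

Normalizing to relative answers is what makes sampling work here: the sample estimates the shape of the distribution, which is bounded in L1 even though absolute counts shrink with the sample.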
64

Ingéniérie des Systèmes d'Information Coopératifs, Application aux Systèmes d'Information Hospitaliers / Engineering of Cooperative Information Systems, Applied to Hospital Information Systems

Azami, Ikram El 20 March 2012 (has links)
In this thesis, we deal with hospital information systems (HIS), analyzing their design, interoperability, and communication issues with the aim of contributing to the design of a canonical, cooperative, and communicating HIS, and of modeling the exchanges between its components and with the other systems involved in patient care within a healthcare network. We propose a structure and a design model for a canonical HIS based on three main concepts involved in the production of medical information: the pathological case, the Production Post of Healthcare Data (PPHD), and the medical activity itself. The latter, modeled as a tree, allows a better structuring of the care process. To help ensure continuity of care, we also provide an XML-based model for exchanging medical data. This model consists of a set of relevant data organized into five categories: patient data, patient history data, medical activity data, medical prescription data, and medical document data (images, reports, ...). Finally, we describe a solution for integrating hospital information systems. The solution is inspired by the engineering of cooperative information systems and consists of a mediation architecture structured into three levels: the information-system level, the mediation level, and the user level. The architecture offers a modular organization of hospital information systems and supports the integration of data, functions, and the workflow of medical information.
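A minimal sketch of a five-category exchange document as described above: one XML root with one child per category, filled in as the patient record is assembled. The element names are hypothetical placeholders; the thesis's actual XML model is far richer.

```python
# Skeleton of a patient exchange document organized into the five
# categories named in the abstract.
import xml.etree.ElementTree as ET

CATEGORIES = ["Patient", "History", "MedicalActivity",
              "Prescriptions", "Documents"]

def new_exchange_document(patient_id):
    """Create an empty five-category document for one patient."""
    root = ET.Element("PatientExchange")
    for name in CATEGORIES:
        ET.SubElement(root, name)
    root.find("Patient").set("id", patient_id)
    return root

def categories_present(root):
    """List the top-level category elements, in document order."""
    return [child.tag for child in root]

doc = new_exchange_document("P-001")
```

Fixing the top-level categories up front gives every system in the care network a stable envelope to validate against, even when the contents of each category vary by hospital.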
65

Round-trip engineering concept for hierarchical UML models in AUTOSAR-based safety projects

Pathni, Charu 30 September 2015 (has links)
The product development process begins with a very abstract understanding of the requirements, and at the end of every stage the resulting data must be passed on to the next phase until, finally, a product is made. This thesis deals specifically with the data exchange in the software development process. The problem lies in handling the data, both in terms of redundancy and in terms of the versions to be managed; moreover, once data has been passed on to the next stage, there is no evident way to exchange it in the reverse direction. The results of this thesis discuss a solution to the problem: bringing all the data to the same level in terms of its format. With this concept in place, the data can be used according to the requirements at hand. This research addresses data consistency and data verification for data that is used during development and merged from various sources. The formulated concept can be extended to a wide variety of applications in the development process: wherever the process involves an exchange of data, scalability and generalization are the main foundation concepts it builds on.
66

Automated and adaptive geometry preparation for ar/vr-applications

Dammann, Maximilian Peter, Steger, Wolfgang, Stelzer, Ralph 25 January 2023 (has links)
Product visualization in AR/VR applications still requires a largely manual process of data preparation. Previous publications focus on error-free triangulation or on the transformation of product structure data and display attributes for AR/VR applications. This paper focuses on the preparation of the required geometry data, where a significant reduction in effort can be achieved through automation. The steps of geometry preparation are identified and examined with regard to their automation potential, and possible couplings of sub-steps are discussed. Based on these considerations, a structure for the geometry preparation process is proposed. With this structured preparation process it becomes possible to take the available computing power of the target platform into account during geometry preparation: the number of objects to be rendered, the tessellation quality, and the level of detail (LOD) can be controlled through the automated choice of transformation parameters. This approach avoids tedious preparation tasks and iterative performance optimization, which also simplifies the integration of AR/VR applications into product development and use. A software tool is presented in which partial steps of the automatic preparation are already implemented. After an analysis of the product structure of a CAD file, the transformation is executed for each component. The functions implemented so far allow, for example, the selection of assemblies and parts based on filter options, the transformation of geometries in batch mode, the removal of certain details, and the creation of UV maps. Flexibility, transformation quality, and time savings are described and discussed.
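A minimal sketch of one such automated parameter choice: picking a tessellation level per part from the target platform's triangle budget. The LOD table and budget figures are illustrative assumptions, not values from the paper.

```python
# Choose the finest LOD whose per-part triangle count fits the
# platform's triangle budget.

# Candidate tessellations, ordered from coarsest to finest:
# (LOD name, triangles per part).
LODS = [("low", 2_000), ("medium", 10_000), ("high", 50_000)]

def pick_lod(part_count, triangle_budget):
    """Return the finest LOD name that fits the per-part budget."""
    per_part = triangle_budget / part_count
    chosen = LODS[0][0]  # fall back to the coarsest level
    for name, tris in LODS:
        if tris <= per_part:
            chosen = name
    return chosen

# A mobile AR target with a tight budget gets coarser geometry than
# a desktop VR target rendering the same 200-part assembly.
mobile = pick_lod(part_count=200, triangle_budget=1_000_000)
desktop = pick_lod(part_count=200, triangle_budget=12_000_000)
```

Encoding the choice as a function of the platform budget is what replaces the iterative manual performance tuning the paper describes.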
67

台灣證券市場跨組織資訊系統之個案研究 / A Case Study of Interorganizational Systems in Taiwan Securities Market

連錫祥, Lien, David Unknown Date (has links)
Traditionally, owing to organizational and technical constraints, applications of information technology were confined within a single organization. With changes in technology, economics, organization, and strategy, this is no longer the case. Abroad, there have been many successful cases of information technology being used as a strategic weapon in industry competition. Scholars studying electronic marketplaces (electronic market systems) have also argued that information technology can serve as an intermediary system in the trading process, allowing buyers and sellers to exchange price and product information, lowering buyers' search costs, and creating economic value. This study examines the evolution of interorganizational systems (IOS) in the Taiwan securities market, the factors that enable and constrain them, their network architecture and coverage, their relationship to business operations, and their impact on the market. The study finds ten factors that have driven the growth of IOS in the Taiwan securities market: (1) organizational change; (2) technical capability; (3) prior successful experience; (4) continuous innovation; (5) economic benefits; (6) interorganizational efficiency; (7) competitive advantage; (8) market safety; (9) standardization of operating procedures; and (10) government participation. Two factors constrain IOS growth: (1) system integration; (2) computer-related costs. The impact of IOS on the Taiwan securities market can be analyzed along four dimensions: (1) the national economy; (2) the industry; (3) organizations; (4) investors. The study concludes by recommending that the Taiwan securities market remove the factors limiting IOS growth and work in three directions: (1) pursue industry cooperation and build information alliances; (2) extend the network's reach to strengthen economies of scale and scope and network externalities; (3) establish EDI between the securities and banking industries to achieve paperless operations.
68

Comparaison et évolution de schémas XML / Comparison and evolution of XML schema

Amavi, Joshua 28 November 2014 (has links)
XML has become the de facto format for data exchange. We aim to establish a multi-system environment in which local systems work in harmony with a global integrated system that is a conservative evolution of the local ones. In this environment, data exchange is possible in both directions, allowing activities on both levels. For this purpose we need a mapping between the systems' schemas, whose role is to ensure schema evolution and to guide the construction of a document translator, allowing automatic data adaptation with respect to type evolution. We propose a set of tools to help deal with XML database evolution. These tools are used: (i) to compute a mapping capable of obtaining a global schema that is a conservative extension of the original local schemas, and to adapt XML documents; (ii) to compute the set of integrity constraints for the global system from those of the local systems; (iii) to compare the XML types of two systems in order to replace one system by the system that contains it; (iv) to correct a new document that is invalid with respect to a system's schema, so that it can be added to the system. Experiments on synthetic and real data show the efficiency of our methods in many situations.
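A minimal sketch of the schema-comparison idea in point (iii): here a "type" is reduced to a mapping from an element name to the set of child elements it allows, and one schema contains another when it allows at least the same children everywhere. This is a toy abstraction under stated assumptions; real XML types are regular tree grammars and far more expressive.

```python
# Toy schema containment check: can `bigger` accept every document
# that `smaller` accepts, element-by-element?

def contains(bigger, smaller):
    """True if every element/child allowed by smaller is in bigger."""
    return all(
        elem in bigger and children <= bigger[elem]
        for elem, children in smaller.items()
    )

# A local schema and a global schema that conservatively extends it
# (hypothetical element names for illustration).
local = {"library": {"book"},
         "book": {"title", "author"}}
glob = {"library": {"book", "journal"},
        "book": {"title", "author", "year"},
        "journal": {"title"}}
```

When `contains(glob, local)` holds, the global system can safely replace the local one: every document valid locally remains valid globally, which is exactly the conservative-evolution property the abstract describes.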
