11

Design and Implementation of a Data Persistence Layer for the GEMMA Framework

Gowda, Indhu Mathi 11 January 2017 (has links)
Data within an organization is highly structured and distributed across specific applications or systems. Each of these systems serves a different function within the organization, so each user has a different level of access to each system. A data-mapping approach lets users isolate such data and prepare declarations for the available data elements. The Generic Modular Mapping Framework (GEMMA), a common generic framework for data mapping, was developed by Airbus Group Innovation GmbH to avoid the numerous potential issues in matching data from one source to another. It is geared towards high flexibility in dealing with the many different challenges of handling large volumes of data. Its open architecture allows the inclusion of application-specific code, and its generic rule-based mapping engine lets users define their own mapping rules. At present, however, the GEMMA tool reads and processes data on the fly in memory each time it is used to map data from different sources. This leads to large memory consumption when handling large data sets and makes storing and retrieving the session data (the user's mapping decisions) inefficient. This thesis provides a detailed description of the GEMMA tool together with a new concept for a data persistence layer that addresses these requirements within the framework's current architecture.
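GEMMA itself is proprietary and the abstract gives no API, so the following is only a hypothetical sketch of the persistence idea the abstract motivates: writing each user mapping decision to disk as it is made, instead of holding the whole session in memory. Every name here (SessionStore, record_decision, the schema) is invented for illustration and is not GEMMA's actual interface.

```python
# Hypothetical sketch of a session-persistence layer: mapping decisions are
# written to SQLite as they are made rather than kept in memory for the
# whole session. All names and the schema are illustrative assumptions.
import sqlite3

class SessionStore:
    def __init__(self, path="gemma_session.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS decisions (
                   source_field TEXT,
                   target_field TEXT,
                   rule         TEXT,
                   PRIMARY KEY (source_field, target_field)
               )"""
        )

    def record_decision(self, source_field, target_field, rule):
        # Upsert so a revised user decision replaces the earlier one.
        self.conn.execute(
            "INSERT OR REPLACE INTO decisions VALUES (?, ?, ?)",
            (source_field, target_field, rule),
        )
        self.conn.commit()

    def decisions(self):
        # Stream decisions back without loading the whole set into memory.
        yield from self.conn.execute("SELECT * FROM decisions")

store = SessionStore()
store.record_decision("src.partNo", "tgt.part_number", "copy")
for row in store.decisions():
    print(row)
```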
12

An information processing model and a set of risk identification methods for privacy impact assessment in an international context / 国際的な文脈におけるプライバシー影響評価のための情報取扱モデル及び一連のリスク特定手法

Kuroda, Yuki 25 September 2023 (has links)
Kyoto University / Doctorate by coursework (new system) / Doctor of Informatics / Degree No. Kō 24935 / Informatics No. 846 / Shelf mark 新制||情||142 (University Library) / Kyoto University, Graduate School of Informatics, Department of Social Informatics / (Chief examiner) Prof. Tomohiro Kuroda; Prof. Katsuya Yamori; Prof. Masahiro Sogabe / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DGAM
13

Towards a Conceptual Framework for Persistent Use: A Technical Plan to Achieve Semantic Interoperability within Electronic Health Record Systems

Blackman-Lees, Shellon 01 January 2017 (has links)
Semantic interoperability within the health care sector requires that patient data be fully available and shared without ambiguity across participating health facilities. The need for the current research was based on federal stipulations requiring that health facilities provide complete and optimal care to patients by allowing full access to their health records. Ongoing discussions on achieving interoperability within the health care industry continue to emphasize the need for healthcare facilities to successfully adopt and implement Electronic Health Record (EHR) systems. Reluctance by the healthcare industry to implement these EHRs for the purpose of achieving interoperability led to the current research problem: no existing single data standardization structure can effectively share and interpret patient data across heterogeneous systems. The current research used the design science research methodology (DSRM) to design and develop a master data standardization and translation (MDST) model that allows seamless exchange of healthcare data among multiple facilities. To achieve interoperability through a common data standardization structure in which multiple independent data models can coexist, the translation mechanism incorporated the Resource Description Framework (RDF). Using RDF, a universal exchange language, allows multiple data models and vocabularies to be combined and interrelated within a single environment, thereby reducing data definition ambiguity. Based on the results of the research, key functional capabilities for effectively mapping and translating health data were documented. The research solution addresses two primary issues that impact semantic interoperability: the need for a centralized standards repository, and a framework that effectively maps and translates data between various EHRs and vocabularies. Health professionals thus have a single interpretation of health data across multiple facilities, which ensures the integrity and validity of patient care. The research contributes to the field of design science through advancements in the underlying theories, phases, and frameworks used in the design and development of data translation models. While the current research focused on the development of a single, common information model, further research could investigate the implementation of such artifacts within a single environment at a multi-facility hospital entity.
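The thesis's MDST model is not reproduced here, but a minimal sketch can illustrate the RDF mechanism the abstract describes: two facilities' vocabularies coexisting in one graph and being interrelated so that a single query spans both. The namespaces, properties, and the owl:sameAs link are assumptions made for the example, not the thesis's actual vocabulary.

```python
# Minimal sketch (not the thesis's MDST model) of RDF as an exchange layer:
# records in two hypothetical EHR vocabularies coexist in one graph and are
# linked with owl:sameAs, so one query spans both.
# Requires: pip install rdflib
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF

EHR_A = Namespace("http://example.org/hospitalA/")
EHR_B = Namespace("http://example.org/hospitalB/")

g = Graph()
# The same real-world patient, described by two facilities' models.
g.add((EHR_A.patient123, RDF.type, EHR_A.Patient))
g.add((EHR_A.patient123, EHR_A.bloodPressure, Literal("120/80")))
g.add((EHR_B.p987, RDF.type, EHR_B.Record))
g.add((EHR_B.p987, EHR_B.allergy, Literal("penicillin")))

# Assert that the two identifiers denote the same patient.
g.add((EHR_A.patient123, OWL.sameAs, EHR_B.p987))

# One SPARQL query walks the sameAs link and reads both vocabularies.
q = """
SELECT ?bp ?allergy WHERE {
  ?a <http://example.org/hospitalA/bloodPressure> ?bp ;
     <http://www.w3.org/2002/07/owl#sameAs> ?b .
  ?b <http://example.org/hospitalB/allergy> ?allergy .
}
"""
for bp, allergy in g.query(q):
    print(bp, allergy)
```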
14

Preservation Through Re-Contextualization

Olson, Andrea E. 01 January 2009 (has links) (PDF)
Sustainable development practices and historic preservation efforts are imbued with contradictions, overlaps and shortcomings. Adaptive reuse is a tool for the sustainable preservation of existing building stock that bridges these approaches and more appropriately addresses the values of time, energy, place and community with respect to the built environment. Destruction of both material and abstract qualities can be circumvented by actively engaging a site, landscape or context through revealing and crossbreeding complex patterns, traces and perspectives. The value of a datascape is optimized when such a re-contextualization consists of both additive and subtractive manipulations and is flexible, continuous and regenerative. To avoid demolition and severed connections to the past, and to extend the potential success of the development of the former Belchertown State School for the Feeble Minded in Belchertown, Massachusetts, I investigated ways in which the existing Auditorium Building and its relationship to the site could be re-contextualized. This defunct state-operated facility has been closed since 1992, transferred to the town and considered for economic development. Within the one-hundred-fifty-five-acre parcel that remains to be developed there are approximately sixty acres of forested areas and wetlands, a freshwater pond, and numerous abandoned school buildings in poor condition. The Auditorium Building, centrally located within the buildable area of the state school parcel, acted as a gateway into the campus and historically served as a gathering, performing and learning space for both school and Belchertown residents. In conjunction with precedent and programmatic research, I mapped patterns of State School site data that included not only existing, visible data but also data that are historical, potential and invisible. The interpretation of these vectors, connections and boundaries served as a framework for re-contextualization and aimed to identify contextual attributes that require preservation, accretion or removal. The grafting of this data onto the Auditorium Building and its surroundings exposed and affected various patterns of behavior that ultimately impacted its form, program and relationship to the landscape.
15

The Use of High Altitude Photography As An Improved Data Source For Drainage System Analysis

Edwards, Peter 10 1900 (has links)
Studies to date involving the network properties of drainage systems have been theoretical in nature, and the environmental implications of these network characteristics have not been exploited to the extent that would appear warranted. This situation exists due to the lack of an accurate data source. Many studies have recognized this inadequacy of the conventional data sources to meet the necessary requirements of efficiency (in data production and handling), accuracy, consistency and uniformity.

The present study demonstrates that high altitude, small scale colour infrared photography is capable of providing drainage network data that fulfill all these basic requirements. Data derived from the three drainage basins, mapped from a variety of data sources, demonstrate three important points. First, the level of detail obtained from the small scale colour infrared photography far exceeds that available from more traditional data sources. Secondly, these network data are statistically consistent with the traditional data sources. Thirdly, the basin characteristics derived from the high altitude data source show a marked association with the known surficial environments and an expected variation from one surficial environment to another. / Thesis / Master of Arts (MA)
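The abstract does not name the specific network properties studied, but a standard measure in drainage-network analysis is Strahler stream order. The following is a hedged sketch of computing it over a small network; the encoding of the network as a dict of upstream tributaries is invented for the example.

```python
# Illustrative sketch of one common drainage-network property, Strahler
# stream order (the abstract does not name its specific measures). The
# network maps each stream link to its upstream tributaries; headwater
# links have no tributaries and get order 1.
def strahler(link, tributaries):
    ups = tributaries.get(link, [])
    if not ups:
        return 1
    orders = sorted((strahler(u, tributaries) for u in ups), reverse=True)
    # Two tributaries of equal highest order raise the order by one;
    # otherwise the highest tributary order is inherited.
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# Toy basin: two first-order headwaters join into 'b'; 'b' meets another
# headwater 'c' at the outlet.
net = {"outlet": ["b", "c"], "b": ["h1", "h2"]}
print(strahler("outlet", net))  # -> 2
```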
16

An approach to automate the adaptor software generation for tool integration in Application/Product Lifecycle Management tool chains

Singh, Shikhar January 2016 (has links)
An emerging problem in organisations is that a large number of tools store data and must communicate with each other frequently throughout the process of application or product development. However, no means of communication exists that does not require the intervention of a central entity (usually a server) or the storage of schemas in a central repository. Accessing data across tools and linking them is difficult and resource-intensive. As part of the thesis, we develop a piece of software (referred to as an 'adaptor'), which, when implemented in lifecycle management systems, integrates data seamlessly. This eliminates the need to store database schemas in a central repository and makes the process of accessing data within tools less resource-intensive. The adaptor acts as a wrapper around the tools and allows them to communicate directly with each other and exchange data. When the developed adaptor is used to communicate data between tools, the data in the relational databases is first converted into RDF format and is then sent or received. Hence, RDF forms the crucial underlying concept on which the software is based. The Resource Description Framework (RDF) provides data integration irrespective of underlying schemas by treating data as resources and representing them as URIs. RDF is a data model used for the exchange and communication of data on the Internet, and it can be applied to other real-world problems such as tool integration and the automation of communication between relational databases. However, developing this adaptor for every tool requires understanding the individual schema and structure of each tool's database, which demands considerable effort from the adaptor's developer. The main aim of the thesis is therefore to automate the development of such adaptors. With this automation, no one needs to manually assess a database and then develop an adaptor specific to it. Such adaptors and concepts can be used to implement similar solutions in other organisations faced with similar problems. In the end, the output of the thesis is an approach that automates the process of generating these adaptors.
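The thesis's generated adaptor is not shown in the abstract; as an illustration of the underlying conversion it describes, the sketch below turns rows of a relational table into RDF triples in the spirit of the W3C Direct Mapping, using rdflib. The table, data, and namespace are invented for the example.

```python
# Illustrative sketch (not the thesis's adaptor) of the core conversion:
# read a relational table and emit its rows as RDF triples, in the spirit
# of the W3C Direct Mapping. Requires: pip install rdflib
import sqlite3
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/toolA/")

# A toy relational source standing in for one tool's database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO parts VALUES (1, 'wing rib'), (2, 'spar')")

g = Graph()
cur = db.execute("SELECT * FROM parts")
columns = [d[0] for d in cur.description]
for row in cur:
    # Row URI built from table name + primary key; one triple per column.
    subject = EX[f"parts/{row[0]}"]
    for col, value in zip(columns, row):
        g.add((subject, EX[col], Literal(value)))

print(g.serialize(format="turtle"))
```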
17

Order-sensitive XML Query Processing Over Relational Sources

Murphy, Brian R 05 May 2003 (has links)
XML is an emerging standard format for data on the Web as well as in business applications. In order to store and access this information in an efficient manner, database technology must be utilized. A relational database system, the most established and mature technology for query processing and storage, creates a strong foundation for such an XML data management system. However, while relational databases are based on SQL queries, the original user queries are written in XQuery, an XML query language. This XML query language has support for order-sensitive queries as XML is an order-sensitive markup language. A major problem has been discovered with loading XML in a relational database. That problem is the lack of native SQL support for and management of order handling. While XQuery has order and positional support, SQL does not have the same support. For example, individuals who were viewing XML information about music albums would have a hard time querying for the first three songs of a track list from a relational backend. Mapping XML documents to relational backends also proves hard as the data models (hierarchical elements versus flat tables) are so different. For these reasons, and other purposes, the Rainbow System is being developed at WPI as a system that bridges XML data and relational data. This thesis in particular deals with the algebra operators that affect order, order sensitive loading and mapping of XML documents, and the pushdown of order handling into SQL-capable query engines. The contributions of the thesis are the order-sensitive rewrite rules, new XML to relational mappings with different order styles, order-sensitive template-driven SQL generation, and a proposed metadata table for order-sensitive information. A system that implements these proposed techniques with XQuery as the XML query language and Oracle as the backend relational storage system has been developed. Experiments were created to measure execution time based on various factors. First, scalability of the system as backend data set size grows is studied. Second, scalability of the system as results returned from the database grows, and finally, query execution times with different loading types are explored. The experimental results are encouraging. Query execution with the relational backend proves to be much faster than native execution within the Rainbow system. These results confirm the practical utility of our proposed order-sensitive XQuery execution solution over relational data.
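The Rainbow system's actual schema is not given in the abstract, so the following is only a sketch of the general technique it relies on: recording each element's sibling position in a column so that XQuery positional predicates can be pushed down into SQL, using the abstract's own music-album example. The schema and names are illustrative.

```python
# Minimal sketch of order-sensitive XML shredding into a relation: each
# element row carries an explicit sibling position, so document order can
# be pushed down into SQL (the abstract's "first three songs" example).
# Schema and names are illustrative, not the Rainbow system's design.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE track (
    album_id INTEGER,
    pos      INTEGER,   -- sibling position = document order of <track>
    title    TEXT
)""")
db.executemany(
    "INSERT INTO track VALUES (?, ?, ?)",
    [(1, 1, "Overture"), (1, 2, "Aria"), (1, 3, "Finale"), (1, 4, "Encore")],
)

# An XQuery positional predicate such as track[position() <= 3]
# translates to an ORDER BY on the stored position plus a LIMIT.
rows = db.execute(
    "SELECT title FROM track WHERE album_id = ? ORDER BY pos LIMIT 3", (1,)
).fetchall()
print([t for (t,) in rows])  # ['Overture', 'Aria', 'Finale']
```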
