11

A flexible approach for mapping between object-oriented databases and XML: a two-way method based on an object graph

Naser, Taher A.J. January 2011 (has links)
One of the most prominent challenges facing academia and industry is the development of effective techniques and tools for maximizing the availability of data as the most valuable source of knowledge. The Internet has become the core channel for maximizing data availability, and XML (eXtensible Markup Language) has emerged and is gradually being accepted as the universal standard format for platform-independent publishing and data exchange over the Internet. At the same time, large amounts of data remain held in structured databases, and database management systems have traditionally been used for the effective storage and manipulation of large volumes of data. This raises the need for effective methodologies capable of smoothly transforming data between different formats in general, and between XML and structured databases in particular. This dissertation addresses the issue by proposing a two-way mapping approach between XML and object-oriented databases. The basic steps of the proposed approach are applied systematically to produce a graph from the source and then transform the graph into the destination format. In other words, the derived graph summarizes the characteristics of the source, whether XML (elements and attributes) or an object-oriented database (classes, inheritance and nesting hierarchies). The methodology then classifies nodes and links from the graph into the basic constructs of the destination, i.e., elements and attributes for XML, or classes, inheritance and nesting hierarchies for object-oriented databases. The methodology has been successfully implemented, and illustrative case studies are presented in this document.
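The graph-based mapping the abstract describes can be sketched roughly as follows. This is a minimal illustration under assumed simplifications (a toy XML document, a plain node/edge representation, leaf nodes treated as attributes), not the author's actual implementation:

```python
import xml.etree.ElementTree as ET

def xml_to_graph(xml_text):
    """Build a simple graph from XML: elements and attributes become
    nodes, nesting and attribute ownership become edges."""
    root = ET.fromstring(xml_text)
    nodes, edges = [], []

    def visit(elem, parent=None):
        nodes.append(elem.tag)
        if parent is not None:
            edges.append((parent, elem.tag))
        for name in elem.attrib:          # attributes become leaf nodes
            nodes.append(name)
            edges.append((elem.tag, name))
        for child in elem:
            visit(child, elem.tag)

    visit(root)
    return nodes, edges

def graph_to_classes(nodes, edges):
    """Classify graph nodes into destination constructs: nodes with
    outgoing edges become classes, their targets become members."""
    parents = {src for src, _ in edges}
    classes = {}
    for src, dst in edges:
        if src in parents:
            classes.setdefault(src, []).append(dst)
    return classes

xml_doc = "<library><book isbn='1'><title>Graphs</title></book></library>"
nodes, edges = xml_to_graph(xml_doc)
print(graph_to_classes(nodes, edges))
```

Here `library` and `book` are classified as classes (they have outgoing edges), while `isbn` and `title` become their attributes; the real method additionally tracks inheritance and nesting hierarchies for the reverse direction.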
12

Knowledge Graph Creation and Software Testing

Kyasa, Aishwarya January 2023 (has links)
Background: With burgeoning volumes of data, efficient data transformation techniques are crucial. The RDF Mapping Language (RML) is recognized as a conventional method, while the IKEA Knowledge Graph approach brings a new perspective with tailored functions and schema definitions. Objectives: This study compares the efficiency and effectiveness of the RDF Mapping Language (RML) and IKEA Knowledge Graph (IKG) approaches in transforming JSON data into RDF format. It explores their performance across different complexity levels to provide insights into their strengths and limitations. Methods: We began by studying, through a literature review, how professionals in industry currently transform JSON data into Resource Description Framework (RDF) formats. We then conducted practical experiments comparing the RML and IKG approaches at various complexity levels, assessing user-friendliness, adaptability, execution time, and overall performance. This combined approach aimed to connect theoretical knowledge with experimental data transformation practice. Results: The results demonstrate the superiority of the IKG approach, particularly in intricate scenarios involving conditional mapping and external graph data lookup, and showcase its versatility and efficiency in managing diverse data transformation tasks. Conclusions: Through practical experimentation and thorough analysis, this study concludes that the IKG approach handles complex data transformations better than the RML approach. This research provides valuable insights for choosing an optimal data transformation approach based on the specific task complexities and requirements.
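The core task being compared — mapping JSON records to RDF triples — can be sketched with the standard library alone. This is a hedged illustration of the general JSON-to-RDF idea, not of RML or the IKG tooling themselves; the `id` convention and base URI are assumptions for the example:

```python
import json

def json_to_triples(json_text, base="http://example.org/"):
    """Map a flat JSON object to (subject, predicate, object) triples.
    The 'id' key is taken as the subject; every other key becomes a
    predicate. Real RML or IKG mappings also handle nested structures,
    conditional mapping and datatype annotations."""
    record = json.loads(json_text)
    subject = f"<{base}{record['id']}>"
    triples = []
    for key, value in record.items():
        if key == "id":
            continue
        triples.append((subject, f"<{base}{key}>", f'"{value}"'))
    return triples

doc = '{"id": "p1", "name": "Shelf", "colour": "oak"}'
for s, p, o in json_to_triples(doc):
    print(s, p, o, ".")   # N-Triples-style output
```

The complexity levels the study varies correspond to how much logic sits inside this mapping step — flat key/value renaming is trivial, while conditional mapping and lookups into an external graph require the richer machinery the thesis evaluates.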
13

Remote access capability embedded in linked data using bi-directional transformation: issues and simulation

Malik, K.R., Farhan, M., Habib, M.A., Khalid, S., Ahmad, M., Ghafir, Ibrahim 24 January 2020 (has links)
No / Many datasets are available in the form of conventional databases or simplified comma-separated values. Machines do not handle these types of unstructured data adequately, and there are compatibility issues that are not well addressed during transformation. The literature describes several rigid techniques for transforming unstructured or conventional data sources to the Resource Description Framework (RDF), with data loss and limited customization, and these techniques offer no remote mechanism that avoids compatibility issues when these data forms are used simultaneously. In this article, a new approach is introduced that allows data mapping; the mapping can be used to understand the differences between the formats at the level of data representation. The mapping uses Extensible Markup Language (XML) data structures as an intermediate data representation. The approach also allows bi-directional data transformation between a conventional data format and RDF without data loss and with improved remote availability of data, solving the update problem when any change occurs in the remote environment. Thus, traditional systems can easily be transformed into Semantic Web-based systems, and the same holds when transforming data back to the conventional format, i.e., a database (DB). This bi-directional transformation incurs no data loss, which creates compatibility between the traditional and semantic forms of the data and allows inference and reasoning to be applied to conventional systems. A census unemployment dataset collected from different US states is used; remote bi-directional transformation is mapped onto the dataset, and linkage is developed using relationships between data elements. This approach helps both types of data formats co-exist at the same time, creating opportunities for data compatibility, statistical power and inference on linked data found in remote areas.
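The lossless round trip through an XML intermediate that the abstract claims can be illustrated minimally. This is a sketch under assumed simplifications (one flat row, string-valued columns, a hypothetical `census` table name), not the authors' system:

```python
import xml.etree.ElementTree as ET

def row_to_xml(table, row):
    """Forward direction: render a DB row as an XML element, the
    intermediate representation used for the DB-to-RDF mapping."""
    elem = ET.Element(table)
    for column, value in row.items():
        child = ET.SubElement(elem, column)
        child.text = str(value)
    return elem

def xml_to_row(elem):
    """Reverse direction: recover the row from the XML element, so the
    transformation is lossless in both directions."""
    return {child.tag: child.text for child in elem}

row = {"state": "Ohio", "unemployment_rate": "4.2"}
xml_elem = row_to_xml("census", row)
assert xml_to_row(xml_elem) == row   # round trip with no data loss
print(ET.tostring(xml_elem, encoding="unicode"))
```

From the XML intermediate, an RDF serializer can emit triples in one direction and a SQL loader can rebuild tables in the other, which is what makes the simultaneous co-existence of both forms possible.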
14

Data Mining from Multiple Sources and Structurization

Barauskas, Antanas 19 June 2014 (has links)
The aim of this work is to create an ETL (Extract-Transform-Load) system that extracts data from different types of data sources, transforms the extracted data appropriately and only then loads it into the selected place of storage. The main techniques of data extraction and the most popular ETL tools available today have been analyzed. An architectural solution based on cloud computing, as well as a prototype of the system for extracting data from multiple sources and structuring it into a unified format, has been created. Unlike traditional data-storing systems, the proposed system extracts data only when it is needed for analysis. The graph database employed for data storage stores not only the data but also information about the relations between entities. Structure: 48 pages, 19 figures, 10 tables and 30 references.
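The extract-transform-load flow described above can be sketched as follows. A minimal illustration assuming two toy source types (CSV and JSON) and a dictionary standing in for the graph store; the record layout and field names are invented for the example:

```python
import csv
import io
import json

def extract(source):
    """Extract records from heterogeneous sources (CSV or JSON here);
    invoked on demand rather than continuously harvesting."""
    if source["type"] == "csv":
        return list(csv.DictReader(io.StringIO(source["data"])))
    return json.loads(source["data"])

def transform(records):
    """Normalize all records to one unified format (lowercase keys)."""
    return [{k.lower(): v for k, v in r.items()} for r in records]

def load(records, store):
    """Load into a simple graph-like store keyed by record id."""
    for r in records:
        store[r["id"]] = r
    return store

sources = [
    {"type": "csv", "data": "ID,Name\n1,Ada\n"},
    {"type": "json", "data": '[{"ID": "2", "Name": "Alan"}]'},
]
store = {}
for src in sources:
    load(transform(extract(src)), store)
print(store)
```

A real graph database would additionally store edges between the loaded entities, which is the relationship information the thesis highlights as the advantage over flat storage.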
16

Presentation layer and interface for a system for information integration and search

Hladík, Tomáš January 2016 (has links)
The aim of this thesis is to analyse, design and implement a system for presenting business objects. These objects are accessible through an existing RESTful API of an integration system that combines various sources and publishes them together with information about how the data are stored. The task of the thesis is to create data views that meet the following requirements: they react to metadata changes, they support multiple output formats, and they can be customized. The application should be easy to modify and to connect to existing web systems.
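A metadata-driven view of this kind can be sketched briefly. This is an assumed illustration — the `render` function, the `fields` metadata key and the `product` object are invented for the example, not taken from the thesis:

```python
import json

def render(obj, metadata, fmt):
    """Render a business object according to view metadata; the metadata
    lists the visible fields, so views react to metadata changes
    without code changes, and one view supports several output formats."""
    visible = {k: v for k, v in obj.items() if k in metadata["fields"]}
    if fmt == "json":
        return json.dumps(visible, sort_keys=True)
    if fmt == "text":
        return "\n".join(f"{k}: {v}" for k, v in sorted(visible.items()))
    raise ValueError(f"unsupported format: {fmt}")

product = {"id": 7, "name": "Widget", "internal_cost": 3.5}
meta = {"fields": ["id", "name"]}   # hide internal_cost from this view
print(render(product, meta, "json"))
```

Changing `meta["fields"]` reconfigures the view without touching the rendering code, which is the customizability requirement the thesis states.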
17

Mathematical Models of Process Capability

Horník, Petr January 2015 (has links)
First, we deal with the verification of normality and the other prerequisites needed in this thesis. We also introduce transformations for converting non-normally distributed data to normal and then continue with capability analysis. We describe the design of control charts, useful tools for assessing process stability; they help us eliminate assignable causes so that only chance causes remain and the process reaches a state of statistical control. Finally, we introduce both capability and performance ratios for normal and non-normal data and analyse some of their properties. At the end of the thesis, we apply the acquired knowledge by performing a capability analysis of a real process.
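The standard capability ratios for normal, in-control data can be computed directly from their definitions, Cp = (USL − LSL) / 6σ and Cpk = min(USL − μ, μ − LSL) / 3σ. A minimal sketch with simulated measurements and assumed specification limits:

```python
import statistics

def capability_indices(data, lsl, usl):
    """Compute the capability ratios Cp and Cpk for normally
    distributed, in-control data against lower/upper spec limits."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)          # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)          # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # accounts for centring
    return cp, cpk

# simulated in-control measurements with spec limits 9.0 .. 11.0
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
cp, cpk = capability_indices(data, lsl=9.0, usl=11.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

Cp and Cpk coincide when the process is centred between the limits, as here; a shifted mean lowers Cpk while leaving Cp unchanged, which is why both ratios are reported.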
18

Student Experiences of Participation in Tracked Classes Throughout High School: The Ethic of Justice, School Leadership, and Curriculum Design

Falkenstein, Robert N. 02 November 2007 (has links)
No description available.
19

Adaptive data models in design

Pliuskuvienė, Birutė 27 June 2008 (has links)
The dissertation deals with the adaptivity problems of the tools implementing solutions to applied problems whose data is expressed as relational sets. The main objects of research are adaptive data models: a data selection model, a data aggregation model and a model for designing data processing. The aim of the work is to create an adaptive technology for designing data processing that enables data selection, aggregation and processing to be performed by changing only the parameters of the formal expressions of the adaptive data models that form the technology. Using the technology, the same data processing principle is applied to solve different problems; in other words, the whole system of algorithms and the program modules implementing them can be adjusted to solve different applied problems. This reduces the volume and cost of developing new software.
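The parameterized selection and aggregation models can be sketched over relational sets represented as lists of dictionaries. This is an assumed illustration of the general idea — the function names and the `orders` relation are invented, not the dissertation's formal expressions:

```python
def select(relation, predicate):
    """Data selection model: the predicate is the only changeable
    parameter; the selection machinery itself never changes."""
    return [row for row in relation if predicate(row)]

def aggregate(relation, key, value, combine):
    """Data aggregation model: grouping key, value extractor and
    combining function are the parameters."""
    groups = {}
    for row in relation:
        groups.setdefault(key(row), []).append(value(row))
    return {k: combine(v) for k, v in groups.items()}

orders = [
    {"region": "north", "amount": 10},
    {"region": "south", "amount": 5},
    {"region": "north", "amount": 7},
]
big = select(orders, lambda r: r["amount"] > 6)
totals = aggregate(orders, key=lambda r: r["region"],
                   value=lambda r: r["amount"], combine=sum)
print(big, totals)
```

Solving a different applied problem means supplying different predicates, keys and combiners; the algorithms and modules themselves stay fixed, which is the adaptivity claim of the work.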
20

Adaptive data models in design

Pliuskuvienė, Birutė 27 June 2008 (has links)
The dissertation examines the adaptation problem of software whose instability is caused by changes in the contents and structure of primary data and in the algorithms implementing solutions to problems of an applied nature. The solution to the problem is based on a methodology of adapting models for data expressed as relational sets.
