61

OWL transformavimas į reliacinių duomenų bazių schemas / Transformation of OWL to Relational Database Schemas

Petrikas, Giedrius 26 August 2010 (has links)
Ontology descriptions are typically used in the Semantic Web (Web 2.0), but nowadays they are finding more and more application in everyday information systems. A well-formed ontology must have correct syntax and an unambiguous, machine-understandable interpretation, so that it can clearly define the fundamental concepts and relationships of the problem domain. Ontologies are increasingly used in many applications: business process and information integration, search and navigation. Such applications require scalability and performance, efficient storage, and manipulation of large-scale ontological data. In such circumstances, storing ontologies in relational databases is becoming a relevant need for the Semantic Web and for enterprises. Semantic Web languages are dedicated to ontology development: the Resource Description Framework (RDF) with its schema language RDFS, and the Web Ontology Language (OWL), which consists of three sublanguages – OWL Lite, OWL Description Logic (DL) and OWL Full. When ontology-based systems grow in scope and volume, the reasoners of expert systems become unsuitable. In this work an algorithm is proposed that fully automatically transforms ontologies represented in OWL to RDB schemas. Some concepts, e.g. ontology classes and properties, are mapped to relational tables, relations and attributes; others (constraints) are stored as metadata in special tables. Using both the direct mapping and the metadata, it is possible to obtain appropriate relational structures and not to lose the... [to full text]
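The abstract only names the class/property-to-table mapping; the following is a minimal illustrative sketch of that general idea (hypothetical names, not the thesis's actual algorithm), in which an OWL class with a datatype property becomes a table and a cardinality restriction is kept as metadata:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Direct mapping: the class :Person becomes a table and the datatype
    # property :hasAge becomes a column.
    cur.execute("""
        CREATE TABLE Person (
            id     INTEGER PRIMARY KEY,
            hasAge INTEGER
        )
    """)

    # Constraints without a direct relational counterpart are preserved as
    # rows in a generic metadata table.
    cur.execute("""
        CREATE TABLE owl_metadata (
            subject         TEXT,  -- ontology element the constraint applies to
            constraint_type TEXT,
            value           TEXT
        )
    """)
    cur.execute("INSERT INTO owl_metadata VALUES (?, ?, ?)",
                ("Person.hasAge", "maxCardinality", "1"))
    conn.commit()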
62

OWL ontologijų transformavimas į reliacinių duomenų bazių schemas / Transforming OWL Ontologies To Relational Database Schemas

Vyšniauskas, Ernestas 15 January 2007 (has links)
The current work has arisen with respect to the growing importance of ontology modelling in information systems development. Due to emerging Semantic Web technologies, it is desirable to use the Web Ontology Language OWL for this purpose. On the other hand, relational database technology provides the best facilities for storing, updating and manipulating information about the problem domain. This work analyses how an ontology of a particular domain, described in OWL, may be transformed and stored in a relational database. An algorithm for transforming domain ontologies described in OWL into relational databases is proposed. According to this algorithm, ontology classes are mapped to relational tables, properties to relations and attributes, and constraints to metadata. The proposed algorithm is capable of transforming all of OWL Lite and part of the OWL DL syntax. Further expansion of the algorithm to cover more of OWL's capabilities should follow the same principles. A prototype tool performing the transformations has been implemented as an add-in for the ontology development tool Protégé. The transformation methodology is illustrated with an example.
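Complementing the class-to-table idea, the sketch below illustrates (again with hypothetical names, not the Protégé add-in's actual output) how an object property linking two classes could be mapped to a foreign-key relation between the corresponding tables:

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Classes Author and Book become tables; the object property :writtenBy,
    # which links Book to Author, becomes a foreign-key attribute.
    conn.executescript("""
        CREATE TABLE Author (
            id   INTEGER PRIMARY KEY,
            name TEXT
        );
        CREATE TABLE Book (
            id        INTEGER PRIMARY KEY,
            title     TEXT,
            writtenBy INTEGER REFERENCES Author(id)
        );
    """)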
63

Benchmark para métodos de consultas por palavras-chave a bancos de dados relacionais / Benchmark for query methods by keywords to relational databases

Oliveira Filho, Audir da Costa 21 June 2018 (has links)
Keyword query techniques have proven to be very effective due to their user-friendliness on the Web. However, much data is stored in relational databases, and accessing it requires knowledge of a structured query language. In this sense, during the last decade several works have been proposed with the intention of supporting keyword queries over relational databases. However, the systems implementing this approach have been validated using ad hoc methods, with databases that may not reflect real-world workloads. The present work proposes a benchmark for evaluating keyword query methods over relational databases, defining a standardized setup with workloads consistent with the real world. This proposal assists in assessing the effectiveness of current and future systems. The results obtained by applying the benchmark suggest that there are still many gaps to be addressed by keyword query techniques.
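The benchmark itself is not reproduced in the abstract; the sketch below only illustrates, with invented data, the kind of standardized workload and effectiveness measurement such a benchmark implies (a keyword query, its expected answer set, and a simple precision/recall computation):

    # Hypothetical workload entry: a keyword query over a movie database and
    # the set of answers a correct system is expected to return.
    workload = [
        {"keywords": ["spielberg", "1993"],
         "expected": {"Jurassic Park", "Schindler's List"}},
    ]

    def evaluate(system_answers, expected):
        """Precision and recall for a single query result."""
        returned = set(system_answers)
        hits = returned & expected
        precision = len(hits) / len(returned) if returned else 0.0
        recall = len(hits) / len(expected) if expected else 0.0
        return precision, recall

    # A system returning one correct and one spurious answer scores (0.5, 0.5).
    print(evaluate({"Jurassic Park", "Hook"}, workload[0]["expected"]))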
64

Uma técnica para ranqueamento de interpretações SQL oriundas de consultas com palavras-chave / A technique for ranking SQL interpretations from keyword queries

Sousa, Walisson Pereira de 11 December 2017 (has links)
Retrieving information using words from a natural language is a simple and already consolidated way to access data on the Web. It would be highly desirable for a similar method to be usable for querying databases, freeing the user from learning a query language and from knowing the structure of the database being searched. In this sense, a great research effort has been dedicated by the database community to developing more efficient keyword query techniques for database access. However, a keyword query can result in a large number of SQL interpretations, most of them irrelevant to the initial query. This work carries out a study of different techniques for ranking query interpretations and, finally, proposes a ranking methodology that maximizes the number of relevant results for keyword queries submitted to relational databases.
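The proposed ranking methodology is not detailed in the abstract; a minimal sketch of the general idea (scoring candidate SQL interpretations, here by keyword coverage and join-path length, using hypothetical candidates) could look like this:

    def score(interpretation, keywords):
        """Toy ranking: reward keyword coverage, penalize long join paths."""
        covered = sum(1 for k in keywords if k in interpretation["matched"])
        return covered - 0.5 * interpretation["joins"]

    keywords = ["smith", "databases"]
    candidates = [
        {"sql": "SELECT ... FROM author a JOIN paper p ON ...",
         "matched": ["smith", "databases"], "joins": 1},
        {"sql": "SELECT ... FROM author WHERE ...",
         "matched": ["smith"], "joins": 0},
    ]
    for c in sorted(candidates, key=lambda c: score(c, keywords), reverse=True):
        print(round(score(c, keywords), 2), c["sql"])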
65

Object oriented databases : a natural part of object oriented software development?

Carlsson, Anders January 2003 (has links)
The technology of object oriented databases was introduced to system developers in the late 1980s. Despite that, it is rarely used today. This thesis introduces the concept of object oriented databases as a proposed solution to the problems that exist with the use of relational databases. The thesis points to the advantages of storing the application objects in the database without disassembling them to fit a relational data model. Based on those advantages and on the cost of introducing such a rarely used technology into a project, a guideline for when to use object oriented databases and when to use relational databases is given. / anders@actk.net
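As a rough illustration of the advantage pointed out above (persisting an application object without flattening it into relations), the sketch below uses Python's shelve module merely as a stand-in for an object database; a relational design would instead split the same object across several tables:

    import shelve

    class Order:
        def __init__(self, order_id, lines):
            self.order_id = order_id
            self.lines = lines  # nested list of (product, quantity) pairs

    order = Order(42, [("bolt", 100), ("nut", 200)])

    # Object-database style: the whole object graph is stored as one unit.
    with shelve.open("/tmp/orders") as db:
        db[str(order.order_id)] = order

    # The relational alternative would disassemble the object into an orders
    # table and an order_lines table, and reassemble it on every read.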
66

A comparison of latency for MongoDB and PostgreSQL with a focus on analysis of source code

Lindvall, Josefin, Sturesson, Adam January 2021 (has links)
The purpose of this paper is to clarify the differences in latency between PostgreSQL and MongoDB as a consequence of their differences in software architecture. This has been achieved through benchmarking of Insert, Read and Update operations with the tool “Yahoo! Cloud Serving Benchmark”, and through source code analysis of both database management systems (DBMSs). The overall structure of the architecture has been researched with Big O notation as a tool to examine the complexity of the source code. The results from the benchmarking show that the latency for Insert and Update operations was lower for MongoDB, while the latency for Read was lower for PostgreSQL. The results from the source code analysis show that both DBMSs have a complexity of O(n), but that there are multiple differences in their software architecture affecting latency. The most important difference was the length of the parsing process, which was greater for PostgreSQL. The conclusion is that there are significant differences in latency and source code and that room exists for further research in the field. The biggest limitation of the experiment consists of factors such as background processes which affected latency and could not be eliminated, resulting in low validity.
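The YCSB workloads themselves are not included in the abstract; the following sketch only illustrates the kind of per-operation latency measurement such a comparison relies on (insert_record is a hypothetical stand-in for a driver call to either DBMS):

    import statistics
    import time

    def insert_record(record):
        # Hypothetical stand-in for a MongoDB or PostgreSQL driver call.
        pass

    latencies = []
    for i in range(1000):
        start = time.perf_counter()
        insert_record({"id": i, "payload": "x" * 100})
        latencies.append((time.perf_counter() - start) * 1000)  # milliseconds

    print("mean latency (ms):", statistics.mean(latencies))
    print("95th percentile (ms):", statistics.quantiles(latencies, n=20)[18])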
67

Návrh a implementace programu pro převod UML struktur do programovacího jazyka / Design and implementation of program for transformation of UML structures to programming language

Minářová, Alice January 2017 (has links)
This work deals with the problem of converting UML diagrams to code. Initially, existing solutions in this field are analyzed. Based on these findings, a new tool is designed and implemented. It accepts UML class diagrams and database models from the free diagramming environment Dia and offers three target languages for each of these. The key feature of this tool is its modular design, which allows new target languages to be added easily.
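The tool itself is not shown in the abstract; a minimal sketch of the underlying idea (an in-memory description of one UML class turned into source code by a pluggable back end, with hypothetical names) could look like:

    # Hypothetical intermediate representation of one class from a Dia diagram.
    uml_class = {
        "name": "Customer",
        "attributes": [("name", "str"), ("balance", "float")],
    }

    def generate_python(cls):
        """One pluggable back end; other target languages would be separate modules."""
        lines = [f"class {cls['name']}:"]
        params = ", ".join(f"{attr}: {typ}" for attr, typ in cls["attributes"])
        lines.append(f"    def __init__(self, {params}):")
        for attr, _ in cls["attributes"]:
            lines.append(f"        self.{attr} = {attr}")
        return "\n".join(lines)

    print(generate_python(uml_class))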
68

Úložiště Gentoo Portage jako souborový systém založený na relační databázi / A File System Implementing Storage for Gentoo Portage Based on a Relational Database

Štulpa, Adam January 2010 (has links)
The thesis deals with the implementation of a program which, through the use of the FUSE library, makes data stored in a relational database accessible in the same way as the classical Gentoo Portage storage. First, the reader of the thesis becomes familiar with the FUSE library itself. After that, a new data model is built based on an analysis of the Portage structure. The new model especially emphasizes package dependencies. The key implementation issues are described in the next part. Finally, the thesis assesses the outcomes achieved, including a comparison with the standard Portage implementation and a classical file system. Further possibilities for developing the project are considered as well.
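The thesis's data model is not reproduced in the abstract; as an illustrative sketch of a dependency-centric relational model (hypothetical table and column names), packages and their dependencies could be kept in two tables that the FUSE layer then exposes as a directory tree:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE package (
            id       INTEGER PRIMARY KEY,
            category TEXT,   -- e.g. 'dev-db'
            name     TEXT,
            version  TEXT
        );
        CREATE TABLE dependency (
            package_id    INTEGER REFERENCES package(id),
            depends_on_id INTEGER REFERENCES package(id)
        );
    """)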
69

Prestanda- och användbarhetsanalys av decentraliserad ledger-teknik utvecklad med antingen SQL eller Blockkedja / Performance and utility analysis of decentralized ledger technology developed using either SQL or Blockchain

Asbai, Ali January 2022 (has links)
B-SPORT+ is a project that was interested in developing an application with advice and guidance on physical exercise adapted for people with disabilities. B-SPORT+ identified the need for a decentralized ledger in the application, i.e. a register that stores data on transactions performed in the application. In previous work on the application, Blockchain was highlighted as a possible solution. However, B-SPORT+ found that this technology has disadvantages such as high energy consumption and expensive implementation. Therefore, this work investigated, developed and evaluated an alternative to Blockchain using relational databases. The result was two prototypes. The first prototype mimicked Blockchain technology by horizontally fragmenting a relational database that stored a table of performed transactions; cryptographic hashing was then used to validate transactions between database fragments. A second prototype was developed using Blockchain technology and was used to evaluate the first. The evaluation showed that the structure of the SQL prototype reduced memory utilization on user computers and reduced the energy consumption and time of transactions. This structure also allowed moderation of data in the ledger, which was vital for the application B-SPORT+ wanted to develop.
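The prototype's validation scheme is only named above; as a rough sketch of the underlying idea (not the B-SPORT+ prototype itself), each stored transaction can carry a hash over its content and the previous transaction's hash, so that any fragment holding a copy of the rows can recompute the chain and detect tampering:

    import hashlib

    def chain_hash(prev_hash, transaction):
        """Hash a transaction together with the hash of the transaction before it."""
        payload = prev_hash + "|" + repr(sorted(transaction.items()))
        return hashlib.sha256(payload.encode()).hexdigest()

    ledger = []
    prev = "genesis"
    for tx in [{"sender": "a", "receiver": "b", "amount": 5},
               {"sender": "b", "receiver": "c", "amount": 2}]:
        prev = chain_hash(prev, tx)
        ledger.append({**tx, "hash": prev})

    # Comparing the final hash across fragments reveals any modified row.
    print(ledger[-1]["hash"])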
70

Data storage for a small lumber processing company in Sweden

Bäcklund, Simon, Ljungdahl, Albin January 2021 (has links)
The world is becoming increasingly digitized, and with this trend comes an increasing need for storing data for companies of all sizes. For smaller enterprises, this could prove to be a major challenge due to limitations in knowledge and financial assets. So the purpose of this study is to investigate how smaller companies can satisfy their needs for data storage and which database management system to use in order to not let their shortcomings hold their development and growth back. To fulfill this purpose, a small wood processing company in Sweden is examined and used as an example. To investigate and answer the problem, literary research is conducted to gain knowledge about data storage and the different options for this that exist. Microsoft Access, MySQL, and MongoDB are selected for evaluation and their performance is compared in controlled experiments. The results of this study indicate that, due to the small amount of data that the example company possesses, the simplicity of Microsoft Access trumps the high performance of its competitors. However, with increasingly developed internet infrastructure, the option of hosting a database in the cloud has become a feasible option. If hosting the database in the cloud is the desired solution, Microsoft Access has a higher operating cost than the other alternatives, making MySQL come out on top.
