  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

ProxStor : flexible scalable proximity data storage & analysis

Giannoules, James Peter 17 February 2015 (has links)
ProxStor is a cloud-based system for storing and querying human proximity data, taking advantage of both the near ubiquity of mobile devices and the growing digital infrastructure in our everyday physical world, commonly referred to as the Internet of Things (IoT). This combination gives mobile devices the opportunity to identify when they enter and leave the proximity of a space based on this uniquely identifying infrastructure information. ProxStor provides a low-overhead interface for storing these proximity events while also offering search and query capabilities that enable a richer class of location-aware applications. ProxStor scales up to store and manage more than one billion objects, and its design allows future horizontal scaling across multiple cooperating systems to support even more. A single seamless web interface is presented to client systems. More than 18 popular graph database systems are supported behind ProxStor. Performance benchmarks run on the Neo4j and OrientDB graph database systems are compared to determine the feasibility of the design.
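The enter/leave event model described above can be sketched in plain Python. The class and method names here are illustrative assumptions, not ProxStor's actual interface:

```python
from collections import defaultdict

# Hypothetical sketch: store proximity events as
# (user, enter_ts, leave_ts) records per location and answer
# "who was in a location during a time window" queries.
class ProximityStore:
    def __init__(self):
        self.events = defaultdict(list)  # location -> [(user, enter, leave)]

    def record(self, user, location, enter_ts, leave_ts):
        self.events[location].append((user, enter_ts, leave_ts))

    def present_during(self, location, start, end):
        # a stay overlaps [start, end] iff it begins before the window
        # ends and ends after the window begins
        return sorted({u for u, e, l in self.events[location]
                       if e <= end and l >= start})

store = ProximityStore()
store.record("alice", "lab-101", 10, 50)
store.record("bob", "lab-101", 60, 90)
store.record("carol", "lab-101", 40, 70)
print(store.present_during("lab-101", 45, 65))
# ['alice', 'bob', 'carol']
```

In a graph-backed deployment the users and locations would be vertices and each stay an edge, which is what lets the design swap among many graph database systems behind one interface.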
12

Aplikace grafové databáze na analytické úlohy / Application of graph database for analytical tasks

Günzl, Richard January 2014 (has links)
This diploma thesis is about graph databases, which belong to the category of database systems known as NoSQL databases, although their usefulness extends beyond that category. Graph databases are useful in many cases thanks to their native storage of the interconnections between data, which brings advantageous properties compared with traditional relational database systems, especially in querying. The main goals of the thesis are: to describe the principles, properties and advantages of graph databases; to design a suitable graph database use case; and to build a template verifying the designed use case. The theoretical part focuses on the properties and principles of graph databases, which are then compared with the relational database approach. The next part analyses and explains the most typical use cases of graph databases, including unsuitable ones. The last part of the thesis analyses the author's own graph database use case, in which several independently applicable principles are defined. The core of the use case is a set of analytical operations that search for causes and their rate of influence on the amount of, or change in, the value of an indicator. This part also includes the realization of the template verifying the use case in a graph database. The template consists of the database structure design, the concrete database data and the analytical operations. Finally, the results returned from the graph database are verified by alternative calculations that do not use the graph database.
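The querying advantage mentioned above comes from native adjacency: reachability over an arbitrary number of hops is a single traversal in a graph model, whereas a relational model needs recursive self-joins. A minimal stdlib sketch with invented data:

```python
from collections import deque

# Toy adjacency data standing in for natively stored graph edges.
edges = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

def reachable(start, graph):
    # breadth-first traversal following stored adjacency directly,
    # no joins needed regardless of path length
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(reachable("A", edges)))  # ['B', 'C', 'D', 'E']
```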
13

Integrando banco de dados relacional e orientado a grafos para otimizar consultas com alto grau de indireção / Integrating relational and graph-oriented database to optimize queries with high degree of indirection

Marino Hilario Catarino 10 November 2017 (has links)
An important indicator in the academic area is the degree of impact of a publication, which can help in evaluating the quality and degree of internationalization of academic institutions. One approach to better understand this indicator is to analyze the collaboration network formed by each researcher. Several alternatives for analyzing this network use the well-known relational data model, which is predominant in most databases used today. Although this model is widely used, it suffers a performance drawback for some types of queries. To overcome this drawback, some alternatives use a graph-oriented database model, which closely resembles a collaboration network. However, it is unclear which parameters can be used to decide when to use a relational or a graph-oriented model. In this work, we propose an analysis of queries that, from the syntax of a query and the execution environment, can point to the most suitable data model for executing that query. With this analysis, it is possible to delimit the scenarios in which an integration between the relational and graph-oriented models is most appropriate.
14

Vizualizace rozsáhlých grafových dat na webu / Large Graph Data Visualisation on the Web

Jarůšek, Tomáš January 2020 (has links)
Graph databases provide a form of data storage that is fundamentally different from the relational model. The goal of this thesis is to visualize graph data and determine the maximum volume that current web browsers are able to process at once. For this purpose, an interactive web application was implemented. Data are stored using the RDF (Resource Description Framework) model, which represents them as subject-predicate-object triples. Communication between this database, which runs on the server, and the client is realized via a REST API. The client itself is implemented in JavaScript. Visualization is performed using the HTML canvas element and can be done in different ways by applying three specially designed methods: greedy, greedy-swap and force-directed. The resulting limits were determined primarily by measuring the time complexity of different parts and were heavily influenced by the user's goals. If it is necessary to visualize as much data as possible, 150,000 triples was found to be the limiting volume. If, on the other hand, the goal is maximum quality and application smoothness, the limit does not exceed a few thousand.
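The RDF triple model described above can be illustrated in a few lines: every fact is a (subject, predicate, object) tuple, and queries are pattern matches where a wildcard may stand in any position. The data here is invented for illustration, not the thesis dataset:

```python
# Facts as subject-predicate-object triples.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "kth"),
]

def match(pattern, store):
    # None acts as a wildcard in any of the three positions
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(match(("alice", None, None), triples))
# [('alice', 'knows', 'bob'), ('alice', 'worksAt', 'kth')]
```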
15

Creating a Graph Database from a Set of Documents / Skapandet av en grafdatabas från ett set av dokument

Nikolic, Vladan January 2015 (has links)
In the context of search, it may be advantageous in some use cases to have documents saved in a graph database rather than a document-oriented database. Graph databases can model relationships between objects, in this case documents, in ways that allow for efficient retrieval as well as search queries that are more specific or complex. This report explores the possibilities of storing an existing set of documents in a graph database. A Named Entity Recognizer was used on a set of news articles to extract entities from each article's body of text. News articles that contain the same entities are then connected to each other in the graph. Ideas for improving this entity extraction are also explored. The method of evaluation used in this report proved not to be ideal for the task, in that it gives only a relative measure, not an absolute one. As such, no absolute answer regarding the quality of the method can be given. It is clear that improvements can be made, and the result should be subject to further study.
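The graph-construction step described above can be sketched as follows: after entity extraction, connect any two articles that share at least one entity. The entity sets here are made up for illustration; a real pipeline would produce them with a Named Entity Recognizer:

```python
from itertools import combinations

# Invented per-article entity sets (in practice, NER output).
articles = {
    "a1": {"Stockholm", "Ericsson"},
    "a2": {"Ericsson", "Nokia"},
    "a3": {"Malmö"},
}

def shared_entity_edges(docs):
    # one edge per article pair with a non-empty entity intersection,
    # labelled with the shared entities
    edges = []
    for (d1, e1), (d2, e2) in combinations(sorted(docs.items()), 2):
        common = e1 & e2
        if common:
            edges.append((d1, d2, sorted(common)))
    return edges

print(shared_entity_edges(articles))  # [('a1', 'a2', ['Ericsson'])]
```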
16

Unsupervised Topic Modeling to Improve Stormwater Investigations

Arvidsson, David January 2022 (has links)
Stormwater investigations are an important part of the detail plan that companies and industries must write. The detail plan is used to show that an area is well suited for, among other things, construction. Writing these detail plans is a costly and time-consuming process, and it is not uncommon for them to be rejected. This is because it is difficult to find information about the criteria that must be met and what needs to be addressed within the investigation. This thesis aims to make this problem less ambiguous by applying the topic modeling algorithm LDA (latent Dirichlet allocation) to identify the structure of stormwater investigations. Moreover, sentences that contain words from the topic modeling are extracted to show how each word can be used in the context of writing a stormwater investigation. Finally, a knowledge graph is created from the extracted topics and sentences. The results of this study indicate that topic modeling and NLP (natural language processing) can be used to identify the structure of stormwater investigations, and also to extract useful information that can serve as guidance when learning to write them.
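The LDA step described above can be sketched with scikit-learn (a tooling assumption; the thesis does not name its library). The tiny corpus is illustrative only:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Four toy "documents" standing in for stormwater investigation text.
docs = [
    "stormwater drainage pipe capacity",
    "drainage pipe flow stormwater",
    "permit application detail plan",
    "detail plan permit review",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)          # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)           # per-document topic mixture

# each row is a probability distribution over the 2 topics
print(doc_topics.shape)  # (4, 2)
```

Inspecting the top-weighted words of each component of `lda.components_` is what yields the recurring "topics" that structure the documents.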
17

Recommendation system for job coaches

Söderkvist, Nils January 2021 (has links)
For any unemployed person in Sweden looking for a job, the most common place to turn is the Swedish Public Employment Service, Arbetsförmedlingen, where they can register to get help with the job search process. Occasionally, in order to land an employment, a person might require extra guidance and education. Arbetsförmedlingen outsources this education to external companies called providers, where each person is assigned a coach who can help them achieve employment more quickly. Given the current labour market data, can that data be used to optimize and speed up the job search process? To investigate this, the labour market data was inserted into a graph database, on top of which a recommendation system was built that uses different methods to perform each recommendation. The recommendations can be used by a provider to assist in assigning coaches to newly registered participants, as well as in recommending activities. The performance of each recommendation method was evaluated using a statistical measure. While the user-created methods had acceptable performance, the best-performing recommendation method overall was collaborative filtering. There is, however, clear potential in the user-created methods; with additional testing and tuning they may well outperform collaborative filtering. In addition, expanding the database with more data would also positively affect the recommendations.
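The collaborative-filtering idea mentioned above can be sketched in a few lines: score candidate activities for a participant by how often similar participants (measured by overlap of past activities) took part in them. The data is invented, not Arbetsförmedlingen's:

```python
# Invented participant -> activity history.
history = {
    "p1": {"cv-workshop", "interview-training"},
    "p2": {"cv-workshop", "interview-training", "it-course"},
    "p3": {"forklift-license"},
}

def recommend(target, data):
    own = data[target]
    scores = {}
    for other, acts in data.items():
        if other == target:
            continue
        overlap = len(own & acts)  # similarity = number of shared activities
        for act in acts - own:     # only activities the target hasn't done
            scores[act] = scores.get(act, 0) + overlap
    # keep only activities backed by at least one similar participant
    return [a for a in sorted(scores, key=scores.get, reverse=True)
            if scores[a] > 0]

print(recommend("p1", history))  # ['it-course']
```

In the thesis's setting the same neighbourhood idea would be expressed as a traversal over the graph database rather than over an in-memory dict.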
18

Dynamic constraint handling in mass customization systems : A database solution

Kåhlman, Johan, Spånberger, Josef January 2020 (has links)
Purpose: The purpose of this study is to develop an architecture for a Mass Customization information system that allows product customization restrictions to be expressed dynamically through the database. Method: A study evaluating an artifact made using Design Science Research; the evaluation used both a quantitative and a qualitative method. Findings: Building on a literature review to establish a knowledge base, an artifact was created using React and Node.js for the web application, combined with a Neo4j graph database. The artifact allows products and their inherent restrictions to be dynamically added and modified through constraints defined in their data. The web application can use this data to assemble and customize components, enforcing the constraints in real time without any modification to the application. The artifact could enforce all intended constraints and was considered a better overall solution for handling constraints than the solution currently used by Divid, a market-leading company in the use of Mass Customization systems with constraint handling in the context of ventilation systems. Implications: The results indicate that the use of graph database systems in Mass Customization systems holds great promise, specifically as a new way to handle constraints between components dynamically. Limitations: The results from the expert panel reflect only the opinions of Divid and might not hold for other companies interested in this area. The artifact succeeded in its purpose of illustrating the concept of dynamic constraint handling; however, it is still unclear whether the system holds up in a professional context with more complex rules and heavy performance demands.
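The core idea above, constraints living in the data rather than in application code, can be sketched as follows. The component names and rules are invented; in the artifact the constraints would be edges in the Neo4j graph:

```python
# Constraints stored as data: a new rule is a new entry, with no
# change to the checking code below.
constraints = {
    ("fan-A", "duct-small"): "incompatible",
    ("fan-B", "duct-small"): "requires-adapter",
}

def check(selection):
    # report every stored rule whose component pair is fully present
    # in the customer's current selection
    issues = []
    for pair, rule in constraints.items():
        if set(pair) <= set(selection):
            issues.append((pair, rule))
    return issues

print(check(["fan-A", "duct-small", "filter-X"]))
# [(('fan-A', 'duct-small'), 'incompatible')]
```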
19

Comparison of graph databases and relational databases performance

Asplund, Einar, Sandell, Johan January 2023 (has links)
There has been a paradigm shift in the way information is produced, processed, and consumed as a result of social media. When planning to store data, it is important to choose a database suited to the type of data, as unsuitable storage and analysis can have a noticeable impact on a system's energy consumption. Additionally, effective data analysis is essential, because deficient analysis of a large dataset can lead to repercussions through unsound decisions and inadequate planning. In recent years, a growing number of organizations provide services that can no longer be delivered efficiently using relational databases. An alternative is graph databases, a powerful solution for storing and searching relationship-dense data. The research question the thesis aims to answer is: how do state-of-the-art graph database and relational database technologies compare from a performance perspective, in terms of time taken to query, CPU usage, memory usage, power usage, and temperature of the server? To answer it, an experimental study using analysis of variance was performed. One relational database, MySQL, and two graph databases, ArangoDB and Neo4j, were compared using a benchmark, Novabench. The results of the post-hoc tests, Kruskal-Wallis tests, and analyses of variance show significant differences between the database technologies. The null hypothesis, that there is no significant difference, is therefore rejected, and the alternative hypothesis holds: there is a significant difference in performance between the database technologies in terms of time to query, CPU usage, memory usage, average energy usage, and temperature. In conclusion, the research question was answered: Neo4j was the fastest at executing queries, followed by MySQL, with ArangoDB last. The results also showed that MySQL was more demanding on memory than the other database technologies.
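The statistical step described above can be sketched with SciPy's Kruskal-Wallis test over query-time samples from the three databases. The numbers are invented for illustration, not the thesis's measurements:

```python
from scipy.stats import kruskal

# Invented query-time samples (milliseconds) per database.
neo4j_ms  = [12, 14, 11, 13, 12]
mysql_ms  = [18, 17, 19, 16, 18]
arango_ms = [25, 27, 26, 24, 28]

# Kruskal-Wallis is the rank-based analogue of one-way ANOVA,
# appropriate when normality of the samples cannot be assumed.
stat, p = kruskal(neo4j_ms, mysql_ms, arango_ms)
if p < 0.05:
    print("reject H0: at least one database differs in query time")
```

A significant result only says the groups differ somewhere; the post-hoc tests mentioned above are what identify which pairs of databases differ.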
20

Manufacturing Knowledge Management Using a Virtual Factory-Based Ontology Implemented in a Graph Database

Ghorbani Tajani, Mehran January 2022 (has links)
Ontology-based technologies like the Semantic Web and knowledge graphs are promising for knowledge management in manufacturing industries. The literature contains abundant publications on using ontologies to represent and capture manufacturing knowledge, many of which cover the use of ontologies for managing knowledge in different aspects of Product Lifecycle Management (PLM). Nevertheless, very few cover how ontologies can be used with virtual factory models, data and information, or with the knowledge generated from using these models and their corresponding engineering activities. An extension of existing ontologies is badly needed as digital, virtual models in terms of simulation and digital twins have become more popular in industry. Without such an extended knowledge management process and system, it is difficult to reuse the artefacts and knowledge generated from expensive and valuable virtual engineering activities. Relying on cutting-edge graph database technologies and what they offer for knowledge management, as well as recent developments in the domain ontology field, an extended knowledge management implementation specifically designed for virtual engineering has been carried out. Moreover, a clear roadmap is provided for establishing knowledge bases around production systems equipped with Virtual Factory (VF) and Multi-Objective Optimization (MOO) processes. This includes defining key elements of manufacturing procedures, constructing an ontology, defining the data structure, preferably in a graph database, and accessing valuable historical (provenance) data about different engineering entities and activities.
