11

NoSQL Database Selection Focused on Performance Criteria for Web-driven Applications

Kharboutli, Zacky January 2019 (has links)
This paper delivers a comparative analysis of the performance of three NoSQL technologies in web applications: graph stores, key-value stores, and document stores. The study aims to assist developers and organizations in picking the suitable NoSQL solution for their application. For this purpose, three identical e-book applications were developed. Each of these is connected to a database from one of the selected technologies to examine how they perform compared to each other against various performance measures.
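To make the comparison concrete, here is a minimal sketch (not code from the thesis) of the same logical operation, fetching one e-book by id, expressed against each of the three store types. The connection details, key and collection names, and the graph model are assumptions made purely for illustration.

```python
import json

import redis
from pymongo import MongoClient
from neo4j import GraphDatabase

def fetch_book_document_store(book_id):
    # Document store: the book is one self-contained JSON-like document.
    client = MongoClient("mongodb://localhost:27017")
    return client.ebookstore.books.find_one({"_id": book_id})

def fetch_book_key_value_store(book_id):
    # Key-value store: the book is an opaque value under a composed key.
    raw = redis.Redis(host="localhost", port=6379).get(f"book:{book_id}")
    return json.loads(raw) if raw else None

def fetch_book_graph_store(book_id):
    # Graph store: the book is a node that can be related to authors, readers, etc.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        record = session.run(
            "MATCH (b:Book {id: $id}) RETURN b.title AS title, b.author AS author",
            id=book_id,
        ).single()
    driver.close()
    return record.data() if record else None
```

Timing such calls repeatedly against identical data sets (for example with time.perf_counter) is what yields the kind of performance measures the study compares.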
12

RETAIL DATA ANALYTICS USING GRAPH DATABASE

Priya, Rashmi 01 January 2018 (has links)
Big data is an area focused on storing, processing and visualizing huge amounts of data. Today data is growing faster than ever before, and we need to find the right tools and applications and build an environment that can help us obtain valuable insights from the data. Retail is one of the domains that collects huge amounts of transaction data every day. Retailers need to understand their customers' purchasing patterns and behavior in order to make better business decisions. Market basket analysis is a field in data mining that is focused on discovering patterns in retail transaction data. Our goal is to find tools and applications that can be used by retailers to quickly understand their data and make better business decisions. Due to the amount and complexity of the data, it is not possible to do such activities manually. Trends change very quickly, and retailers want to be quick in adapting to change and taking action. This requires automating processes and using algorithms that are efficient and fast. In our work, we mine transaction data by modeling the data as graphs. We use clustering algorithms to discover communities (clusters) in the data and then use the clusters to build a recommendation system that can recommend products to customers based on their buying behavior.
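The following is a small sketch of the approach the abstract outlines: model transactions as a product co-purchase graph, detect communities with a clustering algorithm, and recommend products from the same community. It uses NetworkX rather than the tooling chosen in the thesis, and the sample baskets are invented.

```python
import networkx as nx
from networkx.algorithms import community

transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "cereal"},
    {"beer", "chips"},
    {"beer", "chips", "salsa"},
]

# Build a weighted co-purchase graph: edge weight = number of shared baskets.
G = nx.Graph()
for basket in transactions:
    for a in basket:
        for b in basket:
            if a < b:
                weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
                G.add_edge(a, b, weight=weight + 1)

# Cluster products into communities using greedy modularity maximization.
communities = community.greedy_modularity_communities(G, weight="weight")
membership = {product: idx for idx, c in enumerate(communities) for product in c}

def recommend(product, limit=3):
    """Recommend products from the same community, ranked by co-purchase weight."""
    peers = (p for p in communities[membership[product]] if p != product)
    ranked = sorted(
        peers,
        key=lambda p: G[product][p]["weight"] if G.has_edge(product, p) else 0,
        reverse=True,
    )
    return ranked[:limit]

print(recommend("bread"))  # butter ranks first: it shares two baskets with bread
```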
13

Eliciting correlations between components selection decision cases in software architecting

Ahmed, Mohamed Ali January 2019 (has links)
A key factor of software architecting is the decision-making process. All phases of software development contain some kind of decision-making activity; however, the software architecture decision process is the most challenging part. To support the decision-making process, a research project named ORION provided a knowledge repository that contains a collection of decision cases. To utilize the collected data efficiently, eliciting correlations between decision cases needs to be automated. The objective of this thesis is to select appropriate method(s) for automatically detecting correlations between decision cases. To do this, an experiment was conducted using a dataset of collected decision cases that are based on a taxonomy called GRADE. The dataset is stored in the Neo4j graph database. The Neo4j platform provides a library of graph algorithms that makes it possible to analyse relationships between connected data. In this experiment, five similarity algorithms are used to find correlated decisions; the algorithms are then analysed to determine whether they would help improve decision-making. From the results, it was concluded that three of the algorithms can be used as a source of support for decision-making processes, while the other two need further analysis to determine if they provide any support.
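As an illustration of the kind of measure involved, the sketch below computes Jaccard similarity between decision cases in plain Python. The thesis itself runs the corresponding procedures from Neo4j's graph algorithms library over the GRADE-based repository; the case data and tag names here are invented.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A intersect B| / |A union B|, defined as 0.0 for two empty sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical decision cases described by GRADE-like tags.
decision_cases = {
    "case-1": {"goal:performance", "asset:database", "decision:buy"},
    "case-2": {"goal:performance", "asset:database", "decision:build"},
    "case-3": {"goal:security", "decision:buy"},
}

# Pairwise similarity: highly similar cases are candidates for correlated decisions.
names = sorted(decision_cases)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        score = jaccard(decision_cases[x], decision_cases[y])
        print(f"{x} ~ {y}: {score:.2f}")
```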
14

ProxStor : flexible scalable proximity data storage & analysis

Giannoules, James Peter 17 February 2015 (has links)
ProxStor is a cloud-based human proximity storage and query informational system taking advantage of both the near ubiquity of mobile devices and the growing digital infrastructure in our everyday physical world, commonly referred to as the Internet of Things (IoT). The combination provides the opportunity for mobile devices to identify when they enter and leave the proximity of a space based upon this unique identifying infrastructure information. ProxStor provides a low-overhead interface for storing these proximity events while additionally offering search and query capabilities to enable a richer class of location-aware applications. ProxStor scales up to store and manage more than one billion objects, while enabling future horizontal scaling to expand to multiple systems working together to support even more objects. A single seamless web interface is presented to client systems. More than 18 popular graph database systems are supported behind ProxStor. Performance benchmarks while running on the Neo4j and OrientDB graph database systems are compared to determine the feasibility of the design.
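A hedged sketch of the kind of proximity event such a system stores, a device entering a location, persisted as graph relationships. It uses the official Neo4j Python driver; the node labels, property names and connection details are assumptions for illustration and not ProxStor's actual schema or API.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def record_arrival(device_id: str, location_id: str, timestamp: int) -> None:
    """Store one proximity event: the device arrived at the location."""
    query = (
        "MERGE (d:Device {id: $device}) "
        "MERGE (l:Location {id: $location}) "
        "CREATE (d)-[:WITHIN {arrived: $ts}]->(l)"
    )
    with driver.session() as session:
        session.run(query, device=device_id, location=location_id, ts=timestamp)

def locations_visited(device_id: str) -> list:
    """Query side: which locations has this device ever been within?"""
    query = (
        "MATCH (:Device {id: $device})-[:WITHIN]->(l:Location) "
        "RETURN DISTINCT l.id AS loc"
    )
    with driver.session() as session:
        return [record["loc"] for record in session.run(query, device=device_id)]

record_arrival("phone-42", "office-3F", 1423958400)
print(locations_visited("phone-42"))
driver.close()
```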
15

Aplikace grafové databáze na analytické úlohy / Application of graph database for analytical tasks

Günzl, Richard January 2014 (has links)
This diploma thesis deals with graph databases, which belong to the category of database systems known as NoSQL databases, although their relevance reaches beyond NoSQL. Graph databases are useful in many cases thanks to the native storage of interconnections between data, which brings advantageous properties compared with traditional relational database systems, especially in querying. The main goals of the thesis are to describe the principles, properties and advantages of graph databases, to design a suitable graph database use case, and to realize a template verifying the designed use case. The theoretical part focuses on the properties and principles of graph databases, which are then compared with the relational database approach. The next part is dedicated to analysing and explaining the most typical use cases of graph databases, including unsuitable ones. The last part of the thesis analyses the author's own graph database use case, in which several principles are defined that can be applied separately. The use case centres on analytical operations that search for the causes influencing an indicator, together with their rate of influence on its value or on changes in its value. This part also includes the realization of a template verifying the use case in the graph database; the template consists of the database structure design, the concrete database data and the analytical operations. Finally, the results returned from the graph database are verified by alternative calculations that do not use the graph database.
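The following rough sketch shows what such an analytical operation could look like when the use case lives in a graph database: retrieve the causes connected to an indicator together with their rate of influence. The labels, relationship type and property names are assumptions made for illustration; the thesis defines its own database structure.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Causes influencing one indicator, ordered by their rate of influence.
CAUSES_OF_INDICATOR = """
MATCH (c:Cause)-[r:INFLUENCES]->(:Indicator {name: $indicator})
RETURN c.name AS cause, r.rate AS influence
ORDER BY influence DESC
"""

with driver.session() as session:
    for record in session.run(CAUSES_OF_INDICATOR, indicator="revenue"):
        print(f"{record['cause']}: {record['influence']:.2f}")
driver.close()
```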
16

A performance comparison between graph databases : Degree project about the comparison between Neo4j, GraphDB and OrientDB on different operations

Alm, Robert, Imeri, Lavdim January 2021 (has links)
In this research we study the theoretical complexity of Neo4j, OrientDB and GraphDB, three well-known graph databases that can be accessed from a Java application, and how this complexity manifests in real-life performance. To study their practical performance, a piece of software called a profiler was implemented, capable of profiling (recording the time needed for) each operation and displaying the results in an accurate and organized manner. The technical documentation of the three databases was also reviewed to identify how they work and what their strong and weak points are. In the profiling process, the best performance was displayed by Neo4j; GraphDB took second place, while OrientDB failed to deliver. We can identify potential in OrientDB's approach, but its structure is too complex and rigid. Neo4j has a robust structure and an architecture that gives it great performance, while the Cypher syntax it uses minimizes the possibility of human error. GraphDB is optimized for large-scale public-data operations but performs well as a stand-alone solution as well. An important part of this publication is its GitHub repository: https://github.com/Exarchias/graph-databases-profiler
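The profiler in the linked repository is written in Java; the sketch below only illustrates the idea in Python: record the wall-clock time of each named database operation and report the results in an organised way. The operations passed in here are placeholders.

```python
import time
import statistics
from collections import defaultdict

class Profiler:
    """Collect wall-clock timings per named operation and summarise them."""

    def __init__(self):
        self.samples = defaultdict(list)

    def measure(self, name, operation, *args, **kwargs):
        start = time.perf_counter()
        result = operation(*args, **kwargs)
        self.samples[name].append((time.perf_counter() - start) * 1000)
        return result

    def report(self):
        for name, times in sorted(self.samples.items()):
            print(f"{name:<18} runs={len(times):<4} "
                  f"mean={statistics.mean(times):.3f} ms "
                  f"median={statistics.median(times):.3f} ms")

profiler = Profiler()
for _ in range(100):
    profiler.measure("neo4j:insert", lambda: None)     # placeholder; a real run
    profiler.measure("orientdb:insert", lambda: None)  # would call each database driver
    profiler.measure("graphdb:insert", lambda: None)
profiler.report()
```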
17

Konzeption und prototypische Implementierung eines web-basierten Dashboards zur Softwarevisualisierung: Masterarbeit zur Erlangung des akademischen Grades Master of Science – Wirtschaftsinformatik / Design and prototypical implementation of a web-based dashboard for software visualization: Master's thesis for the academic degree Master of Science – Business Informatics

Mewes, Tino 06 December 2018 (has links)
The focus of this thesis is the design and prototypical implementation of a web-based dashboard for software visualization. The goal is to develop a dashboard that dynamically visualizes information about a software project from a graph database and presents it to project managers in a task-oriented way to support their decisions. Currently no software solution fully meets these requirements; however, there are already libraries and software systems that can contribute partial aspects of a possible overall solution, which can be useful for the prototypical implementation and therefore have to be taken into account. To achieve the goals of the thesis, several research methods are applied. A literature review is conducted to identify typical tasks of project managers in software engineering, and several existing dashboard tools are analysed in order to derive the tasks the dashboard should support. Based on these results, the dashboard is designed and prototypically implemented. Finally, the dashboard is evaluated in a case study based on open-source projects. Contents: 1 Introduction (1.1 Motivation and problem statement, 1.2 Objectives, 1.3 Methodological approach); 2 Fundamentals (2.1 Software visualization, 2.2 jQAssistant, 2.3 Neo4j, 2.4 Web applications and web-based frameworks, 2.5 D3.js, 2.6 Dashboards); 3 Design (3.1 Mission statement, 3.2 Architecture goals, 3.3 Context delimitation: 3.3.1 business context with 3.3.1.1 literature review on typical tasks of project managers, 3.3.1.2 analysis of existing dashboard tools, 3.3.1.3 use-case diagram of the tasks to be supported, 3.3.1.4 mockups of the user interface, and 3.3.2 technical context; 3.4 Constraints: 3.4.1 technical, 3.4.2 organizational; 3.5 Risks and technical debt; 3.6 Design decisions: 3.6.1 selection of the Neo4j driver, 3.6.2 comparison of existing web frameworks covering 3.6.2.1 Angular, 3.6.2.2 Backbone.js, 3.6.2.3 Ember.js, 3.6.2.4 Vue.js, 3.6.2.5 React; 3.7 Solution strategy); 4 Implementation (4.1 Implementation components: 4.1.1 CoreUI, 4.1.2 Nivo; 4.2 Dashboard: 4.2.1 start page, 4.2.2 settings, 4.2.3 user-defined queries, 4.2.4 visualization components covering 4.2.4.1 structure, 4.2.4.2 file types, 4.2.4.3 dependencies, 4.2.4.4 activities, 4.2.4.5 knowledge distribution, 4.2.4.6 hotspots, 4.2.4.7 static source code analysis, 4.2.4.8 test coverage, 4.2.4.9 build; 4.3 Tools used: 4.3.1 development and test with 4.3.1.1 Jest, 4.3.1.2 Codecov, 4.3.1.3 Travis CI, 4.3.1.4 Prettier, 4.3.1.5 Docker, and 4.3.2 installation and maintenance); 5 Evaluation; 6 Conclusion and outlook.
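A minimal sketch of the server side of such a dashboard, assuming (as the thesis does) that jQAssistant has already scanned the project into Neo4j: a small HTTP endpoint turns a graph query into JSON for the browser-side visualization components. Flask stands in here for whichever web framework is chosen, and the Cypher assumes a jQAssistant-like schema (:Package, :Class, CONTAINS, fqn), which may differ depending on scanner and version.

```python
from flask import Flask, jsonify
from neo4j import GraphDatabase

app = Flask(__name__)
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Number of classes per package, e.g. for a "structure" bar chart on the dashboard.
CLASSES_PER_PACKAGE = """
MATCH (p:Package)-[:CONTAINS]->(c:Class)
RETURN p.fqn AS package, count(c) AS classes
ORDER BY classes DESC
LIMIT 20
"""

@app.route("/api/structure")
def structure():
    with driver.session() as session:
        rows = [record.data() for record in session.run(CLASSES_PER_PACKAGE)]
    return jsonify(rows)  # consumed by the chart components in the browser

if __name__ == "__main__":
    app.run(port=5000)
```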
18

Entwurf eines Datenmodells zur Speicherung von Softwarevisualisierungsartefakten / Design of a data model for storing software visualization artifacts

Vogelsberg, Lisa 13 March 2019 (has links)
The Getaviz system developed by the Visual Software Analytics research group at the Institute of Information Systems of Leipzig University provides several tools for generating and analysing software visualizations. Because the use of the Xtext framework for generating a visualization has proven problematic with regard to flexibility and storage performance, this thesis investigates how that technology can be replaced in the future. For this purpose, a data model was developed that makes it possible to store software visualization artifacts in a database using Neo4j and to retrieve them from there. The design of this data model is based on a preceding analysis of the models used so far for the City and Recursive Disk metaphors, in order to map their components onto the new models. To examine the possible effects of a migration on the performance and flexibility of the transformation process, the data model and the database access were then implemented within the generator as additional components. As a result, transformations can be performed without using Xtext. The input model is a graph generated by jQAssistant that represents the structure of the software system.
19

Dynamic constraint handling in mass customization systems : A database solution

Kåhlman, Johan, Spånberger, Josef January 2020 (has links)
Purpose: The purpose of this study is to develop an architecture for a Mass Customization information system that allows product customization restrictions to be expressed dynamically through the database. Method: A study evaluating an artifact made using Design Science Research. The evaluation was made using both a quantitative and a qualitative method. Findings: Building upon a literature review to establish a knowledge base, an artifact was created using React and Node.js to build a web application combined with a Neo4j graph database. The artifact allows products and their inherent restrictions to be dynamically added and modified through constraints defined in their data. This data can be used in the web application to assemble and customize components, where the constraints can be enforced by the web application in real time without any modification to the application. The artifact can enforce all constraints that were intended, and it was considered a better overall solution for handling constraints compared to the solution currently used by Divid, a market-leading company in the usage of Mass Customization systems with constraint handling in the context of ventilation systems. Implications: The results imply that the usage of graph database systems in Mass Customization systems holds great promise, specifically as a new way to handle constraints between components dynamically. Limitations: The results from the expert panel only reflect the opinions of Divid and might not be true for other companies interested in this area. The artifact solution was successful in its purpose of illustrating the concept of dynamic constraint handling. However, it is still unclear whether the system holds up in a professional context with more complex rules and heavy demands on performance.
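A hedged sketch of the core idea: because the compatibility constraints live in the graph as data, the application can enforce them with one generic query instead of hard-coded rules. The labels, relationship type and properties below are assumptions for illustration, not the schema used in the thesis.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Find every incompatible pair within the customer's current selection.
CONFLICTS = """
UNWIND $selected AS sid
MATCH (a:Component {id: sid})-[r:INCOMPATIBLE_WITH]-(b:Component)
WHERE b.id IN $selected AND a.id < b.id
RETURN a.id AS first, b.id AS second, r.reason AS reason
"""

def validate_configuration(selected_ids):
    """Return the constraint violations in a customer's current selection."""
    with driver.session() as session:
        return [record.data() for record in session.run(CONFLICTS, selected=selected_ids)]

# New components and new INCOMPATIBLE_WITH relationships can be added to the
# database at any time; this check picks them up without changing the application.
print(validate_configuration(["duct-200", "fan-x1", "silencer-s"]))
driver.close()
```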
20

Návrh postupu tvorby aplikace pro Linked Open Data / The proposal of application development process for Linked Open Data

Budka, Michal January 2014 (has links)
This thesis deals with the issue of Linked Open Data. The goal is to introduce the reader to this issue as a whole and to the possibility of using Linked Open Data for developing useful applications by proposing a new development process focused on such applications. The theoretical part offers an insight into the issues of Open Data, Linked Open Data and NoSQL database systems and their usability in this field. It focuses mainly on graph database systems and compares them with relational database systems using predefined criteria. Additionally, the goal of this thesis is to develop, using the proposed development process, an application that provides a tool for data presentation and statistical visualisation for open data sets published by the Supreme Audit Office and the Czech Trade Inspection. The application is mainly developed to verify the proposed development process and to demonstrate the connectivity of open data published by two different organizations. The thesis includes the process of selecting a development methodology, which is then used to optimise work on the implementation of the resulting application, and the process of selecting a graph database system that is used to store and modify open data for the purposes of the application.
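A small sketch of the Linked Open Data side of such an application: load an RDF dataset and query it with SPARQL before handing the results to the presentation layer or the chosen graph database. The file name and vocabulary are placeholders, not the actual data sets published by the two institutions.

```python
from rdflib import Graph

g = Graph()
g.parse("audit-findings.ttl", format="turtle")  # hypothetical local RDF dump

# Count findings per audited organization, e.g. for a statistics view.
QUERY = """
PREFIX ex: <http://example.org/audit#>
SELECT ?organization (COUNT(?finding) AS ?findings)
WHERE {
    ?finding a ex:Finding ;
             ex:concernsOrganization ?organization .
}
GROUP BY ?organization
ORDER BY DESC(?findings)
"""

for row in g.query(QUERY):
    print(row.organization, row.findings)
```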
