51 |
Comparison of graph databases and relational databases performance. Asplund, Einar; Sandell, Johan. January 2023.
There has been a paradigm shift in the way information is produced, processed, and consumed as a result of social media. When planning how to store the data, it is important to choose a database suited to the type of data, as unsuitable storage and analysis can have a noticeable impact on the system's energy consumption. Analyzing data effectively is equally essential, because deficient analysis of a large dataset can lead to repercussions through unsound decisions and inadequate planning. In recent years, a growing number of organizations have provided services that can no longer be delivered efficiently by relational databases. An alternative is the graph database, a powerful solution for storing and searching relationship-dense data. The research question the thesis aims to answer is: how do state-of-the-art graph database and relational database technologies compare from a performance perspective, in terms of time taken to query, CPU usage, memory usage, power usage, and temperature of the server? To answer it, an experimental study using analysis of variance was performed. One relational database, MySQL, and two graph databases, ArangoDB and Neo4j, were compared using the Novabench benchmark. The results from the analysis of variance, the Kruskal-Wallis test, and the post-hoc tests show significant differences between the database technologies. The null hypothesis, that there is no significant difference, is therefore rejected, and the alternative hypothesis holds: performance differs significantly between the database technologies in Time to Query, Central Processing Unit usage, Memory usage, Average Energy usage, and temperature. In conclusion, the research question was answered: Neo4j was the fastest at executing queries, followed by MySQL, with ArangoDB last. The results also showed that MySQL was more demanding on memory than the other database technologies.
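The benchmark queries themselves are not given in the abstract; as a hedged illustration of the time-to-query measurement, the sketch below times one logically equivalent friends-of-friends lookup on MySQL (via JDBC) and on Neo4j (via the official Java driver). The schema, credentials, and connection URLs are invented for the example, and ArangoDB is omitted for brevity.

    import java.sql.*;
    import org.neo4j.driver.AuthTokens;
    import org.neo4j.driver.Driver;
    import org.neo4j.driver.GraphDatabase;
    import org.neo4j.driver.Session;

    public class QueryTimer {
        // Time a two-hop friendship query on MySQL; schema is hypothetical.
        static long timeMySql() throws SQLException {
            String sql = "SELECT DISTINCT f2.friend_id FROM friends f1 "
                       + "JOIN friends f2 ON f1.friend_id = f2.user_id "
                       + "WHERE f1.user_id = 42";
            try (Connection c = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/social", "bench", "secret");
                 Statement st = c.createStatement()) {
                long t0 = System.nanoTime();
                try (ResultSet rs = st.executeQuery(sql)) { while (rs.next()) { } }
                return System.nanoTime() - t0;
            }
        }

        // Time the equivalent Cypher traversal on Neo4j.
        static long timeNeo4j() {
            try (Driver d = GraphDatabase.driver("bolt://localhost:7687",
                     AuthTokens.basic("neo4j", "secret"));
                 Session s = d.session()) {
                long t0 = System.nanoTime();
                s.run("MATCH (:User {id: 42})-[:FRIEND]->()-[:FRIEND]->(fof) "
                    + "RETURN DISTINCT fof.id").consume();
                return System.nanoTime() - t0;
            }
        }

        public static void main(String[] args) throws SQLException {
            System.out.printf("MySQL: %d ns, Neo4j: %d ns%n", timeMySql(), timeNeo4j());
        }
    }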
|
52 |
Integration of relational database metadata and XML technology to develop an abstract framework to generate automatic and dynamic web entry forms. Elsheh, Mohammed M. January 2009.
Developing interactive web application systems requires a large amount of effort in designing the database, the system logic, and the user interface. These tasks are expensive and error-prone. Web application systems are accessed and used by many different sets of people with different backgrounds and numerous demands. Meeting these demands requires frequent updating of Web application systems, which makes their upkeep a very costly process. Thus, many attempts have been made to automate, to some degree, the construction of Web user interfaces. Three main directions have been cited for this purpose. The first suggested generating user interfaces from the application's data model. This path was able to generate the static layout of user interfaces, with dynamic behaviour specified programmatically. The second suggested deploying the domain model to generate both the layout of a user interface and its dynamic behaviour. Web applications built on this approach are most useful for domain-specific interfaces with a relatively fixed user dialogue. The last direction adopted the notion of deploying database metadata to develop dynamic user interfaces. Although the notion is quite valuable, its deployment did not present a generic solution for generating a variety of types of dynamic Web user interface targeting several platforms and electronic devices.
This thesis builds on the latter direction and presents significant improvements on its current deployment. It aims to contribute towards the development of an abstract framework that generates abstract and dynamic Web user interfaces not targeted at any particular domain or platform. To achieve this, the thesis proposes and evaluates a general notion for implementing a prototype system that uses an internal model (i.e. database metadata) in conjunction with XML technology. Database metadata is richer than any external model and provides the information needed to build dynamic user interfaces. In addition, XML technology has become the mainstream way of presenting and storing data in an abstract structure; it is widely adopted in the Web development community because it can be transformed into many different formats with little effort. The thesis finds that only Java can provide a generalised framework based on database metadata, as other programming languages place restrictions on accessing and extracting database metadata from the various database management systems. Consequently, Java Servlets and a relational database were used to implement the proposed framework, with Java Database Connectivity (JDBC) bridging the two technologies.
The implementation of the proposed approach shows that it is possible and very straightforward to produce different automatic and dynamic Web entry forms that are not targeted at any platform. In addition, the approach can be applied to a particular domain without affecting the main notion or framework architecture. The implemented approach demonstrates a number of advantages over the other approaches based on external or internal models.
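A minimal sketch of the metadata-driven idea, not the author's actual framework (which additionally routes the metadata through XML technology before rendering): the Java fragment below reads column metadata through JDBC's DatabaseMetaData and emits one HTML input per column. The table name, connection handling, and type-to-widget mapping are assumptions for illustration.

    import java.sql.*;

    public class FormGenerator {
        // Emit a simple HTML entry form for every column of the given table,
        // using only what DatabaseMetaData reports; the type-to-widget mapping
        // below is a deliberately small assumption for illustration.
        public static String formFor(Connection conn, String table) throws SQLException {
            StringBuilder html = new StringBuilder("<form method='post'>\n");
            DatabaseMetaData md = conn.getMetaData();
            try (ResultSet cols = md.getColumns(null, null, table, null)) {
                while (cols.next()) {
                    String name = cols.getString("COLUMN_NAME");
                    int type = cols.getInt("DATA_TYPE");   // java.sql.Types constant
                    String widget = (type == Types.DATE) ? "date"
                                  : (type == Types.INTEGER) ? "number" : "text";
                    html.append("  <label>").append(name).append("</label> ")
                        .append("<input type='").append(widget)
                        .append("' name='").append(name).append("'/>\n");
                }
            }
            return html.append("</form>").toString();
        }
    }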
|
53 |
Ontology-based approaches to improve RDF Triple Store. Albahli, Saleh Mohammad. 21 March 2016.
No description available.
|
54 |
Resource Efficient Parallel VLDB with Customizable Degree of Redundancy. Xiong, Fanfan. January 2009.
This thesis focuses on the practical use of very large scale relational databases. It leverages two recent breakthroughs in parallel and distributed computing: a) synchronous transaction replication technologies by Justin Y. Shi and Suntain Song; and b) the Stateless Parallel Processing principle pioneered by Justin Y. Shi. These breakthroughs enable scalable performance and reliability of database service using multiple redundant shared-nothing database servers. The thesis presents a Functional Horizontal Partitioning method with a customizable degree of redundancy to address practical very large scale database application problems. The prototype VLDB implementation is designed for transparent, non-intrusive deployment and supports Microsoft SQL Server databases. Computational experiments are conducted using an industry-standard benchmark (TPC-E). / Computer and Information Science
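The abstract describes, but does not reproduce, the Functional Horizontal Partitioning method; the sketch below is a hypothetical Java illustration of the general idea of writing each key to a customizable number of redundant shard replicas. The shard list, hash scheme, and replica-placement rule are assumptions, not details from the thesis.

    import java.util.*;

    public class PartitionRouter {
        private final List<String> shards;   // e.g. JDBC URLs of SQL Server instances
        private final int redundancy;        // customizable degree of redundancy

        public PartitionRouter(List<String> shards, int redundancy) {
            if (redundancy < 1 || redundancy > shards.size())
                throw new IllegalArgumentException("need 1 <= redundancy <= #shards");
            this.shards = shards;
            this.redundancy = redundancy;
        }

        /** Shards that must receive a write for this partitioning key. */
        public List<String> writeTargets(Object key) {
            int home = Math.floorMod(Objects.hashCode(key), shards.size());
            List<String> targets = new ArrayList<>();
            for (int i = 0; i < redundancy; i++)
                targets.add(shards.get((home + i) % shards.size()));
            return targets;
        }

        /** Any one replica suffices for a read. */
        public String readTarget(Object key) {
            List<String> t = writeTargets(key);
            return t.get(new Random().nextInt(t.size()));
        }
    }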
|
55 |
Abordagem para integração automática de dados estruturados e não estruturados em um contexto Big Data / Approach for automatic integration of structured and unstructured data in a Big Data context. Saes, Keylla Ramos. 22 November 2018.
The increase in the amount of data available for use has sparked interest in generating knowledge by integrating such data. However, the integration task requires knowledge of the data and of the data models used to represent them; that is, it requires the participation of computing experts, which limits the scalability of this type of task. In a Big Data context, this limitation is reinforced by the presence of a wide variety of sources and heterogeneous data representation models, such as relational models holding structured data and non-relational models holding unstructured data; this variety of representations adds complexity to the data integration process. Handling this scenario requires integration tools that reduce or even eliminate the need for human intervention. As a contribution, this work offers the possibility of integrating diverse data representation models and heterogeneous data sources through an approach that combines varied techniques, such as structural-similarity comparison algorithms and artificial intelligence algorithms, which, by generating integrator metadata, enable the integration of heterogeneous data. This flexibility to deal with the growing variety of data is provided by the proposed modular architecture, which enables data integration in a Big Data context automatically, without human intervention.
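The dissertation names structural-similarity comparison among its techniques without detailing it in the abstract; as an assumed illustration only, the sketch below scores candidate field matches between two schemas with a Jaccard measure over name trigrams. The field names and the acceptance threshold are invented.

    import java.util.*;

    public class SchemaMatcher {
        // Break a field name into lowercase character trigrams.
        static Set<String> trigrams(String s) {
            s = s.toLowerCase().replaceAll("[^a-z0-9]", "");
            Set<String> g = new HashSet<>();
            for (int i = 0; i + 3 <= s.length(); i++) g.add(s.substring(i, i + 3));
            return g;
        }

        // Jaccard similarity of the two trigram sets.
        static double jaccard(String a, String b) {
            Set<String> ga = trigrams(a), gb = trigrams(b);
            if (ga.isEmpty() || gb.isEmpty()) return 0;
            Set<String> inter = new HashSet<>(ga); inter.retainAll(gb);
            Set<String> union = new HashSet<>(ga); union.addAll(gb);
            return (double) inter.size() / union.size();
        }

        public static void main(String[] args) {
            List<String> relational = List.of("customer_name", "birth_date");
            List<String> document   = List.of("customerName", "dateOfBirth");
            for (String r : relational)
                for (String d : document)
                    if (jaccard(r, d) > 0.3)     // assumed acceptance threshold
                        System.out.println(r + " <-> " + d);
        }
    }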
|
57 |
Árbol de decisión para la selección de un motor de base de datos / Decision tree for the selection of a database engine. Bendezú Kiyán, Enrique Renato; Monjaras Flores, Álvaro Gianmarco. 30 August 2020.
In recent years, the number of users browsing the internet has grown exponentially. Consequently, the amount of information handled grows disproportionately, and handling the large volumes of information obtained from the internet has caused major problems.
Different types of databases behave differently, since the performance of executing transactions suffers when dealing with different amounts of information. Among these varieties, relational databases, non-relational databases, and in-memory databases are analyzed.
For organizations it is very important to handle information quickly, given the great demand from customers and the market in general, so that the agility of internal operations is not diminished when information must be handled and its integrity is preserved. However, each category of database is designed to cover different specific use cases in order to maintain high performance in data handling.
The purpose of this project is to study various scenarios covering the main use cases, costs, scalability, and performance aspects of each database, through the development of a decision tree that determines the best database category according to the path the user decides to take.
Keywords: Database, Relational Database, Non-Relational Database, In-Memory Database, Decision Tree. / Thesis
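The abstract does not reproduce the decision tree itself; the following is a hypothetical sketch of how such a tree could be encoded as nested conditionals, with invented questions and thresholds purely for illustration.

    public class DatabaseDecisionTree {
        enum Engine { RELATIONAL, NON_RELATIONAL, IN_MEMORY }

        // Each parameter stands for one question a user answers while walking
        // the tree; the questions and the cut-off are assumptions, not the
        // thesis's actual criteria.
        static Engine recommend(boolean needsAcidTransactions,
                                boolean subMillisecondLatency,
                                boolean flexibleSchema,
                                long expectedRowsPerDay) {
            if (subMillisecondLatency) return Engine.IN_MEMORY;       // cache/session-style load
            if (needsAcidTransactions && !flexibleSchema) return Engine.RELATIONAL;
            if (flexibleSchema || expectedRowsPerDay > 10_000_000L)   // assumed cut-off
                return Engine.NON_RELATIONAL;
            return Engine.RELATIONAL;                                 // conservative default
        }

        public static void main(String[] args) {
            System.out.println(recommend(true, false, false, 50_000));    // RELATIONAL
            System.out.println(recommend(false, false, true, 2_000_000)); // NON_RELATIONAL
        }
    }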
|
58 |
Um método de integração de dados armazenados em bancos de dados relacionais e NOSQL / A method for integrating data stored in relational and NOSQL databases. Vilela, Flávio de Assis. 08 October 2015.
The increase in the quantity and variety of data available on the Web contributed to the emergence of the NOSQL approach, which aims to meet new demands such as availability, schema flexibility, and scalability. At the same time, relational databases are widely used for storing and manipulating structured data, providing stability and integrity of data, which is accessed through a standard language such as SQL. This work presents a method for integrating data stored in heterogeneous sources, in which an input query in standard SQL produces a unified answer based on the partial answers of relational and NOSQL databases.
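As a hedged sketch of the method's end-to-end shape, the fragment below answers one logical query by combining partial answers from a relational source (via JDBC) and a NOSQL source (via the MongoDB Java driver). The dissertation's SQL-to-NOSQL translation step is reduced here to a hard-coded pair of equivalent queries; database names, fields, and URLs are illustrative.

    import java.sql.*;
    import java.util.*;
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import org.bson.Document;

    public class UnifiedQuery {
        public static void main(String[] args) throws SQLException {
            Set<String> names = new TreeSet<>();   // unified, de-duplicated answer

            // Partial answer 1: relational source.
            try (Connection c = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/sales", "app", "secret");
                 Statement st = c.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT name FROM customer WHERE city = 'Goiania'")) {
                while (rs.next()) names.add(rs.getString("name"));
            }

            // Partial answer 2: NOSQL source holding the same logical entity.
            try (MongoClient mc = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> col =
                    mc.getDatabase("sales").getCollection("customer");
                for (Document d : col.find(Filters.eq("city", "Goiania")))
                    names.add(d.getString("name"));
            }

            names.forEach(System.out::println);    // unified answer
        }
    }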
|
59 |
A C++ Implementation and Evaluation of Alternative Plan Generation Methods for Multiple Query Optimization. Abudula, Dilixiati. 01 November 2006.
In this thesis, alternative plan generation methods for multiple query optimization (MQO) are introduced and an implementation in the C++ programming language is developed. Multiple query optimization aims to minimize the total cost of executing a set of relational database queries, whereas traditional single-query optimization minimizes only the cost of executing one query. In single-query optimization, a search investigates alternative methods of accessing relational database tables and, for multi-relation queries, alternative methods of performing the join operations that bring together records from two or more tables using one of the join algorithms (e.g. nested loops, sort merge, hash join). The choice of join method depends on the availability of indexes, the amount of available main memory, the existence of an ORDER BY clause for sorted output, the sizes of the involved relations, and many other factors. A simple way of performing multiple query optimization is to take the query execution plans generated for each query as input to an MQO algorithm and then identify common tasks in those plans. However, this approach reduces the achievable benefits, since a more expensive execution plan (and thus one discarded by a single-query optimizer) could share more common operations with other query execution plans, resulting in a lower total cost for MQO. For this purpose, several methods for generating such potentially beneficial alternative query execution plans are introduced, and their performance is experimentally evaluated and compared.
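The thesis's C++ implementation is not reproduced in the abstract; purely as an illustration of the "identify common tasks" step (written here in Java, to match the other sketches in this listing), the fragment below reduces each plan to a set of task signatures and keeps the signatures shared by two or more plans. The operator naming is invented.

    import java.util.*;

    public class CommonTaskFinder {
        /** A plan is modeled as a set of task signatures, e.g. "SCAN(orders)". */
        static Map<String, List<Integer>> commonTasks(List<Set<String>> plans) {
            Map<String, List<Integer>> owners = new TreeMap<>();
            for (int i = 0; i < plans.size(); i++)
                for (String task : plans.get(i))
                    owners.computeIfAbsent(task, k -> new ArrayList<>()).add(i);
            owners.values().removeIf(queries -> queries.size() < 2);  // keep shared tasks only
            return owners;
        }

        public static void main(String[] args) {
            List<Set<String>> plans = List.of(
                Set.of("SCAN(orders)", "JOIN(orders,customer)"),
                Set.of("SCAN(orders)", "JOIN(orders,product)"),
                Set.of("SCAN(product)", "JOIN(orders,product)"));
            // Prints {JOIN(orders,product)=[1, 2], SCAN(orders)=[0, 1]}
            System.out.println(commonTasks(plans));
        }
    }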
|
60 |
Reengineering and Development for Executive Information Systems: The Case of Southern Taiwan Business Group of Chunghwa Telecom. Chang, I-Ming. 03 August 2000.
In the earlier period, large enterprises developed their management report systems on top of file systems and third-generation languages, and managers of several departments accessed management information through those report systems. Because competitive pressure is increasing quickly and information requirements change frequently, the legacy system could no longer satisfy the managers' information needs: its maintainability is decreasing and its maintenance cost is growing. How can the difficulties of system maintenance be solved? System reengineering is commonly used as a solution, and choosing a good migration strategy is also a big issue.
This research focuses on finding a migration strategy for the legacy systems and a methodology for developing an EIS based on users' needs with current information technologies. The methodology is applied to implement an EIS for a large enterprise in order to verify its feasibility. A questionnaire investigation among the users of the new system has clearly shown fairly good user satisfaction.
|