661 |
XML document representation on the Neo solution
Faraglia, Piergiorgio, January 2007
This thesis aims to find a graph structure for representing XML documents and to implement that representation for storing such documents. A graph is, in fact, the complete representation of an XML document: the id/idref attributes that may appear inside XML tags create cross-references that a plain tree cannot capture. Two different graph structures are defined in this thesis, called the most granular and the customizable representations. The first is the simplest way of representing XML documents, while the second introduces optimizations for insert, delete, and query operations. Both graph structures are implemented on top of Neo, a new kind of database built specifically for storing semi-structured data. The Neo database works with only three primitives: node, relationship, and property. This data model represents a new approach compared to the traditional relational view. The XML information manager implements two different APIs, one for each of the two graph structures: the first API works with the most granular representation, while the second works with the customizable representation. Evaluations of the second API showed that the implemented code behaves correctly and, moreover, that the customizable representation brings improvements when querying the stored data.
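As a rough illustration of the most granular mapping described above (not the thesis's own code), the following Python sketch turns an XML document into node/relationship/property primitives in the spirit of Neo's data model. The relationship names HAS_CHILD, HAS_ATTRIBUTE, HAS_TEXT and REFERS_TO are invented for this sketch; the resolution of id/idref attributes is what turns the tree into a general graph.

```python
import xml.etree.ElementTree as ET

# Minimal property-graph primitives mirroring Neo's data model:
# nodes, relationships, and properties on both.
class Node:
    def __init__(self, **props):
        self.props = props
        self.rels = []          # outgoing relationships

class Relationship:
    def __init__(self, rel_type, start, end, **props):
        self.type, self.start, self.end, self.props = rel_type, start, end, props
        start.rels.append(self)

def xml_to_graph(xml_text):
    """Most-granular mapping: one node per element, attribute and text chunk;
    id/idref attributes become extra REFERS_TO relationships, turning the
    tree into a general graph."""
    ids, pending_refs = {}, []

    def visit(elem):
        node = Node(kind="element", tag=elem.tag)
        for name, value in elem.attrib.items():
            attr = Node(kind="attribute", name=name, value=value)
            Relationship("HAS_ATTRIBUTE", node, attr)
            if name == "id":
                ids[value] = node
            elif name == "idref":
                pending_refs.append((node, value))
        if elem.text and elem.text.strip():
            Relationship("HAS_TEXT", node, Node(kind="text", value=elem.text.strip()))
        for child in elem:
            Relationship("HAS_CHILD", node, visit(child))
        return node

    root = visit(ET.fromstring(xml_text))
    # Resolve idref cross-references: these edges are what make the
    # representation a graph rather than a tree.
    for source, target_id in pending_refs:
        Relationship("REFERS_TO", source, ids[target_id])
    return root

doc = '<library><book id="b1">Graphs</book><loan idref="b1"/></library>'
graph = xml_to_graph(doc)
print(len(graph.rels))   # the root element has two HAS_CHILD relationships
```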
|
662 |
Comparison of performance between Raw SQL and Eloquent ORM in Laravel
Jound, Ishaq; Halimi, Hamed, January 2016
Context. The PHP framework Laravel offers three techniques for interacting with databases: Eloquent ORM, the Query builder and Raw SQL. It is important to select the right database technique when developing a web application, because each approach has pros and cons. Objectives. In this thesis we measure the performance of Raw SQL and Eloquent ORM; there is little research on which technique is faster. Intuitively, Raw SQL should be faster than Eloquent ORM, but exactly how much faster needs to be investigated. Methods. To measure the performance of both techniques, we developed a blog application and ran the database operations select, insert and update with each technique. Conclusions. The results indicated that, overall, Raw SQL performed better than Eloquent ORM in our database operations. There was a noticeable difference in average response time between Raw SQL and Eloquent ORM in all database operations. We conclude that Eloquent ORM is well suited to building small to medium-sized applications in which simple CRUD operations are performed on small amounts of data, typically tasks such as inserting a single row or retrieving a few rows from the database. Raw SQL is preferable for applications dealing with large amounts of data, bulk data loads and complex queries.
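The thesis benchmarks PHP/Laravel specifically; purely as an analogue in a different stack (and not the thesis's benchmark code), the Python sketch below times the same kind of repeated select once through raw SQL (the standard sqlite3 module) and once through an ORM (SQLAlchemy). The blog schema, table name and iteration count are invented for the example, and no rows are inserted, so only the per-call overhead of each path is being compared.

```python
import sqlite3, time
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Post(Base):                      # hypothetical blog table for the sketch
    __tablename__ = "posts"
    id = Column(Integer, primary_key=True)
    title = Column(String)

engine = create_engine("sqlite:///blog.db")
Base.metadata.create_all(engine)

def time_raw_select(n=1000):
    # Raw SQL path: hand-written statement, no object mapping.
    conn = sqlite3.connect("blog.db")
    start = time.perf_counter()
    for _ in range(n):
        conn.execute("SELECT id, title FROM posts WHERE id = ?", (1,)).fetchall()
    return time.perf_counter() - start

def time_orm_select(n=1000):
    # ORM path: the query is generated and results are mapped to Post objects.
    with Session(engine) as session:
        start = time.perf_counter()
        for _ in range(n):
            session.query(Post).filter(Post.id == 1).all()
        return time.perf_counter() - start

print("raw SQL:", time_raw_select())
print("ORM    :", time_orm_select())
```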
|
663 |
Information Centric Development of Component-Based Embedded Real-Time Systems
Hjertström, Andreas, January 2009
This thesis presents new techniques for the management of run-time data objects in component-based embedded real-time systems. These techniques enable data to be modeled, analyzed and structured to achieve data management during development, maintenance and execution. The evolution of real-time embedded systems has resulted in a system complexity beyond what was thought possible just a few years ago. Over the years, new techniques and tools have been developed to manage software and communication complexity. However, as this thesis shows, current techniques and tools for data management are not sufficient. Today, development of real-time embedded systems focuses on the functional aspects of the system, in most cases disregarding data management. The lack of proper design-time data management often results in ineffective documentation routines and poor overall system knowledge. Contemporary techniques for managing run-time data do not satisfy demands on flexibility, maintainability and extensibility. Based on an industrial case study that identifies a number of problems with current data management techniques, both at design time and at run time, it is clear that data management needs to be incorporated as an integral part of the development of the entire system architecture. As a remedy to the identified problems, we propose a design-time data entity approach, in which the importance of data in the system is elevated so that data is included throughout the design phase with proper documentation, properties, dependencies and analysis methods, increasing overall system knowledge. Furthermore, to manage data efficiently at run time, we introduce database proxies to enable the fusion of two existing techniques: Component-Based Software Engineering (CBSE) and Real-Time Database Management Systems (RTDBMS). A database proxy allows components to be decoupled from the underlying data management strategy without violating component encapsulation and the communication interface. / INCENSE
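The database proxy is described here only at the architectural level. The following Python sketch is a hypothetical, language-agnostic illustration of the decoupling idea: the component sees only read/write ports, while the backing store (an in-memory table standing in for a real-time database) can be swapped without changing component code. All class and method names are invented for this sketch and are not the thesis's API.

```python
from abc import ABC, abstractmethod

class DataStore(ABC):
    """Abstracts the data-management strategy behind a component's data ports."""
    @abstractmethod
    def read(self, key): ...
    @abstractmethod
    def write(self, key, value): ...

class InMemoryStore(DataStore):
    # Stand-in for one strategy (e.g. shared variables); a real-time
    # database client could implement the same interface instead.
    def __init__(self):
        self._data = {}
    def read(self, key):
        return self._data.get(key)
    def write(self, key, value):
        self._data[key] = value

class DatabaseProxy:
    """Sits between a component's ports and the chosen DataStore, so the
    component never sees which data-management strategy is used."""
    def __init__(self, store: DataStore):
        self._store = store
    def provide(self, key):          # output port: component publishes data
        return lambda value: self._store.write(key, value)
    def require(self, key):          # input port: component consumes data
        return lambda: self._store.read(key)

# A component wired through the proxy; swapping InMemoryStore for another
# store would not change the component code or its interface.
proxy = DatabaseProxy(InMemoryStore())
publish_speed = proxy.provide("vehicle_speed")
read_speed = proxy.require("vehicle_speed")
publish_speed(72.5)
print(read_speed())   # -> 72.5
```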
|
664 |
Computational verification of published human mutations
Kamanu, Frederick Kinyua, January 2008
Magister Scientiae - MSc / The completion of the Human Genome Project, a remarkable feat by any measure, has provided over three billion bases of reference nucleotides for comparative studies. The next, and perhaps more challenging, step is to analyse sequence variation and relate this information to important phenotypes. Most human sequence variations are characterized by structural complexity and are hence associated with abnormal functional dynamics. This thesis covers the assembly of a computational platform for verifying these variations, based on accurate, published, experimental data. / South Africa
|
665 |
Aplikace grafové databáze na analytické úlohy / Application of graph database for analytical tasks
Günzl, Richard, January 2014
This diploma thesis is about graph databases, which belong to the category of database systems known as NoSQL databases, although they go beyond what is typical for NoSQL systems. Graph databases are useful in many cases thanks to the native storage of interconnections between data, which brings advantageous properties in comparison with traditional relational database systems, especially in querying. The main goals of the thesis are: to describe the principles, properties and advantages of graph databases; to design a suitable graph database use case; and to build a template that verifies the designed use case. The theoretical part focuses on describing the properties and principles of graph databases, which are then compared with the relational database approach. The next part is dedicated to the analysis and explanation of the most typical use cases for graph databases, including unsuitable ones. The last part of the thesis analyses the author's own graph database use case, in which several principles are defined that can be applied separately. The core of the use case is a set of analytical operations that search for the causes of a change in an indicator's value (or in its amount), together with each cause's rate of influence. This part also includes the realization of the template verifying the use case in the graph database; the template consists of the database structure design, the concrete database data and the analytical operations. Finally, the results returned by the graph database are verified by alternative calculations performed without the graph database.
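As a toy illustration of the kind of analytical operation described above (not the thesis's template or data), the Python sketch below stores a small cause-influence graph as adjacency lists and walks it backwards from an indicator, accumulating each cause's combined rate of influence along the path. Node names and influence rates are invented for the example.

```python
# A toy cause-influence graph stored natively as adjacency lists: each edge
# says "source influences target with this rate".
influences = {
    "marketing_spend": [("visits", 0.6)],
    "visits":          [("orders", 0.5)],
    "price_discount":  [("orders", 0.3)],
    "orders":          [("revenue", 0.9)],
}

def causes_of(indicator, max_depth=4):
    """Walk edges backwards from an indicator and accumulate each cause's
    combined rate of influence along the path (product of edge rates)."""
    reverse = {}
    for source, edges in influences.items():
        for target, rate in edges:
            reverse.setdefault(target, []).append((source, rate))

    results = []
    def walk(node, acc_rate, depth):
        if depth > max_depth:
            return
        for source, rate in reverse.get(node, []):
            combined = acc_rate * rate
            results.append((source, round(combined, 3)))
            walk(source, combined, depth + 1)
    walk(indicator, 1.0, 0)
    return sorted(results, key=lambda r: -r[1])

print(causes_of("revenue"))
# -> [('orders', 0.9), ('visits', 0.45), ('marketing_spend', 0.27), ('price_discount', 0.27)]
```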
|
666 |
Práticas participativas na elicitação de requisitos para "database marketing" / Participatory design techniques used in requirements elicitation for database marketing
Santos, Fernanda Regina Benhami dos, 31 July 2006
Advisor: Maria Cecilia Calani Baranauskas / Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Computação
Previous issue date: 2006 / Abstract: The use of information by organizations to define strategies and establish long-term relationships with their customers creates the need for a Database Marketing system to support those initiatives. Many strategies of this kind have not succeeded because data and information about customers that would actually be useful for the analyses were not available. This work proposes an approach that uses participatory design (PD) techniques such as the Group Elicitation Method, BluePrint Mapping and PICTIVE to compose a methodology for the requirements elicitation phase, in order to minimize the failures that commonly occur in this phase and could propagate to the results expected by organizations. A case study demonstrates the use of these techniques, together with comments on the results obtained, pointing out strengths and weaknesses and proposing future work that can be carried out from the reported results / Mestrado / Mestre Profissional em Ciência da Computação
|
667 |
Návrh databáze pro drobné chovatele zvířat / Database Design for Small Animal Breeders
Záboj, Zdeněk, January 2020
This master's thesis is focused on the design of a database for small animal breeders, for whom it makes no sense to use large enterprise solutions. In this thesis I focus on analysing the needs of these individuals and on designing a database that satisfies their needs with minimal financial requirements.
|
668 |
Návrh relační databáze pro obecní knihovnu / Design of Relational Database for Municipal Library
Vlk, Jan, January 2020
This diploma thesis focuses on the problems associated with the design of a relational database. It is divided into several parts dealing with the theoretical basis, an analysis of the current state, and the design of the author's own solution.
|
669 |
Tvorba a zpracování signálové databáze / Creation and processing of signal database
Glett, Jiří, January 2009
The work reviews the history and worldwide rise of databases, and explains the philosophy behind their structuring, sorting and intended use. It further deals with specific database software intended directly for processing audio signals, and with programs that make it possible to generate custom database structures. The SUSAS database is described and its content analysed. A custom database of music signals is created, comprising several groups of recordings that are each similar in a particular aspect. The speech database contains records from the SUSAS database and records from television programmes, reality shows, sports broadcasts, reports and documentaries in which the speakers are subjected to stress and emotions. The outcome of the work is a database program that can effectively classify and process all records; the database can be freely extended. The resulting program was realized in Czech and English versions.
|
670 |
Serverová aplikace pro zpracování dat z databáze MySQL a jejich interpretaci / Server application to process data from a MySQL database and their interpretation
Gardian, Ján, January 2016
This diploma thesis is about creating a server application that processes and interprets data from a database. The main aim of the application is to be able to process a large number of database requests in a real-time environment. The provided database contains records of measured download speed and mobile connection quality over different radio technologies and from various providers. The measured data are sent by users all around the world, and the amount of collected data is still growing; the created server application therefore adapts to the increasing size of the database by means of aggregation. This aggregation method and the use of indexes in database tables are discussed further in the theoretical part; adding indexes to tables in particular produces a significant acceleration in processing database requests. The final product of this thesis is an application consisting of three components: a server application running the aggregation, a website that interprets the measured data, and a back-end interface that also provides the measured data. Data on the website are presented in the form of graphs for different countries and radio technologies. The web address and a user manual for the finished application are provided in the fourth chapter of the thesis. The last part of the thesis presents various speed tests of the programmed application, which confirm the effectiveness of the selected and described methods for accelerating work with the database.
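The thesis works against MySQL; as an illustrative sketch only (using SQLite via Python's standard sqlite3 module, with an invented schema), the snippet below shows the two acceleration ideas mentioned above: an index matching the grouping columns, and a pre-aggregated summary table that later queries read instead of the ever-growing raw table.

```python
import sqlite3

# Illustrative schema: raw measurement rows plus a pre-aggregated summary
# table. Table and column names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE measurement (
    id            INTEGER PRIMARY KEY,
    country       TEXT,
    technology    TEXT,           -- e.g. '3G', 'LTE'
    download_kbps REAL,
    measured_at   TEXT
);
-- Composite index matching the grouping columns of the aggregation query.
CREATE INDEX idx_measurement_country_tech
    ON measurement (country, technology);

CREATE TABLE daily_summary (
    day           TEXT,
    country       TEXT,
    technology    TEXT,
    avg_kbps      REAL,
    sample_count  INTEGER
);
""")

def run_aggregation(conn):
    """Collapse raw rows into per-day averages so later queries read the
    small summary table instead of the growing measurement table."""
    conn.execute("DELETE FROM daily_summary")
    conn.execute("""
        INSERT INTO daily_summary (day, country, technology, avg_kbps, sample_count)
        SELECT date(measured_at), country, technology,
               AVG(download_kbps), COUNT(*)
        FROM measurement
        GROUP BY date(measured_at), country, technology
    """)
    conn.commit()

conn.execute("INSERT INTO measurement (country, technology, download_kbps, measured_at) "
             "VALUES ('CZ', 'LTE', 23500, '2016-03-01 10:15:00')")
run_aggregation(conn)
print(conn.execute("SELECT * FROM daily_summary").fetchall())
```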
|