
Comparison of performance between Raw SQL and Eloquent ORM in Laravel

Jound, Ishaq, Halimi, Hamed January 2016 (has links)
Context. The PHP framework Laravel offers three techniques for interacting with databases: Eloquent ORM, the Query Builder and Raw SQL. Selecting the right database technique matters when developing a web application, because each approach has its pros and cons.  Objectives. In this thesis we measure the performance of Raw SQL and Eloquent ORM; there is little research on which technique is faster. Intuitively, Raw SQL should be faster than Eloquent ORM, but exactly how much faster needs to be investigated.   Methods. To measure the performance of both techniques, we developed a blog application and ran the database operations select, insert and update with each technique.   Conclusions. The results indicated that Raw SQL performed better overall than Eloquent ORM in our database operations, with a noticeable difference in average response time between the two techniques across all operations. We conclude that Eloquent ORM is well suited to small and medium-sized applications built around simple CRUD operations on small amounts of data, typically tasks such as inserting a single row into a database or retrieving a few rows from it. Raw SQL is preferable for applications that deal with large amounts of data, bulk data loads and complex queries.
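The performance gap the thesis measures comes largely from the ORM hydrating every row into an object, where raw SQL returns plain tuples. A minimal Python sketch of that difference (Eloquent is PHP; sqlite3 and the hand-rolled `Post` class here are stand-ins, not the thesis's code):

```python
import sqlite3
import time

# In-memory database with a "posts" table, mimicking the blog application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.executemany("INSERT INTO posts (title, body) VALUES (?, ?)",
                 [(f"title {i}", f"body {i}") for i in range(10_000)])

class Post:
    """ORM-style entity: each fetched row is hydrated into an object."""
    def __init__(self, id, title, body):
        self.id, self.title, self.body = id, title, body

def select_raw():
    # Raw SQL path: rows stay as plain tuples.
    return conn.execute("SELECT id, title, body FROM posts ORDER BY id").fetchall()

def select_orm_style():
    # ORM-style path: every row pays the cost of object construction.
    return [Post(*row) for row in
            conn.execute("SELECT id, title, body FROM posts ORDER BY id")]

t0 = time.perf_counter(); raw = select_raw(); t_raw = time.perf_counter() - t0
t0 = time.perf_counter(); orm = select_orm_style(); t_orm = time.perf_counter() - t0
print(len(raw), len(orm))  # both paths return all 10000 records
```

Exact timings vary by machine; the point is that both paths return the same data while the object-mapping path does strictly more work per row.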

Performance Analysis of Relational Databases, Object-Oriented Databases and ORM Frameworks / Prestandaanalys av Relationsdatabaser, Objektorienterade Databaser och ORM-Ramverk

Nagy, Victor January 2014 (has links)
In the planning stage of web and software development, it is important to select the right tool for the job. When selecting a database, relational databases such as MySQL are a popular choice. However, relational databases suffer from the object-relational impedance mismatch. In this work we explore the response times of relational and object-oriented databases and the overhead of ORM frameworks. This is done by creating a web application that receives data about flights and airports from a client and measures the response time of the databases and of the entire request. It was found that MySQL has the lowest response time, while the ORM framework Hibernate adds overhead on some of the tests and performs similarly to MySQL on the others. Db4o had the highest response time in the majority of the tests. In future work, this study could be extended with additional tests or with other types of databases.
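The study's core instrument is a harness that times each database operation per request. A minimal sketch of such a harness, assuming a flights table like the one the abstract describes (sqlite3 stands in for MySQL/Db4o, and the table layout is illustrative):

```python
import sqlite3
import statistics
import time

def measure(op, repeats=100):
    """Run a database operation several times and return the mean
    response time in milliseconds, roughly how per-request times
    are compared across backends in the study."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        op()
        samples.append((time.perf_counter() - t0) * 1000)
    return statistics.mean(samples)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (id INTEGER PRIMARY KEY, origin TEXT, dest TEXT)")

# Time 100 inserts, then 100 selects, against the same backend.
insert_ms = measure(lambda: conn.execute(
    "INSERT INTO flights (origin, dest) VALUES ('ARN', 'LHR')"))
select_ms = measure(lambda: conn.execute(
    "SELECT * FROM flights WHERE origin = 'ARN'").fetchall())
print(f"insert: {insert_ms:.3f} ms, select: {select_ms:.3f} ms")
```

Swapping the connection object for another backend's driver lets the same harness compare response times across databases, which is the shape of the comparison the thesis performs.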

Computational verification of published human mutations

Kamanu, Frederick Kinyua January 2008 (has links)
Magister Scientiae - MSc / The completion of the Human Genome Project, a remarkable feat by any measure, has provided over three billion bases of reference nucleotides for comparative studies. The next, and perhaps more challenging, step is to analyse sequence variation and relate this information to important phenotypes. Most human sequence variations are characterized by structural complexity and are hence associated with abnormal functional dynamics. This thesis covers the assembly of a computational platform for verifying these variations, based on accurate, published, experimental data. / South Africa

Holistic Source-centric Schema Mappings For XML-on-RDBMS

Patil, Priti 05 1900 (has links) (PDF)
No description available.

Využití systémového katalogu pro správu databází / Application of system catalogue for database management

Nečas, Jaroslav January 2008 (has links)
The purpose of this project is to design and subsequently implement a web application for database management that uses the system catalogue of the Microsoft SQL database system. The introduction covers the principles of relational databases and lists the most widespread database systems. The next chapter of the theoretical part describes the syntax of the SQL query language, followed by a chapter on the properties of the system catalogue that were used. The last of the theoretical chapters centres on the .NET platform and its technologies, particularly ASP.NET and ADO.NET: the former is used for creating dynamic web pages, and the latter is an object library used for data access. The practical part of this work comprises the web application scheme on which the final web application is based, with particular emphasis on security and the proposed program design. The final part describes the finished web application, especially its configuration and the functions it provides to the user, and the conclusion briefly evaluates the whole project.
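The key idea above is that a database's system catalogue is itself queryable, so a management UI can discover tables and columns at runtime instead of hard-coding them. A small sketch of the same idea, using SQLite's `sqlite_master` and `PRAGMA table_info` as stand-ins for MS SQL's `sys.tables`/`sys.columns` (the sample tables are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)")

# The catalogue lists every table in the database, without the
# application knowing the schema in advance.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(tables)  # ['orders', 'users']

# Column metadata, as a management UI would render it per table:
cols = [(c[1], c[2]) for c in conn.execute("PRAGMA table_info(users)")]
print(cols)    # [('id', 'INTEGER'), ('name', 'TEXT')]
```

In the thesis's setting, the equivalent queries go against the MS SQL catalogue views via ADO.NET; the discovery pattern is the same.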

Návrh systému pro evidenci požárů / Design of a System for Fire Records

Mešková, Lucia January 2016 (has links)
The aim of this thesis is to design a system for the fire registry of the Fire and Rescue Department of the Slovak Republic. The system should serve members of the department for administering records of all fires within the Žilina Region, and it should work through a web interface. Emphasis is placed on using frameworks in order to simplify the development of the system.

Design av Riskdatabas : En studie för effektivare hantering av risker / Design of a Risk Database: A Study of More Efficient Risk Management

Shi, Henry, Ho, Johnny January 2013 (has links)
Risk management is a key competency, and how it can be improved within project management is constantly being researched. The risk management process consists of four major steps: identify risks, assess the risks' significance for the project, evaluate and address the key risks, and follow up. The majority of companies seem to neglect certain identified risks and decide not to mitigate them if the risk does not cause adverse effects for the business. To counteract undesirable consequences and help organisations become more effective at managing risks, initial work has been conducted on a risk repository. The project develops a proposal for the design of a risk repository intended to effectively support a database implementation. The study includes literature studies, which resulted in a relational model for the database implementation; in addition, personal meetings were conducted within the framework of risk management. The study has resulted in the development of relational models, one entity-relationship model and one data model, which are essential for the database implementation. The modelling technique is based on the approach founded by Chen (1976), which is still actively used in education and by developers for entity-relationship modelling and database design. The developed design supports the implementation of the data model in a risk repository, which ultimately supports decision-making in risk management for businesses. A suggestion for further research on the risk repository is also given: to increase the efficiency of the database with a computing system that enables cost estimates of potential risks that may occur.
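The path the abstract describes, from an entity-relationship model to relational tables that support risk decisions, can be sketched concretely. The schema below is hypothetical (table and column names are illustrative, not taken from the thesis), but it shows the shape of a risk repository and the kind of exposure query that supports follow-up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# ER model mapped to relations: one project has many risks.
conn.executescript("""
CREATE TABLE project (
    project_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);
CREATE TABLE risk (
    risk_id     INTEGER PRIMARY KEY,
    project_id  INTEGER NOT NULL REFERENCES project(project_id),
    description TEXT NOT NULL,
    probability REAL CHECK (probability BETWEEN 0 AND 1),
    impact      INTEGER CHECK (impact BETWEEN 1 AND 5)
);
""")
conn.execute("INSERT INTO project VALUES (1, 'ERP rollout')")
conn.execute("INSERT INTO risk VALUES (1, 1, 'Key supplier delay', 0.3, 4)")

# Exposure-style ranking (probability x impact) to prioritise follow-up:
top = conn.execute("""
    SELECT description, probability * impact AS exposure
    FROM risk ORDER BY exposure DESC
""").fetchone()
print(top)  # ('Key supplier delay', 1.2)
```

The probability-times-impact score is one common way to rank risks; a repository like the one proposed could equally store a qualitative scale.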

Visualiseringsverktyg för modulär produktutveckling : En studie om designen och implementationen av ett verktyg som ska effektivisera ett modulärt arbetssätt / Visualization Tool for Modular Product Development: A Study of the Design and Implementation of a Tool to Streamline a Modular Way of Working

Holm, Mathias January 2014 (has links)
Due to the growing interest in communication in today's society, demand for the equipment used in communication networks is increasing, while competition between the companies that produce this equipment grows. To meet the increasing demand while keeping product development competitive, many companies use an effective product design approach, such as modular product design. When a modular product design is used, it is useful to compare different module configurations for a given product, and a tool can simplify these comparisons. This work examines the design and development of a tool that visualizes information about different modular plans in a web interface. The focus of the work is on the storage and processing of the data to be presented, as well as the software architecture, namely the back-end of the tool. The front-end consists of a web interface that is developed and described in another thesis. Different techniques for storing data are examined and data models are developed. A multitier architecture, more precisely three tiers, is used in the tool: one tier for data storage, one tier for data processing and one for the web interface. A relational database is used for data storage, and the programming language Java is used to process data. A RESTful API is used to communicate between the web interface and the data-processing tier.
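The three-tier split described above can be sketched in a few lines. The thesis's processing tier is Java with a relational database behind it; Python and sqlite3 stand in here, and all names (the `module` table, the endpoint path) are illustrative, not taken from the thesis:

```python
import json
import sqlite3

# Tier 1: data storage (relational database).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE module (id INTEGER PRIMARY KEY, name TEXT, plan TEXT)")
db.execute("INSERT INTO module VALUES (1, 'Chassis', 'Plan A')")
db.execute("INSERT INTO module VALUES (2, 'Radio',   'Plan A')")

# Tier 2: processing layer, which turns rows into presentation-ready data.
def modules_for_plan(plan):
    rows = db.execute(
        "SELECT id, name FROM module WHERE plan = ? ORDER BY id", (plan,))
    return [{"id": r[0], "name": r[1]} for r in rows]

# Tier 3: what a RESTful endpoint such as GET /plans/Plan%20A/modules
# would hand the web interface as JSON.
payload = json.dumps(modules_for_plan("Plan A"))
print(payload)
```

Keeping the tiers separated this way means the web-interface thesis and the back-end thesis only have to agree on the JSON contract, not on each other's internals.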

An Introduction to Functional Independency in Relational Database Normalization

Chen, Tennyson X., Liu, Sean Shuangquan, Meyer, Martin D., Gotterbarn, Don 17 May 2007 (has links)
In this paper, we discuss the deficiencies of normal form definitions based on Functional Dependency and introduce a new normal form concept based on Functional Independency. While there is a very strong theoretical foundation for the study of Functional Dependency in relational database normalization, Functional Independency has not been systematically investigated. This paper demonstrates that considering Functional Dependency alone cannot eliminate some common data anomalies, and that the normalization process can yield better database designs with the addition of Functional Independency.
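For readers unfamiliar with the underlying notion: a functional dependency X → Y holds in a relation when no two tuples agree on X but differ on Y. A small checker for that definition, with an illustrative relation (the sample data is not from the paper, and this sketches only the classical dependency side, not the paper's independency concept):

```python
def fd_holds(rows, x, y):
    """Return True iff the functional dependency x -> y holds in rows,
    i.e. no two rows share the same x-values but differ on y-values."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in x)
        val = tuple(row[a] for a in y)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

employees = [
    {"emp_id": 1, "dept": "Sales", "dept_city": "Oslo"},
    {"emp_id": 2, "dept": "Sales", "dept_city": "Oslo"},
    {"emp_id": 3, "dept": "R&D",   "dept_city": "Lund"},
]

holds = fd_holds(employees, ["dept"], ["dept_city"])      # True: dept determines its city
fails = fd_holds(employees, ["dept_city"], ["emp_id"])    # False: a city has many employees
print(holds, fails)  # True False
```

Classical normal forms are defined in terms of which such dependencies hold; the paper's argument is that the dependencies which provably do *not* hold (independencies) also carry design information.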

Karst Database Development in Minnesota: Design and Data Assembly

Gao, Y., Alexander, E. C., Tipping, R. G. 01 May 2005 (has links)
The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized into comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces.
