1

Tentacle : a graph-based database system

Welz, Gerhard Marc 06 September 2023 (has links) (PDF)
With the advent of large and complex applications and the emergence of semi-structured information repositories such as the World Wide Web, new demands are being made on database systems. The TENTACLE database system is an experimental database system which provides facilities capable of meeting some of these demands. The distinguishing features of the system are that it:

• uses a graph-based data model (and storage subsystem) to provide a flexible means of representing poorly structured information;
• integrates a path expression-based query language with a general purpose language to query and manipulate the graph structures, thereby eliminating the impedance mismatch encountered in a two-language system; and
• provides a programmable database kernel capable of executing the combined query and utility language, allowing the construction of domain-specific applications inside the database without the assistance of wrappers or gateways.

As a demonstration of the utility of the system, I have constructed a hypertext server inside the TENTACLE database without making use of external mediators or gateways. Since the hypertext server program is part of the database content, database facilities may be used to assist in the creation and maintenance of the hypertext server itself. In addition, the close integration of hypertext server and database simplifies tasks such as the management of associations between hypertext entities or the maintenance of different document views.
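To make the path-expression idea concrete, here is a minimal sketch of evaluating label-path queries over a directed labelled graph. This is purely illustrative: the abstract does not show TENTACLE's actual data model or query syntax, so every name below is hypothetical.

```python
# A minimal sketch of path-expression evaluation over a labelled digraph.
# Illustrative only: TENTACLE's real model and syntax are not shown in
# the abstract, so the Graph class and label names here are hypothetical.

from collections import defaultdict

class Graph:
    def __init__(self):
        # adjacency: node -> edge label -> set of successor nodes
        self.adj = defaultdict(lambda: defaultdict(set))

    def add_edge(self, src, label, dst):
        self.adj[src][label].add(dst)

    def follow(self, nodes, label):
        """All nodes reachable from `nodes` over one edge named `label`."""
        out = set()
        for n in nodes:
            out |= self.adj[n][label]
        return out

def eval_path(graph, start, path):
    """Evaluate a path expression given as a list of edge labels,
    e.g. ['links_to', 'author'] ~ start.links_to.author."""
    current = {start}
    for label in path:
        current = graph.follow(current, label)
    return current

# Example: a tiny hypertext-like graph, echoing the hypertext server demo.
g = Graph()
g.add_edge("page1", "links_to", "page2")
g.add_edge("page2", "links_to", "page3")
g.add_edge("page2", "author", "welz")

print(eval_path(g, "page1", ["links_to", "links_to"]))  # {'page3'}
print(eval_path(g, "page1", ["links_to", "author"]))    # {'welz'}
```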
2

Evaluating recursive relational queries modelled by networks of coroutines

Glauert, J. R. W. January 1983 (has links)
No description available.
3

Implementing a heterogeneous relational database node

Long, J. A. January 1985 (has links)
No description available.
4

Process modelling for information system description

Stanczyk, S. F. January 1987 (has links)
No description available.
5

A declarative specification language for temporal database applications

Theodoulidis, Charalampos I. January 1990 (has links)
No description available.
6

High Availability for Database Systems in Geographically Distributed Cloud Computing Environments

Meng, Huangdong January 2014 (has links)
In recent years, cloud storage systems have become very popular due to their good scalability and high availability. However, these storage systems provide limited transactional capabilities, which makes developing applications that use them substantially more difficult than developing applications that use a traditional SQL-based relational database management system (DBMS). There have been solutions that provide transactional SQL-based DBMS services on the cloud, including solutions that use cloud shared storage systems to store the data. However, none of these solutions take advantage of the shared cloud storage architecture to provide DBMS high availability. These solutions typically deal with the failure of a DBMS server by restarting this server and going through crash recovery based on the transaction log, which can lead to long DBMS service downtimes that are not acceptable to users.

It is possible to run traditional DBMS high availability solutions in cloud environments. These solutions are typically based on shipping the transaction log from a primary server to a backup server, and replaying the log at the backup server to keep it up to date with the primary. However, these solutions do not work well if the primary and backup are in different, geographically distributed data centers, due to the high latency of log shipping. Furthermore, these solutions do not take advantage of the capabilities of the underlying shared storage system.

We present a new transparent high availability system for transactional SQL-based DBMS on a shared storage architecture, which we call CAC-DB (Continuous Access Cloud DataBase). Our system is especially designed for eventually consistent cloud storage systems that run efficiently in multiple geographically distributed data centers. The database and transaction logs are stored in such a storage system, and therefore remain available after failures up to the failure of an entire data center (e.g., in a natural disaster). CAC-DB takes advantage of this shared storage to ensure that the DBMS service remains available and transactionally consistent in the face of failures up to the loss of one or more data centers. By taking advantage of shared storage, CAC-DB can run in a geographically distributed environment with minimal overhead as compared to traditional log shipping solutions.

In CAC-DB, an active (primary) and a standby (backup) DBMS run on different servers in different data centers. The standby catches up with the active's memory state by replaying the shared log. When the active crashes, the standby can finish the failover process and reach peak throughput very quickly; the DBMS service experiences only a few seconds of downtime. While the basic idea of replaying the log is simple and not new, the shared storage environment poses many new challenges, including the need for synchronization protocols, new buffer pool management mechanisms, approaches for guaranteeing strong consistency without sacrificing performance, and a new shared-storage-based failure detection mechanism. This thesis solves these challenges and presents a system that achieves the following goal: if a data center fails, not only does the persistent image of the database on the storage tier survive, but the DBMS service can also resume almost uninterrupted and reach peak throughput in a very short time. At the same time, the throughput of the DBMS service during normal processing is not negatively affected. Our experiments with CAC-DB running on EC2 confirm that it can achieve these goals.
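To illustrate the active/standby scheme the abstract describes, here is a minimal sketch of a standby server replaying a shared log and failing over. The shared-log API, record format, and failover logic are assumptions; CAC-DB's real protocol also covers the synchronization, buffer-pool management, and failure-detection challenges named above, all omitted here.

```python
# A minimal sketch of the active/standby log-replay idea, not CAC-DB's
# actual protocol. The SharedLog class is a hypothetical stand-in for a
# transaction log kept in eventually consistent shared cloud storage.

class SharedLog:
    def __init__(self):
        self.records = []          # (lsn, key, value)

    def append(self, key, value):
        lsn = len(self.records)
        self.records.append((lsn, key, value))
        return lsn

    def read_from(self, lsn):
        return self.records[lsn:]

class StandbyDBMS:
    """Backup server that keeps its memory state warm by replaying the log."""
    def __init__(self, log):
        self.log = log
        self.state = {}            # in-memory image of the database
        self.applied_lsn = 0

    def catch_up(self):
        # Replay any log records the active has written since the last pass.
        for lsn, key, value in self.log.read_from(self.applied_lsn):
            self.state[key] = value
            self.applied_lsn = lsn + 1

    def failover(self):
        # On active failure: replay the remaining log tail, then serve.
        self.catch_up()
        return self.state

log = SharedLog()
standby = StandbyDBMS(log)
log.append("acct:1", 100)      # writes by the active server
standby.catch_up()             # standby stays nearly up to date
log.append("acct:1", 90)       # last write before the active crashes
print(standby.failover())      # {'acct:1': 90} -- a short replay, not a full crash recovery
```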
7

Performance comparison between multi-model, key-value and documental NoSQL database management systems

Jansson, Jens, Vukosavljevic, Alexandar, Catovic, Ismet January 2021 (has links)
This study conducted an experiment comparing the multi-model NoSQL DBMS ArangoDB with other NoSQL DBMSs in terms of the average response time of queries. The DBMSs compared in this experiment are Redis, MongoDB, Couchbase, and OrientDB. The hypothesis examined in this study is the following: “There is a significant difference between ArangoDB, OrientDB, Couchbase, Redis, and MongoDB in terms of the average response time of queries”. This is examined by comparing the average response time of 1 000, 100 000, and 1 000 000 queries between these database systems. The results show that ArangoDB performs worse than the other DBMSs. Examples of future work include using additional DBMSs in the same experiment and replacing ArangoDB with another multi-model DBMS to determine whether such DBMSs, in general, perform worse than single-model DBMSs.
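As a rough sketch of how such a comparison can be measured (the thesis's actual benchmark harness is not shown in the abstract), the following times a batch of queries and reports the average response time; `run_query` is a hypothetical stand-in for a call against any one DBMS under test.

```python
# A sketch of the kind of timing harness such a comparison implies.
# `run_query` is hypothetical; real runs would use the native client
# for Redis, MongoDB, Couchbase, OrientDB, or ArangoDB.

import time

def average_response_time(run_query, queries):
    """Average wall-clock response time over a batch of queries."""
    start = time.perf_counter()
    for q in queries:
        run_query(q)
    return (time.perf_counter() - start) / len(queries)

# Dummy query runner, standing in for a real DBMS client call.
def dummy_run_query(q):
    return q.upper()

for batch in (1_000, 100_000, 1_000_000):
    avg = average_response_time(dummy_run_query, ["get key"] * batch)
    print(f"{batch:>9} queries: {avg * 1e6:.3f} µs avg")
```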
8

A Database Supported Modeling Environment for Pandemic Planning and Course of Action Analysis

Ma, Yifei 24 June 2013 (has links)
Pandemics, such as the 2009 H1N1 and 2003 SARS outbreaks, can significantly impact public health and society. In addition to analyzing historic epidemic data, computational simulation of epidemic propagation processes and disease control strategies can help us understand the spatio-temporal dynamics of epidemics in the laboratory. Consequently, the public can be better prepared and the government can control future epidemic outbreaks more effectively. Recently, epidemic propagation simulation systems, which use high performance computing technology, have been proposed and developed to understand disease propagation processes. However, run-time infection situation assessment and intervention adjustment, two important steps in modeling disease propagation, are not well supported in these simulation systems. In addition, these simulation systems are computationally efficient in their simulations, but most of them have limited capabilities in terms of modeling interventions in realistic scenarios.

In this dissertation, we focus on building a modeling and simulation environment for epidemic propagation and propagation control strategies. The objective of this work is to design a modeling environment that supports the previously missing functions while performing well in terms of the expected features, such as modeling fidelity, computational efficiency, and modeling capability. Our proposed methodologies for building such a modeling environment are: 1) decoupled and co-evolving models for disease propagation, situation assessment, and propagation control strategy, and 2) assessing situations and simulating control strategies using relational databases. Our motivation for exploring these methodologies is as follows: 1) a decoupled and co-evolving model allows us to design modules for each function separately and makes this complex modeling system design simpler, and 2) simulating propagation control strategies using relational databases improves the modeling capability and the human productivity of using this modeling environment. To evaluate our proposed methodologies, we have designed and built a loosely coupled and database supported epidemic modeling and simulation environment. With detailed experimental results and realistic case studies, we demonstrate that our modeling environment provides the missing functions and greatly enhances many expected features, such as modeling capability, without significantly sacrificing computational efficiency and scalability. / Ph. D.
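To illustrate the second methodology, assessing situations with relational queries, here is a minimal sketch using SQLite. The dissertation's actual schema is not given in the abstract, so the table and column names below are hypothetical.

```python
# A sketch of run-time situation assessment via a relational query.
# The `infections` table and its columns are hypothetical; the
# dissertation's real schema is not shown in the abstract.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE infections (person_id INTEGER, day INTEGER, region TEXT);
    INSERT INTO infections VALUES (1, 10, 'north'), (2, 10, 'north'),
                                  (3, 11, 'south'), (4, 11, 'north');
""")

# Situation assessment: which regions crossed a case threshold recently?
rows = conn.execute("""
    SELECT region, COUNT(*) AS cases
    FROM infections
    WHERE day BETWEEN 10 AND 11
    GROUP BY region
    HAVING cases >= 2
""").fetchall()

# The simulation loop could then adjust interventions (e.g. school
# closures) in the flagged regions before the next simulated day.
print(rows)  # [('north', 3)]
```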
9

Modeling and Computation of Complex Interventions in Large-scale Epidemiological Simulations using SQL and Distributed Database

Kaw, Rushi 30 August 2014 (has links)
Scalability is an important problem in epidemiological applications that simulate complex intervention scenarios over large datasets. Indemics is one such interactive, data-intensive framework for high-performance computing (HPC) based large-scale epidemic simulations. In the Indemics framework, interventions are supplied from an external, standalone database, which proved to be an effective way of implementing interventions. Although this setup performs well for simple interventions and small datasets, performance and scalability for complex interventions and large datasets remain an issue. In this thesis, we present IndemicsXC, a scalable and massively parallel high-performance data engine for Indemics in a supercomputing environment. IndemicsXC has the ability to implement complex interventions over large datasets. Our distributed database solution retains the simplicity of Indemics by using the same SQL query interface for expressing interventions. We show that our solution implements the most complex interventions by intelligently offloading them to the supercomputer nodes and processing them in parallel. We present an extensive performance evaluation of our database engine with the help of various intervention case studies over synthetic population datasets. The evaluation of our parallel and distributed database framework illustrates its scalability over a standalone database. Our results show that the distributed data engine is a parallel, scalable, and cost-efficient means of implementing interventions. The cost model proposed in this thesis can approximate intervention query execution time with decent accuracy. Our distributed database framework could be leveraged for fast, accurate, and sensible decisions by public health officials during an outbreak. Finally, we discuss the considerations for using distributed databases to drive large-scale simulations. / Master of Science
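As a rough sketch of the "same SQL interface, executed in parallel" idea (not IndemicsXC's actual engine), the following partitions population records across worker processes and applies the same intervention predicate on each shard; the data layout and the predicate are hypothetical.

```python
# A sketch of offloading one intervention query across parallel workers.
# IndemicsXC runs over a distributed database on supercomputer nodes;
# here each "node" is just a process over one data partition, and the
# intervention predicate (age >= 65 -> vaccinate) is hypothetical.

from multiprocessing import Pool

# Population records partitioned across nodes: (person_id, age, region)
PARTITIONS = [
    [(1, 67, 'north'), (2, 12, 'north')],
    [(3, 70, 'south'), (4, 34, 'south')],
]

def apply_intervention(partition):
    # Each node evaluates the same intervention query over its shard,
    # e.g. SELECT person_id FROM people WHERE age >= 65.
    return [pid for pid, age, _ in partition if age >= 65]

if __name__ == "__main__":
    with Pool(processes=len(PARTITIONS)) as pool:
        per_node = pool.map(apply_intervention, PARTITIONS)
    # Merge per-node results, as the coordinator would.
    to_vaccinate = [pid for chunk in per_node for pid in chunk]
    print(to_vaccinate)  # [1, 3]
```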
10

A Database System for the Control and Maintenance of Computing Equipment Inventory

Pande, Vidya 01 1900 (has links)
It is proposed to design, develop, and implement a database system to support the requirements of the Technical Computing Services department of McMaster University with respect to its responsibilities for the control and servicing of units of computing equipment at McMaster University.

This database contains information concerning each unit of equipment: its manufacturer, custodian, model number, serial number, purchase or lease record, maintenance record, past and present locations, and service record.

This project determines various cross-sections of this information to be retrieved. This includes the development of software to create, maintain, and update the database and to produce the necessary reports. The design is implemented with CDC's DMS-170, with COBOL 5 as the host language. / Thesis / Master of Science (MSc)
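As a rough modern rendering of the record types this abstract lists (the original design used CDC's DMS-170 with COBOL 5, not the SQL shown here), the following sketches a hypothetical relational schema in SQLite, plus one of the "cross-section" retrievals the project describes.

```python
# A hypothetical relational rendering of the inventory records described
# above. The original used DMS-170/COBOL 5; table and column names here
# are assumptions, not the thesis's actual design.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE equipment (
        unit_id        INTEGER PRIMARY KEY,
        manufacturer   TEXT,
        model_number   TEXT,
        serial_number  TEXT,
        custodian      TEXT,
        acquisition    TEXT,          -- purchase or lease record
        location       TEXT           -- present location
    );
    CREATE TABLE service_log (        -- maintenance and service records
        unit_id     INTEGER REFERENCES equipment(unit_id),
        serviced_on TEXT,
        notes       TEXT
    );
    CREATE TABLE location_history (   -- past locations
        unit_id   INTEGER REFERENCES equipment(unit_id),
        location  TEXT,
        moved_on  TEXT
    );
""")

# One possible cross-section: every unit a given custodian holds,
# with any service notes on file.
rows = conn.execute("""
    SELECT e.unit_id, e.model_number, s.notes
    FROM equipment e LEFT JOIN service_log s USING (unit_id)
    WHERE e.custodian = ?
""", ("Technical Computing Services",)).fetchall()
print(rows)  # [] until rows are inserted
```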
