About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

An approximate load balancing parallel hash join algorithm to handle data skew in a parallel data base system

Geum, Seong 05 1900 (has links)
No description available.
462

Secure database modeling and design

Oh, Yong-Chul 05 1900 (has links)
No description available.
463

Extensions to Aldat to support distributed database operations with no global scheme

Gaudon, Melanie E. January 1986 (has links)
No description available.
464

Databases For Mediation Systems : Design and Data scaling approach

Ayyagari, Nitin Reddy January 2015 (has links)
Context: Data volumes are growing continuously due to the wide usage of modern communication systems, so systems must be designed to process these volumes efficiently. Mediation systems are meant to serve this purpose, and databases form an integral part of them. The suitability of databases for such systems is the principal theme of this work.

Objectives: The objectives of this thesis are to identify the key requirements for databases that can be used as part of mediation systems, to gain a thorough understanding of their features and the data models they commonly use, and to benchmark their performance.

Methods: Previous work on various databases is studied as part of a literature review. A test bed is set up for the experiments, and performance metrics such as throughput and total time taken are measured through a Java-based client. A thorough analysis is carried out by varying parameters such as data volumes and the number of threads in the client.

Results: Cassandra shows very good write performance for event and batch operations. Cassandra also shows slightly better read performance than MySQL Cluster, although this difference disappears when the client uses fewer threads.

Conclusions: Our evaluation of MySQL Cluster and Cassandra shows that both have several features suitable for mediation systems. However, Cassandra does not guarantee ACID transactions, while MySQL Cluster supports them well. Further evaluation is needed of new-generation databases that are not yet mature.
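As an illustration of the measurement method this abstract describes, the following is a minimal sketch of a multi-threaded throughput benchmark in Java. The Store interface and every name in it are invented for this sketch; the thesis's actual client would issue real driver calls against Cassandra or MySQL Cluster.

    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicLong;

    /** Minimal multi-threaded benchmark harness: N threads each issue a
     *  fixed number of operations; throughput = completed ops / elapsed time. */
    public class ThroughputBenchmark {
        // Stand-in for a single store operation (e.g., an INSERT executed
        // through a Cassandra or MySQL Cluster driver).
        interface Store { void write(long key, String value) throws Exception; }

        static double run(Store store, int threads, int opsPerThread) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            AtomicLong completed = new AtomicLong();
            long start = System.nanoTime();
            for (int t = 0; t < threads; t++) {
                final int offset = t * opsPerThread;   // disjoint key ranges per thread
                pool.submit(() -> {
                    for (int i = 0; i < opsPerThread; i++) {
                        try {
                            store.write(offset + i, "payload-" + i);
                            completed.incrementAndGet();
                        } catch (Exception e) { /* a real test would count failures */ }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("%d threads: %d ops in %.2f s = %.0f ops/s%n",
                    threads, completed.get(), seconds, completed.get() / seconds);
            return completed.get() / seconds;
        }

        public static void main(String[] args) throws Exception {
            Store inMemory = (k, v) -> { };            // no-op stand-in for a real driver
            for (int threads : new int[]{1, 4, 16}) {  // vary client threads, as in the thesis
                run(inMemory, threads, 100_000);
            }
        }
    }

Varying the thread counts and the per-thread operation count reproduces the kind of parameter sweep the abstract describes.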
465

From entities to objects : reverse engineering a relational data model into an object-oriented design

Hines, Gary L. January 2000 (has links)
In many software applications, an object-oriented design (OOD) is generated first, then persistent storage is implemented by mapping the objects to a relational database. This thesis explores the "reverse engineering" of an OOD out of an existing relational data model. Findings from the current literature are presented, and a case study is undertaken using the model and research process published by GENTECH, a nonprofit organization promoting genealogical computing. The model is mapped into an OOD and captured in Unified Modeling Language (UML) class diagrams and object collaboration diagrams. The suitability of the example OOD is evaluated against the GENTECH research process using UML use cases and sequence diagrams. The mapping of relational database designs into OODs is found to be suitable in certain instances. / Department of Computer Science
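To make the table-to-class mapping concrete, here is a small Java sketch of one reverse-engineering step. The person/life_event schema is invented for illustration and is not GENTECH's actual genealogical model.

    import java.util.ArrayList;
    import java.util.List;

    /* Hypothetical relational fragment to be reverse engineered:
     *   CREATE TABLE person     (person_id INT PRIMARY KEY, name VARCHAR(80));
     *   CREATE TABLE life_event (event_id  INT PRIMARY KEY, kind VARCHAR(40),
     *                            person_id INT REFERENCES person(person_id));
     */
    class Person {
        final int id;                        // from person.person_id
        final String name;                   // from person.name
        final List<LifeEvent> events = new ArrayList<>();  // inverse of the FK
        Person(int id, String name) { this.id = id; this.name = name; }
    }

    class LifeEvent {
        final int id;                        // from life_event.event_id
        final String kind;                   // from life_event.kind
        final Person person;                 // FK person_id becomes an object reference
        LifeEvent(int id, String kind, Person person) {
            this.id = id; this.kind = kind; this.person = person;
            person.events.add(this);         // maintain both ends of the association
        }
    }

The recurring pattern is that each table becomes a class, primary keys become identity fields, foreign keys become object references, and the "many" side of a relationship gains a collection for the inverse direction.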
466

Multi-Master Replication for Snapshot Isolation Databases

Chairunnanda, Prima January 2013 (has links)
Lazy replication with snapshot isolation (SI) has emerged as a popular choice for distributed databases. However, lazy replication requires the execution of update transactions at one (master) site so that it is relatively easy for a total SI order to be determined for consistent installation of updates in the lazily replicated system. We propose a set of techniques that support update transaction execution over multiple partitioned sites, thereby allowing the master to scale. Our techniques determine a total SI order for update transactions over multiple master sites without requiring global coordination in the distributed system, and ensure that updates are installed in this order at all sites to provide consistent and scalable replication with SI. We have built our techniques into PostgreSQL and demonstrate their effectiveness through experimental evaluation.
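One classic way to impose a total order on commits without a central coordinator is to use Lamport-style logical timestamps with site ids as tie-breakers. The Java sketch below illustrates only that general idea; it is not the specific protocol the thesis builds into PostgreSQL.

    // Lamport-style commit stamps: (logicalClock, siteId), compared
    // lexicographically, give every site the same total order.
    class CommitStamp implements Comparable<CommitStamp> {
        final long clock;   // logical clock value at commit time
        final int siteId;   // unique site id breaks ties deterministically
        CommitStamp(long clock, int siteId) { this.clock = clock; this.siteId = siteId; }
        public int compareTo(CommitStamp o) {
            int c = Long.compare(clock, o.clock);
            return c != 0 ? c : Integer.compare(siteId, o.siteId);
        }
    }

    class Site {
        private long clock = 0;
        private final int id;
        Site(int id) { this.id = id; }
        // Stamp a local commit, advancing the local clock.
        synchronized CommitStamp commit() { return new CommitStamp(++clock, id); }
        // On receiving a replicated update, merge the remote clock so later
        // local commits are ordered after everything already observed.
        synchronized void observe(CommitStamp remote) { clock = Math.max(clock, remote.clock); }
    }

    public class TotalOrderDemo {
        public static void main(String[] args) {
            Site a = new Site(1), b = new Site(2);
            CommitStamp t1 = a.commit();   // stamped (1,1) at site 1
            b.observe(t1);                 // replication delivers t1 to site 2
            CommitStamp t2 = b.commit();   // stamped (2,2), ordered after t1
            System.out.println(t1.compareTo(t2) < 0);  // prints true at any site
        }
    }

Because every site compares stamps the same way, all sites install updates in the same order even though no coordinator assigned it.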
467

High Availability for Database Systems in Geographically Distributed Cloud Computing Environments

Meng, Huangdong January 2014 (has links)
In recent years, cloud storage systems have become very popular due to their good scalability and high availability. However, these storage systems provide limited transactional capabilities, which makes developing applications that use them substantially more difficult than developing applications that use a traditional SQL-based relational database management system (DBMS). There have been solutions that provide transactional SQL-based DBMS services on the cloud, including solutions that use cloud shared storage systems to store the data. However, none of these solutions take advantage of the shared cloud storage architecture to provide DBMS high availability. They typically deal with the failure of a DBMS server by restarting it and going through crash recovery based on the transaction log, which can lead to long DBMS service downtimes that are not acceptable to users.

It is possible to run traditional DBMS high availability solutions in cloud environments. These solutions are typically based on shipping the transaction log from a primary server to a backup server, and replaying the log at the backup server to keep it up to date with the primary. However, they do not work well if the primary and backup are in different, geographically distributed data centers, due to the high latency of log shipping. Furthermore, they do not take advantage of the capabilities of the underlying shared storage system.

We present a new transparent high availability system for transactional SQL-based DBMSs on a shared storage architecture, which we call CAC-DB (Continuous Access Cloud DataBase). Our system is especially designed for eventually consistent cloud storage systems that run efficiently in multiple geographically distributed data centers. The database and transaction logs are stored in such a storage system, and therefore remain available after failures up to the loss of an entire data center (e.g., in a natural disaster). CAC-DB takes advantage of this shared storage to ensure that the DBMS service remains available and transactionally consistent in the face of failures up to the loss of one or more data centers. By taking advantage of shared storage, CAC-DB can run in a geographically distributed environment with minimal overhead as compared to traditional log shipping solutions.

In CAC-DB, an active (primary) and a standby (backup) DBMS run on different servers in different data centers. The standby catches up with the active's memory state by replaying the shared log. When the active crashes, the standby can finish the failover process and reach peak throughput very quickly, so the DBMS service experiences only a few seconds of downtime. While the basic idea of replaying the log is simple and not new, the shared storage environment poses many new challenges, including the need for synchronization protocols, new buffer pool management mechanisms, approaches for guaranteeing strong consistency without sacrificing performance, and a new shared-storage-based failure detection mechanism. This thesis solves these challenges and presents a system that achieves the following goal: if a data center fails, not only does the persistent image of the database on the storage tier survive, but the DBMS service can also resume almost uninterrupted and reach peak throughput in a very short time. At the same time, the throughput of the DBMS service during normal processing is not negatively affected. Our experiments with CAC-DB running on EC2 confirm that it can achieve these goals.
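The core catch-up idea, replaying a shared log at a warm standby, can be sketched as below. Everything here (the LogRecord shape, the failure signal, the in-memory list standing in for cloud storage) is an invented simplification; CAC-DB's real design additionally covers synchronization protocols, buffer pool management, and shared-storage-based failure detection.

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    /** Sketch of a warm standby: it tails a shared transaction log and applies
     *  each record so that failover needs no long crash recovery. */
    public class StandbyReplayer implements Runnable {
        record LogRecord(long lsn, String redo) {}

        private final List<LogRecord> sharedLog;    // stand-in for the shared cloud log
        private long appliedLsn = 0;                // last log sequence number replayed
        private volatile boolean primaryAlive = true;

        StandbyReplayer(List<LogRecord> sharedLog) { this.sharedLog = sharedLog; }

        // In a real DBMS this would redo the change against the buffer pool;
        // here we only track how far the standby has caught up.
        private void apply(LogRecord r) { appliedLsn = r.lsn(); }

        private void drain() {
            for (LogRecord r : sharedLog)
                if (r.lsn() > appliedLsn) apply(r);
        }

        public void run() {
            while (primaryAlive) {                  // continuous catch-up phase
                drain();
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
            }
            drain();                                // failover: finish replaying the log
            System.out.println("Standby promoted as primary at LSN " + appliedLsn);
        }

        void onPrimaryFailureDetected() { primaryAlive = false; }

        public static void main(String[] args) throws InterruptedException {
            List<LogRecord> log = new CopyOnWriteArrayList<>();
            StandbyReplayer standby = new StandbyReplayer(log);
            Thread t = new Thread(standby);
            t.start();
            log.add(new LogRecord(1, "redo-1"));    // the primary appends redo records
            log.add(new LogRecord(2, "redo-2"));
            Thread.sleep(50);
            standby.onPrimaryFailureDetected();     // e.g., a storage-level lease expires
            t.join();
        }
    }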
468

AUTONOMIC WORKLOAD MANAGEMENT FOR DATABASE MANAGEMENT SYSTEMS

Zhang, Mingyi 07 May 2014 (has links)
In today’s database server environments, multiple types of workloads, such as on-line transaction processing, business intelligence and administrative utilities, can be present in a system simultaneously. Workloads may have different levels of business importance and distinct performance objectives. When the workloads execute concurrently on a database server, interference may occur and result in the workloads failing to meet the performance objectives and the database server suffering severe performance degradation. To evaluate and classify the existing workload management systems and techniques, we develop a taxonomy of workload management techniques. The taxonomy categorizes workload management techniques into multiple classes and illustrates a workload management process. We propose a general framework for autonomic workload management for database management systems (DBMSs) to dynamically monitor and control the flow of the workloads and help DBMSs achieve the performance objectives without human intervention. Our framework consists of multiple workload management techniques and performance monitor functions, and implements the monitor–analyze–plan–execute loop suggested in autonomic computing principles. When a performance issue arises, our framework provides the ability to dynamically detect the issue and to initiate and coordinate the workload management techniques. To detect severe performance degradation in database systems, we propose the use of indicators. We demonstrate a learning-based approach to identify a set of internal DBMS monitor metrics that best indicate the problem. We illustrate and validate our framework and approaches using a prototype system implemented on top of IBM DB2 Workload Manager. Our prototype system leverages the existing workload management facilities and implements a set of corresponding controllers to adapt to dynamic and mixed workloads while protecting DBMSs against severe performance degradation. / Thesis (Ph.D, Computing) -- Queen's University, 2014-05-07 13:35:42.858
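The monitor–analyze–plan–execute loop at the heart of such a framework can be sketched as follows, in Java. The metric name, threshold, and throttling action are invented for illustration and are not taken from IBM DB2 Workload Manager.

    /** One iteration of a minimal MAPE control loop for workload management. */
    public class MapeLoop {
        interface Monitor { double readMetric(String name); }          // DBMS monitor facade
        interface Actuator { void throttleLowPriorityWork(double f); } // workload-control facade

        private final Monitor monitor;
        private final Actuator actuator;

        MapeLoop(Monitor monitor, Actuator actuator) {
            this.monitor = monitor; this.actuator = actuator;
        }

        void iterate() {
            // Monitor: sample an indicator metric from the DBMS.
            double lockWaitRatio = monitor.readMetric("lock_wait_ratio");
            // Analyze: decide whether severe degradation is indicated.
            boolean degraded = lockWaitRatio > 0.3;       // illustrative threshold
            // Plan: choose a control action proportional to the problem.
            double throttle = degraded ? Math.min(lockWaitRatio, 0.9) : 0.0;
            // Execute: apply the action through the workload-management facility.
            actuator.throttleLowPriorityWork(throttle);
        }

        public static void main(String[] args) {
            Monitor m = name -> 0.45;   // stubbed metric source for the demo
            Actuator a = f -> System.out.println("throttling low-priority work by " + f);
            new MapeLoop(m, a).iterate();
        }
    }

In the thesis's framework the analyze step is learning-based, using a trained set of indicator metrics rather than a single fixed threshold as in this stub.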
469

Process algebra approach to parallel DBMS performance modelling

Pua, Chai Seng January 1999 (has links)
No description available.
470

Characterization of solitons and shockwaves in nonlinear transmission lines at microwave frequencies

Salameh, Daoud Yousef January 1998 (has links)
No description available.
