31
Ontology-based query processing for global information systems /Mena, Eduardo. Illarramendi, Arantza. January 2001 (has links)
University dissertation under the title: Mena, Eduardo: Observer--Zaragoza, 1998. / Bibliography: p. [203]-212.
32
Real-time data management in the distributed environment /Chen, Deji, January 1999 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 1999. / Vita. Includes bibliographical references (leaves 260-273). Available also in a digital version from Dissertation Abstracts.
33
Analysis and prototyping of the United States Marine Corps Total Force Administration System (TFAS), Echelon II : a web enabled database for the small unit leader /Simmons, Steven A. January 2002 (has links) (PDF)
Thesis (M.S. in Computer Science)--Naval Postgraduate School, September 2002. / Thesis advisor(s): Daniel R. Dolk. Includes bibliographical references (p. 125). Also available online.
34
Alternating parallelism and the stabilization of distributed systems /Haddix, Frank Furman, January 1999 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 1999. / Vita. Includes bibliographical references (leaves 127-131). Available also in a digital version from Dissertation Abstracts.
35
Maintaining retrieval effectiveness in distributed, dynamic information retrieval systems /Viles, Charles L. January 1996 (has links)
Thesis (Ph. D.)--University of Virginia, 1996. / Includes vita and abstract. Includes bibliographical references (leaves 135-146).
36
Asynchronous Backup and Initialization of a Database Server for Replicated Database Systems /Bhalla, Subhash, Madnick, Stuart E. 14 April 2003 (has links)
The possibility of a temporary disconnection of database service exists in many computing environments. A common requirement is to permit a participating site to lag behind and later re-initialize to full recovery. It is also necessary that active transactions view a globally consistent system state for ongoing operations. We present an algorithm for on-the-fly backup and site initialization. The technique is non-blocking in the sense that failure and recovery procedures do not interfere with ordinary transactions. As a result, the system can tolerate disconnection of services and reconnection of disconnected services without incurring high overheads.
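The abstract describes the technique only at a high level. As a rough illustration (not the authors' algorithm), the Python sketch below shows the generic "fuzzy snapshot plus log catch-up" pattern that non-blocking site initialization schemes of this kind rely on: ordinary transactions keep committing at the live site while a rejoining replica copies the current state and then replays the log tail it missed. All class, function, and variable names here are assumptions for illustration.

```python
import threading

class LiveSite:
    """Site that stays online while a disconnected replica re-initializes."""
    def __init__(self):
        self.data = {}          # committed key -> value state
        self.log = []           # append-only redo log of (key, value) records
        self.lock = threading.Lock()

    def commit(self, key, value):
        # Ordinary transactions are never blocked by a joining replica.
        with self.lock:
            self.data[key] = value
            self.log.append((key, value))

    def snapshot(self):
        # Fuzzy copy: remember the log position, then copy the current state.
        with self.lock:
            return len(self.log), dict(self.data)

def reinitialize(replica_state, live):
    """Bring a lagging replica to a state consistent with the live site."""
    log_pos, copy = live.snapshot()
    replica_state.clear()
    replica_state.update(copy)               # bulk transfer of the snapshot
    while True:
        with live.lock:
            tail = list(live.log[log_pos:])  # redo records missed so far
            log_pos = len(live.log)
        if not tail:
            return                           # caught up: replica is current
        for key, value in tail:              # replay the missed updates
            replica_state[key] = value

site = LiveSite()
site.commit("x", 1)
replica = {}
reinitialize(replica, site)
print(replica)   # {'x': 1}
```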
37
An infrastructure for secure distributed object-oriented databases /Dreyer, Lucas Cornelius Johannes 10 September 2012 (has links)
M.Sc. / In a society that is becoming increasingly reliant on information, it is necessary for information to be stored efficiently and safely. Database technology is used to store large amounts of information efficiently, while database security is concerned with storing information securely. More complex computer applications (CAD/CAM, multimedia and Groupware) led to the development of object-oriented programming, with object-oriented databases following shortly after. Object-oriented databases store the data of object-oriented systems efficiently and permanently, and they provide a rich set of semantic structures that allows them to be used in applications where other database models are simply inadequate. In federations consisting of several interconnected databases, security plays a vital role in the proper management of information. This work describes a Secure Distributed Object Environment (SDOE) infrastructure. It is designed to be implementation-oriented, so that strict theoretical prototypes such as SPOP (Self-protecting Object Prototype) can be built on it. SPOP is a prototype of a secure object-oriented database and is based on the SPO database model of Olivier. To describe the federated database architectures used by SDOE and SPOP, it is necessary to understand the architecture of federated database systems. Reference architectures for federated database systems are therefore discussed first: a comparison is drawn between two prominent reference architectures, and a generalised reference architecture based on these two is proposed. Secondly, the distributed object environment created to make the use of object-oriented programming in a distributed environment as problem-free as possible is discussed. A marshal buffer structure is discussed thirdly; this structure is used to hold procedure parameters during an RPC (Remote Procedure Call). Fourthly, the communications infrastructure necessary to support higher-level services is discussed; the infrastructure is implemented in Linux (a UNIX variant), and this approach has provided several interesting challenges. The fifth discussion deals with the requirements for a name service, which is necessary if objects are to be used transparently (without reference to their current locations in the federation).
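As a rough companion to the marshal buffer discussion above, the sketch below shows one way such a buffer can pack and unpack RPC parameters. The byte layout (big-endian integers, length-prefixed UTF-8 strings) and the class name are assumptions for illustration, not the thesis' actual wire format.

```python
import struct

class MarshalBuffer:
    """Packs procedure parameters into bytes on the caller side and
    unpacks them, in the same order, on the callee side."""
    def __init__(self, data: bytes = b""):
        self.data = bytearray(data)
        self.offset = 0

    def put_int(self, value: int):
        self.data += struct.pack(">i", value)

    def put_str(self, value: str):
        encoded = value.encode("utf-8")
        self.data += struct.pack(">I", len(encoded)) + encoded

    def get_int(self) -> int:
        (value,) = struct.unpack_from(">i", self.data, self.offset)
        self.offset += 4
        return value

    def get_str(self) -> str:
        (length,) = struct.unpack_from(">I", self.data, self.offset)
        self.offset += 4
        value = self.data[self.offset:self.offset + length].decode("utf-8")
        self.offset += length
        return value

# Caller marshals the parameters, the callee unmarshals them in the same order.
buf = MarshalBuffer()
buf.put_str("get_object")   # remote procedure name (illustrative)
buf.put_int(42)             # object identifier (illustrative)

received = MarshalBuffer(bytes(buf.data))
assert received.get_str() == "get_object" and received.get_int() == 42
```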
38
Accelerating SPARQL Queries and Analytics on RDF Data /Al-Harbi, Razen 09 November 2016 (has links)
The complexity of SPARQL queries and RDF applications poses great challenges for distributed RDF management systems. SPARQL workloads are dynamic and consist of queries with variable complexities. Hence, systems that use static partitioning suffer from communication overhead for workloads that generate excessive communication. Concurrently, RDF applications are becoming more sophisticated, mandating analytical operations that extend beyond SPARQL queries. Being primarily designed and optimized to execute SPARQL queries, which lack procedural capabilities, existing systems are not suitable for rich RDF analytics.
This dissertation tackles the problem of accelerating SPARQL queries and RDF analytics on distributed shared-nothing RDF systems. First, a distributed RDF engine, coined AdPart, is introduced. AdPart uses lightweight hash partitioning for sharding triples using their subject values, rendering its startup overhead very low. The locality-aware query optimizer of AdPart takes full advantage of the partitioning to (i) support the fully parallel processing of join patterns on subjects and (ii) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. By exploiting hash-based locality, AdPart achieves better or comparable performance to systems that employ sophisticated partitioning schemes.
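As an illustration of the placement rule described above (not AdPart's implementation), the sketch below shards triples by hashing their subject, so all triples of a subject land on the same worker and a star join on that subject needs no data exchange. The function names and toy data are assumptions.

```python
from collections import defaultdict

def shard_by_subject(triples, num_workers):
    """Assign each (subject, predicate, object) triple to a worker."""
    partitions = defaultdict(list)
    for s, p, o in triples:
        worker = hash(s) % num_workers   # lightweight hash partitioning on subject
        partitions[worker].append((s, p, o))
    return partitions

triples = [
    ("alice", "knows", "bob"),
    ("alice", "worksAt", "kaust"),   # same subject -> same worker as above
    ("bob", "knows", "carol"),
]
parts = shard_by_subject(triples, num_workers=4)
# All of alice's triples are co-located, so a subject star query such as
# "?s knows ?x . ?s worksAt ?y" can be evaluated locally on each worker.
for worker, chunk in sorted(parts.items()):
    print(worker, chunk)
```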
To cope with workload dynamism, AdPart is extended to dynamically adapt to workload changes. AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent patterns among workers. Consequently, the communication cost for future queries is drastically reduced or even eliminated. Experiments with synthetic and real data verify that AdPart starts faster than all existing systems and gracefully adapts to the query load.
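A minimal sketch of that adaptive idea, assuming a simple frequency threshold as the trigger (the threshold and the trigger condition are illustrative assumptions; AdPart's actual monitoring and redistribution machinery is more involved):

```python
from collections import Counter

class AccessMonitor:
    """Counts how often access patterns are used and flags hot ones."""
    def __init__(self, threshold):
        self.counts = Counter()
        self.threshold = threshold      # assumed tuning knob, not from the paper
        self.replicated = set()

    def record(self, pattern):
        """Record one use of an access pattern; return an action when it turns hot."""
        self.counts[pattern] += 1
        if (self.counts[pattern] >= self.threshold
                and pattern not in self.replicated):
            self.replicated.add(pattern)
            return f"replicate instances of {pattern} among workers"
        return None

monitor = AccessMonitor(threshold=3)
action = None
for _ in range(3):
    action = monitor.record(("?s", "knows", "?o"))
print(action)   # fires once the pattern becomes frequent enough
```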
Finally, to support and accelerate rich RDF analytical tasks, a vertex-centric RDF analytics framework is proposed. The framework, named SPARTex, bridges the gap between RDF and graph processing. To do so, SPARTex (i) implements a generic SPARQL operator as a vertex-centric program, coupled with an optimizer that generates efficient execution plans; (ii) allows SPARQL to invoke vertex-centric programs as stored procedures; and (iii) provides a unified in-memory data store that allows the persistence of intermediate results. Consequently, SPARTex can efficiently support RDF analytical tasks consisting of complex pipelines of operators.
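To make the vertex-centric model concrete, the sketch below runs a generic Pregel-style superstep loop with connected-component labelling as the vertex program. It illustrates only the processing model such frameworks build on, not SPARTex's SPARQL operator or data store.

```python
def vertex_centric(edges, compute, init):
    """Run a vertex program to convergence over an undirected graph."""
    neighbors = {}
    for u, v in edges:
        neighbors.setdefault(u, set()).add(v)
        neighbors.setdefault(v, set()).add(u)
    state = {v: init(v) for v in neighbors}
    # Superstep 0: every vertex sends its initial value to its neighbors.
    messages = {v: [state[n] for n in neighbors[v]] for v in neighbors}
    while any(messages.values()):
        new_messages = {v: [] for v in neighbors}
        for v, inbox in messages.items():
            if not inbox:
                continue                          # vertex is inactive this round
            new_value, changed = compute(state[v], inbox)
            state[v] = new_value
            if changed:                           # only changed vertices send
                for n in neighbors[v]:
                    new_messages[n].append(new_value)
        messages = new_messages
    return state

# Vertex program: keep the minimum label seen (connected components).
def min_label(current, inbox):
    best = min([current] + inbox)
    return best, best != current

edges = [("a", "b"), ("b", "c"), ("x", "y")]
print(vertex_centric(edges, min_label, init=lambda v: v))
# {'a': 'a', 'b': 'a', 'c': 'a', 'x': 'x', 'y': 'x'}
```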
39
Evaluation of on-premises distributed databases allowing for offline access /Bäckström, Joel, Erkgärds, Emil January 2022 (has links)
No description available.
40
Query processing in heterogeneous distributed database management systems /Bhasker, Bharat 20 September 2005 (has links)
The goal of this work is to present an advanced query processing algorithm formulated and developed in support of heterogeneous distributed database management systems. Heterogeneous distributed database management systems view the integrated data through a uniform global schema. The query processing algorithm described here produces an inexpensive strategy for a query expressed over the global schema. The research addresses the following aspects of query processing: (1) formulation of a low-level query language to express the fundamental heterogeneous database operations; (2) translation of a query expressed over the global schema to an equivalent query expressed over a conceptual schema; (3) an estimation methodology to derive the intermediate result sizes of the database operations; (4) a query decomposition algorithm to generate an efficient sequence of the basic database operations to answer the query. The first issue is addressed by developing an algebraic query language called cluster algebra. The cluster algebra consists of the following operations: (a) selection, union, intersection and difference, which are extensions of their relational algebraic counterparts to heterogeneous databases; (b) normal-join and normal-projection, which replace their counterparts, join and projection, in the relational algebra; (c) two new operators, embed and unembed, to restructure the database schema. The second issue, query translation, is addressed by the development of an algorithm that translates a cluster algebra query expressed over the virtual views to an equivalent cluster algebra query expressed over the conceptual databases. A non-parametric estimation methodology to estimate the result size of a cluster algebra operation is developed to address the third issue. Finally, this research develops a query decomposition algorithm, applicable to relational and non-relational databases, that decomposes a query by computing all profitable semi-join operations, followed by the determination of the best sequence of join operations per processing site. The join optimization is performed by formulating a zero-one integer linear program that uses the non-parametric estimation technique to compute the sizes of intermediate results. The query processing algorithm was implemented in the context of DAVID, a heterogeneous distributed database management system. / Ph. D.
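As a toy illustration of the semi-join reduction step described above (not the DAVID implementation or its cost model), the sketch below reduces one relation by a semi-join before performing the join; shipping only the join-attribute values and the surviving rows is the source of the communication savings the decomposition algorithm exploits. Relation and attribute names are assumptions.

```python
def semijoin(r, s, attr):
    """R semijoin S on attr: rows of R whose attr value appears in S."""
    s_values = {row[attr] for row in s}
    return [row for row in r if row[attr] in s_values]

# Imagine ORDERS lives at site 1 and CUSTOMERS at site 2.
orders = [{"cust": 1, "item": "disk"}, {"cust": 2, "item": "tape"},
          {"cust": 3, "item": "cpu"}]
customers = [{"cust": 1, "region": "EU"}]

# Profitable semi-join: ship only CUSTOMERS.cust (one value) to site 1,
# reduce ORDERS there, then ship the single surviving row for the join,
# instead of shipping all three ORDERS rows.
reduced_orders = semijoin(orders, customers, "cust")
joined = [{**o, **c} for o in reduced_orders
          for c in customers if o["cust"] == c["cust"]]
print(reduced_orders)   # [{'cust': 1, 'item': 'disk'}]
print(joined)           # [{'cust': 1, 'item': 'disk', 'region': 'EU'}]
```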