1031

Evaluating alternative methods of providing database access over low speed communications

Werbel, Daniel T. 23 December 2009 (has links)
One of the most important activities in the systems engineering process is the determination of the best implementation method from a set of alternatives. This project describes a process that can be followed to evaluate a set of implementation alternatives. The process consists of the following activities: definition of the need, requirements and functional analysis, evaluation of the alternatives, requirements validation, and risk identification. To clarify the activities in the evaluation process, the project follows a case study in which the XYZ Corporation determines the best implementation approach for providing access to a remote database over low speed communications lines. Three alternatives were evaluated by the XYZ Corporation. After performing the evaluation, an HTML-only implementation approach was selected. This implementation had the highest performance and dependability compared to the other alternatives. Regional users will use a Netscape browser to view HTML pages stored at the corporate headquarters. A web server located at the corporate headquarters will interface with the database server by performing the required additions, updates, and queries to the corporate database. The web server will also format the returns into HTML pages for viewing at the regional sites. / Master of Science
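A minimal sketch (not the XYZ Corporation's system) of the HTML-only pattern this abstract describes: the web tier queries the database and returns plain HTML, so the regional browser never talks to the database directly. The table, data, and port are invented placeholders.

```python
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder in-memory database standing in for the corporate database server.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "shipped"), (2, "pending")])

class OrderReportHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        rows = conn.execute("SELECT id, status FROM orders").fetchall()
        # Format the query result as a static HTML page for the low-bandwidth client.
        body = ("<html><body><table>"
                + "".join(f"<tr><td>{i}</td><td>{s}</td></tr>" for i, s in rows)
                + "</table></body></html>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("", 8080), OrderReportHandler).serve_forever()
```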
1032

Transformation of relational schema into static object schema

Kutan, Kent 02 February 2010 (has links)
The objective of this paper is to show how a relational database schema can be transformed into a static object-oriented database schema. First, data definition in the relational model and the object model is described. Next, the transformation rules are explained. This is followed by an illustration of an algorithm used to construct an object-oriented schema from a relational schema. Finally, the algorithm is implemented in C++. / Master of Science
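A hypothetical sketch of the kind of mapping such a transformation applies: each relation becomes a class, each column an attribute, and each foreign key a reference to another generated class. The paper's implementation is in C++ and its rule set is richer; this Python fragment only illustrates the general idea with made-up relations.

```python
from dataclasses import dataclass, field

@dataclass
class Column:
    name: str
    type: str
    references: str | None = None  # target relation if this column is a foreign key

@dataclass
class Relation:
    name: str
    columns: list[Column] = field(default_factory=list)

def to_object_schema(relations: list[Relation]) -> dict[str, dict[str, str]]:
    """Map relations to class definitions: attribute name -> attribute type."""
    classes: dict[str, dict[str, str]] = {}
    for rel in relations:
        attrs = {}
        for col in rel.columns:
            # Foreign keys become object references instead of raw key values.
            attrs[col.name] = col.references if col.references else col.type
        classes[rel.name] = attrs
    return classes

emp = Relation("Employee", [Column("id", "int"),
                            Column("name", "str"),
                            Column("dept_id", "int", references="Department")])
dept = Relation("Department", [Column("id", "int"), Column("title", "str")])
print(to_object_schema([emp, dept]))
```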
1033

An Approach to Achieve DBMS Vendor Independence for Ides AB's Platform

Johansson, Philip, Blomqvist, Niklas January 2017 (has links)
Software developed with few integration options for different user interfaces or database vendors may lose market share in the long run. To stay competitive, companies in this situation may need to look at ways to increase their alternatives. This thesis aims to present and evaluate how Ides AB could achieve vendor independence with respect to database integration. The proposed solution builds on pre-existing code from an existing product and therefore includes theory about methods for reading, understanding and analysing code. The outcome is presented with code examples to give the reader a clear and concise understanding, and the evaluation takes related work into consideration. The proposed approach consists of a class that represents different database vendors together with abstract functions that handle the interaction with each database; which database the class interacts with is determined by the connection that is established. The approach also identifies, and verifies through an evaluation, which parts can be made database agnostic.
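An illustrative sketch (not Ides AB's code) of the pattern the thesis describes: a single class hierarchy hides vendor differences, and the concrete backend is chosen from the connection that is established. Class names and the connection-string rule are assumptions made for this example.

```python
from abc import ABC, abstractmethod

class Database(ABC):
    """Vendor-neutral interface used by the rest of the application."""

    @abstractmethod
    def connect(self, connection_string: str) -> None: ...

    @abstractmethod
    def query(self, sql: str) -> list[tuple]: ...

class SqliteDatabase(Database):
    def connect(self, connection_string: str) -> None:
        import sqlite3
        self._conn = sqlite3.connect(connection_string)

    def query(self, sql: str) -> list[tuple]:
        return self._conn.execute(sql).fetchall()

class PostgresDatabase(Database):
    def connect(self, connection_string: str) -> None:
        raise NotImplementedError("placeholder for a PostgreSQL driver")

    def query(self, sql: str) -> list[tuple]:
        raise NotImplementedError

def database_for(connection_string: str) -> Database:
    # The established connection determines which vendor class is used.
    if connection_string.startswith("postgresql://"):
        return PostgresDatabase()
    return SqliteDatabase()

db = database_for(":memory:")
db.connect(":memory:")
print(db.query("SELECT 1"))
```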
1034

Establishing a Framework for an African Genome Archive

Southgate, Jamie January 2021 (has links)
Magister Scientiae - MSc / The generation of biomedical research data on the African continent is growing, with numerous studies realizing the importance of African genetic diversity in discoveries of human origins and disease susceptibility. The decrease in costs to purchase and utilize such tools has enabled research groups to produce datasets of significant scientific value. However, this success story has resulted in a new challenge for African researchers and institutions. An increase in data scale and complexity has led to an imbalance of infrastructure and skills to manage, store and analyse this data. The lack of physical infrastructure has left genomic research on the continent lagging behind its counterparts abroad, drastically limiting the sharing of data and posing challenges for researchers wishing to explore secondary analysis, study verification and amalgamation. The scope of this project entailed the design and implementation of a prototype genome archive to support the effective use of data resources amongst researchers. The prototype consists of a web interface and storage backend for users to upload and browse projects, datasets and metadata stored in the archive. The server, middleware, database and server-side framework are components of the genome archive and form the software stack. The server component provides the shared resources such as network connectivity, file storage, security and the metadata database. The database type implemented for storing the metadata relating to the sample files is a NoSQL database. This database is interfaced with the iRODS middleware component, which controls data being sent between the server, database and the Flask framework. The Flask framework, which is based on the Python programming language, is the development platform of the archive web application. The Cognitive Walkthrough methodology was used to evaluate suitability of the software for its users. Results showed that the core conceptual model adopted by the prototype software is consistent and that actions available to the user are visible. Issues were raised pertaining to user feedback when performing tasks and metadata term meaning. The development of a continent-wide genome archive for Africa is feasible by utilizing open source software and metadata standards to improve data discovery and reuse.
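A minimal Flask sketch of the archive's web tier described above. The actual prototype stores files via iRODS and metadata in a NoSQL database; here an in-memory dict stands in for both, purely to illustrate the upload/browse flow, and the routes are hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
datasets: dict[str, dict] = {}  # dataset name -> metadata (placeholder store)

@app.route("/datasets", methods=["POST"])
def upload_dataset():
    # Accept a metadata record describing an uploaded dataset.
    meta = request.get_json(force=True)
    datasets[meta["name"]] = meta
    return jsonify({"status": "stored", "name": meta["name"]}), 201

@app.route("/datasets", methods=["GET"])
def browse_datasets():
    # Browse all metadata records currently in the archive.
    return jsonify(list(datasets.values()))

if __name__ == "__main__":
    app.run(port=5000)
```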
1035

Development of a Mineral-Specific Sorption Database for Surface Complexation Modeling (Final Report and Manual)

Richter, Anke, Vahle, A., Nebelung, Cordula, Brendler, Vinzenz January 2004 (has links)
RES³T - the Rossendorf Expert System for Surface and Sorption Thermodynamics - is a digitized thermodynamic sorption database, implemented as a relational database. It is mineral-specific and can therefore also be used for additive models of more complex solid phases such as rocks or soils. An integrated user interface helps users to access selected mineral and sorption data, to extract internally consistent data sets for sorption modeling, and to export them into formats suitable for other modeling software. Data records comprise mineral properties, specific surface area values, characteristics of surface binding sites and their protolysis, sorption ligand information, and surface complexation reactions. An extensive bibliography is also included, providing links not only to the data items listed above, but also to background information concerning surface complexation model theories, surface species evidence, and sorption experiment techniques. The RES³T database is intended for international use. This requires high standards of availability, consistency and currency. The authors therefore decided to couple the database to an authorization tool.
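A hypothetical, heavily simplified relational layout inspired by the record types listed above (the actual RES³T schema is far richer): mineral-specific records linked to surface-site and reaction tables, joined to assemble a data set ready for export. All names and numeric values are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mineral  (id INTEGER PRIMARY KEY, name TEXT, surface_area REAL);
CREATE TABLE site     (id INTEGER PRIMARY KEY, mineral_id INTEGER, density REAL,
                       FOREIGN KEY (mineral_id) REFERENCES mineral(id));
CREATE TABLE reaction (id INTEGER PRIMARY KEY, site_id INTEGER, equation TEXT, log_k REAL,
                       FOREIGN KEY (site_id) REFERENCES site(id));
""")
# Illustrative values only, not data taken from RES3T.
conn.execute("INSERT INTO mineral VALUES (1, 'example mineral', 50.0)")
conn.execute("INSERT INTO site VALUES (1, 1, 2.0)")
conn.execute("INSERT INTO reaction VALUES (1, 1, '>XOH + H+ = >XOH2+', 7.0)")

# Pull everything needed for one mineral, ready for export to a modeling code.
rows = conn.execute("""
    SELECT m.name, m.surface_area, s.density, r.equation, r.log_k
    FROM mineral m
    JOIN site s     ON s.mineral_id = m.id
    JOIN reaction r ON r.site_id = s.id
    WHERE m.name = 'example mineral'
""").fetchall()
print(rows)
```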
1036

Sysplex Cluster Technologies for High-Performance Databases

Spruth, Wilhelm G., Rahm, Erhard 17 October 2018 (has links)
We present the IBM Parallel Sysplex cluster architecture and its use for database and transaction processing. The Sysplex architecture allows up to 32 multiprocessor mainframes to work on a shared database without modification of existing applications. A key component is the so-called Coupling Facility (CF), which manages global data structures and global buffer areas accessible to all machines. We discuss how such "close" coupling of machines solves performance-critical cluster tasks for synchronization and coherence control. Performance studies show high scalability of Sysplex performance in practical deployments.
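A toy analogy (in no way the Coupling Facility's actual interface) of the idea that globally accessible lock and version structures let every node in the cluster synchronize access to shared pages and detect stale cached copies. All names are invented for this sketch.

```python
import threading

class GlobalCouplingState:
    """Stands in for the CF: a global lock table plus shared page version counters."""
    def __init__(self):
        self._mutex = threading.Lock()
        self.lock_table: dict[str, str] = {}    # page id -> owning node
        self.page_version: dict[str, int] = {}  # page id -> latest version

    def acquire(self, page: str, node: str) -> bool:
        with self._mutex:
            if self.lock_table.get(page) in (None, node):
                self.lock_table[page] = node
                return True
            return False  # another node holds the global lock

    def release_and_bump(self, page: str, node: str) -> None:
        with self._mutex:
            # The writer publishes a new version; other nodes notice their cached
            # copy is stale by comparing version numbers (coherence control).
            self.page_version[page] = self.page_version.get(page, 0) + 1
            del self.lock_table[page]

cf = GlobalCouplingState()
print(cf.acquire("page-17", "node-A"))   # True: node-A gets the global lock
print(cf.acquire("page-17", "node-B"))   # False: node-B must wait
cf.release_and_bump("page-17", "node-A")
print(cf.page_version["page-17"])        # 1: node-B sees a newer version
```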
1037

Citation analysis of database publications

Rahm, Erhard, Thor, Andreas 19 October 2018 (has links)
We analyze citation frequencies for two main database conferences (SIGMOD, VLDB) and three database journals (TODS, VLDB Journal, Sigmod Record) over 10 years. The citation data is obtained by integrating and cleaning data from DBLP and Google Scholar. Our analysis considers different comparative metrics per publication venue, in particular the total and average number of citations as well as the impact factor which has so far only been considered for journals. We also determine the most cited papers, authors, author institutions and their countries.
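A toy sketch of the venue metrics this abstract mentions: total and average citation counts per venue, plus a journal-style impact factor (citations in year Y to papers from Y-1 and Y-2, divided by the number of those papers). The records below are invented, not the paper's DBLP/Google Scholar data.

```python
from collections import defaultdict

papers = [
    # (venue, publication year, citations received in 2005) -- made-up numbers
    ("SIGMOD", 2003, 40), ("SIGMOD", 2004, 25), ("SIGMOD", 2001, 60),
    ("VLDB",   2003, 30), ("VLDB",   2004, 20),
]

def venue_metrics(papers, year=2005):
    totals, counts, recent_cites, recent_papers = (defaultdict(int) for _ in range(4))
    for venue, pub_year, cites in papers:
        totals[venue] += cites
        counts[venue] += 1
        if pub_year in (year - 1, year - 2):   # window used for the impact factor
            recent_cites[venue] += cites
            recent_papers[venue] += 1
    return {v: {"total": totals[v],
                "average": totals[v] / counts[v],
                "impact_factor": recent_cites[v] / recent_papers[v]}
            for v in totals}

print(venue_metrics(papers))
```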
1038

Dynamic load balancing in parallel database systems

Rahm, Erhard 19 October 2018 (has links)
Dynamic load balancing is a prerequisite for effectively utilizing large parallel database systems. Load balancing at different levels is required, in particular for assigning transactions and queries as well as subqueries to nodes. Special problems are posed by the need to support both inter-transaction/query and intra-transaction/query parallelism due to conflicting performance requirements. We compare the major architectures for parallel database systems, Shared Nothing and Shared Disk, with respect to their load balancing potential. For this purpose, we focus on parallel scan and join processing in multi-user mode. It turns out that both the degree of query parallelism and the processor allocation should be determined in a coordinated way, based on the current utilization of critical resource types, in particular CPU and memory.
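A hedged illustration of the coordination idea in the abstract: choosing both the degree of parallelism and the concrete processors from the current CPU and memory utilization of the nodes. The thresholds, node data, and policy are assumptions for this sketch, not the paper's algorithms.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_util: float   # 0.0 .. 1.0
    mem_util: float   # 0.0 .. 1.0

def allocate_scan(nodes: list[Node], max_degree: int,
                  cpu_limit: float = 0.8, mem_limit: float = 0.8) -> list[str]:
    """Return the nodes a parallel scan should run on in multi-user mode."""
    # Only lightly loaded nodes are eligible; prefer the least CPU-loaded ones.
    eligible = [n for n in nodes if n.cpu_util < cpu_limit and n.mem_util < mem_limit]
    eligible.sort(key=lambda n: n.cpu_util)
    # The degree of parallelism shrinks with load instead of being fixed per query.
    degree = min(max_degree, len(eligible))
    return [n.name for n in eligible[:degree]]

cluster = [Node("n1", 0.35, 0.50), Node("n2", 0.90, 0.40),
           Node("n3", 0.20, 0.60), Node("n4", 0.55, 0.95)]
print(allocate_scan(cluster, max_degree=3))  # -> ['n3', 'n1']
```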
1039

A Simulation Approach for Evaluating Parallel Shared-Disk Database Systems

Stöhr, Thomas 23 October 2018 (has links)
No description available.
1040

On Parallel Join Processing in Object-Relational Database Systems

Märtens, Holger, Rahm, Erhard 06 November 2018 (has links)
So far, only a few performance studies on parallel object-relational database systems are available. In particular, the relative performance of relational versus reference-based join processing in a parallel environment has not been investigated sufficiently. We present a performance study based on the BUCKY benchmark to compare parallel join processing using reference attributes with relational hash- and merge-join algorithms. In addition, we propose a data allocation scheme especially suited for object hierarchies and set-valued attributes.
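For readers unfamiliar with the relational algorithms the study compares against reference-based joins, here is a small, generic hash-join sketch; it is not the parallel implementation evaluated in the paper, and the sample rows are invented.

```python
def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Join two row lists on equal keys using an in-memory hash table."""
    table = {}
    for row in build_rows:                     # build phase on the smaller input
        table.setdefault(row[build_key], []).append(row)
    result = []
    for row in probe_rows:                     # probe phase on the larger input
        for match in table.get(row[probe_key], []):
            result.append({**match, **row})    # merge matching rows
    return result

departments = [{"dept_id": 1, "dept": "R&D"}, {"dept_id": 2, "dept": "Sales"}]
employees   = [{"emp": "Ana", "dept_id": 1}, {"emp": "Bo", "dept_id": 2},
               {"emp": "Cy", "dept_id": 1}]
print(hash_join(departments, employees, "dept_id", "dept_id"))
```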
