711 |
Hydrogen bonding in the crystalline state
Hayward, Owen David, January 2001 (has links)
No description available.
|
712 |
Selective Data Replication for Distributed Geographical Data Sets
Gu, Xuan, January 2008 (has links)
The main purpose of this research is to incorporate additional higher-level semantics into existing data replication strategies in such a way that their flexibility and performance can be improved in favour of both data providers and consumers. The resulting approach is referred to as the selective data replication system. With this system, data that has been updated by a data provider is captured and batched into messages known as update notifications. Once data consumers receive update notifications, they use them to evaluate so-called update policies, which are specified by the consumers and contain details on when data replication needs to occur and what data needs to be updated during the replication.
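The mechanism described above lends itself to a small illustration. The following is a minimal Python sketch of how a consumer-side update policy might be evaluated against batched update notifications; the class names, fields, and threshold rule are illustrative assumptions, not the interface actually defined in the thesis.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UpdateNotification:
    """One batched change message captured on the provider side."""
    dataset: str           # e.g. "roads", "parcels"
    region: str            # spatial extent the change falls in
    changed_features: int  # number of features touched

@dataclass
class UpdatePolicy:
    """Consumer-specified rule: when to replicate and what to pull."""
    dataset: str
    region: str
    threshold: int         # replicate once this many features have changed

    def should_replicate(self, pending: List[UpdateNotification]) -> bool:
        relevant = [n for n in pending
                    if n.dataset == self.dataset and n.region == self.region]
        return sum(n.changed_features for n in relevant) >= self.threshold

# Example: only replicate the "roads" layer for "canterbury" after 50 changes.
policy = UpdatePolicy(dataset="roads", region="canterbury", threshold=50)
pending = [UpdateNotification("roads", "canterbury", 30),
           UpdateNotification("roads", "canterbury", 25)]
if policy.should_replicate(pending):
    print("trigger selective replication of", policy.dataset)
```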
|
713 |
THE USE OF ABSTRACTIONS IN MODEL MANAGEMENT.
DOLK, DANIEL ROY. January 1982 (has links)
The concept of a generalized model management system (GMMS) and its role in a decision support system are discussed. A paradigm for developing a GMMS which integrates artificial intelligence techniques with data management concepts is presented. The paradigm views a GMMS as a knowledge-based modeling system (KBMS) with knowledge abstractions as the vehicle of knowledge and model representation. Knowledge abstractions are introduced as a hybrid of the predicate calculus, semantic network, and frame representations in artificial intelligence (AI), embodied in an equivalent of a programming-language data abstraction structure. As a result, models represented by knowledge abstractions are not only subject to the powerful problem reduction and inference techniques available in the AI domain but are also in a form conducive to model management. The knowledge abstraction in its most general form is seen as a frame which serves as a template for generating abstraction instances for specific classes of models. The corollaries of an abstraction-based GMMS with respect to current data management concepts are explored. A CODASYL implementation of an abstraction-based GMMS for the class of linear programming models is described and demonstrated.
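As a rough illustration of the frame idea, the sketch below (in Python, not the CODASYL implementation described in the abstract) treats a knowledge abstraction as a slotted template for the class of linear programming models and generates an instance by filling the slots; the names and slot choices are assumptions made for illustration only.

```python
# A frame-like knowledge abstraction: the slots state what any model in the
# class must supply; instances fill the slots for one concrete model.
lp_abstraction = {
    "class": "linear_program",
    "slots": ["decision_variables", "objective", "constraints"],
}

def instantiate(abstraction, **slot_values):
    """Generate an abstraction instance, checking that every slot is filled."""
    missing = set(abstraction["slots"]) - set(slot_values)
    if missing:
        raise ValueError(f"unfilled slots: {missing}")
    return {"class": abstraction["class"], **slot_values}

# A tiny product-mix model as an instance of the LP abstraction.
model = instantiate(
    lp_abstraction,
    decision_variables=["x1", "x2"],
    objective=("max", {"x1": 3, "x2": 5}),
    constraints=[({"x1": 1, "x2": 2}, "<=", 10)],
)
print(model["class"], model["decision_variables"])
```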
|
714 |
A METHODOLOGY FOR GLOBAL SCHEMA DESIGN.
MANNINO, MICHAEL VICTOR. January 1983 (has links)
A global schema is an integrated view of heterogeneous databases used to support data sharing among independent, existing databases. Global schema design complexities arise from the volume of details, design choices, potential conflicts, and interdependencies among design choices. The methodology described provides a framework for efficient management of these critical dimensions in generating and evaluating alternative designs. The methodology contains three major steps. First, differences due to the varying local data models are resolved by converting each local schema to an equivalent schema in a unifying data model. Second, the entity types of the local schemas in the unifying model are grouped into clusters called common areas. All the entity types in a common area can possibly be merged via generalization. For each common area, semantic information is defined that drives the merging process. Third, each common area is integrated into the global schema by applying a set of generalization operators. Mapping rules are then defined to resolve differences in the representations of equivalent attributes. The integration of the local schemas is based on equivalence assertions. Four types of attribute equivalences are defined: two attributes may be locally or globally equivalent, and they can be key or non-key. Strategies for handling each of these cases are proposed and evaluated. The global schema design methodology includes several algorithms which may assist a designer. One algorithm analyzes a set of equivalence assertions for consistency and completeness, including resolution of transitively implied assertions. A second algorithm performs an interactive merge of a common area by presenting the possible generalization actions to the designer. It supports the theme that many generalization structures are possible, and the appropriate structure often depends on designer preferences and application requirements. The methodology is evaluated for several cases involving real databases. The cases demonstrate the utility of the methodology in managing the details, considering many alternatives, and resolving conflicts. In addition, these cases demonstrate the need for a set of computer-aided tools; for even a relatively small case, the number of details and design choices can overwhelm a designer.
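To make the assertion-analysis step concrete, here is a minimal Python sketch of how transitively implied attribute equivalences can be derived with a union-find structure. The attribute names are invented for illustration, and the actual algorithm in the thesis also checks consistency and completeness, which this sketch does not.

```python
# Union-find over attribute names, used to derive transitively implied
# equivalence assertions (a ~ b and b ~ c together imply a ~ c).
parent = {}

def find(attr):
    """Return the representative of the equivalence class containing attr."""
    parent.setdefault(attr, attr)
    while parent[attr] != attr:
        attr = parent[attr]
    return attr

def assert_equivalent(a, b):
    """Record the designer's assertion that attributes a and b are equivalent."""
    parent[find(a)] = find(b)

# Assertions collected from two hypothetical local schemas.
assert_equivalent("emp.staff_no", "worker.emp_id")
assert_equivalent("worker.emp_id", "person.id")

# Transitively implied assertion: emp.staff_no ~ person.id
print(find("emp.staff_no") == find("person.id"))   # True
```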
|
715 |
Policies Based Intrusion Response System for DBMS
Nayeem, Fatima; Vijayakamal, M., 01 December 2012 (has links)
Relational databases are built on the relational model proposed by Dr. E. F. Codd, which has become the most widely used database model in the world. Databases built on this model store and retrieve data efficiently and provide authentication through credentials. However, attacks are not limited to stealing credentials and intruding into the database: adversaries may try to intrude into a relational database for monetary or other gains [1]. Relational databases are subject to malicious attacks because they hold valuable business data that is sensitive in nature. Continuous monitoring of such databases is therefore essential, and it is among the top five database strategies identified by Gartner research for preventing data leaks in organizations [2]. Governments such as the US have issued regulations on managing data securely, citing frameworks such as HIPAA, GLBA, and PCI as examples.

Intrusion detection systems play an important role in detecting online intrusions and providing the necessary alerts, and intrusion detection can also be applied to relational databases. An intrusion response system for a relational database is essential to protect it from external and internal attacks. We propose a new intrusion response system for relational databases based on database response policies. We have developed an interactive language that lets database administrators specify the responses the system should issue for the malicious requests a relational database encounters. We also maintain a policy database that stores the response policies, and we design and implement algorithms for searching for the applicable policies. Matching the right policies and administering the policies are the two problems addressed in this paper, to ensure faster action and to prevent malicious changes to policy objects. Cryptography is also used to protect the relational database from attacks. The experimental results reveal that the proposed response system is effective and useful.
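A hedged sketch of the policy-matching idea follows: response policies carry conditions over attributes of an anomalous request and an action to take, and the highest-priority matching policy determines the response. The attribute names, actions, and priority scheme are assumptions for illustration, not the interactive language or algorithms actually developed in the paper.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ResponsePolicy:
    """Conditions on the anomalous request plus the response action to take."""
    conditions: Dict[str, str]   # e.g. {"role": "clerk", "command": "DELETE"}
    action: str                  # e.g. "suspend_session", "alert_dba"
    priority: int = 0

    def matches(self, request: Dict[str, str]) -> bool:
        return all(request.get(k) == v for k, v in self.conditions.items())

def choose_response(policies: List[ResponsePolicy], request: Dict[str, str]) -> str:
    """Return the action of the highest-priority policy matching the request."""
    matching = [p for p in policies if p.matches(request)]
    if not matching:
        return "log_only"
    return max(matching, key=lambda p: p.priority).action

policies = [
    ResponsePolicy({"command": "DELETE", "table": "payroll"}, "suspend_session", 10),
    ResponsePolicy({"role": "clerk"}, "alert_dba", 1),
]
anomalous = {"role": "clerk", "command": "DELETE", "table": "payroll"}
print(choose_response(policies, anomalous))   # suspend_session
```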
|
716 |
Deriving mathematical significance in palaeontological data from large-scale database technologies
Hewzulla, Dilshat, January 2000 (has links)
No description available.
|
717 |
Towards Automatic Initial Buffer Configuration
Ku, Fei Yen, January 2003 (has links)
Buffer pools are blocks of memory used in database systems to retain frequently referenced pages. Configuring the buffer pools is a difficult and manual task that involves determining the amount of memory to devote to the buffer pools, the number of buffer pools to use, their sizes, and the database objects assigned to each buffer pool. A good buffer configuration improves query response times and system throughput by reducing the number of disk accesses. Determining a good buffer configuration requires knowledge of the database workload.
Empirical studies have shown that optimizing the initial buffer configuration (determined at database design time) can improve system throughput. A good initial configuration can also provide faster convergence towards a favourable dynamic buffer allocation. Previous studies have not considered automating the buffer pool configuration process.
This thesis presents two techniques that facilitate the initial buffer configuration task. First, we develop an analytic model of the GCLOCK buffer replacement policy that can be used to evaluate the effectiveness of a particular buffer configuration for a given workload. Second, to obtain the necessary model parameters, we propose a workload characterization scheme that extracts workload parameters, describing the query reference patterns, from the query access plans. In addition, we extend an existing multifractal model and present a multifractal skew model to represent query access skew.
Our buffer model has been validated against measurements of the buffer manager of a commercial database system. The model has also been compared to an alternative GCLOCK buffer model. Our results show that our proposed model closely predicts the actual physical read rates and recognizes favourable buffer configurations. This work provides a foundation for the development of an automated buffer configuration tool.
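For readers unfamiliar with GCLOCK, the following Python sketch simulates the replacement policy and counts physical reads for a page reference trace; it illustrates the behaviour the analytic model predicts, but it is not the model developed in the thesis, and the single-weight scheme is a simplifying assumption.

```python
def gclock_physical_reads(reference_string, buffer_size, init_weight=1):
    """Simulate GCLOCK and count physical reads (misses) for a page trace."""
    frames = [None] * buffer_size        # page held in each frame
    counts = [0] * buffer_size           # GCLOCK weight per frame
    hand, misses = 0, 0
    for page in reference_string:
        if page in frames:               # buffer hit: reset the frame's weight
            counts[frames.index(page)] = init_weight
            continue
        misses += 1                      # physical read required
        while counts[hand] > 0:          # sweep the clock, decrementing weights
            counts[hand] -= 1
            hand = (hand + 1) % buffer_size
        frames[hand] = page              # victim found (weight reached 0)
        counts[hand] = init_weight
        hand = (hand + 1) % buffer_size
    return misses

# Example: a small page trace against a 3-frame buffer pool.
trace = [1, 2, 3, 1, 4, 1, 2, 5, 1]
print(gclock_physical_reads(trace, buffer_size=3))
```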
|
718 |
Evaluation of Shortest Path Query Algorithm in Spatial Databases
Lim, Heechul, January 2003 (has links)
Many variations of algorithms for finding the shortest path in a large graph have been introduced recently due to the needs of applications like the Geographic Information System (GIS) or Intelligent Transportation System (ITS). The primary subjects of those algorithms are materialization and hierarchical path views. Some studies focus on materialization and accept higher pre-computation and storage costs in exchange for faster query computation. Other studies focus on the shortest-path algorithm itself, which requires less pre-computation and storage but takes more time to compute the shortest path. The main objective of this thesis is to accelerate the computation time for shortest-path queries while keeping the degree of materialization as low as possible. This thesis explores two different categories: 1) the reduction of I/O costs for multiple queries, and 2) the reduction of search spaces in a graph. The thesis proposes two simple algorithms to reduce the I/O costs, especially for multiple queries. To tackle the problem of reducing search spaces, we give two different levels of materialization, namely the boundary set distance matrix and the x-Hop sketch graph, both of which materialize the shortest-path view of the boundary nodes in a partitioned graph. Our experiments show that a combination of the suggested solutions for 1) and 2) performs better than the original Disk-based SP algorithm [7], on which our work is based, and requires much less storage than HEPV [3].
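As a rough illustration of the materialization idea, the Python sketch below answers a cross-partition query by combining local Dijkstra searches with a precomputed boundary-to-boundary distance matrix. It assumes an undirected graph and invented node names, and it is not the thesis's boundary set distance matrix or x-Hop sketch graph algorithms.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths within one partition's subgraph."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def cross_partition_distance(src_adj, dst_adj, boundary_matrix, s, t):
    """s and t lie in different partitions; boundary_matrix[(b1, b2)] holds the
    precomputed shortest distance between boundary nodes b1 and b2.
    Assumes an undirected graph, so distances from t equal distances to t."""
    from_s = dijkstra(src_adj, s)    # s to its own partition's boundary nodes
    to_t = dijkstra(dst_adj, t)      # t's partition, searched outward from t
    best = float("inf")
    for (b1, b2), d_mid in boundary_matrix.items():
        d = from_s.get(b1, float("inf")) + d_mid + to_t.get(b2, float("inf"))
        best = min(best, d)
    return best

# Tiny example: two partitions joined through boundary nodes "a" and "b".
left = {"s": [("a", 2)], "a": [("s", 2)]}
right = {"t": [("b", 4)], "b": [("t", 4)]}
matrix = {("a", "b"): 5}             # precomputed across the partition border
print(cross_partition_distance(left, right, matrix, "s", "t"))   # 11
```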
|
719 |
Construction and management of large-scale and complex virtual manufacturing environments
Xu, Zhijie, January 2000 (has links)
No description available.
|
720 |
Automating the gathering of relevant information from biomedical text
Canevet, Catherine, January 2009 (has links)
More and more, database curators rely on literature-mining techniques to help them gather and make use of the knowledge encoded in text documents. This thesis investigates how an assisted annotation process can help, and explores the hypothesis that it is only with respect to full-text publications that a system can tell relevant and irrelevant facts apart by studying their frequency. A semi-automatic annotation process was developed for a particular database, the Nuclear Protein Database (NPD), based on a set of full-text articles newly annotated with regard to subnuclear protein localisation, along with eight lexicons. The annotation process is carried out online, retrieving relevant documents (abstracts and full-text papers) and highlighting sentences of interest in them. The process also offers a summary table of the facts found, clustered by type of information. Each method involved in each step of the tool is evaluated using cross-validation results on the training data as well as test-set results. The performance of the final tool, called the “NPD Curator System Interface”, is estimated empirically in an experiment where the NPD curator updates the database with pieces of information found relevant in 31 publications using the interface. A final experiment complements our main methodology by showing its extensibility to retrieving information on protein function rather than localisation. I argue that the general methods, the results they produced and the discussions they engendered are useful for any subsequent attempt to generate semi-automatic database annotation processes. The annotated corpora, gazetteers, methods and tool are fully available on request of the author (catherine.canevet@bbsrc.ac.uk).
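The frequency hypothesis above can be illustrated with a small sketch: count how often a candidate fact (a protein paired with a subnuclear compartment) is co-mentioned across the sentences of a full-text article, and keep only facts repeated beyond a threshold. The lexicons, the sentence splitter, and the threshold in this Python sketch are illustrative assumptions, not the NPD Curator System Interface itself.

```python
import re
from collections import Counter

def candidate_fact_counts(full_text, protein_lexicon, compartment_lexicon):
    """Count co-mentions of a protein and a subnuclear compartment per sentence."""
    counts = Counter()
    for sentence in re.split(r"(?<=[.!?])\s+", full_text):
        proteins = [p for p in protein_lexicon if p in sentence]
        compartments = [c for c in compartment_lexicon if c in sentence]
        for p in proteins:
            for c in compartments:
                counts[(p, c)] += 1
    return counts

def relevant_facts(counts, min_mentions=2):
    """Keep only facts repeated across the article; single mentions are dropped."""
    return {fact for fact, n in counts.items() if n >= min_mentions}

text = ("Coilin localises to Cajal bodies. Coilin was purified. "
        "We confirm coilin accumulation in Cajal bodies.")
counts = candidate_fact_counts(text.lower(), {"coilin"}, {"cajal bodies"})
print(relevant_facts(counts))   # {('coilin', 'cajal bodies')}
```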
|