631

Program Transformation for Proving Database Transaction Safety

Lawley, Michael John, n/a January 2000 (has links)
In this thesis we propose the use of Dijkstra's concept of a predicate transformer [Dij75] for determining database transaction safety [SS89] and for generating simple conditions to check that a transaction will not violate the integrity constraints in case it is not safe. This simple condition can be generated statically, thus providing a mechanism for generating safe transactions. Our approach treats a database as state, a database transaction as a program, and the database's integrity constraints as a postcondition, and uses a predicate transformer to generate a weakest precondition. We begin by introducing a set-oriented update language for relational databases, for which a predicate transformer is then defined. Subsequently, we introduce a more powerful update language for deductive databases and define a new predicate transformer to deal with this language and the more powerful integrity constraints that can be expressed using recursive rules. Next we introduce a data model with object-oriented features including methods, inheritance and dynamic overriding, and extend the predicate transformer to handle these new features. For each of these predicate transformers, we prove that it does indeed generate the weakest precondition for a transaction and the database integrity constraints. However, the weakest precondition generated by a predicate transformer still involves much redundant checking. For several general classes of integrity constraint, including referential integrity and functional dependencies, we prove that the weakest precondition can be substantially simplified further, to avoid checking things we already know to be true under the assumption that the database currently satisfies its integrity constraints. In addition, we propose the use of the predicate transformer in combination with meta-rules that capture the exact incremental change a particular transaction makes to the database. This provides a more general approach to generating simple checks for enforcing transaction safety. We show that this approach is superior to previously known approaches to the problem of efficient integrity constraint checking and transaction safety for relational, deductive, and deductive object-oriented databases. Finally, we demonstrate several further applications of the predicate transformer to the problems of schema constraints, dynamic integrity constraints, and determining the correctness of methods for view updates. We also show how to support transactions embedded in procedural languages such as C.
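To make the idea concrete, the following Python sketch (an illustration of weakest preconditions, not the thesis's formal transformer; all names are hypothetical) models a database as a dictionary of relations and computes wp for a single-tuple insert against a referential-integrity constraint by evaluating the constraint on the post-state:

# Hypothetical sketch: wp(insert(R, t), IC) = IC with R replaced by R union {t}.
def wp_insert(relation_name, new_tuple, constraint):
    def precondition(db):
        post = dict(db)                                   # post-state of the insert
        post[relation_name] = db[relation_name] | {new_tuple}
        return constraint(post)                           # IC evaluated after the insert
    return precondition

def ref_integrity(db):
    # Referential integrity: every order's customer id exists in customer.
    customer_ids = {c[0] for c in db["customer"]}
    return all(cust in customer_ids for (_, cust) in db["order"])

db = {"customer": {("c1",), ("c2",)}, "order": {("o1", "c1")}}
print(wp_insert("order", ("o2", "c3"), ref_integrity)(db))  # False: unsafe insert
print(wp_insert("order", ("o2", "c2"), ref_integrity)(db))  # True: constraint preserved

In the thesis the precondition is a simplified logical formula derived statically; here it is merely evaluated, which suffices to show what the transformer must guarantee.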
632

Efficient computation of advanced skyline queries.

Yuan, Yidong, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
The skyline has been proposed as an important operator for many applications, such as multi-criteria decision making, data mining and visualization, and user-preference queries. Due to its importance, the skyline and its computation have recently received considerable attention from the database research community. All existing techniques, however, focus on conventional databases; they are not applicable to online computation environments such as data streams. In addition, existing studies consider only the efficiency of skyline computation, while the fundamental problem of the semantics of skylines remains open. In this thesis, we study three problems of skyline computation: (1) online skyline computation over data streams; (2) skyline cube computation and its analysis; and (3) the top-k most representative skyline. To tackle the problem of online skyline computation, we develop a novel framework which converts the more expensive multi-dimensional skyline computation into stabbing queries in 1-dimensional space. Based on this framework, a rigorous theoretical analysis of the time complexity of online skyline computation is provided. Then, efficient algorithms are proposed to support ad hoc and continuous skyline queries over data streams. Inspired by the idea of the data cube, we propose a novel concept, the skyline cube, which consists of the skylines of all possible non-empty subspaces of a given full space. We identify the unique sharing strategies for skyline cube computation and develop two efficient algorithms which compute the skyline cube in a bottom-up and a top-down manner, respectively. Finally, a theoretical framework to answer the question of skyline semantics and an analysis of multidimensional subspace skylines are presented. Motivated by the fact that the full skyline may be less informative because it generally consists of a large number of skyline points, we propose a novel skyline operator -- the top-k most representative skyline. This operator selects the k skyline points such that the number of data points dominated by at least one of these k skyline points is maximized. To compute the top-k most representative skyline, two efficient algorithms and their theoretical analysis are presented.
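For readers unfamiliar with the operator, this Python sketch shows dominance, a naive skyline computation, and a greedy pick of the k most representative skyline points; the greedy strategy is an illustrative assumption, not one of the thesis's algorithms:

def dominates(p, q):
    # p dominates q: no worse in every dimension (smaller is better),
    # strictly better in at least one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def top_k_representative(points, k):
    # Greedily pick k skyline points that together dominate the most points.
    sky, chosen, covered = skyline(points), [], set()
    for _ in range(min(k, len(sky))):
        best = max(sky, key=lambda s: len({p for p in points if dominates(s, p)} - covered))
        chosen.append(best)
        covered |= {p for p in points if dominates(best, p)}
        sky.remove(best)
    return chosen

hotels = [(50, 3.0), (80, 1.0), (60, 2.0), (90, 0.5), (70, 2.5)]  # (price, distance)
print(skyline(hotels))                  # (70, 2.5) is dominated by (60, 2.0)
print(top_k_representative(hotels, 1))  # [(60, 2.0)] -- it dominates the most points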
634

Information Centric Development of Component-Based Embedded Real-Time Systems

Hjertström, Andreas January 2009 (has links)
This thesis presents new techniques for the management of run-time data objects in component-based embedded real-time systems. These techniques enable data to be modeled, analyzed and structured to achieve data management during development, maintenance and execution.

The evolution of real-time embedded systems has resulted in an increased system complexity beyond what was thought possible just a few years ago. Over the years, new techniques and tools have been developed to manage software and communication complexity. However, as this thesis shows, current techniques and tools for data management are not sufficient. Today, development of real-time embedded systems focuses on the functional aspects of the system, in most cases disregarding data management.

The lack of proper design-time data management often results in ineffective documentation routines and poor overall system knowledge. Contemporary techniques to manage run-time data do not satisfy demands on flexibility, maintainability and extensibility. Based on an industrial case study that identifies a number of problems with current data management techniques, both at design-time and at run-time, it is clear that data management needs to be incorporated as an integral part of the development of the entire system architecture.

As a remedy to the identified problems, we propose a design-time data entity approach, where the importance of data in the system is elevated so that it is included in the entire design phase, with proper documentation, properties, dependencies and analysis methods to increase the overall system knowledge. Furthermore, to efficiently manage data during run-time, we introduce database proxies to enable the fusion of two existing techniques: Component-Based Software Engineering (CBSE) and Real-Time Database Management Systems (RTDBMS). A database proxy allows components to be decoupled from the underlying data management strategy without violating component encapsulation and the communication interface.
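The following Python sketch illustrates the database-proxy idea with hypothetical interfaces (it is not the thesis's implementation): the component only reads and writes plain ports, while proxies map those ports onto a shared store, keeping the component decoupled from the data-management strategy.

class DataStore:                      # stand-in for a real-time DBMS
    def __init__(self):
        self._rows = {}
    def read(self, key):
        return self._rows.get(key)
    def write(self, key, value):
        self._rows[key] = value

class DatabaseProxy:
    # Sits between a component port and the store; swapping the store
    # does not touch the component.
    def __init__(self, store, key):
        self._store, self._key = store, key
    def get(self):
        return self._store.read(self._key)
    def set(self, value):
        self._store.write(self._key, value)

class SpeedController:                # component: contains no database code
    def __init__(self, speed_in, throttle_out):
        self.speed_in, self.throttle_out = speed_in, throttle_out
    def step(self, target):
        error = target - (self.speed_in.get() or 0.0)
        self.throttle_out.set(max(0.0, min(1.0, 0.1 * error)))

store = DataStore()
store.write("vehicle_speed", 60.0)
ctrl = SpeedController(DatabaseProxy(store, "vehicle_speed"),
                       DatabaseProxy(store, "throttle"))
ctrl.step(target=70.0)
print(store.read("throttle"))         # 1.0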
635

XML document representation on the Neo solution

Faraglia, Piergiorgio January 2007 (has links)
This thesis aims to find a graph structure for representing XML documents and to implement that representation for storing such documents. A graph, rather than a tree, is in fact the complete representation of an XML document; this is due to the id/idref attributes that may be present inside XML document tags.

Two different graph structures are defined in this thesis, called the most granular and the customizable representations. The first is the simplest way of representing XML documents, while the second makes some improvements to optimize the insert, delete, and query functions.

The implementation of these graph structures is built on a new kind of database designed specifically for storing semi-structured data, called Neo. The Neo database works with only three primitives: node, relationship, and property. This data model represents a new solution compared to the traditional relational view.

The XML information manager implements two different APIs, which work with the two graph structures respectively: the first API works with the most granular representation, while the second works with the customizable one.

Some evaluations have been carried out on the second implemented API; they showed that the implemented code is free of bugs and, moreover, that the customizable representation brings some improvements when querying the stored data.
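As an illustration of how an XML tree maps onto those three primitives, here is a Python sketch using hypothetical in-memory structures (not the actual Neo API):

import xml.etree.ElementTree as ET
from itertools import count

nodes, relationships = [], []   # node: (id, properties); rel: (src, type, dst)
_ids = count()

def store_element(elem):
    node_id = next(_ids)
    props = dict(elem.attrib)
    props["tag"] = elem.tag
    if elem.text and elem.text.strip():
        props["text"] = elem.text.strip()
    nodes.append((node_id, props))
    for child in elem:
        relationships.append((node_id, "CHILD", store_element(child)))
    # id/idref attributes would be resolved into extra relationships here --
    # the cross-references that make a graph, rather than a tree, necessary.
    return node_id

store_element(ET.fromstring('<library><book id="b1">TAOCP</book></library>'))
print(nodes)          # [(0, {'tag': 'library'}), (1, {'id': 'b1', 'tag': 'book', 'text': 'TAOCP'})]
print(relationships)  # [(0, 'CHILD', 1)]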
637

Database Engineering Process Modelling

Roland, Didier 15 May 2003 (has links)
One of the main current research activities in software engineering is concerned with modelling the development process of large software systems, in order to help the engineer design and maintain an application. In general, a design process is seen as the rational application of transformation operators to one or more products (mainly specifications) in order to produce new products that satisfy given criteria. This modelling mainly provides methodological guidance: at each step of the process, only the set of pertinent activities and product types is proposed to the designer. This guidance may be reinforced with contextual help. Furthermore, this modelling allows the process to be documented with its history, i.e. with a representation of the activities performed. This history is itself the basis of maintenance activities. Two examples: a replay function that allows, during a modification, the same activities as during the design to be performed again (automatically or assisted), and reverse engineering, which allows the recovery not only of some technical and functional documentation of an application, but also of a plausible history of its design. The thesis aims at elaborating a general model of design processes, applying it to database engineering, and implementing it in the DB-MAIN CASE tool. This is done in four phases: 1. elaboration of a model, a method specification language and a history representation; 2. evaluation of this model through the specification of classical methods and case studies; 3. proposals of methodological recommendations for the elaboration of design methods; 4. development and integration of methodological control functions in the DB-MAIN CASE tool, including an extension of the repository, the definition of the interface of the methodological functions, the development of the methodological engine, and the development of a history processor (analysis, replay,...).
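The following Python sketch illustrates the history-and-replay idea with hypothetical transformation operators (it is not DB-MAIN's methodological engine): each design step is recorded as an operator name plus parameters, and the history can later be re-applied to a modified product.

history = []

def apply_transformation(product, operator, **params):
    history.append((operator.__name__, params))        # record the design step
    return operator(product, **params)

def add_column(schema, table, column):
    schema.setdefault(table, []).append(column)
    return schema

def rename_table(schema, old, new):
    schema[new] = schema.pop(old)
    return schema

OPERATORS = {"add_column": add_column, "rename_table": rename_table}

def replay(product):
    # Re-apply the recorded steps to a (possibly modified) product.
    for name, params in history:
        product = OPERATORS[name](product, **params)
    return product

schema = apply_transformation({}, add_column, table="person", column="name")
schema = apply_transformation(schema, rename_table, old="person", new="customer")
print(schema)                      # {'customer': ['name']}
print(replay({"person": ["id"]}))  # {'customer': ['id', 'name']}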
638

Computational Verification of Published Human Mutations.

Kamanu, Frederick Kinyua. January 2008 (has links)
The completion of the Human Genome Project, a remarkable feat by any measure, has provided over three billion bases of reference nucleotides for comparative studies. The next, and perhaps more challenging, step is to analyse sequence variation and relate this information to important phenotypes. Most human sequence variations are characterized by structural complexity and are hence associated with abnormal functional dynamics. This thesis covers the assembly of a computational platform for verifying these variations, based on accurate, published, experimental data.
639

Global Positioning in Harsh Environments

Resch, Bernd, Romirer-Maierhofer, Peter January 2005 (has links)
As global location systems offer only restricted availability, they are not suitable for a worldwide tracking application without extensions. This thesis presents a goods-tracking solution that, in contrast to previously developed technologies, can be considered to work globally. To create an innovative approach, an evaluation of previous efforts had to be made. As a result of this assessment, a newly developed solution is presented that uses the Global Positioning System (GPS) in combination with the database correlation method applied to Global System for Mobile Communications (GSM) fingerprints. The database entries are generated automatically by measuring numerous GSM parameters, such as cell identity and signal strength, using handsets from several different providers, together with the true reference position obtained via a high-sensitivity GPS receiver.
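A minimal Python sketch of the database correlation step, under an assumed data layout (each entry pairs a GPS reference position with the cell IDs and signal strengths observed there); the distance measure and all names are illustrative:

def fingerprint_distance(a, b, missing_penalty=30.0):
    # Average signal-strength difference (dB) over the union of cells;
    # a cell heard in only one fingerprint incurs a fixed penalty.
    cells = set(a) | set(b)
    total = sum(abs(a[c] - b[c]) if c in a and c in b else missing_penalty
                for c in cells)
    return total / len(cells)

def locate(measurement, database):
    # Return the reference position whose stored fingerprint matches best.
    position, _ = min(database, key=lambda e: fingerprint_distance(measurement, e[1]))
    return position

database = [                      # built from GPS fixes plus GSM measurements
    ((48.20, 16.37), {"cell_A": -63, "cell_B": -81}),
    ((48.21, 16.38), {"cell_B": -70, "cell_C": -75}),
]
print(locate({"cell_A": -60, "cell_B": -85}, database))  # (48.2, 16.37)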
640

On Decoupling Concurrency Control from Recovery in Database Repositories

Yu, Heng January 2005 (has links)
We report on initial research on the concurrency control issue of compiled database applications. Such applications have a repository style of architecture, in which a collection of software modules operate on a common database in terms of a set of predefined transaction types -- an architectural view that is useful for the deployment of database technology in embedded control programs. We focus on decoupling concurrency control from any functionality relating to recovery; such decoupling facilitates compile-time query optimization.

Because it is the possibility of transaction aborts for deadlock resolution that makes the recovery subsystem necessary, we choose the deadlock-free tree locking (TL) scheme for our purpose. With knowledge of the transaction workload, efficacious lock trees for runtime control can be determined at compile time. We have designed compile-time algorithms to generate the lock tree and other relevant data structures, together with runtime locking/unlocking algorithms based on these structures. We have further explored how to insert the lock steps into the transaction types at compile time.

To evaluate the performance of TL, we have designed two simulation workloads: the first from the OLTP benchmark TPC-C, the second from the open-source operating system MINIX. Our experimental results show that TL produces better throughput than traditional two-phase locking (2PL) when the transactions are write-only; for main-memory data, TL performs comparably to 2PL even in workloads with many reads.
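To illustrate why tree locking removes the need for deadlock-resolution aborts, here is a Python sketch of the TL rules under simplifying assumptions (violations raise assertions instead of blocking; this is not the thesis's compile-time machinery): a transaction's first lock may be on any node, later locks require holding the parent, and a released node may never be relocked.

class TreeLockManager:
    def __init__(self, parent):           # parent: node -> its parent in the lock tree
        self.parent = parent
        self.owner = {}                   # node -> transaction holding it
        self.held = {}                    # transaction -> nodes currently held
        self.released = {}                # transaction -> nodes already given up

    def lock(self, txn, node):
        assert node not in self.owner, "busy: a real manager would wait, never abort"
        assert node not in self.released.get(txn, set()), "no relocking"
        if self.held.get(txn):            # not the first lock:
            assert self.parent[node] in self.held[txn], "must hold the parent"
        self.owner[node] = txn
        self.held.setdefault(txn, set()).add(node)

    def unlock(self, txn, node):
        self.held[txn].remove(node)
        self.released.setdefault(txn, set()).add(node)
        del self.owner[node]

mgr = TreeLockManager({"a": "root", "b": "root"})   # a tiny lock tree
mgr.lock("T1", "root"); mgr.lock("T1", "a")
mgr.unlock("T1", "root")                  # release early; "a" is still held
mgr.lock("T2", "root")                    # T2 enters behind T1 -- no deadlock

Because every transaction descends the tree in the same parent-before-child order, waits-for cycles cannot form, so no transaction ever needs to be aborted for deadlock resolution.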
