About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

DistNeo4j: Scaling Graph Databases through Dynamic Distributed Partitioning

Nicoara, Daniel 14 October 2014 (has links)
Social networks are large graphs that require multiple servers to store and manage them. Building performant, scalable systems that store such graphs by partitioning them into subgraphs is an important problem. In these systems each partition is hosted by a server so as to satisfy multiple objectives: balancing server loads, reducing remote traversals (the number of edges cut), and adapting the partitioning to changes in the structure of the graph in the face of changing workloads. Addressing these objectives requires a dynamic repartitioning algorithm that modifies an existing partitioning to maintain good partition quality, and such a repartitioner should not impose significant overhead on the system. This thesis introduces a greedy repartitioner that dynamically modifies a partitioning using a small amount of resources. In contrast to existing repartitioning algorithms, the greedy repartitioner is efficient in both time and memory, making it suitable for implementation and use in a real system. The greedy repartitioner is integrated into DistNeo4j, designed as an extension of the open-source Neo4j graph database system, to support workloads over partitioned graph data distributed over multiple servers. Using real-world data sets, this thesis shows that DistNeo4j leverages the greedy repartitioner to maintain high-quality partitions and provides a 2 to 3 times performance improvement over de facto hash-based partitioning.
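The abstract does not spell out the repartitioner's mechanics, but the general idea behind greedy graph repartitioning can be sketched as follows: each vertex is pulled toward the partition holding most of its neighbours (reducing edges cut), subject to a load-balance constraint. All names and the `balance_slack` parameter below are illustrative assumptions, not DistNeo4j's actual interface.

```python
from collections import Counter

def greedy_repartition(adjacency, partition, num_partitions, balance_slack=1.2):
    """One greedy pass over the vertices (illustrative sketch).

    adjacency: dict mapping vertex -> set of neighbour vertices
    partition: dict mapping vertex -> current partition id
    A vertex moves to the partition holding most of its neighbours,
    provided the target stays within `balance_slack` of the ideal size.
    """
    sizes = Counter(partition.values())
    capacity = balance_slack * len(partition) / num_partitions
    for v, neighbours in adjacency.items():
        counts = Counter(partition[u] for u in neighbours)
        if not counts:
            continue
        best, _ = counts.most_common(1)[0]
        here = partition[v]
        # Move only on a strict improvement that respects the capacity bound.
        if best != here and counts[best] > counts.get(here, 0) \
                and sizes[best] + 1 <= capacity:
            sizes[here] -= 1
            sizes[best] += 1
            partition[v] = best
    return partition
```

On two triangles split across two partitions, a single pass collocates each triangle, cutting the remote-traversal count to zero while keeping the partitions balanced.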
282

Bridging Decision Applications and Multidimensional Databases

Nargesian, Fatemeh 04 May 2011 (has links)
Data warehouses were envisioned to facilitate analytical reporting and data visualization by providing a model for the flow of data from operational databases to decision support environments. Decision support environments provide a multidimensional conceptual view of the underlying data warehouse, which is usually stored in relational DBMSs. Typically, there is an impedance mismatch between this conceptual view — shared by all decision support applications accessing the data warehouse — and the physical model of the data stored in relational DBMSs. This thesis presents a mapping compilation algorithm in the context of the Conceptual Integration Model (CIM) [67] framework. In the CIM framework, the relationships between the conceptual model and the physical model are specified by a set of attribute-to-attribute correspondences. The algorithm compiles these correspondences into a set of mappings that associate each construct in the conceptual model with a query on the physical model. Moreover, the homogeneity and summarizability of data in conceptual models are key to accurate query answering, a necessity in decision-making environments. A data-driven approach that refactors relational models into summarizable schemas and instances is proposed as a solution to this issue. We outline the algorithms and challenges in bridging multidimensional conceptual models and the physical model of data warehouses, and discuss experimental results.
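To make the compilation idea concrete, the toy function below turns attribute-to-attribute correspondences into a SQL query realising one conceptual construct over the physical tables. This is a minimal sketch under simplifying assumptions (single-table or natural-join physical layout); the table names, the triple format, and the function itself are hypothetical, not the CIM algorithm from the thesis.

```python
def compile_level_query(level, correspondences):
    """Compile one conceptual level into a SQL query on the physical model.

    level: dict with the level's name and its conceptual attributes
    correspondences: list of (conceptual_attr, table, column) triples
    Returns a SELECT that realises the level's attributes, joining the
    physical tables involved (simplified here to a natural join).
    """
    attrs = [(ca, t, c) for ca, t, c in correspondences
             if ca in level['attributes']]
    tables = sorted({t for _, t, _ in attrs})
    select = ", ".join(f"{t}.{c} AS {ca}" for ca, t, c in attrs)
    return f"SELECT DISTINCT {select} FROM {' NATURAL JOIN '.join(tables)}"
```

A correspondence set touching one table thus compiles to a plain projection; correspondences spanning tables would compile to a join, which is where the real algorithm's work lies.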
283

The development and application of heuristic techniques for the data mining task of nugget discovery

Iglesia, Beatriz de la January 2001 (has links)
No description available.
284

Development of a knowledge-based system for the repair and maintenance of concrete structures

Moodi, Faramarz January 2001 (has links)
Information Technology (IT) can exploit strategic opportunities for new ways of facilitating information and data exchange and the exchange of expert and specialist opinions in any field of engineering. Knowledge-based systems are sophisticated computer programs that store expert knowledge on a specific subject and are applied to a broad range of engineering problems. Integrated database applications provide the essential capability of storing data to overcome an increasing information malaise. Integrating these areas of IT can bring a group of experts in any field of engineering closer together by allowing them to communicate and exchange information and opinions. The central feature of this research is the integration of these hitherto separate areas of IT. In this thesis an adaptable graphical-user-interface-centred application comprising a knowledge-based expert system (DEMAREC-EXPERT), a database management system (REPCON) and an evaluation program (ECON), alongside visualisation technologies, is developed to produce an innovative platform that facilitates and encourages the development of knowledge in concrete repair. Diagnosis, Evaluation, MAintenance and REpair of Concrete structures (DEMAREC) is a flexible application that can be used in four modes: education, diagnostic, evaluation and evolution. In the educational mode an inexperienced user can develop a better understanding of concrete repair technology by navigating through a database of textual and pictorial data. In the diagnostic mode, pictures and descriptive information from the database, together with the expert system (DEMAREC-EXPERT), are used in a way that makes problem solving and decision making easier.
The DEMAREC-EXPERT system is coupled to REPCON (as an independent database) to provide the user with recommendations on the best course of maintenance and on the selection of materials and methods for the repair of concrete. In the evaluation mode, the conditions observed are described in unambiguous terms that enable the user to take engineering and management actions for the repair and maintenance of the structure. In the evolution mode, the nature of distress, repair and maintenance of concrete structures within the scope of the database management system is assessed. The new methodology of data/user evaluation could have wider implications in many knowledge-rich areas of expertise. The benefit of using REPCON lies in the enhanced levels of confidence that can be attributed to the data and to the contribution of that data. Effectively, REPCON is designed to model a true evolution of a field of expertise while allowing that expertise to move on in a faster and more structured manner. This research has wider implications than the realm of concrete repair alone. The methodology described in this thesis provides technology transfer of information from experts and specialists to other practitioners and vice versa, and it provides a common forum for communication and exchange of information between them. Indeed, one of the strengths of the system is the way in which it allows the promotion and relegation of knowledge according to the opinions of users of different levels of ability, from expert to novice. It creates a flexible environment in which an inexperienced user can develop his knowledge of the maintenance and repair of concrete structures, and it is explained how an expert or specialist can contribute his experience and knowledge towards improving and evolving the problem-solving capability of the application.
285

Constructing highly-available distributed metainformation systems

Calsavara, Alcides January 1996 (has links)
This thesis demonstrates the adequacy of an object-oriented approach to the construction of distributed metainformation systems: systems that facilitate information use by maintaining some information about the information. Computer systems are increasingly being used to store information objects and make them accessible via a network. This access, however, still relies on an adequate metainformation system: there must be an effective means of specifying relevant information objects. Moreover, distribution requires the metainformation system to cope well with intermittent availability of network resources. Typical metainformation systems developed to date permit information objects to be specified by expressing knowledge about their syntactic properties, such as keywords. Within this approach, however, query results are potentially too large to be transmitted, stored and treated at reasonable cost and time. Users are therefore finding it difficult to navigate their way through the masses of information available. In contrast, this thesis is based on the principle that a metainformation system is more effective if it permits information objects to be specified according to their semantic properties, and that this helps in managing, filtering and navigating information. Of particular interest is object orientation, because it is the state-of-the-art approach to both the representation of information semantics and the design of reliable systems. The thesis presents the design and implementation of a programming toolkit for the construction of metainformation systems, in which information objects can be any entity that contains information, the notion of views permits organising the information space, transactional access is employed to obtain consistency, and replication is employed to obtain high availability and scalability.
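The contrast the abstract draws — selecting information objects by semantic properties rather than by keyword matching, with views organising the information space — can be illustrated with a small sketch. The class names and the property dictionary are illustrative assumptions, not the toolkit's actual API.

```python
class InfoObject:
    """An entity that contains information, described by semantic properties."""
    def __init__(self, title, properties):
        self.title = title
        self.properties = properties  # e.g. {"topic": "databases"}

class View:
    """A view selects information objects by a predicate over their semantic
    properties, organising the information space (illustrative sketch)."""
    def __init__(self, predicate):
        self.predicate = predicate

    def select(self, objects):
        return [o for o in objects if self.predicate(o.properties)]
```

A semantic view thus returns only the objects whose properties satisfy it, rather than every object that happens to mention a keyword.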
286

Knowledge discovery in spatio-temporal databases /

Abraham, Tamas Unknown Date (has links)
Thesis (PhD) -- University of South Australia, 1999
287

A formal framework for linguistic tree query

Lai, Catherine Unknown Date (has links) (PDF)
The analysis of human communication, in all its forms, increasingly depends on large collections of texts and transcribed recordings. These collections, or corpora, are often richly annotated with structural information. Because these datasets are extremely large, manual analysis succeeds only up to a point, and significant effort has recently been invested in automatic techniques for extracting and analyzing these massive data sets. However, further progress on analytical tools is confronted by three major challenges. First, we need the right data model. Second, we need to understand the theoretical foundations of query languages on that data model. Finally, we need to know the expressive requirements of a general-purpose query language with respect to linguistics. This thesis addresses all three of these issues. / Specifically, this thesis studies formalisms used by linguists and database theorists to describe tree-structured data: propositional dynamic logic and monadic second-order logic. These formalisms have been used to reason about a number of tree query languages and their applicability to the linguistic tree query problem. We identify a comprehensive set of linguistic tree query requirements and the level of expressiveness needed to implement them. The main result of this study is that the required level of expressiveness for linguistic tree query is that of the first-order predicate calculus over trees. / This formal approach has resulted in a convergence between two seemingly disparate fields of study. Further work at the intersection of linguistics and database theory should also pave the way for theoretically well-founded future work in this area. This, in turn, will lead to better tools for linguistic analysis and data management, and more comprehensive theories of human language.
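A typical linguistic tree query — and one expressible in first-order logic over trees, the expressiveness level the thesis identifies — asks for nodes of one label that dominate a node of another label. The sketch below evaluates such a dominance query over a toy parse tree; the `Node` representation and function names are illustrative, not from any query language studied in the thesis.

```python
class Node:
    """A labelled node in an ordered tree (e.g. a parse tree)."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def descendants(node):
    """Yield every node properly dominated by `node`."""
    for child in node.children:
        yield child
        yield from descendants(child)

def query(root, label, dominated_label):
    """First-order dominance query: all `label` nodes that properly
    dominate some `dominated_label` node."""
    return [n for n in [root, *descendants(root)]
            if n.label == label
            and any(d.label == dominated_label for d in descendants(n))]
```

On the parse tree for a sentence with a prepositional phrase, querying for NP-dominating-PP returns only the object noun phrase, not the subject.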
288

Serializable Isolation for Snapshot Databases

Cahill, Michael James January 2009 (has links)
PhD / Many popular database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of conflicts between all pairs of transactions. This thesis describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation of the algorithm in a relational database management system is described, along with a benchmark and performance study, showing that the throughput approaches that of snapshot isolation in most cases.
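The core runtime test the abstract alludes to can be sketched compactly: a transaction with both an inbound and an outbound read-write antidependency is the pivot of a potentially dangerous structure and is aborted. This toy tracker only illustrates that rule; the thesis's actual algorithm detects these antidependencies via lock-manager extensions and handles committed transactions, none of which is modelled here, and the class and method names are assumptions.

```python
class SSITracker:
    """Toy dangerous-structure detector for serializable snapshot isolation.

    Tracks read-write antidependencies between transactions and aborts any
    transaction that acquires both an inbound and an outbound one
    (illustrative sketch only).
    """
    def __init__(self):
        self.in_conflict = set()   # txns with an incoming rw-antidependency
        self.out_conflict = set()  # txns with an outgoing rw-antidependency
        self.aborted = set()

    def rw_antidependency(self, reader, writer):
        # `reader` read a version that `writer` overwrote: reader --rw--> writer
        self.out_conflict.add(reader)
        self.in_conflict.add(writer)
        for t in (reader, writer):
            if t in self.in_conflict and t in self.out_conflict:
                self.aborted.add(t)
```

Two consecutive rw edges, T1 → T2 → T3, make T2 the pivot and trigger its abort, while transactions with only one such edge proceed — which is how readers and writers continue not to block each other.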
289

Open-source technologies in web-based GIS and mapping a thesis presented to the Department of Geology and Geography in candidacy for the degree of Master of Science /

Harper, Erik. January 2006 (has links)
Thesis (M.S.)--Northwest Missouri State University, 2006. / The full text of the thesis is included in the pdf file. Title from title screen of full text.pdf file (viewed on January 25, 2008). Includes bibliographical references.
290

Domain-based data integration for Web databases /

Su, Weifeng. January 2007 (has links)
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2007. / Includes bibliographical references (leaves 129-138). Also available in electronic version.
