  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

A Reparameterized Multiple Membership Model for Multilevel Nonnested Longitudinal Data

Sun, Shuyan 26 October 2012 (has links)
No description available.
52

Algorithms and data structures for hierarchical image processing

Tsanakas, Panagiotis D. January 1985 (has links)
No description available.
53

The multi-lingual database system : a paradigm and test-bed for the investigation of data-model transformations and data-model semantics /

Demurjian, Steven Arthur January 1987 (has links)
No description available.
54

Linearly Ordered Concurrent Data Structures on Hypercubes

John, Ajita 08 1900 (has links)
This thesis presents a simple method for the concurrent manipulation of linearly ordered data structures on hypercubes. The method is based on the existence of a pruned binomial search tree rooted at any arbitrary node of the binary hypercube. The tree spans any arbitrary sequence of n consecutive nodes containing the root, using a fan-out of at most ⌈log₂ n⌉ and a depth of ⌈log₂ n⌉ + 1. Search trees spanning non-overlapping processor lists are formed using only local information, and can be used concurrently without contention problems. Thus, they can be used for performing broadcast and merge operations simultaneously on sets with non-uniform sizes. Extensions to generalized and faulty hypercubes and applications to image processing algorithms and for m-way search are discussed.
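The binomial spanning tree at the heart of this abstract can be sketched in a few lines. The sketch below builds the standard (unpruned) binomial tree on a full 2^dim-node hypercube, rooted at an arbitrary node, using only each node's local address; the thesis's pruning to an arbitrary run of n consecutive nodes is not reproduced here, and the function names are illustrative, not from the thesis:

```python
def binomial_children(node, root, dim):
    """Children of `node` in the binomial spanning tree of a `dim`-dimensional
    binary hypercube rooted at `root` (full, unpruned tree).
    Computed purely from local information: the node's address relative to the root."""
    rel = node ^ root                                   # relative address
    # A node may flip any bit strictly below its lowest set relative bit;
    # the root (rel == 0) may flip every bit.
    low = dim if rel == 0 else (rel & -rel).bit_length() - 1
    return [node ^ (1 << j) for j in range(low)]

def broadcast_order(root, dim):
    """Level-by-level order in which a broadcast from `root` reaches all 2**dim nodes."""
    order, frontier = [root], [root]
    while frontier:
        nxt = []
        for u in frontier:
            nxt.extend(binomial_children(u, root, dim))
        order.extend(nxt)
        frontier = nxt
    return order
```

Because each node derives its children from its own address alone, trees rooted at different nodes over disjoint processor lists can be used concurrently, which is the property the abstract exploits for simultaneous broadcast and merge.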
55

The use of frames in database modeling

Sweet, Barbara Moore. January 1984 (has links)
Call number: LD2668 .T4 1984 S93 / Master of Science
56

The use of null values in a relational database to represent incomplete and inapplicable information

Wilson, Maria Marshall. January 1985 (has links)
Call number: LD2668 .T4 1985 W547 / Master of Science
57

Solving multiparty private matching problems using Bloom-filters

Lai, Ka-ying 黎家盈. January 2006 (has links)
Computer Science / Master of Philosophy
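The core data structure named in this title can be sketched briefly. The sketch below is a plain Bloom filter and a set-intersection helper; it illustrates the probabilistic membership test on which such matching protocols rest, but omits the cryptographic machinery that makes the matching *private*. All names and parameters are illustrative, not from the thesis:

```python
import hashlib

class BloomFilter:
    """A plain Bloom filter: no false negatives, a small false-positive rate."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k bit positions by salting one cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

def approximate_intersection(my_items, their_filter):
    """Candidate matches against another party's filter:
    every true match is found; occasional false positives are possible."""
    return [x for x in my_items if x in their_filter]
```

Exchanging filters rather than raw sets is what makes this attractive for multiparty matching: the filter reveals only probabilistic membership, at the cost of a tunable false-positive rate.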
58

Quelques aspects algorithmiques sur les systèmes de fermeture

Renaud, Yoan 08 December 2008 (has links) (PDF)
In this thesis we present the definitions and notation associated with closure systems and show their relationship to Horn theories. We then consider three operations on closure systems: the least upper bound, the greatest lower bound, and the difference. We give a characterization of each of these operations according to the representation of closure systems under consideration. We then turn to the problem of generating a basis of mixed implications for a formal context. We study this problem when the input consists of the generic bases of positive and negative implications of that context. Three main results are presented: properties and inference rules for deriving mixed implications, the impossibility in the general case of generating a sound and complete basis of mixed implications from this input, and its feasibility when the context is assumed to be reduced.
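The closure operator underlying these results can be sketched concretely. The minimal sketch below computes the closure of an attribute set under a family of implications by naive fixpoint iteration; it illustrates only the basic operator, not the thesis's generation of mixed implication bases, and the representation (premise/conclusion pairs of frozensets) is an assumption for illustration:

```python
def close(attrs, implications):
    """Closure of `attrs` under implications given as (premise, conclusion)
    pairs of frozensets, computed by naive fixpoint iteration."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            # Fire any implication whose premise holds but whose
            # conclusion is not yet fully present.
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed
```

The closed sets of such an operator (those with `close(X, imps) == X`) form a closure system, which is the object the thesis studies under the join, meet, and difference operations.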
59

New Algorithm and Data Structures for the All Pairs Shortest Path Problem

Hashim, Mashitoh January 2013 (has links)
In 1985, the Moffat-Takaoka (MT) algorithm was developed to solve the all pairs shortest path (APSP) problem. This algorithm achieves an expected running time of O(n² log n) under the end-point independent probabilistic model. However, the critical point introduced in this algorithm makes its implementation quite complicated and its running time difficult to analyze. This study therefore introduces a new deterministic algorithm for the APSP problem that provides an alternative to the existing MT algorithm. The major advantages of this approach over the MT algorithm are its simplicity, intuitive appeal and ease of analysis. Moreover, the algorithm is equally efficient, with the same expected running time of O(n² log n). The performance of a good algorithm depends on the data structure used to speed up the operations it needs, such as insert, delete-min and decrease-key. In this study, two new data structures were implemented, namely the quaternary and dimensional heaps. In the experiments carried out, the quaternary heap, which employs a concept similar to the trinomial heap together with a special insertion cache function, performed better than the trinomial heap when the number of vertices n was small. Likewise, the dimensional heap executed the decrease-key operation efficiently by maintaining the thinnest structure possible through the use of thin and thick edges, far surpassing the existing binary, Fibonacci and 2-3 heap data structures when a special acyclic graph was used. Taken together, these promising findings suggest that a new improved algorithm running on a good data structure can enhance the computing accuracy and speed of today's computing machines.
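The interplay between APSP and the priority-queue operations the abstract names can be sketched simply. The sketch below is not the MT algorithm or the thesis's new heaps; it is the textbook baseline, Dijkstra from every source on a binary heap, where the lack of a native decrease-key (stale entries are skipped instead) is exactly the kind of overhead the specialized heaps aim to remove:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths on a non-negative weighted digraph.
    `adj` maps each vertex to a list of (neighbor, weight) pairs."""
    dist = {source: 0}
    pq = [(0, source)]                     # binary heap: insert + delete-min
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale entry; heapq has no decrease-key
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def all_pairs_shortest_paths(adj):
    """APSP by running Dijkstra from every vertex."""
    return {u: dijkstra(adj, u) for u in adj}
```

A heap with a true O(1) decrease-key (as the dimensional heap aims to provide) replaces the push-and-skip pattern above with an in-place key update, which is where the asymptotic and practical savings come from.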
60

End user logical database design: The structured entity model approach.

Higa, Kunihiko. January 1988 (has links)
We live in the Information Age. The effective use of information to manage organizational resources is the key to an organization's competitive power. Thus, a database plays a major role in the Information Age. A well designed database contains relevant, nonredundant, and consistent data. However, a well designed database is rarely achieved in practice. One major reason for this problem is the lack of effective support for logical database design. Since the late 1980s, various methodologies for database design have been introduced, based on the relational model, the functional model, the semantic database model, and the entity structure model. They all have, however, a common drawback: the successful design of database systems requires the experience, skills, and competence of a database analyst/designer. Unfortunately, such database analyst/designers are a scarce resource in organizations. The Structured Entity Model (SEM) method, as an alternative diagrammatic method developed by this research, facilitates the logical design phases of database system development. Because of the hierarchical structure and decomposition constructs of SEM, it can help a novice designer in performing top-down structured analysis and design of databases. SEM also achieves high semantic expressiveness by using a frame representation for entities and three general association categories (aspect, specialization, and multiple decomposition) for relationships. This also enables SEM to have high potential as a knowledge representation scheme for an integrated heterogeneous database system. Unlike most methods, the SEM method does not require designers to have knowledge of normalization theory in order to design a logical database. Thus, an end-user will be able to complete logical database design successfully using this approach. In this research, existing data models used for a logical database design were first studied. 
Second, the framework of SEM and the design approach using it were described and compared with other data models and their use. Third, the effectiveness of the SEM method was validated in two experiments using novice designers and by a case analysis. The final chapter of this dissertation discusses future research directions, such as an expert system for logical database design based on the SEM method, and applications of the approach to other problems, such as the integration of multiple databases and intelligent mail systems.
