21

No relation: the mixed blessings of non-relational databases

Varley, Ian Thomas 2009 August 1900
This paper investigates a new class of database systems loosely referred to as "non-relational databases," which offer a subset of traditional relational database functionality in exchange for improved scalability, performance, and/or simplicity. We explore the differences in conceptual modeling techniques, and examine both the advantages and limitations of several classes of currently available systems, using running examples of real-world problems as implemented in both a traditional relational database model and several non-relational models.
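As a rough illustration of the modeling contrast the paper examines (a hypothetical blog example, not taken from the thesis), the sketch below stores the same post-with-tags entity first as normalized relational tables and then as a single denormalized document keyed by ID: the relational form supports joins and constraints, while the document form trades them away for a simpler, self-contained value.

```python
import sqlite3
import json

# Relational model: normalized tables joined by a foreign key.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE post (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE tag  (post_id INTEGER REFERENCES post(id), name TEXT);
""")
db.execute("INSERT INTO post VALUES (1, 'Hello')")
db.executemany("INSERT INTO tag VALUES (1, ?)", [("intro",), ("meta",)])
rows = db.execute("""
    SELECT post.title, tag.name FROM post JOIN tag ON tag.post_id = post.id
""").fetchall()
print(rows)  # [('Hello', 'intro'), ('Hello', 'meta')]

# Non-relational (document-style) model: the same entity denormalized into one
# self-contained value keyed by id -- no joins, and no cross-record constraints
# enforced by the store.
doc_store = {}
doc_store["post:1"] = json.dumps({"title": "Hello", "tags": ["intro", "meta"]})
print(json.loads(doc_store["post:1"])["tags"])  # ['intro', 'meta']
```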
22

Performance analysis of a distributed file system

Mukhopadhyay, Meenakshi 01 January 1990
An important design goal of a distributed file system, a component of many distributed systems, is to provide UNIX file access semantics, e.g., the result of any write system call is visible to all processes as soon as the call completes. In a distributed environment, these semantics are difficult to implement because processes on different machines do not share kernel caches and data structures. Strong data consistency guarantees may be provided only at the expense of performance. This work investigates the time costs paid by AFS 3.0, which uses a callback mechanism to provide consistency guarantees, and those paid by AFS 4.0, which uses typed tokens for synchronization. AFS 3.0 provides moderately strong consistency guarantees, but they fall short of UNIX semantics because data are written back to the server only after a file is closed. AFS 4.0 writes data back to the server whenever another client wants to access it, the effect being close to UNIX file access semantics. Also, AFS 3.0 does not guarantee synchronization of multiple writers, whereas AFS 4.0 does.
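The following toy model (not AFS code; all class and method names are invented for illustration) contrasts the two write-back policies described above: a close-to-open client flushes dirty data only on close, while a token-holding client flushes as soon as the server recalls its write token because another client wants the file.

```python
class Server:
    def __init__(self):
        self.data = {}                      # path -> contents visible to everyone

class CloseToOpenClient:                    # simplified AFS 3.0-style behaviour
    def __init__(self, server):
        self.server, self.cache = server, {}
    def write(self, path, text):
        self.cache[path] = text             # stays local until close()
    def close(self, path):
        self.server.data[path] = self.cache.pop(path)

class TokenClient:                          # simplified AFS 4.0-style behaviour
    def __init__(self, server):
        self.server, self.cache = server, {}
    def write(self, path, text):
        self.cache[path] = text             # cached under a write token
    def token_revoked(self, path):          # another client asked for the file
        self.server.data[path] = self.cache.pop(path)

srv = Server()
old = CloseToOpenClient(srv)
old.write("/f", "v1")
print(srv.data.get("/f"))    # None -- other clients cannot see the write yet
old.close("/f")
print(srv.data["/f"])        # 'v1' -- visible only after close

new = TokenClient(srv)
new.write("/g", "v2")
new.token_revoked("/g")      # server recalls the token on a concurrent open
print(srv.data["/g"])        # 'v2' -- visible before the file is ever closed
```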
23

From entities to objects : reverse engineering a relational data model into an object-oriented design

Hines, Gary L. January 2000
In many software applications, an object-oriented design (OOD) is generated first, then persistent storage is implemented by mapping the objects to a relational database. This thesis explores the "reverse engineering" of an OOD out of an existing relational data model. Findings from the current literature are presented, and a case study is undertaken using the model and research process published by GENTECH, a nonprofit organization promoting genealogical computing. The model is mapped into an OOD and captured in Unified Modeling Language (UML) class diagrams and object collaboration diagrams. The suitability of the example OOD is evaluated against the GENTECH research process using UML use cases and sequence diagrams. The mapping of relational database designs into OODs is found to be suitable in certain instances. / Department of Computer Science
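As a minimal sketch of the kind of mapping the thesis describes (using invented person/event tables, not the GENTECH model itself), the snippet below folds a foreign-key relationship from a relational schema into an object reference in the resulting object-oriented design.

```python
from dataclasses import dataclass, field

# Relational view: two tables, with EVENT carrying a PERSON_ID foreign key.
person_rows = [{"person_id": 1, "name": "Ada"}]
event_rows  = [{"event_id": 10, "person_id": 1, "kind": "birth", "year": 1815}]

# Object-oriented view produced by the reverse-engineered mapping.
@dataclass
class Event:
    kind: str
    year: int

@dataclass
class Person:
    name: str
    events: list[Event] = field(default_factory=list)   # FK folded into a collection

def reverse_engineer(person_rows, event_rows):
    people = {r["person_id"]: Person(r["name"]) for r in person_rows}
    for r in event_rows:
        people[r["person_id"]].events.append(Event(r["kind"], r["year"]))
    return list(people.values())

print(reverse_engineer(person_rows, event_rows))
# [Person(name='Ada', events=[Event(kind='birth', year=1815)])]
```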
24

A semantics-based methodology for integrated distributed database design: Toward combined logical and fragmentation design and design automation.

Garcia, Hong-Mei Chen. January 1992
The many advantages of Distributed Database (DDB) systems can only be achieved through proper DDB designs. Since designing a DDB is very difficult and expert designers are relatively few in number, "good" DDB design methodologies and associated computer-aided design tools are needed to help designers cope with design complexity and improve their productivity. Unfortunately, previous DDB design research focused on solving subproblems of data distribution design in isolation. As a result, past research on a general DDB design methodology offered only methodological frameworks that, at best, aggregate a set of non-integrated design techniques. The conventional separation of logical design from fragmentation design is problematic, but has not been fully analyzed. This dissertation presents the SEER-DTS methodology developed for the purposes of overcoming the methodological inadequacies of conventional design methodologies, resolving the DDB design problem in an integrated manner and facilitating design automation. It is based on a static semantic data model, SEER (Synthesized Extended Entity-Relationship Model) and a dynamic data model, DTS (Distributed Transaction Scheme), which together provide complete and consistent modeling mechanisms for acquiring/representing DDB design inputs and facilitating DDB schema design. In this methodology, requirement/distribution analysis and conceptual design are integrated and logical and fragmentation designs are combined. "Semantics-based" design techniques have been developed to allow for end-user design specifications and seamless design schema transformations, thereby simplifying design tasks. Towards our ultimate goal of design automation, an architectural framework for a computer-aided DDB design system, Auto-DDB, was formulated and the system was prototyped. As part of the developmental effort, a real-world DDB design case study was conducted to verify the applicability of the SEER-DTS methodology in a manual design mode. The results of a laboratory experiment showed that the SEER-DTS methodology produced better design outcomes (in terms of design effectiveness and efficiency) than a Conventional Best methodology performed by non-expert designers in an automated design mode. However, no statistically significant difference was found in user-perceived ease of use.
25

The Lexical Token Converter: Hardware support for Large Knowledge Based Systems

Wang, C. J. January 1987
No description available.
26

Enhanced database system for active design modelling

Baig, Anwar January 1999
No description available.
27

Knowledge acquisition from data bases

Wu, Xindong January 1993
Knowledge acquisition from databases is a research frontier for both database technology and machine learning (ML) techniques, and has seen sustained research over recent years. It also acts as a link between the two fields, thus offering a dual benefit. Firstly, since database technology has already found wide application in many fields, ML research obviously stands to gain from this greater exposure and established technological foundation. Secondly, ML techniques can augment the ability of existing database systems to represent, acquire, and process a collection of expertise such as those which form part of the semantics of many advanced applications (e.g. CAD/CAM). The major contribution of this thesis is the introduction of an efficient induction algorithm to facilitate the acquisition of such knowledge from databases. There are three typical families of inductive algorithms: the generalisation-specialisation based AQ11-like family, the decision tree based ID3-like family, and the extension matrix based family. A heuristic induction algorithm, HCV, based on the newly-developed extension matrix approach, is described in this thesis. By dividing the positive examples (PE) of a specific class in a given example set into intersecting groups and adopting a set of strategies to find a heuristic conjunctive rule in each group which covers all of the group's positive examples and none of the negative examples (NE), HCV can find rules in the form of variable-valued logic for PE against NE in low-order polynomial time. The rules generated by HCV are shown empirically to be more compact than the rules produced by AQ11-like algorithms and the decision trees produced by the ID3-like algorithms. KEshell2, an intelligent learning database system, which makes use of the HCV algorithm and couples ML techniques with database and knowledge-base technology, is also described.
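The snippet below is a simplified illustration, not the HCV implementation: it builds a single conjunctive rule in variable-valued logic by collecting, for each attribute, the values seen among a group of positive examples, and accepts the rule only if no negative example satisfies every selector. HCV's grouping of positives and its refinement strategies are omitted; the data are invented.

```python
def conjunctive_rule(positives, negatives, attributes):
    # One selector per attribute: "attr takes a value seen among the positives".
    rule = {a: {ex[a] for ex in positives} for a in attributes}

    def covers(ex):
        return all(ex[a] in rule[a] for a in attributes)

    if any(covers(ex) for ex in negatives):
        return None          # rule too general; HCV would split or refine the group
    return rule              # covers every positive, excludes every negative

attributes = ["outlook", "wind"]
PE = [{"outlook": "sunny",    "wind": "weak"},
      {"outlook": "overcast", "wind": "weak"}]
NE = [{"outlook": "rain",     "wind": "strong"},
      {"outlook": "sunny",    "wind": "strong"}]

print(conjunctive_rule(PE, NE, attributes))
# {'outlook': {'sunny', 'overcast'}, 'wind': {'weak'}}
```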
28

A web-enabled database for tracking the Personnel Qualifications of Information Professional Officers

Aragon, Marc A. 12 1900
The US Navy's Information Professional Community currently qualifies its officers using a paper-based system. Candidates for the Basic, Intermediate and Advanced Qualifications use qualification books to attain knowledge and, subsequently, prove possession of it. Once those books are filled with signatures, a board of Subject Matter Experts tests the candidate and verifies his mastery of that knowledge. Using Knowledge Value Added analysis and Business Process Reengineering, the return on knowledge (ROK) for the current Personnel Qualification System (PQS) was estimated and improved processes were designed with the goal of maximizing ROK. First, the as-is ROK was estimated for the three processes and their various subprocesses. Then, a new to-be workflow for each of the three processes was designed, emphasizing incremental improvements that could be implemented quickly. Finally, another workflow was designed, emphasizing radical, unlimited change. When it was proven that Web-enabling the PQS indeed improves the knowledge-creating capability of these processes, a prototype Web-enabled database, called the Electronic Qualbook, was developed as a demonstrator of the technologies and capabilities involved. This thesis includes appendices illustrating the design of the database schema and the Electronic Qualbook's Web interfaces. A third appendix lists the majority of the HyperText Markup Language (HTML) and Active Server Pages (ASP) code integral to the Electronic Qualbook.
29

Data set simulation and RF path modeling of a QPSK radio communication system

Sun, Wei-Long. 09 1900
This project simulates QPSK modulation signals and uses a laboratory environment to create the deteriorating effects of real-world high frequency (HF) transmissions that may modify the ideal QPSK waveform. These modifications may be identifiable in order to "fingerprint" the source of the modifications. To simulate the transmission path in the real world, a signal generator is used to create the QPSK I/Q signal at the HF operating frequencies, and a digital sampling oscilloscope acts as a receiver and records the data for analysis. A computer with the MATLAB Instrument Control Toolbox is used to generate a random input data stream as an input to the signal generator, which modulates the RF signal. The RF signal was chosen to be at HF (5-15 MHz) and the QPSK modulation was at 9600 baud. The deterioration effects of a real-world transmitter site were chosen to be associated with the output amplifier linearity and with the transmission-line condition between the transmitter and antenna.
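As a small illustrative sketch (in Python rather than the project's MATLAB environment, with all names invented), the following maps a random bit stream onto ideal Gray-coded QPSK I/Q symbols at the 9600-baud symbol rate mentioned above; these are the baseband symbols a signal generator would translate up to the HF carrier.

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=32)            # random input data stream
pairs = bits.reshape(-1, 2)                   # 2 bits per QPSK symbol

# Gray-coded constellation: 00->45deg, 01->135deg, 11->225deg, 10->315deg.
phase = {(0, 0): 1, (0, 1): 3, (1, 1): 5, (1, 0): 7}
symbols = np.array([np.exp(1j * p * np.pi / 4)
                    for p in (phase[tuple(b)] for b in pairs)])

baud = 9600                                   # symbol rate used in the thesis
i, q = symbols.real, symbols.imag             # I/Q drive signals for the generator
print(np.round(symbols[:4], 3))
```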
30

Lean implementation at White Sands Missile Range: a case study of lean thinking applied in a government organization

Telles, David D., Garcia, Michael S., Bissell, Daniel C. 12 1900
In this Joint Applied Project, we study the application of lean thinking at White Sands Missile Range, an Army Major Range and Test Facility Base (MRTFB) tasked with developmental Test and Evaluation (T and E) as its primary mission. We interviewed a representative segment of leaders, managers, and working-level lean implementers, and surveyed 285 participants in lean events at White Sands. We employed a comprehensive, uniform set of questions in those interviews and surveys to gain insight into significant expectations, questions, issues, concerns, difficulties, constraints, and uniquely governmental situations and circumstances related to this implementation. We organized and analyzed a massive and significant resulting data set around emerging themes, including the linkage between lean and personnel cuts; management support of lean; small incremental benefit versus large bottom-line impact; process documentation; metrics and measurement; vision, urgency, and goals; uniquely governmental issues; and the lean process itself. We offer relevant conclusions and recommendations, based on those themes, which may significantly aid similar government organizations that are currently engaged, or expect to be engaged, in lean implementations or other process improvement efforts. We offer those conclusions and recommendations as academic and neutral examinations of real issues associated with an actual lean implementation. Notwithstanding the difficulties and complexities that we have examined in this study, we find an overwhelming majority of our participants believe there was broad incremental benefit from lean, that its cost was warranted and necessary, and that it absolutely should continue to be used as a tool to achieve greater efficiency, quality, and effectiveness in government business processes.
