901

REVISION PROGRAMMING: A KNOWLEDGE REPRESENTATION FORMALISM

Pivkina, Inna Valentinovna 01 January 2001
The topic of the dissertation is revision programming, a knowledge representation formalism for describing constraints on databases, knowledge bases, and belief sets, and for providing a computational mechanism to enforce them. Constraints are represented by sets of revision rules. Revision rules can be quite complex and usually take the form of conditions (for instance, if these elements are present and those elements are absent, then this element must be absent). In addition to being a logical constraint, a revision rule specifies a preferred way to satisfy the constraint. The justified revisions semantics assigns to any database a set (possibly empty) of revisions. Each revision satisfies the constraints, and all deletions and additions of elements in the transition from the initial database to the revision are derived from revision rules. Revision programming and logic programming are closely related. We established an elegant embedding of revision programs into logic programs which does not increase the size of a program. The initial database is used in the transformation of a revision program into the corresponding logic program, but it is not represented in the logic program. The connection naturally led to extensions of the revision programming formalism that correspond to existing extensions of logic programming. More specifically, disjunctive and nested versions of revision programming were introduced. We also studied annotated revision programs, which allow annotations such as confidence factors, multiple experts, etc. Annotations were assumed to be elements of a complete infinitely distributive lattice. We proposed a justified revisions semantics for annotated revision programs which agrees with intuitions. Next, we introduced a definition of the well-founded semantics for revision programming, which assigns to a revision problem a single "intended" model that is computable in polynomial time. Finally, we extended the syntax of revision problems by allowing variables and implemented translators of revision programs into logic programs as well as a grounder for revision programs. The implementation allows us to compute justified revisions using existing implementations of the stable model semantics for logic programs.
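As a rough illustration of the rule form described above (not taken from the dissertation; the representation and names are assumptions), the sketch below reads each revision rule as a constraint on a set of atoms. It checks only that a candidate revision satisfies the rules; the justified-revisions semantics additionally requires every insertion and deletion to be derived from some rule, which this sketch does not model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    head_atom: str        # the element the rule constrains
    head_in: bool         # True: must be present; False: must be absent
    body_in: frozenset    # elements required to be present for the rule to fire
    body_out: frozenset   # elements required to be absent for the rule to fire

def satisfies(revision: set, rules: list) -> bool:
    """Check that `revision` satisfies every rule read as a constraint."""
    for r in rules:
        fires = r.body_in <= revision and not (r.body_out & revision)
        if fires and (r.head_atom in revision) != r.head_in:
            return False
    return True

# "If a is present and b is absent, then c must be absent."
rules = [Rule("c", False, frozenset({"a"}), frozenset({"b"}))]
print(satisfies({"a", "c"}, rules))   # False: the rule fires, yet c is present
print(satisfies({"a"}, rules))        # True
```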
902

Knowledge Based Integrated Multidisciplinary Aircraft Conceptual Design

Munjulury, Venkata Raghu Chaitanya January 2014
With the ever-growing complexity of aircraft, new tools, and eventually methods to use these tools, are needed in aircraft conceptual design. To reduce development cost, an enhancement of the conceptual design stage is needed. This thesis presents RAPID, a knowledge-based aircraft geometry design tool, and the methodology applied in realizing the design. The parameters used to create a geometry need to be exchanged between different tools. This is achieved by using a centralized database, or one-database, concept. One database enables creating fewer cross-connections between different tools to exchange data with one another. Different types of aircraft configurations can be obtained with less effort. As RAPID is developed based on relational design, any changes made to the geometric model update automatically. The geometry model is carefully defined to carry over to the preliminary design. RAPID is validated by its use in different aircraft design courses at Linköping University. In the aircraft project course, RAPID was used effectively and new features were added to obtain the desired design. A knowledge base is used to realize the design performance for the geometry, with an integrated database approach for multidisciplinary aircraft conceptual design.
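A minimal sketch of the centralized "one-database" idea described above, assuming a simple shared parameter store (the class and parameter names are illustrative, not RAPID's actual interface): each tool reads and writes parameters only through the store, so N tools need N connections to the database instead of pairwise links between every pair of tools.

```python
class ParameterStore:
    """A centralized store of design parameters shared by all tools."""
    def __init__(self):
        self._params = {}

    def set(self, name: str, value: float) -> None:
        self._params[name] = value

    def get(self, name: str) -> float:
        return self._params[name]

def geometry_tool(db: ParameterStore) -> None:
    # The geometry tool publishes the parameters it derives.
    db.set("wing_span_m", 34.1)
    db.set("wing_area_m2", 122.4)

def aero_tool(db: ParameterStore) -> None:
    # Another discipline reads the same parameters instead of querying the geometry tool directly.
    aspect_ratio = db.get("wing_span_m") ** 2 / db.get("wing_area_m2")
    db.set("aspect_ratio", aspect_ratio)

db = ParameterStore()
geometry_tool(db)
aero_tool(db)
print(db.get("aspect_ratio"))  # ~9.5
```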
903

Data base security through simulation

Hong, Seng-Phil January 1994
This research explores the complexities of database security, encompassing both computer hardware and software. Also important is its nature as a people-oriented issue. A risk analysis of a database system's security can be examined by creating a simulation model. However, for it to be truly meaningful and accurate, all aspects of design, performance, and procedure must be thoroughly and carefully scrutinized. Computer or data security is a major problem in today's world of data processing. This thesis outlines the security problem and presents trends and issues. It also addresses current trends in computer security environments, database risk analysis, and simulation. Risk analysis is a technique used to quantitatively assess the relative value of protective measures. It is useful when appropriately applied and is in some cases required by regulatory agencies. The goal of security environments is to outline a framework that is valuable in assessing security issues and in establishing partitions in the overall environment within which this and other approaches to security can be examined. A simulation prototype is given which demonstrates the concepts of risk analysis for a database system. / Department of Computer Science
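The abstract does not give the thesis's model, but a standard quantitative risk-analysis calculation of the kind it refers to is the annualized loss expectancy; the figures in the sketch below are invented purely for illustration.

```python
# Standard quantitative risk-analysis quantities (illustrative figures only):
# single loss expectancy (SLE) = asset value * exposure factor
# annualized loss expectancy (ALE) = SLE * annualized rate of occurrence (ARO)

def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Example: a database asset worth $500,000, 40% damaged per incident,
# with an incident expected once every five years (ARO = 0.2).
ale = annualized_loss_expectancy(500_000, 0.40, 0.2)
print(f"ALE = ${ale:,.0f} per year")  # ALE = $40,000 per year

# A protective measure is worth considering when its annual cost is below
# the reduction in ALE that it achieves.
```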
904

The next generation of database : object-oriented database

Hon, Wing-Keung January 1994
As new computer applications such as computer-aided design, multimedia systems, and knowledge-based systems require more complex data structures, traditional database systems seem unable to support these new requirements. A recently developed database technology, the object-oriented database, provides a solution to these problems. The purpose of this thesis is to investigate what an object-oriented database is, especially its internal organization, such as object persistence. Two object-oriented database systems, EXODUS and ODE, are discussed in detail. In addition, a comparison between relational databases and object-oriented databases is made. / Department of Computer Science
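As a hedged illustration of the motivation above (not drawn from the thesis), the sketch below shows the kind of nested design structure that an object-oriented database can store and retrieve as a single persistent object, while a relational design flattens it into rows joined by foreign keys.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    children: list = field(default_factory=list)   # nested sub-parts

# An object-oriented database could store and fetch this whole graph as one
# persistent object, references and all.
wing = Part("wing", [Part("spar"), Part("rib", [Part("rib_cap")]), Part("skin")])

# A relational design flattens the same structure into rows with foreign keys,
# so reassembling the design requires joins or repeated queries.
rows = [("wing", None), ("spar", "wing"), ("rib", "wing"),
        ("rib_cap", "rib"), ("skin", "wing")]

print(len(wing.children), "direct sub-parts;", len(rows), "relational rows")
```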
905

Computer security : data control and protection

Neophytou, Andonis January 1992
Computer security is a crucial area for any organization that relies on electronic devices to process data. The security of the devices themselves and of the data they process is the backbone of the organization. To date there have been no completely secure systems or procedures, and a great deal of research is being done in this area. It is impossible for a machine or a mechanical procedure to "guess" all possible events and produce conclusive, cohesive, and comprehensive secure systems, because of 1) the human factor and 2) acts of nature (fire, flood, etc.). However, proper managerial control can limit the extent of the damage caused by those factors. The purpose of this study is to examine the different frameworks of computer security. Emphasis is given to data/database security and the various kinds of attacks on the data. Controls over these attacks and preventative measures will be discussed, and high-level language programs will demonstrate the protection issues. The Oracle SQL query language will be used to demonstrate these controls and prevention measures. In addition, the FORTRAN high-level language will be used in conjunction with SQL (only the FORTRAN and COBOL compilers are available for embedded SQL). The C language will be used to show attacks on password files and also for an encryption/decryption program. This study was based mainly on research: literature spanning the past decade was examined to produce the ideas and methods of prevention and control discussed in the study. / Department of Computer Science
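A small sketch of the password-file theme mentioned above, written in Python rather than the C used in the thesis and purely illustrative: password files store salted hashes rather than plaintext, and a dictionary attack succeeds only when a stored password is a guessable word.

```python
import hashlib

def hash_password(password: str, salt: str) -> str:
    # Store a salted hash, never the plaintext password.
    return hashlib.sha256((salt + password).encode()).hexdigest()

# A toy "password file": username -> (salt, hash).
password_file = {"alice": ("x9f2", hash_password("sunshine", "x9f2"))}

def dictionary_attack(entry, wordlist):
    """Hash candidate words with the stored salt until one matches."""
    salt, stored_hash = entry
    for word in wordlist:
        if hash_password(word, salt) == stored_hash:
            return word   # weak password recovered
    return None

print(dictionary_attack(password_file["alice"], ["password", "letmein", "sunshine"]))
```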
906

Database comparison, Oracle vs RDB

Bah, Oury Amadou January 1992
Databases and database technology are having a major impact on the growing use of computers. The rising popularity of database systems for the management of data has resulted in an increasing number of systems. As the number grows, the difficulty of choosing the system which will best meet the requirements of a particular application also increases, so knowing how to choose the correct one for a given application is important. The purpose of this thesis is to compare two very commonly used database management systems at Ball State University (ORACLE and RDB) by describing and listing the advantages and weaknesses of each, giving a comprehensive study of their performance, user-friendliness, and limits, and aiding users and managers in obtaining a deeper knowledge of these two systems. / Department of Computer Science
907

Database comparison : Oracle vs RDB

Alhaffar, Mohammed January 1992
Databases and database technology are having a major impact on the growing use of computers. It is fair to say that databases are playing a critical role in almost all areas where computers are used, including business, engineering, medicine, law, education, and library science, to name a few. At Ball State University, databases are very widely used among faculty, staff, and students. The common commercial database management system (DBMS) used in the university is ORACLE. Due to the extensive use of the system and the availability of, and easy access to, alternative systems such as RDB/VMS, a comparative study is in order. This thesis is a comprehensive comparison between the two systems. It covers the differences in design, SQL coding, and the use of a host programming language such as FORTRAN. It concentrates on the differences in memory usage and execution time, as well as the CPU time needed to precompile, link, and run. / Department of Computer Science
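A minimal sketch of how such an execution-time comparison might be set up (run_on_system_a and run_on_system_b are hypothetical placeholders, not code from the thesis): the same query is submitted repeatedly to each system and the average wall-clock time is recorded.

```python
import time

def time_query(run_query, sql: str, repeats: int = 10) -> float:
    """Return the average wall-clock time in seconds over `repeats` runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        run_query(sql)
    return (time.perf_counter() - start) / repeats

# Hypothetical stand-ins; in practice these would submit the query to each DBMS.
def run_on_system_a(sql): pass
def run_on_system_b(sql): pass

query = "SELECT name FROM student WHERE gpa > 3.5"
print(time_query(run_on_system_a, query))
print(time_query(run_on_system_b, query))
```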
908

Duomenų bazių programavimas C# kalba / Database programming in the C# language

Varkavičius, Simonas 24 July 2014
As time passes, the flow of information keeps growing, so a convenient way to store and process it must be found. Thanks to continually improving information technology, methods of storing and processing information also have the opportunity to improve. Nowadays, various amounts of information can be stored not only in physical form but also in electronic form, where it can be successfully administered using administration software. This work reviews data administration programs implemented in the C# and C++ programming languages and the potential of these languages for creating such software; the C# programming language is reviewed in more detail. The work is intended for readers interested in the development and application of data administration programs. The aim of this master's thesis is to analyse the characteristics of database programming in the C# language by creating sample applications, studying their efficiency, and comparing them with the C++ language. The tasks of the work are: to become familiar with the theoretical foundations of databases (DB); to develop C# database administration programs using different programming techniques; to analyse the performance characteristics of the programs; to develop demonstration programs using the most suitable C# programming techniques; and to compare the performance of the programs developed in C# and C++.
909

Optimizing Hierarchical Storage Management For Database System

Liu, Xin 22 May 2014
Caching is a classical but effective way to improve system performance. Servers, such as database servers and storage servers, therefore contain significant amounts of memory that acts as a fast cache. Meanwhile, as new storage devices such as flash-based solid-state drives (SSDs) are added to storage systems over time, the memory cache is no longer the only way to improve system performance. In this thesis, we address the problems of how to manage the cache of a storage server and how to utilize the SSD in a hybrid storage system. Traditional caching policies are known to perform poorly for storage server caches. One promising approach to solving this problem is to use hints from the storage clients to manage the storage server cache. Previous hinting approaches are ad hoc, in that a predefined reaction to specific types of hints is hard-coded into the caching policy. With ad hoc approaches, it is difficult to ensure that the best hints are being used, and it is difficult to accommodate multiple types of hints and multiple client applications. In this thesis, we propose CLient-Informed Caching (CLIC), a generic hint-based technique for managing storage server caches. CLIC automatically interprets hints generated by storage clients and translates them into a server caching policy. It does this without explicit knowledge of the application-specific hint semantics. We demonstrate, using trace-based simulation of database workloads, that CLIC outperforms hint-oblivious and state-of-the-art hint-aware caching policies. We also demonstrate that the space required to track and interpret hints is small. SSDs are becoming part of the storage system. Adding an SSD to a storage system not only raises the question of how to manage the SSD, but also the question of whether current buffer pool algorithms will still work effectively. We are interested in the use of hybrid storage systems, consisting of SSDs and hard disk drives (HDDs), for database management. We present cost-aware replacement algorithms for both the DBMS buffer pool and the SSD; these algorithms are aware of the different I/O performance of HDDs and SSDs. In such a hybrid storage system, the physical access pattern to the SSD depends on the management of the DBMS buffer pool. We studied the impact of buffer pool caching policies on the access patterns of the SSD and, based on these studies, designed a caching policy to effectively manage the SSD. We implemented these algorithms in MySQL's InnoDB storage engine and used the TPC-C workload to demonstrate that these cost-aware algorithms outperform previous algorithms.
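As a rough illustration of the cost-aware idea (this is not CLIC or the thesis's replacement algorithms; the costs and scoring are invented), the sketch below picks a buffer-pool victim by weighing recency of use against the miss penalty of the device a page would be re-read from, so a recently used page that lives on the fast SSD can still be the cheapest to evict.

```python
MISS_COST = {"ssd": 0.1, "hdd": 1.0}   # relative read penalties (illustrative)

def choose_victim(pages):
    """pages: dicts with 'id', 'device', and 'last_access' (higher = more recent)."""
    def eviction_score(page):
        # Low score = cheap to evict: a cold page and/or a fast device to re-read from.
        return page["last_access"] * MISS_COST[page["device"]]
    return min(pages, key=eviction_score)["id"]

buffer_pool = [
    {"id": "p1", "device": "hdd", "last_access": 5},
    {"id": "p2", "device": "ssd", "last_access": 7},
    {"id": "p3", "device": "hdd", "last_access": 9},
]
print(choose_victim(buffer_pool))  # "p2": recently used, but cheap to re-read from SSD
```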
910

Virtual files : a framework for experimental design

Ross, George D. M. January 1983
The increasing power and decreasing cost of computers has resulted in them being applied in an ever widening area. In the world of Computer Aided Design it is now practicable to involve the machine in the earlier stages where a design is still speculative, as well as in the later stages where the computer's calculating ability becomes paramount. Research on database systems has not followed this trend, concentrating instead on commercial applications, with the result that there are very few systems targeted at the early stages of the design process. In this thesis we consider the design and implementation of the file manager for such a system, first of all from the point of view of a single designer working on an entire design, and then from the point of view of a team of designers, each working on a separate aspect of a design. We consider the functionality required of the type of system we are proposing, defining the terminology of experiments to describe it. Having ascertained our requirements we survey current database technology in order to determine to what extent it meets our requirements. We consider traditional concurrency control methods and conclude that they are incompatible with our requirements. We consider current data models and conclude that, with the exception of the persistent programming model, they are not appropriate in the context required, while the implementation of the persistent programming model provides transactions on data structures but not experiments. The implementation of experiments is considered. We examine a number of potential methods, deciding on differential files as the one most likely both to meet our requirements and to have the lowest overheads. Measurements conducted on both a preliminary and a full-scale implementation confirm that this is the case. There are, nevertheless, further gains in convenience and performance to be obtained by exploiting the capabilities of the hardware to the full; we discuss these in relation to virtual memory systems, with particular reference to the VAX/VMS environment. Turning to the case where several designers are each working on a (nearly) distinct part of a design, we consider how to detect conflicts between experiments. Basing our approach on optimistic concurrency control methods, we show how read and write sets may be used to determine those areas of the database where conflicts might arise. As an aside, we show how the methods we propose can be used in an alternative approach to optimistic concurrency control, giving a reduction in system overheads for certain applications. We consider implementation techniques, concluding that a differential files approach has significant advantages in maintaining write sets, while a two-level bitmap may be used to maintain read sets efficiently.
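A minimal sketch of the read/write-set conflict test described above, under the assumption that each experiment records the database regions it read and wrote (the representation is illustrative, not the thesis's implementation): two experiments conflict when one has written something the other has read or written.

```python
def conflicts(exp_a, exp_b):
    """Two experiments conflict if one wrote data the other read or wrote."""
    reads_a, writes_a = exp_a
    reads_b, writes_b = exp_b
    return bool(writes_a & (reads_b | writes_b)) or bool(writes_b & reads_a)

# Each experiment is (read set, write set) of the database regions it touched.
exp1 = ({"cell_A", "net_7"}, {"net_7"})
exp2 = ({"cell_B"}, {"cell_B", "net_7"})
exp3 = ({"cell_C"}, {"cell_C"})

print(conflicts(exp1, exp2))  # True: both wrote net_7
print(conflicts(exp1, exp3))  # False: disjoint regions of the design
```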
