About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Steplib : a digital library for spatio-temporal and multimedia data

Baptista, Claudio January 2000
No description available.
272

A language for the dynamic verification of design patterns in distributed computing

Neal, Stephen January 2001
No description available.
273

A high-level framework for policy-based management of distributed systems

Cakic, Jovan January 2003
No description available.
274

Knowledge-based debugging : matching program behaviour against known causes of failure

Andrews, Michael McMillan January 2003
No description available.
275

Inconsistency and underdefinedness in Z specifications

Miarka, Ralph January 2002
In software engineering, formal methods are meant to capture the requirements of software yet to be built using notations based on logic and mathematics. The formal language Z is such a notation. It has been found that in large projects inconsistencies are inevitable. It is also said, however, that consistency is required for Z specifications to have any useful meaning. Thus, it seems, Z is not suitable for large projects. Inconsistencies are a fact of life. We are constantly challenged by inconsistencies and we are able to manage them in a useful manner. Logicians recognised this fact and developed so-called paraconsistent logics to continue useful, non-trivial reasoning in the presence of inconsistencies. Quasi-classical logic (QCL) is one representative of these logics. It has been designed such that the logical connectives behave in a classical manner and standard inference rules remain valid. As such, users of logic, like software engineers, should find it easy to work with QCL. The aim of this work is to investigate the support that can be given to reasoning about inconsistent Z specifications using quasi-classical logic. Some paraconsistent logics provide an extra truth value, which we use to handle underdefinedness in Z. It has been observed that it is sometimes useful to combine the guarded and precondition approaches to allow the representation of both refusals and underspecification. This work contributes to the development of quasi-classical logic by providing a notion of strong logical equivalence, a method to reason about equality in QCL and a tableau-based theorem prover. The use of QCL to analyse Z specifications resulted in a refined notion of operation applicability. This also led to a revised refinement condition for applicability. Furthermore, we showed that QCL allows fewer, but more useful, inferences in the presence of inconsistency.
Our work on handling underdefinedness in Z led to an improved schema representation combining the precondition and the guarded interpretation in Z. Our inspiration comes from a non-standard three-valued interpretation of operation applicability. Based on this semantics, we developed a schema calculus. Furthermore, we provide refinement rules based on the concept that refinement means reduction of underdefinedness. We also show that the refinement conditions extend the standard rules for both the guarded and precondition approach in Z.
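The extra truth value mentioned in the abstract above can be pictured with a small three-valued truth table. The following is an illustrative sketch in the spirit of Priest's paraconsistent logic LP, not the quasi-classical semantics developed in the thesis; all names are invented for illustration.

```python
# Truth tables for Priest's three-valued logic LP, used here only to
# illustrate reasoning with an extra truth value: B marks a proposition
# that is both true and false, i.e. inconsistent.
T, B, F = 2, 1, 0  # the ordering lets min/max act as "and"/"or"

def neg(a):
    return {T: F, B: B, F: T}[a]

def conj(a, b):
    return min(a, b)

def disj(a, b):
    return max(a, b)

def designated(a):
    # a formula is accepted when its value is T or B
    return a >= B

# An inconsistent atom p does not trivialise the logic: p and not-p is
# tolerated, yet an unrelated false atom q is not forced to be true.
p, q = B, F
assert designated(conj(p, neg(p)))  # the contradiction is tolerated
assert not designated(q)            # ...but q does not follow from it
```

This is the sense in which a paraconsistent logic blocks "explosion": a local contradiction stays local instead of making every formula derivable.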
276

Cyclic distributed garbage collection

Rodrigues, Helena C. C. D. January 1998
With the continued growth of distributed systems as a means to provide shared data, designers are turning their attention to garbage collection, prompted by the complexity of memory management and the desire for transparent object management. Garbage collection in very large address spaces is a difficult and unsolved problem, due to problems of efficiency, fault-tolerance, scalability and completeness. The collection of distributed garbage cycles is especially problematic. This thesis presents a new algorithm for distributed garbage collection and describes its implementation in the Network Objects system. The algorithm is based on a reference listing scheme, which is augmented by partial tracing in order to collect distributed garbage cycles. Our collector is designed to be flexible, allowing efficiency, promptness and fault-tolerance to be traded against completeness, although it can also be complete. Processes may be dynamically organised into groups, according to appropriate heuristics, in order to reclaim distributed garbage cycles. Multiple concurrent distributed garbage collections that span groups are supported: when two collections meet they may either merge, overlap or retreat. This choice may be made at the level of different partial tracings, of processes or of individual objects. The algorithm places no overhead on local collectors and does not disrupt the collection of acyclic distributed garbage. Partial tracing of the distributed graph involves only objects thought to be part of a garbage cycle: no collaboration with other processes is required.
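The group-based partial trace described in this abstract can be caricatured in a few lines. This is a toy model, not the Network Objects implementation; every name below is invented for illustration.

```python
# Toy model of cyclic distributed garbage collection: a group of
# processes runs a partial trace from the group's roots, so that a
# cycle of references spanning two processes, unreachable from any
# root, is identified as garbage. Reference listing alone (a count of
# remote holders) would never reclaim such a cycle.

class Obj:
    def __init__(self, name):
        self.name, self.refs = name, []

def collect_group(roots, objs):
    """Mark everything reachable from the group's roots; whatever is
    left unmarked, including cross-process cycles, is garbage."""
    marked, stack = set(), list(roots)
    while stack:
        o = stack.pop()
        if o.name not in marked:
            marked.add(o.name)
            stack.extend(o.refs)
    return [o.name for o in objs if o.name not in marked]

# a -> b -> a is a cycle spanning two "processes"; r is a live root.
a, b, r = Obj("a@p1"), Obj("b@p2"), Obj("root@p1")
a.refs, b.refs = [b], [a]
print(collect_group([r], [a, b, r]))  # ['a@p1', 'b@p2']
```

The thesis's contribution lies precisely in what this sketch glosses over: forming the groups dynamically, letting concurrent traces merge, overlap or retreat, and doing it all without disturbing local collectors.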
277

Seamless parallel computing on heterogeneous networks of multiprocessor workstations

Vella, Kevin J. January 1998
This thesis is concerned with portable, efficient, and, above all, seamless parallel programming of heterogeneous networks of shared memory multiprocessor workstations. The CSP model of concurrency as embodied in the occam language is used to provide an architecture-independent and elegant view of concurrent systems. Tools and techniques for efficiently executing finely decomposed parallel programs on uniprocessor workstations, shared memory multiprocessor workstations and networks of both are examined in some detail. In particular, scheduling strategies that batch related processes together to reduce cache-related context switching overheads on uniprocessors, and to reduce contention and false sharing on shared memory multiprocessors, are studied. New wait-free CSP channel algorithms for shared memory multiprocessors are presented, as well as implementations of CSP channel algorithms across commodity network interconnects. A virtual parallel computer abstraction is applied to hide the inherent heterogeneity of workstation networks and enable seamless execution of parallel programs. An investigation of the performance of moderate to very fine grain parallelism on uniprocessors and shared memory multiprocessors is presented. The performance of CSP channels across TCP/IP networks is also scrutinized. The results indicate that fine grain parallelism can be handled efficiently in software on uniprocessors and shared memory multiprocessors, though issues related to caching warrant careful consideration. Other results also show that a limited amount of computation-communication overlap can be attained even with commodity network adapters which require significant processor interaction to sustain data transfer. This thesis demonstrates that seamless parallel programming across a variety of contemporary architectures using the CSP/occam model is a viable, as well as an attractive, option.
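A CSP channel synchronises sender and receiver at a rendezvous: neither proceeds until both have met. The sketch below illustrates that rendezvous discipline with ordinary blocking queues; it is not one of the wait-free algorithms from the thesis, and all names are hypothetical.

```python
import threading
import queue

# A minimal synchronous (rendezvous) channel in the CSP style: send()
# blocks until a matching recv() has taken the value, and vice versa.
class Channel:
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)
        self._ack = queue.Queue(maxsize=1)

    def send(self, value):
        self._slot.put(value)   # offer the value...
        self._ack.get()         # ...and block until a reader takes it

    def recv(self):
        value = self._slot.get()
        self._ack.put(None)     # release the waiting sender
        return value

ch = Channel()
out = []
t = threading.Thread(target=lambda: out.append(ch.recv()))
t.start()
ch.send(42)                     # rendezvous with the reader thread
t.join()
print(out)  # [42]
```

A lock-based channel like this pays for its simplicity in contention and context switches, which is exactly the cost the thesis's wait-free algorithms and process-batching schedulers aim to avoid.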
278

Student modelling by adaptive testing : a knowledge-based approach

Abdullah, Sophiana Chua January 2003
An adaptive test is one in which the number of test items and the order in which the items are presented are computed during the delivery of the test so as to obtain an accurate estimate of a student's knowledge, with a minimum number of test items. This thesis is concerned with the design and development of computerised adaptive tests for use within educational settings. Just as, in the same setting, intelligent tutoring systems are designed to emulate human tutors, adaptive testing systems can be designed to mimic effective informal examiners. The thesis focuses on the role of adaptive testing in student modelling, and demonstrates the practicality of constructing such tests using expert emulation. The thesis makes the case that, for small scale adaptive tests, a construction process based on the knowledge acquisition technique of expert systems is practical and economical. Several experiments in knowledge acquisition for the construction of an adaptive test are described, in particular, experiments to elicit information for the domain knowledge, the student model and the problem progression strategy. It shows how a description of a particular problem domain may be captured using traditional techniques that are supported by software development in the constraint logic extension to Prolog. It also discusses knowledge acquisition techniques for determining the sequence in which questions should be asked. A student modelling architecture called SKATE is presented. This incorporates an adaptive testing strategy called XP, which was elicited from a human expert. The strategy, XP, is evaluated using simulations of students. This approach to evaluation facilitates comparisons between approaches to testing and is potentially useful in tuning adaptive tests.
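The item-selection loop at the heart of an adaptive test can be sketched as follows. The heuristic here (pick the unused item whose difficulty is nearest the current ability estimate, then halve the adjustment step) is an assumption for illustration, not the SKATE architecture or the elicited XP strategy; all names are invented.

```python
# A toy adaptive-test loop: choose the next question by closeness of
# its difficulty to the current ability estimate, then nudge the
# estimate up or down depending on whether the answer was correct.

def adaptive_test(items, answers, ability=0.5, step=0.25, n_items=4):
    """items: {name: difficulty in [0, 1]}; answers(name) -> bool."""
    asked, remaining = [], dict(items)
    for _ in range(n_items):
        if not remaining:
            break
        # next question: difficulty nearest the current estimate
        name = min(remaining, key=lambda k: abs(remaining[k] - ability))
        correct = answers(name)
        ability += step if correct else -step
        ability = min(1.0, max(0.0, ability))
        step /= 2  # take smaller steps as the estimate settles
        asked.append((name, correct))
        del remaining[name]
    return ability, asked

# a simulated student who answers anything easier than 0.6 correctly
items = {"q1": 0.2, "q2": 0.4, "q3": 0.6, "q4": 0.8}
estimate, log = adaptive_test(items, lambda q: items[q] < 0.6)
```

Simulated students of this kind are also how the thesis evaluates its strategy: running the loop against a scripted answer policy lets different selection strategies be compared without live subjects.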
279

A theory of episodic memory for case-based reasoning and its implementation

Gutierrez, Carlos Ramirez January 1998
This thesis makes several contributions to the study of Case-based Reasoning. It presents:
* a comprehensive description of the foundations of the subject in Knowledge Representation, Machine Learning and Cognitive Science;
* a theory of learning for Case-based Reasoning;
* a demonstration of this theory through an extensive implementation of a case-based system for Information Retrieval.
In the first part of this thesis, research is presented on the foundations of Case-based Reasoning. It relates recent work in Artificial Intelligence to earlier and still developing ideas on the nature of concepts and categories. This part also presents research into the nature of learning in Case-based Reasoning. A review of Schank's Theory of Dynamic Memory is presented and a new Theory of the Acquisition of Episodic Memory is developed. The second part of the thesis is concerned with the practical application of Case-based Reasoning. This research demonstrates how the cognitive processes involved in concept formation and the new Theory of Acquisition of Episodic Memory can be put to practical use. A complete information retrieval system is presented. This system, in addition to being an implementation of the ideas presented in the first part of the thesis, is also intended as a substantive advance in the field of Information Science. It shows how Case-based Reasoning can be used to improve query formulation by exploiting information about the contexts in which queries arise. Particular attention is paid to the problem of recognition of similarity, which is an issue of concern to both Case-based Reasoning and Information Retrieval.
280

Static analysis of Martin-Löf's intuitionistic type theory

Telford, Alastair J. January 1995
Martin-Löf's intuitionistic type theory has been under investigation in recent years as a potential source for future functional programming languages. This is due to its properties which greatly aid the derivation of provably correct programs. These include the Curry-Howard correspondence (whereby logical formulas may be seen as specifications and proofs of logical formulas as programs) and strong normalisation (i.e. evaluation of every proof/program must terminate). Unfortunately, a corollary of these properties is that the programs may contain computationally irrelevant proof objects: proofs which are not to be printed as part of the result of a program. We show how a series of static analyses may be used to improve the efficiency of type theory as a lazy functional programming language. In particular we show how variants of abstract interpretation may be used to eliminate unnecessary computations in the object code that results from a type theoretic program. After an informal treatment of the application of abstract interpretation to type theory (where we discuss the features of type theory which make it particularly amenable to such an approach), we give formal proofs of correctness of our abstract interpretation techniques, with regard to the semantics of type theory. We subsequently describe how we have implemented abstract interpretation techniques within the Ferdinand functional language compiler. Ferdinand was developed as a lazy functional programming system by Andrew Douglas at the University of Kent at Canterbury. Finally, we show how other static analysis techniques may be applied to type theory. Some of these techniques use the abstract interpretation mechanisms previously discussed.
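Abstract interpretation of the kind mentioned above can be shown in miniature with a two-point strictness analysis in the style of Mycroft. This is an illustration of the general technique for a tiny expression language, not the analyses implemented in Ferdinand; all names are invented.

```python
# Two-point strictness analysis: abstract value 0 means "certainly
# diverges", 1 means "may terminate". A function is strict in an
# argument if feeding 0 for that argument yields 0 for the whole body,
# so a lazy compiler may evaluate that argument eagerly.

def absint(expr, env):
    tag = expr[0]
    if tag == "const":
        return 1
    if tag == "var":
        return env[expr[1]]
    if tag == "add":            # addition needs both operands
        return min(absint(expr[1], env), absint(expr[2], env))
    if tag == "ifz":            # a conditional needs the test and one branch
        c, t, e = (absint(x, env) for x in expr[1:])
        return min(c, max(t, e))
    raise ValueError(tag)

# f(x, y) = ifz x then y + 1 else 2: strict in x, not in y.
f = ("ifz", ("var", "x"), ("add", ("var", "y"), ("const",)), ("const",))
print(absint(f, {"x": 0, "y": 1}))  # 0 -> f is strict in x
print(absint(f, {"x": 1, "y": 0}))  # 1 -> f need not evaluate y
```

The same lattice-based style of analysis, lifted to the richer setting of type theory, is what lets a compiler detect proof objects that can never affect a program's printed result.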
