101

Supporting formal reasoning about functional programs

Collins, Graham Richard McFarlane January 2001 (has links)
It is often claimed that functional programming languages, and in particular pure functional languages, are suitable for formal reasoning. This claim is supported by the fact that many people in the functional programming community do reason about languages and programs in a formal or semi-formal way. Different reasoning principles, such as equational reasoning, induction and co-induction, are used depending on the nature of the problem. Using a computer program to check the application of rules and to mechanise the tedious bookkeeping involved can simplify proofs and provide more confidence in their correctness. When reasoning about programs, this can also allow experiments with new rules and reasoning styles where a user may not be confident about structuring a proof on paper. Checking the applicability of a rule can eliminate the risk of mistakes caused by misunderstanding the theory being used. Just as there are different ways in which formal or informal reasoning can be applied in functional programming, there are different ways in which tools can be provided to support this reasoning. This thesis describes an investigation of how to develop a mechanised reasoning system that allows reasoning about algorithms as a functional programmer would write them, not about an encoding of the algorithm into a significantly different form. In addition, this work aims to develop a system that supports a user who is neither a theorem-proving expert nor an expert in the theoretical foundations of functional programming. The work is aimed towards a system that could be used by a functional programmer developing real programs and wishing to prove some or all of them correct, or to prove that two programs are equivalent.
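
To make the flavour of such reasoning concrete, here is a minimal sketch in Haskell of the kind of equational and inductive proof a mechanised system would check; the map-fusion property and its proof are standard folklore, not taken from the thesis itself:

```haskell
-- A user-defined map, so the equations below refer to visible definitions.
mapF :: (a -> b) -> [a] -> [b]
mapF _ []     = []
mapF f (x:xs) = f x : mapF f xs

-- Claim: for all finite, total lists xs,
--   mapF f (mapF g xs) == mapF (f . g) xs
--
-- Proof by structural induction on xs:
--   Case []:
--     mapF f (mapF g []) = mapF f [] = [] = mapF (f . g) []
--   Case (x:xs), assuming the claim for xs:
--     mapF f (mapF g (x:xs))
--       = mapF f (g x : mapF g xs)       -- definition of mapF
--       = f (g x) : mapF f (mapF g xs)   -- definition of mapF
--       = (f . g) x : mapF (f . g) xs    -- induction hypothesis
--       = mapF (f . g) (x:xs)            -- definition of mapF
```

In a non-strict language the claim must also be checked for partial and infinite lists; this is precisely the bookkeeping a mechanised checker can take off the user's hands.
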
102

Performance analysis of wormhole routing in multicomputer interconnection networks

Sarbazi-Azad, Hamid January 2001 (has links)
Perhaps the most critical component in determining the ultimate performance potential of a multicomputer is its interconnection network, the hardware fabric supporting communication among individual processors. The message latency and throughput of such a network are affected by many factors, of which topology, switching method, routing algorithm and traffic load are the most significant. In this context, the present study focuses on a performance analysis of k-ary n-cube networks employing wormhole switching, virtual channels and adaptive routing, a scenario of especial interest to current research. This project aims to build upon earlier work in two main ways: constructing new analytical models for k-ary n-cubes, and comparing the performance merits of cubes of different dimensionality. To this end, some important topological properties of k-ary n-cubes are explored initially; in particular, expressions are derived to calculate the number of nodes at, and within, a given distance from a chosen centre. These results are important in their own right, but their primary significance here is to assist in the construction of new and more realistic analytical models of wormhole-routed k-ary n-cubes. An accurate analytical model for wormhole-routed k-ary n-cubes with adaptive routing and uniform traffic is then developed, incorporating the use of virtual channels and the effect of locality in the traffic pattern. New models are constructed for wormhole k-ary n-cubes with the ability to capture behaviour under adaptive routing and non-uniform communication workloads, such as hotspot traffic, matrix-transpose and digit-reversal permutation patterns. The models are equally applicable to unidirectional and bidirectional k-ary n-cubes and are significantly more realistic than any in use up to now. With this level of accuracy, the effect of each important network parameter on overall network performance can be investigated in a more comprehensive manner than before. Finally, k-ary n-cubes of different dimensionality are compared using the new models. The comparison takes account of various traffic patterns and implementation costs, using both pin-out and bisection bandwidth as metrics. Networks with both normal and pipelined channels are considered. While previous similar studies have taken account only of network channel costs, our model incorporates router costs as well, thus generating more realistic results. Indeed, the results of this work differ markedly from those yielded by earlier studies which assumed deterministic routing and uniform traffic, illustrating the importance of using accurate models to conduct such analyses.
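
As a concrete illustration of the topological quantity mentioned above, the following Haskell sketch counts by brute force the nodes at a given distance from a centre in a bidirectional k-ary n-cube. The thesis derives closed-form expressions; this enumeration (all names here are ours) only makes the definition tangible:

```haskell
-- Lee distance between two nodes of a bidirectional k-ary n-cube (torus):
-- per dimension, the shorter way round the ring.
leeDistance :: Int -> [Int] -> [Int] -> Int
leeDistance k xs ys =
  sum [min d (k - d) | (x, y) <- zip xs ys, let d = abs (x - y)]

-- Count nodes at exactly distance d from the origin by enumerating all
-- k^n nodes. Exponential, of course; the point of closed-form
-- expressions is to avoid exactly this.
nodesAtDistance :: Int -> Int -> Int -> Int
nodesAtDistance k n d =
  length [ node
         | node <- sequence (replicate n [0 .. k - 1])
         , leeDistance k node centre == d ]
  where centre = replicate n 0

-- e.g. nodesAtDistance 4 3 1 == 6, the six immediate neighbours of a
-- node in a 4-ary 3-cube.
```
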
103

A programming logic for Java bytecode programs

Quigley, Claire Louise January 2004 (has links)
One significant disadvantage of interpreted bytecode languages, such as Java, is their low execution speed in comparison to compiled languages like C. The mobile nature of bytecode adds to the problem, as many checks are necessary to ensure that code downloaded from untrusted sources is rendered as safe as possible. But there do exist ways of speeding up such systems. One approach is to carry out static type checking at load time, as in the case of the Java Bytecode Verifier. This reduces the number of runtime checks that must be performed and also allows certain instructions to be replaced by faster versions. Another approach is the use of a Just-In-Time (JIT) compiler, which takes the bytecode and produces corresponding native code at runtime. Some JIT compilers also carry out some code optimization. There are, however, limits to the amount of optimization that can safely be done by the Verifier and by JITs; some operations simply cannot be carried out safely without a certain amount of runtime checking. But what if it were possible to prove that the conditions the runtime checks guard against can never arise in a particular piece of code? In that case it might well be possible to dispense with these checks altogether, allowing optimizations not feasible at present. In addition, because of time constraints, current JIT compilers tend to produce acceptable code as quickly as possible rather than the best code possible; by removing the burden of analysis from them it may be possible to change this. We demonstrate that it is possible to define a programming logic for bytecode programs that allows proofs about bytecode programs containing loops. The instructions available for use in the programs are currently limited, but the basis is in place to extend them. The development of this logic is non-trivial and addresses several difficult problems engendered by the unstructured nature of bytecode programs.
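
The logic itself is developed in the thesis; purely as a toy illustration of the general idea, establishing a property of bytecode without running it, here is a Haskell sketch (instruction set and names invented for this example) that statically verifies stack safety of straight-line code:

```haskell
-- A handful of JVM-like stack instructions (invented for illustration).
data Instr = IConst Int | IAdd | IMul | Dup | Pop
  deriving Show

-- How many operands each instruction pops.
needs :: Instr -> Int
needs (IConst _) = 0
needs IAdd       = 2
needs IMul       = 2
needs Dup        = 1
needs Pop        = 1

-- Net effect of each instruction on stack depth.
effect :: Instr -> Int
effect (IConst _) = 1
effect IAdd       = -1
effect IMul       = -1
effect Dup        = 1
effect Pop        = -1

-- Verify that a straight-line program never underflows the stack,
-- returning its final depth. Loops, the hard case a programming logic
-- handles with invariants, are deliberately out of scope for this toy.
checkDepth :: [Instr] -> Maybe Int
checkDepth = go 0
  where
    go d []       = Just d
    go d (i : is)
      | d < needs i = Nothing          -- would pop an empty stack
      | otherwise   = go (d + effect i) is
```

A program certified by such a check needs no corresponding runtime guard, which is the optimization opportunity the paragraph above describes.
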
104

Catamorphism-based program transformations for non-strict functional languages

Németh, László January 2000 (has links)
In functional languages, intermediate data structures are often used as glue to connect separate parts of a program. These intermediate data structures are useful because they allow modularity, but they are also a cause of inefficiency: each element needs to be allocated, examined, and deallocated. Warm fusion is a program transformation technique which aims to eliminate intermediate data structures. Functions in a program are first transformed into the so-called build-cata form, then fused via a one-step rewrite rule, the cata-build rule. In the process of the transformation to build-cata form we attempt to replace explicit recursion with a fixed pattern of recursion (catamorphism). We analyse in detail the problem of removing (possibly mutually recursive sets of) polynomial datatypes. We have implemented the warm fusion method in the Glasgow Haskell Compiler, which has allowed practical feedback. One important conclusion is that catamorphisms, and fusion in general, deserve a more prominent role in the compilation process. We give detailed measurements of our implementation on a suite of real application programs.
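
For readers unfamiliar with the terminology, here is a minimal Haskell sketch of the build-cata form and the cata-build rule, specialised to lists (the thesis handles arbitrary polynomial datatypes; these definitions are the standard ones, cata for lists being foldr):

```haskell
{-# LANGUAGE RankNTypes #-}

-- List catamorphism: consume a list with a fixed recursion pattern.
cata :: (a -> b -> b) -> b -> [a] -> b
cata k z []     = z
cata k z (x:xs) = k x (cata k z xs)

-- build abstracts a list over its constructors.
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- The one-step cata-build rule:
--   cata k z (build g)  ==>  g k z
-- which deletes the intermediate list entirely.

-- A producer in build form and a consumer in cata form:
upTo :: Int -> [Int]
upTo n = build (\c nil -> let go i | i > n     = nil
                                   | otherwise = c i (go (i + 1))
                          in go 1)

sumL :: [Int] -> Int
sumL = cata (+) 0

-- Before fusion, sumL (upTo 100) builds and consumes a 100-element list;
-- after the cata-build rewrite it runs as a plain accumulating loop.
```
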
105

Memory management in a distributed system of single address space operating systems supporting quality of service

McDonald, Ian Lindsay January 2001 (has links)
The choices provided by an operating system to the application developer for managing memory come in two forms: no choice at all, with the operating system making all decisions about managing memory; or the choice to implement virtual memory management specific to the individual application. The second of these choices is, for all intents and purposes, the same as the first: no choice at all. For many application developers, the cost of implementing a customised virtual memory management system is simply too high. The result is that, regardless of the level of flexibility available, the developer ends up using the system-provided default. Further exacerbating the problem is the tendency for operating system developers to be extremely unimaginative when providing that same default. Advancements in virtual memory techniques such as prefetching, remote paging, compressed caching, and user-level page replacement, coupled with the provision of user-level virtual memory management, should have heralded a new era of choice and an application-centric approach to memory management. Unfortunately, this has failed to materialise. This dissertation describes the design and implementation of the Heracles virtual memory management system. The Heracles approach is one of inclusion rather than exclusion. The main goal of Heracles is to provide an extensible environment that is configurable to the extent of providing application-centric memory management without the need for application developers to implement their own. However, should the application developer wish to provide a more specialised implementation for all or any part of Heracles, the system is constructed around well-defined interfaces that allow new implementations to be "plugged in" where required. The result is a virtual memory management hierarchy that is highly configurable, highly flexible, and can be adapted at run time to meet new phases in the application's behaviour. Furthermore, different parts of an application's address space can have different hierarchies associated with managing their memory.
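
Heracles' actual interfaces are not reproduced here, but the "plug in an implementation" idea can be sketched in Haskell as a record of operations behind which any policy can sit (all names below are hypothetical, ours rather than Heracles'):

```haskell
-- A hypothetical sketch of a pluggable page-replacement interface:
-- policy state is abstract (s), so implementations can be swapped freely.
newtype Page = Page Int deriving (Eq, Show)

data ReplacementPolicy s = ReplacementPolicy
  { onAccess :: Page -> s -> s       -- note that a page was touched
  , victim   :: s -> (Page, s)       -- choose a page to evict
  }

-- One possible implementation: least-recently-used over a list,
-- most recent at the head. Assumes a non-empty resident set.
lru :: ReplacementPolicy [Page]
lru = ReplacementPolicy
  { onAccess = \p s -> p : filter (/= p) s
  , victim   = \s -> (last s, init s)
  }
```

An application-specific policy would simply supply a different record, leaving the rest of the memory-management hierarchy untouched.
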
106

Guidelines and infrastructure for the design and implementation of highly adaptive, context-aware, mobile, peer-to-peer systems

Bell, Marek January 2007 (has links)
Through a thorough review of existing literature and an extensive study of two large ubicomp systems, problems are identified with current mobile design practices, infrastructures and a lack of required software. From these problems, a set of guidelines for the design of mobile, peer-to-peer, context-aware systems is derived. Four key items of software infrastructure that are desirable but currently unavailable for mobile systems are identified. Each of these items of software is subsequently implemented, and the thesis describes each one and at least one system in which it was used and trialled. The four items of mobile software infrastructure are:

An 802.11 wireless driver that is capable of automatically switching between ad hoc and infrastructure networks when appropriate, combined with a peer discovery mechanism that can be used to identify peers and the services running and available on them.

A hybrid positioning system that combines GPS, 802.11 and GSM positioning techniques to deliver location information that is almost constantly available, and that can collect further 802.11 and GSM node samples during normal use of the system.

A distributed recommendation system that, in addition to providing standard recommendations, can determine the relevance of data stored on the mobile device. This information is used by the same system to prioritise data when exchanging information with peers, and to determine data that may be culled when the system is low on storage space without greatly affecting overall system performance.

An infrastructure for creating highly adaptive, context-aware mobile applications. The Domino infrastructure allows software functionality to be recommended, exchanged between peers, installed, and executed at runtime.
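
As a flavour of the hybrid positioning item above, here is a minimal Haskell sketch of the fallback idea, preferring the most accurate source currently available (types and names are invented for illustration; the real system also collects samples and fuses sources rather than merely falling back):

```haskell
import Control.Applicative ((<|>))

-- A position fix and where it came from (hypothetical types).
data Fix = Fix { lat :: Double, lon :: Double, source :: String }
  deriving Show

-- Each technique either yields a fix or fails; prefer GPS, then 802.11,
-- then GSM, so some location estimate is almost always available.
hybridFix :: Maybe Fix -> Maybe Fix -> Maybe Fix -> Maybe Fix
hybridFix gps wifi gsm = gps <|> wifi <|> gsm
```
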
107

Managing a reconfigurable processor in a general purpose workstation environment

Dales, Michael Winston January 2003 (has links)
This dissertation considers the problems associated with using Field Programmable Logic (FPL) within a processor to accelerate applications in the context of a general-purpose workstation, where this scarce resource must be shared fairly and securely by the operating system among a dynamic set of competing applications, with no prior knowledge of their resource usage requirements. A solution to these problems is proposed in the Proteus System, which comprises a suitable set of operating system mechanisms, with appropriate hardware support, allowing the FPL resource to be virtualised and managed without applications having to be aware of how the resource is allocated. We also describe a suitable programming model that allows applications to be built with traditional techniques incorporating custom instructions. We demonstrate the practicability of this system by simulating an ARM processor extended with a suitably structured FPL resource, upon which we run multiple applications that make use of one or more blocks of FPL, all managed by a custom operating system kernel. We demonstrate that applications maintain a performance gain despite having to share the FPL resource with other applications. This dissertation concludes that it is possible for an operating system to manage a reconfigurable processor within the context of a workstation environment, given suitable hardware support, without negating the benefit of having the FPL resource, even during times of high load. It also concludes that the integration of application-specific custom hardware into the system is manageable within existing programming techniques.
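
The Proteus mechanisms themselves are described in the dissertation; purely as a sketch of the underlying resource problem, here is a hypothetical Haskell model of a fixed pool of FPL blocks shared among applications, evicting the least-recently-used block under pressure (all names invented, not Proteus' design):

```haskell
type App   = String
type Block = Int

data FplState = FplState
  { freeBlocks :: [Block]          -- blocks not currently configured
  , owners     :: [(Block, App)]   -- resident blocks, most recently used first
  }

-- Grant a block to an application, evicting the least-recently-used
-- resident block when the pool is exhausted (the evicted application's
-- circuit would have to be reloaded on its next use). Assumes the pool
-- contains at least one block in total.
requestBlock :: App -> FplState -> (Block, FplState)
requestBlock app st = case freeBlocks st of
  (b : bs) -> (b, st { freeBlocks = bs, owners = (b, app) : owners st })
  []       -> let (b, _) = last (owners st)
              in (b, st { owners = (b, app) : init (owners st) })
```

The point of virtualisation is that applications call something like requestBlock implicitly, via the OS, and never observe which physical block they receive.
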
108

IfD - information for discrimination

Cai, Di January 2004 (has links)
The problem of term mismatch and ambiguity has long been a serious and outstanding one in IR. It can result in the system formulating an incomplete and imprecise query representation, leading to a failure of retrieval. Many query reformulation methods have been proposed to address the problem. These methods employ term classes which are considered to be related to individual query terms. They are hindered by the computational cost of term classification, and by the fact that the terms in a class are generally related to some specific query term belonging to the class rather than relevant to the context of the query as a whole. In this thesis we propose a series of methods for automatic query reformulation (AQR). The methods constitute a formal model called IfD, standing for Information for Discrimination. In IfD, each discrimination measure is modelled as information contained in terms supporting one of two opposite hypotheses. The extent of association of terms with the query can thus be defined based directly on the discrimination. The strength of association of candidate terms with the query can then be computed, and good terms can be selected to enhance the query. Justifications for IfD are presented from several aspects: formal interpretations of information for discrimination are introduced to show its soundness; criteria are put forward to show its rationality; properties of discrimination measures are analysed to show its appropriateness; examples are examined to show its usability; extension is discussed to show its potential; implementation is described to show its feasibility; comparisons with other methods are made to show its flexibility; and improvements in retrieval performance are exhibited to show its capability. Our conclusion is that the advantages and promise of IfD should make it an indispensable methodology for AQR, which we believe can be an effective technique for improving retrieval performance.
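
The precise IfD measures are defined in the thesis; as a hedged stand-in, the sketch below scores candidate expansion terms by a KL-divergence-style contribution, one standard way of measuring how strongly a term discriminates a pseudo-relevant document set from the whole collection (the function names and the choice of measure are ours, not IfD's):

```haskell
import Data.List (sortOn)
import Data.Ord (Down (..))

-- Information a term contributes for discriminating the pseudo-relevant
-- set (relative frequency p) from the whole collection (frequency q).
termScore :: Double -> Double -> Double
termScore p q
  | p <= 0 || q <= 0 = 0
  | otherwise        = p * logBase 2 (p / q)

-- Keep the n best-discriminating candidates to enhance the query.
expandQuery :: Int -> [(String, Double, Double)] -> [String]
expandQuery n cands =
  take n [ t | (t, p, q) <- sortOn (\(_, p, q) -> Down (termScore p q)) cands ]
```
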
109

Probability models for information retrieval based on divergence from randomness

Amati, Giambattista January 2003 (has links)
This thesis devises a novel methodology, based on probability theory, for the construction of term-weighting models in Information Retrieval. Our term-weighting functions are created within a general framework made up of three components, each built independently of the others. We obtain the term-weighting functions from the general model in a purely theoretical way, instantiating each component with different probability distribution forms. The thesis begins by investigating the nature of the statistical inference involved in Information Retrieval. We explore the estimation problem underlying the process of sampling. De Finetti's theorem is used to show how to convert the frequentist approach into Bayesian inference, and we present and employ the derived estimation techniques in the context of Information Retrieval. We initially pay close attention to the construction of the basic sample spaces of Information Retrieval. The notion of single or multiple sampling from different populations in the context of Information Retrieval is extensively discussed and used throughout the thesis. The language modelling approach and the standard probabilistic model are studied under the same foundational view and are experimentally compared to the divergence-from-randomness approach. In revisiting the main information retrieval models in the literature, we show that even the language modelling approach can be exploited to assign term-frequency normalisation to the models of divergence from randomness. We finally introduce a novel framework for query expansion. This framework is based on the models of divergence from randomness and can be applied to arbitrary models of IR, including divergence-based, language modelling and probabilistic models. We have performed a very large number of experiments, and the results show that the framework generates highly effective Information Retrieval models.
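
To make the three-component construction concrete, here is one instantiation written out in Haskell: the In L2 model, combining an inverse-document-frequency randomness model (Inf1), a Laplace first normalisation (Inf2), and the standard second normalisation of term frequency by document length. The formulas follow the published divergence-from-randomness framework; the variable names and free parameter c are as commonly used, and this sketch may differ in detail from the thesis's exact formulation:

```haskell
-- Weight of a term in a document under the In L2 DFR model.
dfrInL2
  :: Double  -- tf    : raw term frequency in the document
  -> Double  -- dl    : document length
  -> Double  -- avgdl : average document length in the collection
  -> Double  -- n     : number of documents containing the term
  -> Double  -- nDocs : total number of documents (N)
  -> Double  -- c     : free term-frequency normalisation parameter
  -> Double
dfrInL2 tf dl avgdl n nDocs c = inf2 * inf1
  where
    tfn  = tf * logBase 2 (1 + c * avgdl / dl)        -- normalisation 2
    inf1 = tfn * logBase 2 ((nDocs + 1) / (n + 0.5))  -- randomness model I(n)
    inf2 = 1 / (tfn + 1)                              -- Laplace after-effect (L)
```

Other instantiations of the framework arise by swapping the randomness model (e.g. Poisson or geometric) or the after-effect, leaving the rest unchanged, which is exactly the modularity the three-component design provides.
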
110

Functional programming and graph algorithms

King, David Jonathan January 1996 (has links)
This thesis is an investigation of graph algorithms in the non-strict purely functional language Haskell. Emphasis is placed on the importance of achieving an asymptotic complexity as good as that attainable in conventional languages. This is achieved by using the monadic model for incorporating actions on state. Work on the monadic model was carried out at Glasgow University by Wadler, Peyton Jones, and Launchbury in the early nineties and has opened up many diverse application areas. One area is the ability to express data structures that require sharing. Although graphs themselves are not presented in this style, the data structures that graph algorithms use are. Several examples of stateful algorithms are given, including union/find for disjoint sets and the linear-time sort binsort. The graph algorithms presented are not new, but are traditional algorithms recast in a functional setting. Examples include strongly connected components, biconnected components, Kruskal's minimum-cost spanning tree, and Dijkstra's shortest paths. The presentation is lucid, giving more insight than usual. The functional setting allows for complete calculational-style correctness proofs, which is demonstrated with many examples. The benefits of using a functional language for expressing graph algorithms are quantified by looking at the issues of execution time, asymptotic complexity, correctness, and clarity, in comparison with traditional approaches. The intention is to be as objective as possible, pointing out both the weaknesses and the strengths of using a functional language.
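
In the spirit of this work (and of King and Launchbury's treatment of depth-first search), here is a minimal sketch of a stateful graph algorithm in Haskell: depth-first search with mutable visited-marks encapsulated in the ST monad, so the function remains externally pure. This is our illustration, not code from the thesis:

```haskell
import Control.Monad.ST (ST, runST)
import Data.Array (Array, bounds, listArray, (!))
import Data.Array.ST (STUArray, newArray, readArray, writeArray)

type Graph = Array Int [Int]   -- adjacency lists indexed by vertex

-- Allocate the mutable mark array (a helper keeps the types unambiguous).
newMarks :: (Int, Int) -> ST s (STUArray s Int Bool)
newMarks bnds = newArray bnds False

-- Depth-first search from a root. The O(V + E) complexity depends on
-- constant-time mutable marks, which is exactly what the ST monad
-- provides without compromising purity.
dfs :: Graph -> Int -> [Int]
dfs g root = runST $ do
  marks <- newMarks (bounds g)
  let visit v = do
        seen <- readArray marks v
        if seen
          then return []
          else do
            writeArray marks v True
            rest <- mapM visit (g ! v)
            return (v : concat rest)
  visit root

-- Example: dfs (listArray (0, 3) [[1,2], [3], [3], []]) 0 == [0,1,3,2]
```
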
