211

Application and network traffic correlation of grid applications

Paisley, Jonathan January 2006
Dynamic engineering of application-specific network traffic is becoming more important for applications that consume large amounts of network resources, in particular bandwidth. Since traditional traffic engineering approaches are static, they cannot address this trend; hence there is a need for real-time traffic classification to enable dynamic traffic engineering. A packet flow monitor has been developed that operates at full Gigabit Ethernet line rate, reassembling all TCP flows in real time. The monitor can be used to classify and analyse both plain-text and encrypted application traffic. This dissertation shows, under reasonable assumptions, 100% accuracy for the detection of bulk data traffic for applications when control traffic is clear text, and also 100% accuracy for encrypted GridFTP file transfers when data channels are authenticated. For non-authenticated GridFTP data channels, 100% accuracy is also achieved, provided the transferred files are tens of megabytes or more in size. The monitor is able to identify bulk flows resulting from clear-text control protocols before they begin. Bulk flows resulting from encrypted GridFTP control sessions are identified before the onset of bulk data (with data channel authentication) or within two seconds (without data channel authentication). Finally, the system is able to deliver an event to a local publish/subscribe server within 1 ms of identification within the monitor. Therefore, the event delivery introduces negligible delay in the ability of the network management system to react to the event.
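To illustrate the kind of inference involved, the following is a minimal sketch (hypothetical Python, not code from the thesis) of how a monitor could predict a bulk data flow from a clear-text control channel: a passive-mode reply reveals the data-channel endpoint before the transfer starts, so the flow can be labelled and an event published immediately. The names FlowMonitor and publish are assumptions made for this sketch.

    import re
    import time

    # A "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" reply reveals the endpoint
    # of the forthcoming data connection on a clear-text FTP/GridFTP control channel.
    PASV_REPLY = re.compile(r"227 .*?\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)")

    class FlowMonitor:
        def __init__(self, publish):
            self.publish = publish      # callback delivering events to a pub/sub server
            self.expected = {}          # (ip, port) -> control session that announced it

        def on_control_data(self, session, payload):
            # Called with reassembled bytes from a TCP control-channel flow.
            m = PASV_REPLY.search(payload.decode("ascii", errors="ignore"))
            if m:
                ip = ".".join(m.group(i) for i in range(1, 5))
                port = int(m.group(5)) * 256 + int(m.group(6))
                self.expected[(ip, port)] = session
                # The bulk flow is identified before it begins.
                self.publish({"type": "bulk-expected", "dst": (ip, port), "t": time.time()})

        def on_new_flow(self, src, dst):
            # Called when the first packet of a new TCP flow is observed.
            if dst in self.expected:
                self.publish({"type": "bulk-started", "dst": dst,
                              "session": self.expected.pop(dst), "t": time.time()})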
212

Positioning articulated figures

Etienne, Stéphane January 1998
Many animation systems rely on key-frames or poses to produce animated sequences of figures we interpret as articulated, e.g. the skeleton of a character. Producing poses is a difficult problem which can be addressed using techniques such as forward and inverse kinematics; however, animators often find these techniques difficult to work with. The work presented in this thesis proposes an innovative technique that approaches this problem from an entirely different direction to conventional techniques, based on Interactive Genetic Algorithms (IGAs). IGAs are derived from Genetic Algorithms (GAs), evolutionary search tools inspired by the theory of evolution first described by Darwin in 1859, and they have been used successfully to produce abstract pictures, sculptures and abstract animation sequences. Whereas conventional techniques assist the animator in producing poses, with IGAs the user assists the computer in its search for a good solution. Unfortunately, this concept alone is too weak to allow efficient exploration of the space of poses, as the user requires more control over the evolutionary process. A new concept was therefore introduced to let the user specify directly what is of interest, namely a limb or a set of limbs; this information is used by the computer to greatly enhance the search. Users build a pose by selecting the limbs which are of interest, and that pose is provided to the computer as a seed to produce a new generation of poses. The degree of similarity is specified directly by the user: typically it is small at the beginning and increases as the process approaches convergence. The power of this new technique is demonstrated by two evaluations, one using a set of non-expert users and one using the author as a single expert user. The first evaluation highlighted the high cognitive demands of the new technique, whereas the second showed that, given sufficient training, the new technique becomes much faster than the other two conventional techniques.
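A minimal sketch of one generation step in such an interactive scheme is given below (hypothetical Python, invented for illustration rather than taken from the thesis): limbs the user has marked as of interest are copied unchanged from the seed pose, the remaining joints are perturbed, and the perturbation shrinks as the user raises the similarity setting towards convergence.

    import random

    def next_generation(seed_pose, selected_limbs, similarity, population=9):
        # seed_pose: dict limb -> joint angle in degrees; selected_limbs: limbs the
        # user wants preserved; similarity in [0, 1], higher meaning smaller changes.
        spread = 90.0 * (1.0 - similarity)            # mutation radius in degrees
        candidates = []
        for _ in range(population):
            pose = {}
            for limb, angle in seed_pose.items():
                if limb in selected_limbs:
                    pose[limb] = angle                # limb of interest: kept as selected
                else:
                    pose[limb] = angle + random.uniform(-spread, spread)
            candidates.append(pose)
        return candidates

    # Example: keep the arms fixed and let the legs evolve.
    seed = {"l_arm": 10.0, "r_arm": -10.0, "l_leg": 0.0, "r_leg": 0.0}
    generation = next_generation(seed, {"l_arm", "r_arm"}, similarity=0.7)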
213

SUIT : a methodology and framework for Selection of User Interface development Tools

Lumsden, Joanna Marie January 2001
This thesis describes the findings of an industrial survey that identified the context of use for software development projects. This context of use is parameterised and combined with a categorisation of UIDT functionality to produce an extensible and tailorable reference model or framework for UIDT evaluation and selection. An accompanying methodology - which together with the framework is known as SUIT (Selection of User Interface Development Tools) - guides the use of the framework such that project-specific context of use can be modelled and thereafter systematically considered during UIDT selection. This thesis proposes that such focussed and documented consideration of context of use during UIDT selection increases the quality of a selection decision and therefore facilitates reuse of UIDT evaluation and selection results. An evaluative study is described which demonstrates the effectiveness and viability of the SUIT framework and methodology as a paper-based UIDT evaluation facility. The same study also identifies the need for a computer-based tool to support the management of UIDT evaluation data and to assist its comparison and analysis. Experiences with this study, the results of the industrial study, and the structure of the framework and methodology provided input into a set of requirements for a computer-based visualisation environment that supports the comparison and analysis of UIDT data. The SUIT data visualisation environment and its qualitative evaluation are described. The evaluation results identify the usefulness and practicability of the SUIT approach when supported by the visualisation environment. They also suggest a number of refinements and extensions to the tool. The results provide an initial corpus of knowledge regarding practical strategies used by evaluators to compare and analyse UIDT evaluation data. These strategies are modelled using a novel purpose-built graphical notation that focuses on sequencing, flexibility, and patterns of activity.
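As a rough illustration of context-driven selection (a sketch only: the criteria and weights below are invented and are not SUIT's categorisation of UIDT functionality), a project-specific context of use can be treated as a set of weighted criteria against which each candidate tool is scored:

    def score_tool(tool_ratings, context_weights):
        # tool_ratings and context_weights map criterion -> value in [0, 1];
        # criteria absent from the project context carry no weight.
        total = sum(context_weights.values())
        return sum(tool_ratings.get(c, 0.0) * w for c, w in context_weights.items()) / total

    context = {"cross-platform": 0.9, "rapid prototyping": 0.7, "team experience": 0.4}
    tools = {
        "Tool A": {"cross-platform": 0.8, "rapid prototyping": 0.5, "team experience": 0.9},
        "Tool B": {"cross-platform": 0.4, "rapid prototyping": 0.9, "team experience": 0.6},
    }
    ranking = sorted(tools, key=lambda t: score_tool(tools[t], context), reverse=True)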
214

Supporting formal reasoning about functional programs

Collins, Graham Richard McFarlane January 2001
It is often claimed that functional programming languages, and in particular pure functional languages, are suitable for formal reasoning. This claim is supported by the fact that many people in the functional programming community do reason about languages and programs in a formal or semi-formal way. Different reasoning principles, such as equational reasoning, induction and co-induction, are used, depending on the nature of the problem. Using a computer program to check the application of rules and to mechanise the tedious bookkeeping involved can simplify proofs and provide more confidence in their correctness. When reasoning about programs, this can also allow experiments with new rules and reasoning styles, where a user may not be confident about structuring a proof on paper. Checking the applicability of a rule can eliminate the risk of mistakes caused by misunderstanding the theory being used. Just as there are different ways in which formal or informal reasoning can be applied in functional programming, there are different ways in which tools can be provided to support this reasoning. This thesis describes an investigation of how to develop a mechanised reasoning system to allow reasoning about algorithms as a functional programmer would write them, not an encoding of the algorithm into a significantly different form. In addition, this work aims to develop a system to support a user who is not a theorem proving expert or an expert in the theoretical foundations of functional programming. The work is aimed towards a system that could be used by a functional programmer developing real programs and wishing to prove some or all of the programs correct or to prove that two programs are equivalent.
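As an example of the style of reasoning in question (a textbook illustration, not a proof taken from the thesis), the property length (xs ++ ys) = length xs + length ys is established by structural induction on xs, exactly the kind of step-by-step bookkeeping a mechanised system can check:

    Base case (xs = []):
      length ([] ++ ys) = length ys                    -- definition of (++)
                        = 0 + length ys
                        = length [] + length ys        -- definition of length

    Inductive case (xs = x:xs'), assuming length (xs' ++ ys) = length xs' + length ys:
      length ((x:xs') ++ ys) = length (x : (xs' ++ ys))     -- definition of (++)
                             = 1 + length (xs' ++ ys)       -- definition of length
                             = 1 + (length xs' + length ys) -- induction hypothesis
                             = length (x:xs') + length ys   -- definition of length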
215

Performance analysis of wormhole routing in multicomputer interconnection networks

Sarbazi-Azad, Hamid January 2001
Perhaps the most critical component in determining the ultimate performance potential of a multicomputer is its interconnection network, the hardware fabric supporting communication among individual processors. The message latency and throughput of such a network are affected by many factors of which topology, switching method, routing algorithm and traffic load are the most significant. In this context, the present study focuses on a performance analysis of k-ary n-cube networks employing wormhole switching, virtual channels and adaptive routing, a scenario of especial interest to current research. This project aims to build upon earlier work in two main ways: constructing new analytical models for k-ary n-cubes, and comparing the performance merits of cubes of different dimensionality. To this end, some important topological properties of k-ary n-cubes are explored initially; in particular, expressions are derived to calculate the number of nodes at/within a given distance from a chosen centre. These results are important in their own right but their primary significance here is to assist in the construction of new and more realistic analytical models of wormhole-routed k-ary n-cubes. An accurate analytical model for wormhole-routed k-ary n-cubes with adaptive routing and uniform traffic is then developed, incorporating the use of virtual channels and the effect of locality in the traffic pattern. New models are constructed for wormhole k-ary n-cubes, with the ability to simulate behaviour under adaptive routing and non-uniform communication workloads, such as hotspot traffic, matrix-transpose and digit-reversal permutation patterns. The models are equally applicable to unidirectional and bidirectional k-ary n-cubes and are significantly more realistic than any in use up to now. With this level of accuracy, the effect of each important network parameter on the overall network performance can be investigated in a more comprehensive manner than before. Finally, k-ary n-cubes of different dimensionality are compared using the new models. The comparison takes account of various traffic patterns and implementation costs, using both pin-out and bisection bandwidth as metrics. Networks with both normal and pipelined channels are considered. While previous similar studies have only taken account of network channel costs, our model incorporates router costs as well thus generating more realistic results. In fact the results of this work differ markedly from those yielded by earlier studies which assumed deterministic routing and uniform traffic, illustrating the importance of using accurate models to conduct such analyses.
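The topological quantity mentioned above can be illustrated with a brute-force check (a hypothetical Python sketch that merely enumerates the torus; the thesis derives closed-form expressions rather than enumerating):

    from itertools import product

    def nodes_at_distance(k, n):
        # Count nodes of a bidirectional k-ary n-cube at each Lee distance from a
        # chosen centre; each dimension contributes the shorter way around its ring.
        centre = (0,) * n
        counts = {}
        for node in product(range(k), repeat=n):
            d = sum(min(abs(a - b), k - abs(a - b)) for a, b in zip(node, centre))
            counts[d] = counts.get(d, 0) + 1
        return counts

    # Example: in an 8-ary 2-cube (an 8x8 torus) every node has 4 neighbours at distance 1.
    print(nodes_at_distance(8, 2))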
216

A programming logic for Java bytecode programs

Quigley, Claire Louise January 2004
One significant disadvantage of interpreted bytecode languages, such as Java, is their low execution speed in comparison to compiled languages like C. The mobile nature of bytecode adds to the problem, as many checks are necessary to ensure that downloaded code from untrusted sources is rendered as safe as possible. But there do exist ways of speeding up such systems. One approach is to carry out static type checking at load time, as in the case of the Java Bytecode Verifier. This reduces the number of runtime checks that must be done and also allows certain instructions to be replaced by faster versions. Another approach is the use of a Just In Time (JIT) Compiler, which takes the bytecode and produces corresponding native code at runtime. Some JIT compilers also carry out some code optimization. There are, however, limits to the amount of optimization that can safely be done by the Verifier and JITs; some operations simply cannot be carried out safely without a certain amount of runtime checking. But what if it were possible to prove that the conditions the runtime checks guard against would never arise in a particular piece of code? In this case it might well be possible to dispense with these checks altogether, allowing optimizations not feasible at present. In addition to this, because of time constraints, current JIT compilers tend to produce acceptable code as quickly as possible, rather than producing the best code possible. By removing the burden of analysis from them it may be possible to change this. We demonstrate that it is possible to define a programming logic for bytecode programs that allows the proof of bytecode programs containing loops. The instructions available to use in the programs are currently limited, but the basis is in place to extend these. The development of this logic is non-trivial and addresses several difficult problems engendered by the unstructured nature of bytecode programs.
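A simple source-level analogue (an illustration in standard Hoare-logic notation, not the bytecode logic developed in the thesis) shows the kind of fact such a proof makes available to an optimiser. For the loop

    { 0 <= n <= a.length }
    i := 0;
    while i < n do  a[i] := 0;  i := i + 1  od
    { i = n }

the invariant 0 <= i <= n <= a.length holds on every iteration, so inside the body the access a[i] is provably within bounds and the corresponding runtime check could, in principle, be elided.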
217

Catamorphism-based program transformations for non-strict functional languages

Németh, László January 2000
In functional languages, intermediate data structures are often used as glue to connect separate parts of a program together. These intermediate data structures are useful because they allow modularity, but they are also a cause of inefficiency: each element needs to be allocated, examined, and deallocated. Warm fusion is a program transformation technique which aims to eliminate intermediate data structures. Functions in a program are first transformed into the so-called build-cata form, then fused via a one-step rewrite rule, the cata-build rule. In the process of the transformation to build-cata form we attempt to replace explicit recursion with a fixed pattern of recursion (a catamorphism). We analyse in detail the problem of removing (possibly mutually recursive sets of) polynomial datatypes. We have implemented the warm fusion method in the Glasgow Haskell Compiler, which has allowed practical feedback. One important conclusion is that catamorphisms and fusion in general deserve a more prominent role in the compilation process. We give detailed measurements of our implementation on a suite of real application programs.
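For lists, the cata-build rule specialises to the well-known foldr/build rule, foldr k z (build g) = g k z, and its effect can be sketched in a few lines (hypothetical Python used purely to illustrate the rule; the thesis works on Haskell and general polynomial datatypes):

    def foldr(k, z, xs):
        # List catamorphism: foldr k z [x0, x1, ..., xn] = k(x0, k(x1, ... k(xn, z)))
        out = z
        for x in reversed(xs):
            out = k(x, out)
        return out

    def build(g):
        # Instantiating the abstracted constructors with cons/nil materialises the list.
        return g(lambda x, xs: [x] + xs, [])

    def upto(n):
        # A producer in build form: it abstracts over the list constructors.
        def g(cons, nil):
            out = nil
            for i in range(n - 1, -1, -1):
                out = cons(i, out)
            return out
        return g

    g = upto(5)
    unfused = foldr(lambda x, acc: x + acc, 0, build(g))  # allocates the intermediate list
    fused = g(lambda x, acc: x + acc, 0)                  # fused: no intermediate list is built
    assert unfused == fused == 10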
218

Memory management in a distributed system of single address space operating systems supporting quality of service

McDonald, Ian Lindsay January 2001
The choices provided by an operating system to the application developer for managing memory come in two forms: no choice at all, with the operating system making all decisions about managing memory; or the choice to implement virtual memory management specific to the individual application. The second of these choices is, for all intents and purposes, the same as the first: no choice at all. For many application developers, the cost of implementing a customised virtual memory management system is just too high. The result is that, regardless of the level of flexibility available, the developer ends up using the system-provided default. Further exacerbating the problem is the tendency for operating system developers to be extremely unimaginative when providing that same default. Advancements in virtual memory techniques such as prefetching, remote paging, compressed caching, and user-level page replacement, coupled with the provision of user-level virtual memory management, should have heralded a new era of choice and an application-centric approach to memory management. Unfortunately, this has failed to materialise. This dissertation describes the design and implementation of the Heracles virtual memory management system. The Heracles approach is one of inclusion rather than exclusion. The main goal of Heracles is to provide an extensible environment that is configurable to the extent of providing application-centric memory management without the need for application developers to implement their own. However, should the application developer wish to provide a more specialised implementation for all or any part of Heracles, the system is constructed around well-defined interfaces that allow new implementations to be "plugged in" where required. The result is a virtual memory management hierarchy that is highly configurable, highly flexible, and can be adapted at run-time to meet new phases in the application's behaviour. Furthermore, different parts of an application's address space can have different hierarchies associated with managing their memory.
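A minimal sketch of the "plugged in" idea follows (hypothetical Python with invented names, not Heracles' actual interfaces): one layer of the hierarchy, page replacement, is hidden behind a small interface, a default implementation is supplied, and an application could substitute its own without touching the rest of the system.

    from abc import ABC, abstractmethod
    from collections import OrderedDict

    class ReplacementPolicy(ABC):
        @abstractmethod
        def touch(self, page): ...       # note an access to a resident page
        @abstractmethod
        def insert(self, page): ...      # a page has been brought in
        @abstractmethod
        def evict(self): ...             # choose and remove a victim page

    class LRUPolicy(ReplacementPolicy):
        # Default policy; an application-specific policy plugs in the same interface.
        def __init__(self):
            self.pages = OrderedDict()
        def touch(self, page):
            self.pages.move_to_end(page)
        def insert(self, page):
            self.pages[page] = True
        def evict(self):
            victim, _ = self.pages.popitem(last=False)
            return victim

    class AddressSpaceRegion:
        # A region whose policy can be swapped at run-time as the application's
        # behaviour enters a new phase.
        def __init__(self, policy, resident_limit):
            self.policy, self.limit, self.resident = policy, resident_limit, set()
        def access(self, page):
            if page in self.resident:
                self.policy.touch(page)
                return
            if len(self.resident) >= self.limit:
                self.resident.discard(self.policy.evict())
            self.resident.add(page)
            self.policy.insert(page)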
219

Guidelines and infrastructure for the design and implementation of highly adaptive, context-aware, mobile, peer-to-peer systems

Bell, Marek January 2007
Through a thorough review of existing literature and an extensive study of two large ubicomp systems, problems are identified with current mobile design practices and infrastructures, along with a lack of required software. From these problems, a set of guidelines for the design of mobile, peer-to-peer, context-aware systems is derived. Four key items of software infrastructure that are desirable but currently unavailable for mobile systems are identified. Each of these items of software is subsequently implemented, and the thesis describes each one together with at least one system in which it was used and trialled. These four items of mobile software infrastructure are: an 802.11 wireless driver that is capable of automatically switching between ad hoc and infrastructure networks when appropriate, combined with a peer discovery mechanism that can be used to identify peers and the services running and available on them; a hybrid positioning system that combines GPS, 802.11 and GSM positioning techniques to deliver location information that is almost constantly available, and that can collect further 802.11 and GSM node samples during normal use of the system; a distributed recommendation system that, in addition to providing standard recommendations, can determine the relevance of data stored on the mobile device, information the same system uses to prioritise data when exchanging information with peers and to determine data that may be culled when the system is low on storage space without greatly affecting overall system performance; and the Domino infrastructure for creating highly adaptive, context-aware mobile applications, which allows software functionality to be recommended, exchanged between peers, installed, and executed at runtime.
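The hybrid positioning component, for example, can be sketched as a simple fallback-and-harvest loop (hypothetical Python with invented names, intended only to illustrate the combination strategy described above, not the thesis's implementation):

    class HybridPositioner:
        def __init__(self, gps, wifi, gsm, sample_store):
            self.gps, self.wifi, self.gsm = gps, wifi, gsm
            self.samples = sample_store

        def position(self):
            # Prefer GPS when it has a fix; otherwise fall back to 802.11, then GSM.
            fix = self.gps.current_fix()
            if fix is None:
                fix = self.wifi.estimate() or self.gsm.estimate()
            if fix is not None:
                # Harvest further 802.11/GSM node samples against the best available
                # position so later estimates improve during normal use of the system.
                for node in self.wifi.visible_nodes() + self.gsm.visible_cells():
                    self.samples.add(node, fix)
            return fix   # may be None if no source can currently provide a position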
220

Managing a reconfigurable processor in a general purpose workstation environment

Dales, Michael Winston January 2003
This dissertation considers the problems associated with using Field Programmable Logic (FPL) within a processor to accelerate applications in the context of a general purpose workstation, where this scarce resource will need to be shared fairly and securely by the operating system between a dynamic set of competing applications with no prior knowledge of their resource usage requirements. A solution for these problems is proposed in the Proteus System, which describes a suitable set of operating system mechanisms, with appropriate hardware support, to allow the FPL resource to be virtualised and managed suitably without applications having to be aware of how the resource is allocated. We also describe a suitable programming model that would allow applications to be built with traditional techniques incorporating custom instructions. We demonstrate the practicability of this system by simulating an ARM processor extended with a suitably structured FPL resource, upon which we run multiple applications that make use of one or more blocks of FPL, all of which are managed by a custom operating system kernel. We demonstrate that applications maintain a performance gain despite having to share the FPL resource between other applications. This dissertation concludes that it is possible for an operating system to manage a reconfigurable processor within the context of a workstation environment, provided suitable hardware support, without negating the benefit of having the FPL resource, even during times of high load. It also concludes that the integration of custom hardware associated with applications into the system is manageable within existing programming techniques.
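One way to picture the virtualisation of the FPL resource is as demand paging of circuits (a hypothetical Python sketch with invented names, not the Proteus mechanism itself): a custom instruction whose circuit is not currently configured triggers loading into a free block, evicting the least recently used circuit when none is free.

    from collections import OrderedDict

    class FPLManager:
        def __init__(self, physical_blocks):
            self.free = list(range(physical_blocks))
            self.loaded = OrderedDict()    # circuit id -> physical block, in LRU order

        def execute(self, circuit_id, load_bitstream, run):
            if circuit_id in self.loaded:
                self.loaded.move_to_end(circuit_id)             # hit: already configured
            else:
                if self.free:
                    block = self.free.pop()
                else:
                    _, block = self.loaded.popitem(last=False)  # evict the LRU circuit
                load_bitstream(circuit_id, block)               # reconfigure that block
                self.loaded[circuit_id] = block
            return run(self.loaded[circuit_id])                 # execute the custom instruction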
