31

Fast physical simulation of virtual clothing based on multilevel approximation strategies

Anderson, James N. January 1999 (has links)
This thesis presents a full account of the FIGMENT scheme (Fast Implementation Garment Modelling environmENT) which incorporates a four-point strategy (a simplified physical model, collision volume approximation, progressive meshes and a hybrid rendering algorithm) acting at multiple levels in the modelling process to reduce the quantity and complexity of the computations involved, bringing modelling times from the realm of hours to minutes and seconds whilst maintaining an acceptable level of accuracy and fidelity in the results. The physical model permits garment models obtained by various methods to be used in simulations, incorporates alternative methods of force computation to allow a range of speed-accuracy levels, and provides a robust basis for the other aspects of the scheme. The two methods of collision volume approximation presented enable collision handling in O(n) time rather than the O(n log n) time of optimised polygon-to-polygon detection methods whilst providing other advantages germane to the modelling process. The further development and employment of progressive mesh algorithms permits an additional increase in modelling rates without loss of fidelity. Finally, the use of a hybrid rendering algorithm which combines depth-buffering and depth-sorting techniques effectively masks the minor visual discrepancies introduced by the other points of the scheme and enables the use of multilayered complex garments without resorting to cloth-to-cloth collision methods (both of which would require considerable additional computation to otherwise achieve), whilst only marginally affecting modelling rates. When fully implemented, the FIGMENT scheme can reduce modelling times by a factor of 80 in typical cases.
The aim of the thesis is to detail the design principles and the algorithms which together comprise the FIGMENT scheme and to demonstrate by way of example and user tests the benefits typically afforded by implementing a virtual mannequin service based on the scheme.
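The O(n) collision handling mentioned above can be illustrated with a minimal sketch (the geometry and function names are illustrative assumptions, not taken from the thesis): if the mannequin body is approximated by a small, fixed set of bounding spheres, each of the n cloth vertices is tested against only a constant number of volumes, giving O(n) behaviour overall rather than the O(n log n) of optimised polygon-to-polygon detection.

```python
import math

def collide_and_project(vertices, spheres):
    """Push any vertex that penetrates an approximating sphere back to its surface.

    vertices: list of (x, y, z) cloth vertex positions
    spheres:  list of ((cx, cy, cz), radius) body-approximation volumes
    """
    resolved = []
    for vx, vy, vz in vertices:                  # O(n) loop over cloth vertices
        for (cx, cy, cz), r in spheres:          # constant number of spheres
            dx, dy, dz = vx - cx, vy - cy, vz - cz
            d = math.sqrt(dx * dx + dy * dy + dz * dz)
            if 0.0 < d < r:                      # penetration: project outwards
                s = r / d
                vx, vy, vz = cx + dx * s, cy + dy * s, cz + dz * s
        resolved.append((vx, vy, vz))
    return resolved

# A vertex inside a unit sphere at the origin is pushed out to the surface.
out = collide_and_project([(0.5, 0.0, 0.0)], [((0.0, 0.0, 0.0), 1.0)])
print(out[0])  # (1.0, 0.0, 0.0)
```

The constant factor here is the number of spheres, which is independent of cloth resolution; that is what makes the per-vertex cost O(1).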
32

Design and optimisation of scientific programs in a categorical language

Ashby, Thomas James January 2005 (has links)
This thesis presents an investigation into the use of advanced computer languages for scientific computing, an examination of performance issues that arise from using such languages for such a task, and a step towards achieving portable performance from compilers by attacking these problems in a way that compensates for the complexity of and differences between modern computer architectures. The language employed is Aldor, a functional language from computer algebra, and the scientific computing area is a subset of the family of iterative linear equation solvers applied to sparse systems. The linear equation solvers that are considered have much common structure, and this is factored out and represented explicitly in the language as a framework, by means of categories and domains. The flexibility introduced by decomposing the algorithms and the objects they act on into separate modules has a strong performance impact due to its negative effect on temporal locality. This necessitates breaking the barriers between modules to perform cross-component optimisation. In this instance the task reduces to one of collective loop fusion and array contraction. Traditional approaches to this problem rely on static heuristics and simplified machine models that do not deal well with the complex trade-offs involved in targeting modern computer architectures. To rectify this we develop a technique called iterative collective loop fusion that empirically evaluates different candidate transformations in order to select the best available. We apply our technique to programs derived from the iterative solver framework to demonstrate its effectiveness, and compare it against other techniques for collective loop fusion from the literature, and more traditional approaches such as using Fortran, C and/or high-performance library routines. The use of a high-level categorical language such as Aldor brings important benefits in terms of elegance of expression, comprehensibility, and code reuse.
Iterative collective loop fusion outperforms the other collective loop fusion techniques. Applying it to the iterative solver framework gives programs with performance that is comparable with the traditional approaches.
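The loop fusion and array contraction at the heart of the technique can be shown with a toy sketch (in Python purely for exposition, not Aldor, and not the thesis's actual compiler pass): two loops communicating through a temporary array have poor temporal locality, and fusing them lets the temporary contract to a scalar.

```python
def unfused(xs):
    tmp = [x * 2.0 for x in xs]      # loop 1 writes a full temporary array
    return [t + 1.0 for t in tmp]    # loop 2 re-reads it after it has left cache

def fused(xs):
    out = []
    for x in xs:                     # one fused loop: the temporary array
        t = x * 2.0                  # contracts to the scalar t
        out.append(t + 1.0)
    return out

assert unfused([1.0, 2.0, 3.0]) == fused([1.0, 2.0, 3.0]) == [3.0, 5.0, 7.0]
```

An iterative (empirical) approach would time several such fusion candidates on the target machine and keep the fastest, rather than predicting the winner from a static machine model.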
33

Prescriptive formalism for constructing domain-specific evolutionary algorithms

Surry, Patrick David January 1998 (has links)
It has been widely recognised in the computational intelligence and machine learning communities that the key to understanding the behaviour of learning algorithms is to understand what representation is employed to capture and manipulate knowledge acquired during the learning process. However, traditional evolutionary algorithms have tended to employ a fixed representation space (binary strings), in order to allow the use of standardised genetic operators. This approach leads to complications for many problem domains, as it forces a somewhat artificial mapping between the problem variables and the canonical binary representation, especially when there are dependencies between problem variables (e.g. problems naturally defined over permutations). This often obscures the relationship between genetic structure and problem features, making it difficult to understand the actions of the standard genetic operators with reference to problem-specific structures. This thesis instead advocates making the representation of solutions the explicit focus, in order to highlight the way in which the genetic operators (and resulting search algorithms) form and test hypotheses about the relationship between observed problem structure and fitness. It is clear that any search algorithm must limit the class of hypotheses which it is able to learn (its bias), if it is to select the most accurate of those hypotheses efficiently. We demonstrate this in the context of evolutionary search by exploring the so-called "no free lunch" results, and argue that it is the chosen representation which determines what kinds of hypotheses can be formed and tested by the algorithm. To do this, we exploit a general formalism for generating a representation for an arbitrary instance of a given problem domain, using a characterisation of that problem domain which captures beliefs about its structure.
Such a characterisation is simply an explicit set of mathematical statements about the relationship between features of solutions and their fitness values, making it clear that the resulting representations encapsulate all of the domain knowledge which is available to any search algorithm.
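To make the permutation example concrete, here is a hedged sketch of a representation-aware operator (standard order crossover, not the thesis's formalism): it guarantees that every offspring is itself a valid permutation, something a canonical binary encoding with standard operators cannot promise.

```python
def order_crossover(p1, p2, cut1, cut2):
    """Order crossover (OX): copy p1[cut1:cut2] into the child, then fill the
    remaining slots with the missing elements in the order they appear in p2."""
    n = len(p1)
    child = [None] * n
    child[cut1:cut2] = p1[cut1:cut2]
    fill = [g for g in p2 if g not in child]   # elements not yet placed
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

child = order_crossover([0, 1, 2, 3, 4], [4, 3, 2, 1, 0], 1, 3)
print(child)                          # [4, 1, 2, 3, 0]
assert sorted(child) == [0, 1, 2, 3, 4]   # always a valid permutation
```

The operator's bias — which orderings a child can inherit from its parents — is exactly the kind of representational commitment the thesis argues should be made explicit.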
34

Approaches to parallel performance prediction

Howell, Fred January 1996 (has links)
Designing parallel programs is both interesting and difficult. The reason for using a parallel machine is to obtain better performance, but the programmer will have little idea of the performance of a program at design time, and will only find out by actually running it. Design decisions have to be made by guesswork alone. This thesis explores an alternative by providing data sheets describing the performance of parallel building blocks, and then seeing how they may be used in practice. The simplest way of using the data sheets is based on a graphing and equation plotting tool. More detailed design information is available from a "reverse" profiling technique which adapts standard profiling to generate predictions rather than measurements. The ultimate method for prediction is based on discrete event simulation, which allows modelling of all programs but is the most complex to use. The methods are compared, and their suitability for different design problems is discussed.
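The data-sheet idea can be sketched as a cost equation evaluated at design time; the parameter names and values below are illustrative assumptions, not figures from the thesis.

```python
def predicted_send_time(nbytes, t_startup=50e-6, t_per_byte=10e-9):
    """A data-sheet style cost model: predicted time (seconds) for one
    message send, as startup latency plus a per-byte transfer cost."""
    return t_startup + nbytes * t_per_byte

def predicted_exchange(nbytes, rounds):
    """Predicted time for `rounds` send/receive pairs of nbytes each."""
    return 2 * rounds * predicted_send_time(nbytes)

# e.g. 100 rounds of exchanging 1 KB messages
print(f"{predicted_exchange(1024, 100) * 1e3:.2f} ms")  # 12.05 ms
```

A designer can plot such equations against message size or process count before writing any parallel code, which is precisely the guesswork-reducing role the data sheets play.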
35

Automated test sequence generation for finite state machines using genetic algorithms

Derderian, Karnig Agop January 2006 (has links)
Testing software implementations formally specified using finite state automata (FSA) has long been of interest. Such systems include communication protocols and control sections of safety-critical systems. There is extensive literature regarding how to formally validate an FSA-based specification, but testing that an implementation conforms to the specification is still an open problem. Two aspects of FSA-based testing, both NP-hard problems, are discussed in this thesis and then combined. These are the generation of state verification sequences (UIOs) and the generation of sequences of conditional transitions that are easy to trigger. In order to facilitate test sequence generation, a novel representation of the transition conditions and a number of fitness function algorithms are defined. An empirical study of the effectiveness on real FSA-based systems and example FSAs provides some interesting positive results. The use of genetic algorithms (GAs) makes these problems scalable for large FSAs. The experiments used a software tool that was developed in Java.
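A toy sketch of the state verification property involved (the machine and helper names are invented for illustration): a unique input/output (UIO) sequence for a state is an input sequence whose output sequence distinguishes that state from every other state — exactly the property a GA fitness function can reward candidate sequences for approaching.

```python
def run(fsm, state, inputs):
    """Return the output sequence the FSM produces from `state` on `inputs`.
    fsm maps (state, input) -> (next_state, output)."""
    out = []
    for i in inputs:
        state, o = fsm[(state, i)]
        out.append(o)
    return out

def is_uio(fsm, states, state, inputs):
    """True iff `inputs` is a UIO for `state`: its output differs from the
    output produced from every other state."""
    target = run(fsm, state, inputs)
    return all(run(fsm, s, inputs) != target for s in states if s != state)

# Toy 2-state machine.
fsm = {
    ("A", "x"): ("B", 0), ("A", "y"): ("A", 1),
    ("B", "x"): ("A", 1), ("B", "y"): ("B", 1),
}
assert is_uio(fsm, ["A", "B"], "A", ["x"])       # output 0 from A, 1 from B
assert not is_uio(fsm, ["A", "B"], "A", ["y"])   # output 1 from both states
```

Exhaustively searching for UIOs is what makes the problem NP-hard in general; a GA instead evolves input sequences scored by how many states they distinguish.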
36

Automatic multilevel feature abstraction in adaptable machine vision systems

Rose, Valerie January 2010 (has links)
No description available.
37

Distributed simulations on a computational Grid

Ming, Jiang January 2006 (has links)
In order to simulate a large-scale and complex model, a distributed simulation may need to harness and organise a huge amount of computing and network resources. A computational Grid is a novel distributed computing system that is able to organise virtually unlimited computing and network resources together to meet the resource requirements of various computationally intensive problems. This thesis focuses on the issues of the integration of a distributed simulation and a computational Grid. In particular, the dynamic and heterogeneous nature of Grid resources and the potentially high communication latencies between these resources are identified as the main challenges to the performance of a distributed simulation running on a computational Grid. This thesis proposes a generic framework that provides a systematic solution to tackle these challenges and supports the execution, management and optimisation of a distributed simulation program on a computational Grid. A prototype of the framework is also implemented and evaluated. Within the prototype implementation, an adaptive control mechanism for optimising the execution of a Time Warp Parallel Discrete Event Simulation program on a computational Grid is developed and evaluated.
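The optimism that Time Warp exploits can be sketched in miniature (a toy logical process, not the thesis's framework): events are executed speculatively in timestamp order as they arrive, and when a "straggler" message arrives in the process's past, state is rolled back to a saved snapshot and the affected events are re-executed.

```python
class LogicalProcess:
    def __init__(self):
        self.lvt = 0            # local virtual time
        self.state = 0
        self.processed = []     # (timestamp, value) pairs, in timestamp order
        self.snapshots = {}     # timestamp -> state saved before that event

    def _execute(self, ts, value):
        self.snapshots[ts] = self.state
        self.state += value     # stand-in for arbitrary event handling
        self.processed.append((ts, value))
        self.lvt = ts

    def handle(self, ts, value):
        if ts >= self.lvt:      # event is in timestamp order: just execute
            self._execute(ts, value)
            return
        # Straggler: roll back every event after ts, then re-execute them.
        redo = [(t, v) for (t, v) in self.processed if t > ts]
        self.state = self.snapshots[redo[0][0]]            # restore snapshot
        self.processed = [(t, v) for (t, v) in self.processed if t < ts]
        for t, v in sorted([(ts, value)] + redo):
            self._execute(t, v)

lp = LogicalProcess()
for ts, v in [(1, 10), (3, 30), (4, 40)]:
    lp.handle(ts, v)
lp.handle(2, 20)                 # straggler at time 2 forces a rollback
assert lp.state == 100           # same result as a conservative execution
```

On a Grid, the high and variable latencies make stragglers more frequent, which is why adaptively controlling the degree of optimism matters for performance.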
38

JeX : an implementation of a Java exception analysis framework to exploit potential optimisations

Stevens, Andrew January 2002 (has links)
Exceptions in Java are generated by explicit program syntax or implicit events at runtime. The potential control flow paths introduced by these implicit exceptions must be represented in static flow graphs. The impact of these additional paths reduces the effectiveness of standard ahead-of-time optimisations. This thesis presents research that focuses on measuring and reducing the effects of these implicit exceptions. In order to conduct this research, a novel static analysis framework, called JeX, has been developed. This tool provides an environment for the analysis and optimisation of Java programs using the bytecode representation. Data generated by this tool clearly shows that implicit exceptions significantly fragment the standard flow graphs used by many intraprocedural optimisation techniques. This fragmentation increases the number of flow dependence relationships and so negatively affects numerous flow analysis techniques. The main contribution of this work is the development of new algorithms that can statically prove that certain runtime exceptions can never occur. Once these exceptions have been extracted, the control flow structures are re-generated without being affected by those potential exceptions. The results show that these algorithms extract 24-29% of all implicit potential exceptions in the eight benchmark programs selected. The novel, partial stack evaluation algorithm is particularly successful at extracting potential null pointer exceptions, with reductions in these of 53-68%. This thesis also provides a simulation of perfect exception extraction by removing the effects of all implicit exceptions in the flow graphs. The secondary contribution of this research is the development of program call graph generation algorithms with novel receiver prediction analysis. This thesis presents a comparative study of the graphs generated using fast but conservative analysis with more effective rapid type analysis algorithms.
This study shows that Java bytecodes are well suited to a fine-grained instance prediction type analysis technique, although this context-sensitive approach does not scale well with larger test programs. The final contribution of this work is the JeX tool itself. This is a generic, whole program analysis system for Java programs. It includes a number of general optimisation modules, algorithms for generating several static analysis data structures and a visualisation interface for viewing all data structures and Java class file contents.
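A minimal sketch of the kind of static proof involved (over a toy instruction list, not JeX's actual partial stack evaluation): a forward pass tracks which references are provably non-null — for instance, the result of a `new` — so the implicit NullPointerException edge for a later field access on such a reference can be deleted from the flow graph.

```python
def extract_null_checks(instrs):
    """instrs: list of (op, var) in execution order.
    Return indices of field accesses whose receiver is provably non-null."""
    non_null = set()
    provable = []
    for idx, (op, var) in enumerate(instrs):
        if op == "new":             # `new` always yields a non-null reference
            non_null.add(var)
        elif op == "load_null":     # an explicit null assignment kills the fact
            non_null.discard(var)
        elif op == "getfield":
            if var in non_null:     # exception edge can safely be removed
                provable.append(idx)
    return provable

prog = [("new", "a"), ("getfield", "a"), ("load_null", "b"), ("getfield", "b")]
assert extract_null_checks(prog) == [1]   # only the access to `a` is safe
```

Each removed exception edge un-fragments a basic block, which is what restores the reach of the standard intraprocedural optimisations discussed above.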
39

Agile computing

Suri, Niranjan January 2008 (has links)
Wirelessly networked dynamic and mobile environments, such as tactical military environments, pose many challenges to building distributed computing systems. A wide variety of node types and resources, unreliable and bandwidth-constrained communications links, high churn rates, and rapidly changing user demands and requirements make it difficult to build systems that satisfy the needs of the users and provide good performance. Agile computing is an innovative metaphor for distributed computing systems and prescribes a new approach to their design and implementation. Agile computing may be defined as opportunistically discovering, manipulating, and exploiting available computing and communication resources. The term agile is used to highlight the desire both to react quickly to changes in the environment and to take advantage of transient resources only available for short periods of time. This thesis describes the overall agile computing metaphor as well as one concrete realisation through a middleware infrastructure. An important contribution of the thesis is the definition of a generic architecture for agile computing, which identifies the core and ancillary attributes that contribute to systems that are to be agile. The thesis also describes the design and implementation of one concrete middleware solution, which provides a number of components and capabilities that integrate together to address the challenges of the overall problem. These components include the Aroma virtual machine, the Mockets communications library, the Group Manager resource discovery component, the DisService peer-to-peer information dissemination system, and the AgServe service-oriented architecture. The design and development of these components has been motivated by observing problems with real systems in tactical military environments. As a result, the components have been incorporated into real systems and used in the field.
The key contribution of this thesis is the prescribed approach to combining these capabilities in order to build opportunistic systems. The capabilities of these components, both individually, as well as part of a single integrated system, are evaluated through a series of experiments and compared with existing systems and standards. The results show significant performance improvements for each of the components. For example, the Mockets library performs up to 7.6x better than TCP (Transmission Control Protocol) sockets in terms of throughput depending on the type of radio utilised. When exploiting unique features in the Mockets library, such as message replacement, the Mockets library performs up to 44x better than SCTP (Stream Control Transmission Protocol) and SCPS (Space Communications Protocol Standards) in terms of timeliness of delivery of data. Likewise, when compared to the JXTA middleware from Sun Microsystems, the Group Manager uses up to 4.8x less bandwidth to support service discovery. Finally, experiments to measure the agility of the integrated middleware show that transient resources that are available for as short a period as 10 seconds can be opportunistically exploited. The Agile Computing Middleware, as presented in this thesis, continues to evolve in terms of further optimisations, incorporation of new components, enhancement of the existing components, and test and evaluation in real-world demonstrations and exercises. It is also hoped that the definition of the concept of agile computing and a general architecture for agile computing will encourage other researchers to build new systems that adopt and advance the notions of agile computing.
40

Aspect Oriented Software Fault Tolerance for Mission Critical Systems

Hameed, Kashif January 2010 (has links)
Software fault tolerance is a means of achieving high dependability for mission and safety critical systems. Despite continuing efforts to prevent and remove faults during software development, application-level fault tolerance measures are still required to avoid failures due to residual design, programming and transient faults. In addition to the functional complexity of application-level software, non-functional requirements, such as diversity, redundancy, exception handling, voting and adjudication mechanisms, are introduced by fault tolerance measures, bringing additional system complexity. Current software patterns, styles and architectures do not respect the separation of concerns at design and programming layers which is desirable when striving to manage complexity, maintainability and portability issues. Moreover, the lack of domain-specific fault tolerance schemes, like error detection and recovery mechanisms, further complicates this task for developers. The main contribution of this research is to provide architectural support for software fault tolerance using an Aspect Oriented Software Development paradigm. The approach proposes aspect-oriented fault tolerance frameworks incorporating exception handling, design diversity and protective wrappers to fulfil the needs of a large range of dependable applications. The utilisation of the proposed frameworks is demonstrated to offer several advantages, involving modularisation, reduced complexity, and reusability, over traditional, ad-hoc fault-tolerant implementations. Three separate case studies are used to evaluate the proposed frameworks through dependability assessment and software metrics analysis. The results show that the proposed frameworks can improve dependability with higher fault coverage and better separation of fault tolerance concerns from core functionality.
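The separation-of-concerns idea can be sketched in Python rather than an AOP language (a hedged illustration, not the thesis's frameworks): a decorator plays the role of an aspect, weaving a recovery-block style fault-tolerance concern — a primary variant, an acceptance test, and a diverse alternate — around core functionality without modifying it.

```python
import functools

def recovery_block(acceptance_test, alternate):
    """Aspect-like wrapper: run the primary variant, check its result with the
    acceptance test, and fall back to a diverse alternate on failure."""
    def aspect(primary):
        @functools.wraps(primary)
        def wrapped(*args, **kwargs):
            try:
                result = primary(*args, **kwargs)
                if acceptance_test(result):
                    return result              # primary variant passed the test
            except Exception:
                pass                           # treat any exception as a failure
            return alternate(*args, **kwargs)  # diverse backup variant
        return wrapped
    return aspect

@recovery_block(acceptance_test=lambda r: r >= 0.0,
                alternate=lambda x: abs(x) ** 0.5)
def fast_sqrt(x):
    return x ** 0.5     # misbehaves for negative x (yields a complex number)

assert fast_sqrt(4.0) == 2.0
assert fast_sqrt(-4.0) == 2.0   # the alternate variant recovers
```

The core function stays free of any fault-tolerance code, which is the modularisation benefit the frameworks above aim for, here shown at the smallest possible scale.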
