61

On the generation and analysis of program transformations

Warburton, Richard January 2010 (has links)
This thesis discusses the idea of using domain-specific languages for program transformation, and the application, implementation and analysis of one such language, which combines rewrite rules for transformation with temporal logic to express side conditions. We have conducted three investigations.
- An efficient implementation is described that is able to generate compiler optimizations from temporal logic specifications. Its description is accompanied by an empirical study of its performance.
- We extend the fundamental ideas of this language to source code in order to write bug-fixing transformations. Example transformations are given that fix common bugs within Java programs. The adaptations to the transformation language are described, and a sample implementation which can apply these transformations is provided.
- We describe an approach to the formal analysis of compiler optimizations that proves that the optimizations do not change the semantics of the programs they are optimizing. Some example proofs are included.
The result of these combined investigations is greater than the sum of their parts. Demonstrating that a declarative language can be efficiently applied and formally reasoned about satisfies both theoretical and practical concerns, while our extension towards bug fixing shows that more varied uses are possible.
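To give a concrete flavour of rewrite-rule transformations guarded by program-path conditions (purely an illustrative sketch: the function, instruction format and names below are invented for illustration and are not the thesis's language or implementation), the following Python fragment applies constant propagation to a straight-line instruction list, where the guard mirrors a temporal side condition: between a definition x := c and a later use of x, the variable x must not be redefined.

```python
# Hypothetical sketch: a rewrite-rule-style constant propagation pass.
# The guard mirrors a temporal side condition: between the definition
# x := c and a later use of x, the variable x must not be redefined.

def constant_propagate(instrs):
    """instrs: list of (dest, op, args) tuples, e.g. ('x', 'const', [5])
    or ('y', 'add', ['x', 'z']). Returns a transformed copy."""
    out = []
    consts = {}                      # variable -> known constant value
    for dest, op, args in instrs:
        # Rewrite step: replace argument variables whose definition still holds.
        new_args = args if op == 'const' else [consts.get(a, a) for a in args]
        out.append((dest, op, new_args))
        # Maintain the side condition: redefining a variable invalidates its fact.
        if op == 'const':
            consts[dest] = args[0]
        else:
            consts.pop(dest, None)
    return out

prog = [('x', 'const', [5]),
        ('y', 'add', ['x', 'z']),    # rewritten to add(5, z)
        ('x', 'const', [7]),
        ('w', 'add', ['x', 'z'])]    # rewritten to add(7, z)
print(constant_propagate(prog))
```

A specification language of the kind the thesis describes would express the rewrite and its side condition declaratively and generate a pass like this automatically; the point here is only the shape of the transformation.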
62

Accelerating global illumination for physically-based rendering

Bashford-Rogers, Thomas January 2011 (has links)
Lighting is essential to generate realistic images using computer graphics. The computation of lighting takes into account the multitude of ways in which light propagates around a virtual scene. This is termed global illumination, and is a vital part of physically-based rendering. Although it produces compelling and accurate images, it is a computationally expensive process. This thesis presents several methods to improve the speed of global illumination computation, and therefore enables faster image synthesis. Global illumination can be calculated in an offline process, typically taking many minutes to hours to compute an accurate solution, or it can be approximated at interactive or real-time rates. This work proposes three methods which tackle the problem of improving the efficiency of computing global illumination. The first is an interactive method for calculating multiple-bounce global illumination on graphics hardware, which exploits the power of the graphics pipeline to create a voxelised representation of the scene through which light transport is computed. The second is an unbiased physically-based algorithm for improving the efficiency of path generation when calculating global illumination in complicated scenes. This method is adaptive: it learns information about the lighting in the scene as the rendering progresses and uses this to reduce variance in the image. In both common graphics scenes and situations involving difficult light paths, this method gives a 30-70% boost in performance. The third method in this thesis is a sampling method which improves the efficiency of the common indoor-outdoor lighting scenario. This is done by combining the lighting distribution with view importance and by automatically determining the important areas of the scene in which to start light paths. This gives a speed-up of between three times and two orders of magnitude, depending on scene and lighting complexity.
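The variance-reduction idea behind adaptive path generation can be conveyed with a generic one-dimensional Monte Carlo sketch (an illustration of importance sampling in general, not the algorithm from the thesis; the function f and all constants are made up): drawing samples preferentially where the integrand is large typically reduces variance for the same sample count.

```python
# Generic importance-sampling sketch (illustrative only): estimate the integral
# of a peaked function f over [0, 1] with uniform sampling versus sampling from
# a pdf concentrated near the peak, as adaptive light-transport samplers do.
import math
import random

def f(x):
    return math.exp(-8.0 * (x - 0.7) ** 2)   # a peaked "bright region"

def uniform_estimate(n):
    return sum(f(random.random()) for _ in range(n)) / n

def importance_estimate(n):
    total = 0.0
    for _ in range(n):
        # Defensive mixture: half the samples uniform, half near the peak,
        # so the sampling pdf never vanishes where f is non-zero.
        if random.random() < 0.5:
            x = random.random()
        else:
            x = random.triangular(0.0, 1.0, 0.7)
        tri = 2.0 * x / 0.7 if x <= 0.7 else 2.0 * (1.0 - x) / 0.3
        pdf = 0.5 + 0.5 * tri
        total += f(x) / pdf                   # unbiased: E[f(X)/p(X)] = integral
    return total / n

n = 10_000
print("uniform   :", uniform_estimate(n))
print("importance:", importance_estimate(n))   # same expectation, lower variance
```

Light-transport algorithms face the same trade-off in a much higher-dimensional path space, which is why learning where the important light paths lie pays off.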
63

Application and development of fieldbus : executive summary

Piggin, Richard Stuart Hadley January 1999 (has links)
Confusion over fieldbus technology among manufacturers and customers alike is due to a number of factors. The goal of a single global fieldbus standard, the subsequent development of European standards, the recognition of a number of emerging de facto standards and the continued international standardisation of fieldbus technology are still perplexing potential fieldbus users. The initial low supply of, and demand for, suitable devices and compatible controller interfaces, the high cost of control systems and the inertia caused by resistance to change have all contributed to the slow adoption of fieldbus technology by industry. The variable quality of fieldbus documentation has not assisted the acceptance of this new technology. An overview of industrial control systems, fieldbus technology, and present and future trends is given. The quantifiable benefits of fieldbus are identified in the assessment of fieldbus applications, and guidance on the appropriate criteria for the evaluation and selection of fieldbus is presented. End users can use this, together with network planning, to establish the viability, suitability and benefits of various control system architectures and configurations prior to implementation. Enhancements to a network configuration tool are shown to aid control system programming and the provision of comprehensive diagnostics. A guide to fieldbus documentation enables manufacturers to produce clear, consistent fieldbus documentation. The safety-related features for a machine safety fieldbus are also determined for an existing network technology. Demonstrators have been produced to show the novel use of fieldbus technology in different areas. Transitory connections are utilised to reduce complexity and increase functionality. A machine safety fieldbus is evaluated in the first installation of a fully networked control application. Interoperability of devices from many different manufacturers and the benefits of fieldbus are proven. Experience gained during membership of the British Standards Institution AMT/7 Committee identified the impact of standards and legislation on fieldbus implementation and highlighted the flawed use of standards to promote fieldbus technology. The Committee prepared a Guide to the evaluation of fieldbus specifications, a forthcoming publication by the BSI. The projects presented have increased and developed the appropriate use of fieldbus technology through novel application, technical enhancement, demonstration and knowledge dissemination.
64

Geometric aspects of empirical modelling : issues in design and implementation

Cartwright, Richard Ian January 1998 (has links)
Empirical modelling is a new approach to the construction of physical (typically computer-based) artefacts. Model construction proceeds in an open-ended and exploratory manner in association with the identification of observables, dependency and agency. Knowledge of the referent is acquired through experiment, and - through the use of metaphor - interaction with the artefact is contrived so as to resemble interaction with the referent. Previous research has demonstrated the potential for empirical modelling in many areas, including concurrent engineering, virtual reality and reactive systems development. This thesis examines the relationship between empirical modelling and geometric modelling on computer systems. Empirical modelling is suggested as complementary to the variational and parametric modelling techniques commonly used in software packages for geometric modelling. Effective techniques for exploiting richer geometric models in visual metaphors within empirical modelling are also developed. Technical issues arising from geometric aspects of existing empirical modelling tools and case studies are reviewed. The aim is to improve the efficiency of existing implementations, and to introduce data representations that better support geometric modelling. To achieve this, a mathematical model (the DM Model) for representing the dependency between observables is introduced, and this is used as the basis for a new algorithm for propagating updates through observables. A novel computing machine (the DAM Machine) that maintains dependencies representing indivisible relationships between words in computer store is derived from the DM Model. Examples of the use of this machine for the representation of geometry are presented. In implementation, the DAM Machine achieves a comparative efficiency gain over existing tools, which allows for the real-time animation of models. A novel and general approach to the representation of data, suitable for integrating empirical modelling and general Java applications, with additional support for collaborative working, is developed. Object-oriented programming methods provide the foundation for new tools to support this representation. The empirical world class library allows a programmer to implement new applications for shape modelling that support empirical modelling and integrate a wide range of shape representations. A method of integrating these geometric techniques into spreadsheet-like environments that are well adapted to support empirical modelling is proposed.
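The style of dependency maintenance described here (observables whose values are defined in terms of other observables, with changes propagating automatically) can be sketched in a few lines; this is a toy illustration only, not the DM Model or the DAM Machine, and all names are invented.

```python
# Toy observable store with spreadsheet-style dependency propagation.
class Store:
    def __init__(self):
        self.defs = {}          # name -> (function or None, [dependencies], value)
        self.values = {}        # name -> current value
        self.dependents = {}    # name -> set of observables defined in terms of it

    def define(self, name, fn=None, deps=(), value=None):
        self.defs[name] = (fn, list(deps), value)
        for d in deps:
            self.dependents.setdefault(d, set()).add(name)
        self._recompute(name)

    def set(self, name, value):
        """Change an input observable and propagate to everything downstream."""
        fn, deps, _ = self.defs[name]
        self.defs[name] = (fn, deps, value)
        self._recompute(name)

    def _recompute(self, name):
        fn, deps, value = self.defs[name]
        self.values[name] = value if fn is None else fn(*[self.values[d] for d in deps])
        for dep in self.dependents.get(name, ()):
            self._recompute(dep)    # a real tool would recompute in dependency order

s = Store()
s.define('width', value=3)
s.define('height', value=4)
s.define('area', fn=lambda w, h: w * h, deps=('width', 'height'))
s.set('width', 10)
print(s.values['area'])   # 40: the change to 'width' propagated automatically
```

A production-quality implementation would avoid redundant recomputation (for example by updating in topological order) and support indivisible simultaneous updates, which this toy does not attempt.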
65

Sequence distance embeddings

Cormode, Graham January 2003 (has links)
Sequences represent a large class of fundamental objects in Computer Science: sets, strings, vectors and permutations can all be treated as sequences. Distances between sequences measure their similarity, and computations based on distances are ubiquitous: either to compute the distance itself, or to use distance computation as part of a more complex problem. This thesis takes a very specific approach to solving questions of sequence distance: sequences are embedded into other distance measures, so that distance in the new space approximates the original distance. This allows the solution of a variety of problems, including:
- fast computation of short sketches in a variety of computing models, which allow sequences to be compared in constant time and space irrespective of the size of the original sequences;
- approximate nearest neighbor and clustering problems, solved significantly faster than by the naive exact solutions;
- algorithms to find approximate occurrences of pattern sequences in long text sequences in near-linear time;
- efficient communication schemes to approximate the distance between, and exchange, sequences in close to the optimal amount of communication.
Solutions are given for these problems for a variety of distances, including fundamental distances on sets and vectors; distances inspired by biological problems for permutations; and certain text editing distances for strings. Many of these embeddings are computable in a streaming model, where the data is too large to store in memory and instead has to be processed as and when it arrives, piece by piece. The embeddings are also shown to be practical, with a series of large-scale experiments demonstrating that, given only a small amount of space, approximate solutions to several similarity and clustering problems can be found that are as good as or better than those found with prior methods.
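The idea of comparing long sequences via short sketches can be illustrated with a standard random-projection construction (a textbook technique included here only as an illustration; it is not claimed to be one of the specific embeddings developed in the thesis, and all names are invented):

```python
# Illustrative sketch: compare long vectors in small space via random projections.
import random

def make_sketcher(dim, k, seed=0):
    rng = random.Random(seed)
    # k rows of random +1/-1 signs, shared by every vector we sketch.
    signs = [[rng.choice((-1, 1)) for _ in range(dim)] for _ in range(k)]
    def sketch(v):
        return [sum(s * x for s, x in zip(row, v)) for row in signs]
    return sketch

def estimate_l2_squared(sa, sb):
    # Each coordinate (sa[i] - sb[i])**2 is an unbiased estimate of ||a - b||^2;
    # averaging reduces the variance (a robust variant takes a median of means).
    diffs = [(x - y) ** 2 for x, y in zip(sa, sb)]
    return sum(diffs) / len(diffs)

dim, k = 10_000, 32
sketch = make_sketcher(dim, k)
a = [random.random() for _ in range(dim)]
b = [x + 0.01 for x in a]            # a nearby vector
sa, sb = sketch(a), sketch(b)
print("estimated:", estimate_l2_squared(sa, sb))
print("exact    :", sum((x - y) ** 2 for x, y in zip(a, b)))
```

Once sketches are computed, any two sequences can be compared in time and space proportional to k rather than to their original length, which is the property exploited by the nearest-neighbour, clustering and communication results listed above.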
66

Theoretical and practical tools for validating discrete and real-time systems

Qu, Hongyang January 2005 (has links)
System validation has been investigated for a long time. Testing is used to find errors inside a system; in contrast, model checking is used to verify whether a given property holds in the system. Both methods have their own advantages and interact with each other. This thesis focuses on four methodologies for model checking and testing. In the end, they are integrated into a practical validation tool set, which is described in this thesis. Many techniques have been developed to manage the state space of a complicated system, but they still fail to reduce the state space of some large-scale concurrent systems. We propose using code annotation as a means of manually controlling the state space. This solution provides a trade-off between computability and exhaustiveness. When a suspicious execution is found either by testing or by model checking, it can be difficult to repeat this execution in a real environment due to nondeterministic choices in the system. We suggest enforcing a given execution by code transformation. In addition, we extend our method from a single path to partial order executions. In order to repeat at least one such execution, we need to provide appropriate values satisfying the path's initial precondition in its environment. It is easy to obtain the precondition in a discrete environment, but difficult in a real-time environment, especially for a partial order, since in the latter case the computation involves time constraints. We present a real-time model first, and then a methodology to compute the precondition on this model. When every action in the system is associated with a probability density function, it is possible to calculate the probability of the occurrence of a particular execution. We give a method to calculate this probability by integration over a group of independent continuous random variables, each of which corresponds to an action that is either executed, or enabled but not fired. The research described in this thesis provides some new ideas for applying formal methods to classical software development tools.
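As a generic illustration of obtaining the probability of a particular execution from per-action densities (not the computation developed in the thesis; the exponential delays and rates below are assumptions made for the example), consider two enabled actions racing: the execution in which action A fires before action B has probability equal to the integral of A's density times B's survival function.

```python
# Illustrative only: probability that action A fires before action B when each
# delay has its own density (here exponential, so a closed form exists to check
# the numerical integration against).
import math

def prob_a_before_b(rate_a, rate_b, horizon=50.0, steps=200_000):
    # P(A before B) = integral over t of f_A(t) * P(delay_B > t) dt
    dt = horizon / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        f_a = rate_a * math.exp(-rate_a * t)   # density of A's delay at t
        surv_b = math.exp(-rate_b * t)         # probability B has not yet fired
        total += f_a * surv_b * dt
    return total

print(prob_a_before_b(2.0, 1.0))   # numerical integration: ~0.6667
print(2.0 / (2.0 + 1.0))           # closed form for exponentials: 0.6667
```

Longer executions involve further independent variables of this kind, hence the abstract's description of integrating over a group of them.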
67

Statistical inference from large-scale genomic data

Yuan, Yinyin January 2009 (has links)
This thesis explores the potential of statistical inference methodologies in their applications to functional genomics. In essence, it summarises algorithmic findings in this field, providing step-by-step analytical methodologies for deciphering biological knowledge from large-scale genomic data, mainly microarray gene expression time series. The thesis covers a range of topics in the investigation of complex multivariate genomic data. One focus is the use of clustering as a method of inference; another is cluster validation, to extract meaningful biological information from the data. Information gained from the application of these various techniques can then be used conjointly in the elucidation of gene regulatory networks, the ultimate goal of this type of analysis. First, a new tight clustering method for gene expression data is proposed to obtain tighter and potentially more informative gene clusters. Next, to fully utilise biological knowledge in clustering validation, a validity index is defined based on one of the most important ontologies within the Bioinformatics community, the Gene Ontology. The method bridges a gap in the current literature, in the sense that it takes into account not only the variation of Gene Ontology categories in biological specificity and their significance to the gene clusters, but also the complex structure of the Gene Ontology. Finally, Bayesian probability is applied to making inference from heterogeneous genomic data, integrated with the previous efforts in this thesis, with the aim of large-scale gene network inference. The proposed system comes with a stochastic process to achieve robustness to noise, yet remains efficient enough for large-scale analysis. Ultimately, the solutions presented in this thesis serve as building blocks of an intelligent system for interpreting large-scale genomic data and understanding the functional organisation of the genome.
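To make the idea of an annotation-driven validity index concrete (a deliberately simplified toy: it ignores the Gene Ontology's hierarchical structure and term specificity that the thesis's index accounts for, and the gene names and term identifiers are invented for the example), one can score clusters by how much annotation their genes share:

```python
# Toy annotation-based cluster validity (illustrative only): a cluster scores
# highly when its genes share annotation terms; the overall index is the
# cluster-size-weighted mean of the per-cluster scores.
from itertools import combinations

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cluster_score(genes, annotations):
    pairs = list(combinations(genes, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(annotations[g], annotations[h]) for g, h in pairs) / len(pairs)

def validity_index(clusters, annotations):
    total = sum(len(c) for c in clusters)
    return sum(len(c) * cluster_score(c, annotations) for c in clusters) / total

annotations = {                       # gene -> set of annotation terms (made up)
    'g1': {'GO:0006355', 'GO:0003677'},
    'g2': {'GO:0006355', 'GO:0003677'},
    'g3': {'GO:0006412'},
    'g4': {'GO:0006412', 'GO:0003735'},
}
print(validity_index([['g1', 'g2'], ['g3', 'g4']], annotations))   # 0.75
```

Accounting for the ontology's graph structure and for how specific each term is, as the thesis does, requires replacing the flat Jaccard comparison with a structure-aware similarity.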
68

Software process improvement as emergent change : a structurational analysis

Allison, Ian K. January 2004 (has links)
This thesis moves beyond the technological perspective on SPI by identifying and analysing the organisational features of process improvement. A theoretical understanding is developed of how and why software process improvements occur, and of what the consequences of the change process are within a specific case. A packaged information systems organisation forms the basis for a substantive case study. Adding to the growing body of qualitative research, the study takes a critical hermeneutic perspective. In doing so it overcomes some of the criticisms of interpretive studies, especially the need for the research to be reflexive in nature. By looking at SPI as an emergent rather than deterministic activity, the design and action of the change process are shown to be intertwined and shaped by their context. This understanding is based upon a structurational perspective that highlights how process improvements are enabled and constrained by their context. The work builds on the recent recognition that improvements can be understood from an organisational learning perspective. Fresh insights into the improvement process are developed by recognising the role of the individual in facilitating or resisting improvement. The understanding gained here can be applied by organisations to improve the effectiveness of their SPI programmes, and so improve the quality of their software. Lessons are derived that show how software organisations can support ongoing improvement, through recognition of the learning and political aspects of the change, by adopting an agile approach to SPI.
69

Randomised techniques in combinatorial algorithmics

Zito, Michele A. A. January 1999 (has links)
Probabilistic techniques are becoming more and more important in Computer Science. Some of them are useful for the analysis of algorithms. The aim of this thesis is to describe and develop applications of these techniques. We first look at the problem of generating a graph uniformly at random from the set of all unlabelled graphs with n vertices, by means of efficient parallel algorithms. Our model of parallel computation is the well-known parallel random access machine (PRAM). The algorithms presented here are among the first parallel algorithms for the random generation of combinatorial structures. We present two different parallel algorithms for the uniform generation of unlabelled graphs. The algorithms run in O(log² n) time with high probability on an EREW PRAM using O(n²) processors. Combinatorial and algorithmic notions of approximation are another important thread in this thesis. We look at possible ways of approximating the parameters that describe the phase transitional behaviour (similar in some sense to the transition in Physics between the solid and liquid states) of two important computational problems: that of deciding whether a graph is colourable using only three colours so that no two adjacent vertices receive the same colour, and that of deciding whether a propositional boolean formula in conjunctive normal form with clauses containing at most three literals is satisfiable. A specific notion of maximal solution and, for the second problem, the use of a probabilistic model called the (young) coupon collector allow us to improve the best known results for these problems. Finally, we look at two graph-theoretic matching problems. We first study the computational complexity of these problems and the algorithmic approximability of the optimal solutions in particular classes of graphs. We also derive an algorithm that solves one of them optimally in linear time when the input graph is a tree, as well as a number of non-approximability results. Then, making some assumptions about the input distribution, we study the expected structure of these matchings and derive improved approximation results on several models of random graphs.
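For readers unfamiliar with the coupon-collector model mentioned above, the standard result (quoted here for context only; it is background material, not a contribution of the thesis) is that collecting all n distinct coupon types by uniform random draws takes n·H_n draws in expectation, roughly n ln n, which the following sketch checks by simulation.

```python
# Coupon collector background: expected number of draws to see all n types.
import random

def expected_draws(n):
    return n * sum(1.0 / i for i in range(1, n + 1))   # n * H_n

def simulate(n, trials=2_000):
    total = 0
    for _ in range(trials):
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))
            draws += 1
        total += draws
    return total / trials

n = 50
print("n * H_n  :", expected_draws(n))   # about 225 for n = 50
print("simulated:", simulate(n))
```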
70

From analogy-making to modelling : the history of analog computing as a modelling technology

Care, Charles January 2008 (has links)
Today, modern computers are based on digital technology. However, during the decades after 1940, digital computers were complemented by the separate technology of analog computing. But what was analog computing, what were its merits, and who were its users? This thesis investigates the conceptual and technological history of analog computing. As a concept, analog computing represents the entwinement of a complex pre-history of meanings, including calculation, modelling, continuity and analogy. These themes are not only landmarks of analog's etymology, but also represent the blend of practices, ways of thinking, and social ties that together comprise an 'analog culture'. The first half of this thesis identifies how the history of this technology can be understood in terms of the two parallel themes of calculation and modelling. Structuring the history around these themes demonstrates that technologies associated with modelling are under-represented in the historiography. Basing the investigation around modelling applications, the thesis examines the formation of analog culture. The second half of this thesis applies the themes of modelling and information generation to understand analog use in context. Through looking at examples of analog use in academic research, oil reservoir modelling, aeronautical design, and meteorology, the thesis explores why certain communities used analog and considers the relationship between analog and digital in these contexts. This study demonstrates that analog modelling is an example of information generation rather than information processing. Rather than focusing on the categories of analog and digital, it is argued that future historical scholarship in this field should give greater prominence to the more general theme of modelling.
