161

A modular model checking algorithm for cyclic feature compositions

Wang, Xiaoning. January 2004 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: modular verification; feature-oriented software development; model checking; assume-guarantee reasoning. Includes bibliographical references (p. 72-73).
162

Combining over- and under-approximating program analyses for automatic software testing

Csallner, Christoph. January 2008 (has links)
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009. / Committee Chair: Smaragdakis, Yannis; Committee Member: Dwyer, Matthew; Committee Member: Orso, Alessandro; Committee Member: Pande, Santosh; Committee Member: Rugaber, Spencer.
163

UML activity diagram reduction by graph transformations

He, Ling. January 1900 (has links)
Thesis (M. Sc.)--Carleton University, 2001. / Includes bibliographical references (p. 138-141). Also available in electronic format on the Internet.
164

Model checking for open systems: a compositional approach to software verification

Andrade-Gómez, Héctor Adolfo, January 2001 (has links)
Thesis (Ph. D.)--University of Florida, 2001. / Title from first page of PDF file. Document formatted into pages; contains xi, 144 p.; also contains graphics. Vita. Includes bibliographical references (p. 139-143).
165

DNA computation

Amos, Martyn January 1997 (has links)
This is the first ever doctoral thesis in the field of DNA computation. The field has its roots in the late 1950s, when the Nobel laureate Richard Feynman first introduced the concept of computing at a molecular level. Feynman's visionary idea was only realised in 1994, when Leonard Adleman performed the first ever truly molecular-level computation using DNA combined with the tools and techniques of molecular biology. Since Adleman reported the results of his seminal experiment, there has been a flurry of interest in the idea of using DNA to perform computations. The potential benefits of using this particular molecule are enormous: by harnessing the massive inherent parallelism of performing concurrent operations on trillions of strands, we may one day be able to compress the power of today's supercomputer into a single test tube. However, if we compare the development of DNA-based computers to that of their silicon counterparts, it is clear that molecular computers are still in their infancy. Current work in this area is concerned mainly with abstract models of computation and simple proof-of-principle experiments. The goal of this thesis is to present our contribution to the field, placing it in the context of the existing body of work. Our new results concern a general model of DNA computation, an error-resistant implementation of the model, experimental investigation of the implementation and an assessment of the complexity and viability of DNA computations. We begin by recounting the historical background to the search for the structure of DNA. By providing a detailed description of this molecule and the operations we may perform on it, we lay down the foundations for subsequent chapters. We then describe the basic models of DNA computation that have been proposed to date. In particular, we describe our parallel filtering model, which is the first to provide a general framework for the elegant expression of algorithms for NP-complete problems. The implementation of such abstract models is crucial to their success. Previous experiments that have been carried out suffer from their reliance on various error-prone laboratory techniques. We show for the first time how one particular operation, hybridisation extraction, may be replaced by an error-resistant enzymatic separation technique. We also describe a novel solution read-out procedure that utilizes cloning, and is sufficiently general to allow it to be used in any experimental implementation. The results of preliminary tests of these techniques are then reported. Several important conclusions are to be drawn from these investigations, and we report these in the hope that they will provide useful experimental guidance in the future. The final contribution of this thesis is a rigorous consideration of the complexity and viability of DNA computations. We argue that existing analyses of models of DNA computation are flawed and unrealistic. In order to obtain more realistic measures of the time and space complexity of DNA computations we describe a new strong model, and reassess previously described algorithms within it. We review the search for "killer applications": applications of DNA computing that will establish the superiority of this paradigm within a certain domain. We conclude the thesis with a description of several open problems in the field of DNA computation.
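The parallel filtering model mentioned in this abstract can be illustrated, very loosely, in conventional software: a "tube" initially containing every candidate solution is repeatedly filtered so that only strands satisfying each constraint survive. The sketch below applies that filtering style to a toy Boolean satisfiability instance; the clause data are invented, and this is only a software analogue for intuition, not the molecular implementation described in the thesis.

    #include <cstdint>
    #include <cstdlib>
    #include <iostream>
    #include <vector>

    int main() {
        const int n = 4;  // number of Boolean variables in the toy instance
        // Toy CNF clauses: positive literal k means variable k is true,
        // negative literal -k means variable k is false.
        std::vector<std::vector<int>> clauses = {{+1, -2}, {+2, +3}, {-1, +4}};

        // "Synthesise" the initial library: every assignment of n variables.
        std::vector<std::uint32_t> tube;
        for (std::uint32_t a = 0; a < (1u << n); ++a) tube.push_back(a);

        // Filtering phase: one pass per clause, keeping only satisfying strands.
        for (const auto& clause : clauses) {
            std::vector<std::uint32_t> kept;
            for (std::uint32_t a : tube) {
                bool satisfied = false;
                for (int lit : clause) {
                    int var = std::abs(lit) - 1;
                    bool value = (a >> var) & 1u;
                    if ((lit > 0) == value) { satisfied = true; break; }
                }
                if (satisfied) kept.push_back(a);
            }
            tube.swap(kept);  // the survivors form the contents of the new tube
        }

        std::cout << tube.size() << " satisfying assignments remain\n";
        return 0;
    }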
166

Evaluating technologies and techniques for transitioning hydrodynamics applications to future generations of supercomputers

Mallinson, A. C. January 2016 (has links)
Current supercomputer development trends present severe challenges for scientific codebases. Moore’s law continues to hold; however, power constraints have brought an end to Dennard scaling, forcing significant increases in overall concurrency. The performance imbalance between the processor and memory sub-systems is also increasing, and architectures are becoming significantly more complex. Scientific computing centres need to harness more computational resources in order to facilitate new scientific insights, and maintaining their codebases requires significant investments. Centres therefore have to decide how best to develop their applications to take advantage of future architectures. To prevent vendor "lock-in" and maximise investments, achieving portable-performance across multiple architectures is also a significant concern. Efficiently scaling applications will be essential for achieving improvements in science, and the MPI (Message Passing Interface)-only model is reaching its scalability limits. Hybrid approaches which utilise shared memory programming models are a promising approach for improving scalability. Additionally, PGAS (Partitioned Global Address Space) models have the potential to address productivity and scalability concerns. Furthermore, OpenCL has been developed with the aim of enabling applications to achieve portable-performance across a range of heterogeneous architectures. This research examines approaches for achieving greater levels of performance for hydrodynamics applications on future supercomputer architectures. The development of a Lagrangian-Eulerian hydrodynamics application is presented together with its utility for conducting such research. Strategies for improving application performance, including PGAS- and hybrid-based approaches, are evaluated at large node-counts on several state-of-the-art architectures. Techniques to maximise the performance and scalability of OpenMP-based hybrid implementations are presented together with an assessment of how these constructs should be combined with existing approaches. OpenCL is evaluated as an additional technology for implementing a hybrid programming model and improving performance-portability. To enhance productivity, several tools for automatically hybridising applications and improving process-to-topology mappings are evaluated. Power constraints are starting to limit supercomputer deployments, potentially necessitating the use of more energy-efficient technologies. Advanced processor architectures are therefore evaluated as future candidate technologies, together with several application optimisations which will likely be necessary. An FPGA-based solution is examined, including an analysis of how effectively it can be utilised via a high-level programming model, as an alternative to the specialist approaches which currently limit the applicability of this technology.
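As a point of reference for the hybrid programming models discussed in this abstract, the sketch below shows the basic MPI+OpenMP pattern: MPI distributes work across processes, OpenMP threads operate on each process's local data, and a global reduction combines the partial results. It is a generic illustration over an assumed uniform field, not code from the thesis or from its hydrodynamics application.

    #include <mpi.h>
    #include <omp.h>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv) {
        int provided = 0;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Each rank owns one slice of the field (a stand-in for a mesh partition).
        std::vector<double> field(1 << 20, 1.0);

        // Thread-level (shared memory) parallelism within the process.
        double local_sum = 0.0;
        #pragma omp parallel for reduction(+:local_sum)
        for (std::size_t i = 0; i < field.size(); ++i)
            local_sum += field[i];

        // Process-level (distributed memory) reduction across the whole machine.
        double global_sum = 0.0;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("global sum = %.1f across %d ranks\n", global_sum, size);

        MPI_Finalize();
        return 0;
    }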
167

Image data compression based on a multiresolution signal model

Todd, Martin Peter January 1989 (has links)
Image data compression is an important topic within the general field of image processing. It has practical applications varying from medical imagery to video telephones, and provides significant implications for image modelling theory. In this thesis a new class of linear signal models, linear interpolative multiresolution models, is presented and applied to the data compression of a range of natural images. The key property of these models is that whilst they are non-causal in the two spatial dimensions they are causal in a third dimension, the scale dimension. This leads to computationally efficient predictors which form the basis of the data compression algorithms. Models of varying complexity are presented, ranging from a simple stationary form to one which models visually important features such as lines and edges in terms of scale and orientation. In addition to theoretical results such as related rate distortion functions, the results of applying the compression algorithms to a variety of images are presented. These results compare favourably, particularly at high compression ratios, with many of the techniques described in the literature, both in terms of mean squared quantisation noise and, more meaningfully, in terms of perceived visual quality. In particular the use of local orientation over various scales within the consistent spatial interpolative framework of the model significantly reduces perceptually important distortions such as the blocking artefacts often seen with high compression coders. A new algorithm for fast computation of the orientation information required by the adaptive coder is presented which results in an overall computational complexity for the coder which is broadly comparable to that of the simpler non-adaptive coder. This thesis is concluded with a discussion of some of the important issues raised by the work.
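The idea of scale-causal interpolative prediction can be conveyed in a few lines: each finer level is predicted by interpolating the level above, and only the prediction residuals would be quantised and coded. The one-dimensional toy below, with invented sample values and simple midpoint interpolation, is meant only to convey that intuition; the thesis's models are two-dimensional and adapt to local orientation.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Predict a signal of length 2*n by interpolating its coarse version of length n.
    std::vector<double> predict_fine(const std::vector<double>& coarse) {
        std::vector<double> pred(coarse.size() * 2);
        for (std::size_t i = 0; i < coarse.size(); ++i) {
            double next = (i + 1 < coarse.size()) ? coarse[i + 1] : coarse[i];
            pred[2 * i]     = coarse[i];                 // coarse sample carried down
            pred[2 * i + 1] = 0.5 * (coarse[i] + next);  // midpoint interpolation
        }
        return pred;
    }

    int main() {
        std::vector<double> coarse = {10, 12, 20, 18};                  // coarser scale
        std::vector<double> fine   = {10, 11, 12, 16, 20, 19, 18, 17};  // actual finer scale

        std::vector<double> pred = predict_fine(coarse);
        std::printf("residuals to quantise and code:");
        for (std::size_t i = 0; i < fine.size(); ++i)
            std::printf(" %.1f", fine[i] - pred[i]);  // small residuals compress well
        std::printf("\n");
        return 0;
    }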
168

Multiresolution image modelling and estimation

Clippingdale, Simon January 1988 (has links)
Multiresolution representations make explicit the notion of scale in images, and facilitate the combination of information from different scales. To date, however, image modelling and estimation schemes have not exploited such representations and tend rather to be derived from two-dimensional extensions of traditional one-dimensional signal processing techniques. In the causal case, autoregressive (AR) and ARMA models lead to minimum mean square error (MMSE) estimators which are two-dimensional variants of the well-established Kalman filter. Noncausal approaches tend to be transform-based and the MMSE estimator is the two-dimensional Wiener filter. However, images contain profound nonstationarities such as edges, which are beyond the descriptive capacity of such signal models, and defects such as blurring (and streaking in the causal case) are apparent in the results obtained by the associated estimators. This thesis introduces a new multiresolution image model, defined on the quadtree data structure. The model is a one-dimensional, first-order Gaussian martingale process causal in the scale dimension. The generated image, however, is noncausal and exhibits correlations at all scales unlike those generated by traditional models. The model is capable of nonstationary behaviour in all three dimensions (two position and one scale) and behaves isomorphically but independently at each scale, in keeping with the notion of scale invariance in natural images. The optimal (MMSE) estimator is derived for the case of corruption by additive white Gaussian noise (AWGN). The estimator is a one-dimensional, first-order linear recursive filter with a computational burden far lower than that of traditional estimators. However, the simple quadtree data structure leads to aliasing and 'block' artifacts in the estimated images. This could be overcome by spatial filtering, but a faster method is introduced which requires no additional multiplications but involves the insertion of some extra nodes into the quadtree. Nonstationarity is introduced by a fast, scale-invariant activity detector defined on the quadtree. Activity at all scales is combined in order to achieve noise rejection. The estimator is modified at each scale and position by the detector output such that less smoothing is applied near edges and more in smooth regions. Results demonstrate performance superior to that of existing methods, and at drastically lower computational cost. The estimation scheme is further extended to include anisotropic processing, which has produced good results in image restoration. An orientation estimator controls anisotropic filtering, the output of which is made available to the image estimator.
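A rough software sketch of a scale-causal, first-order recursive estimate on a quadtree is given below: each node blends the estimate propagated from its parent (the coarser scale) with its own noisy observation using a fixed gain. The node structure, gain value, and data are all invented for illustration; the thesis derives the optimal (MMSE) gains for its specific model.

    #include <array>
    #include <cstdio>
    #include <memory>

    struct QuadNode {
        double observation = 0.0;                        // noisy value at this node
        double estimate    = 0.0;                        // filtered value
        std::array<std::unique_ptr<QuadNode>, 4> child;  // finer-scale children
    };

    // Coarse-to-fine pass: causal in the scale dimension only.
    void filter(QuadNode& node, double parent_estimate, double gain) {
        node.estimate = parent_estimate + gain * (node.observation - parent_estimate);
        for (auto& c : node.child)
            if (c) filter(*c, node.estimate, gain);
    }

    int main() {
        QuadNode root;
        root.observation = 100.0;
        for (auto& c : root.child) {
            c = std::make_unique<QuadNode>();
            c->observation = 104.0;  // children observe a slightly different noisy value
        }
        filter(root, root.observation, 0.4);  // fixed gain chosen arbitrarily here
        std::printf("child estimate: %.1f\n", root.child[0]->estimate);
        return 0;
    }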
169

A unified programming system for a multi-paradigm parallel architecture

Vaudin, John January 1991 (has links)
Real time image understanding and image generation require very large amounts of computing power. A possible way to meet these requirements is to make use of the power available from parallel computing systems. However, parallel machines exhibit performance which is highly dependent on the algorithms being executed. Both image understanding and image generation involve the use of a wide variety of algorithms. A parallel machine suited to some of these algorithms may be unsuited to others. This thesis describes a novel heterogeneous parallel architecture optimised for image based applications. It achieves its performance by combining two different forms of parallel architecture, namely fine-grain SIMD and coarse-grain MIMD, into a single architecture. In this way it is possible to match the most appropriate computing resource to each algorithm in a given application. As important as the architecture itself is a method for programming it. This thesis describes a novel multi-paradigm programming language based on C++, which allows programs which make use of both control and data parallelism to be expressed in a single coherent framework, based on object oriented programming. To demonstrate the utility of both the architecture and the programming system, two applications, one from the field of image understanding and the other from image generation, are examined. These applications combine some novel algorithms with other novel implementation approaches to provide the most effective mapping onto this architecture.
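The combination of control and data parallelism in one object-oriented program can be hinted at with modern standard C++, as in the sketch below: a parallel algorithm expresses fine-grain (SIMD-like) data parallelism over an image, while asynchronous tasks express coarse-grain (MIMD-like) control parallelism. This uses today's standard library (C++17 parallel algorithms) purely to illustrate the two forms of parallelism side by side; it is not the C++-based language developed in the thesis.

    #include <algorithm>
    #include <cstdio>
    #include <execution>
    #include <future>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<float> image(1 << 20, 0.5f);

        // Data parallelism (SIMD-like): the same operation applied to every pixel.
        std::for_each(std::execution::par_unseq, image.begin(), image.end(),
                      [](float& p) { p = p * p; });

        // Control parallelism (MIMD-like): independent analysis tasks run
        // concurrently, both only reading the processed image.
        auto sum_task = std::async(std::launch::async, [&] {
            return std::accumulate(image.begin(), image.end(), 0.0);
        });
        auto max_task = std::async(std::launch::async, [&] {
            return *std::max_element(image.begin(), image.end());
        });

        std::printf("sum = %.1f, max = %.2f\n", sum_task.get(), max_task.get());
        return 0;
    }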
170

Self-organising techniques for tolerating faults in 2-dimensional processor arrays

Evans, Richard Anthony January 1988 (has links)
This thesis is concerned with research into techniques for tolerating the defects which inevitably occur in integrated circuits during processing. The research is motivated by the desire to permit the fabrication of very large (> 1 cm²) integrated circuits having a viable yield, using standard chip processing lines. Attention is focussed on 2-dimensional arrays of identical processing elements with nearest-neighbour, orthogonal interconnections, and techniques for configuring such arrays in the presence of faults are investigated. In particular, novel algorithms based on the concept of self-organisation are proposed and studied in detail. The algorithms involve associating a small amount of control logic with each processing element in the array. The extra logic allows the processing elements to communicate with each other and come to a collective decision about how working processors should best be interconnected. The concept has been studied in considerable depth and the implications of the algorithms in a practical system have been thoroughly considered and demonstrated by construction of a small array at printed circuit board level, complete with software controlled testing procedures. The thesis can be considered in four main parts as follows. The first part (chapters 1 to 4) starts by presenting the objectives of the research and then motivates it by examining the increasing need for processor arrays. The difficulty of implementing such arrays as monolithic circuits due to integrated circuit defects is then considered. This is followed by a review of published work on hardware fault tolerance for regular arrays of processors. The second part (chapters 5 and 6) is devoted to the concept of self-organisation in processor arrays and includes a detailed description and evaluation of the algorithms followed by a comparison with other published techniques. Considerations such as hardware requirements and overheads, reducing the vulnerability of critical circuitry, self-testing, and the construction of the demonstrator are covered in the third part (chapters 7 to 10). The fourth part (chapters 11 and 12) considers potential applications for the research in both monolithic and non-monolithic systems. Finally, the conclusions and some suggestions for further work are presented.
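A much-simplified, centralised analogue of the reconfiguration goal is sketched below: given a map of working and defective processing elements, each row of the physical array is remapped so that a smaller logical array uses only working cells. The fault map is invented, and the thesis's algorithms achieve this in a distributed, self-organising way using control logic local to each element; this sketch shows only the end result of such a configuration.

    #include <cstdio>
    #include <vector>

    int main() {
        const int rows = 4, cols = 5;
        // true = working processing element, false = fabrication defect (invented map).
        std::vector<std::vector<bool>> ok = {
            {true, true, false, true, true},
            {true, true, true, true, false},
            {false, true, true, true, true},
            {true, true, true, false, true},
        };

        // For each row, map logical columns onto the working physical cells,
        // bypassing defective ones.
        for (int r = 0; r < rows; ++r) {
            std::printf("row %d:", r);
            int logical = 0;
            for (int c = 0; c < cols; ++c)
                if (ok[r][c]) std::printf("  L%d->P%d", logical++, c);
            std::printf("\n");
        }
        return 0;
    }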
