111

New algorithms for distributed submodular maximization

Barbosa, Rafael da Ponte January 2017 (has links)
A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as submodular maximization problems. In many of these applications, the amount of data collected is quite large and it is growing at a very fast pace. For example, the wide deployment of sensors has led to the collection of large amounts of measurements of the physical world. Similarly, medical data and human activity data are being captured and stored at an ever increasing rate and level of detail. This data is often high-dimensional and complex, and it needs to be stored and/or processed in a distributed fashion. Following a recent line of work, we present here parallel algorithms for these problems, and analyze the trade-off between the quality of the solutions obtained and the amount of computational overhead. On the one hand, we develop strategies for bringing existing algorithms for constrained submodular maximization in the sequential setting to the distributed setting. The algorithms presented achieve constant approximation factors in two rounds, and near-optimal approximation ratios in only a constant number of rounds. Our techniques also give a fast sequential algorithm for non-monotone maximization subject to a matroid constraint. On the other hand, for unconstrained submodular maximization, we devise parallel algorithms combining naive random sampling and Double Greedy steps, and investigate how much the quality of the solutions degrades with less coordination.
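As an illustration of the Double Greedy step mentioned in this abstract, the sketch below shows the sequential deterministic Double Greedy of Buchbinder et al. for unconstrained submodular maximization. It is only the building block the thesis combines with random sampling, not the distributed algorithm itself; `f` is assumed to be an oracle for a non-negative submodular set function.

```python
def double_greedy(f, ground_set):
    """Deterministic Double Greedy for unconstrained submodular maximization.

    f: callable mapping a frozenset to a non-negative real (submodular oracle).
    ground_set: iterable of elements.
    Returns a set whose value is at least 1/3 of the optimum
    (the randomized variant achieves 1/2 in expectation).
    """
    X = set()                    # grows from the empty set
    Y = set(ground_set)          # shrinks from the full ground set
    for e in ground_set:
        a = f(frozenset(X | {e})) - f(frozenset(X))   # gain of adding e to X
        b = f(frozenset(Y - {e})) - f(frozenset(Y))   # gain of removing e from Y
        if a >= b:
            X.add(e)
        else:
            Y.discard(e)
    return X                     # X == Y at this point


if __name__ == "__main__":
    # Toy example: the cut function of a complete graph on 4 vertices,
    # f(S) = |S| * |V \ S|, which is non-negative, non-monotone and submodular.
    V = [1, 2, 3, 4]
    f = lambda S: len(S) * (len(V) - len(S))
    print(double_greedy(f, V))   # a balanced cut, e.g. {1, 3}
```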
112

Mining previously unknown patterns in time series data

Gu, Zhuoer January 2017 (has links)
The emerging importance of distributed computing systems raises the need for a better understanding of system performance. As a major indicator of system performance, analysing CPU host load helps evaluate system performance in many ways. Discovering similar patterns in CPU host load is very useful, since many applications rely on the patterns mined from CPU host load, such as pattern-based prediction, classification and relative rule mining of CPU host load. Essentially, the problem of mining patterns in CPU host load is one of mining time series data. Due to the complexity of the problem, many traditional mining techniques for time series data are no longer suitable. Compared to mining known patterns in time series, mining unknown patterns is a much more challenging task. In this thesis, we investigate the major difficulties of the problem and develop techniques for mining unknown patterns by extending the traditional techniques for mining known patterns. We develop two different CPU host load pattern discovery methods, the segment-based method and the reduction-based method, to optimize the pattern discovery process. The segment-based method works by extracting segment features, while the reduction-based method works by reducing the size of the raw data. The segment-based pattern discovery method maps the CPU host load segments to a five-dimensional feature space, then applies the DBSCAN clustering method to discover similar segments. The reduction-based method reduces the dimensionality and numerosity of the CPU host load data to shrink the search space. A cascade method is proposed to support accurate pattern mining while maintaining efficiency. The investigations into the CPU host load data inspired us to further develop a pattern mining algorithm for general time series data. The method filters out unlikely starting positions for recurring patterns at an early stage and then iteratively locates all best-matching patterns. The results obtained by our method do not contain any meaningless patterns, which has long been a problematic issue. Compared to state-of-the-art techniques, our method is more efficient and effective in most scenarios.
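A rough sketch of the segment-based idea described above: map each host-load segment to a five-dimensional feature vector and cluster the vectors with DBSCAN. The particular features (mean, standard deviation, slope, minimum, maximum) and the DBSCAN parameters are illustrative assumptions, not the thesis's exact choices.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def segment_features(segment: np.ndarray) -> np.ndarray:
    """Map a 1-D host-load segment to a 5-D feature vector (illustrative choice)."""
    x = np.arange(len(segment))
    slope = np.polyfit(x, segment, 1)[0]           # linear trend of the segment
    return np.array([segment.mean(), segment.std(), slope,
                     segment.min(), segment.max()])

def cluster_segments(segments, eps=0.5, min_samples=5):
    """Cluster segments by their 5-D features; label -1 marks noise."""
    feats = np.vstack([segment_features(s) for s in segments])
    feats = StandardScaler().fit_transform(feats)  # put features on a common scale
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)

# Example: split a synthetic host-load trace into fixed-length segments and cluster.
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(20, 2, 500), rng.normal(80, 5, 500)])
segments = np.split(trace, 20)
print(cluster_segments(segments))
```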
113

The readying of applications for heterogeneous computing

Herdman, Andy January 2017 (has links)
High performance computing is approaching a potentially significant change in architectural design. With pressures on the cost and sheer amount of power consumed, additional architectural features are emerging which require a rethink of the programming models deployed over the last two decades. Today's emerging high performance computing (HPC) systems maximise performance per unit of power consumed, resulting in systems whose constituent parts are made up of a range of different specialised building blocks, each with their own purpose. This heterogeneity is not limited to the hardware components but extends to the mechanisms that exploit them. These multiple levels of parallelism, instruction sets and memory hierarchies result in truly heterogeneous computing in all aspects of the global system. These emerging architectural solutions will require software to exploit tremendous amounts of on-node parallelism, and programming models to address this are indeed emerging. In theory, the application developer can design new software using these models to exploit emerging low power architectures. However, in practice, real industrial-scale applications last the lifetimes of many architectural generations and therefore require a migration path to these next generation supercomputing platforms. Identifying that migration path is non-trivial: with applications spanning many decades, consisting of many millions of lines of code and multiple scientific algorithms, any changes to the programming model will be extensive and invasive, and may turn out to be the incorrect model for the application in question. This makes exploration of these emerging architectures and programming models using the applications themselves problematic. Additionally, the source code of many industrial applications is not available, due either to commercial or to security sensitivity constraints. This thesis highlights this problem by assessing current and emerging hardware with an industrial-strength code, demonstrating the issues described. In turn it looks at the methodology of using proxy applications in place of real industry applications to assess their suitability on the next generation of low power HPC offerings. It shows there are significant benefits to be realised in using proxy applications, in that fundamental issues inhibiting exploration of a particular architecture are easier to identify and hence address. The maturity and performance portability of a number of alternative programming methodologies are evaluated on a number of architectures, and the broader adoption of these proxy applications, both within the author's own organisation and across the industry as a whole, is highlighted.
114

Temporal incident light fields

Sinha, Debmalya January 2018 (has links)
High-fidelity real-world lighting is a complex and rapidly expanding field of study in computer graphics. Rendering with real-world lighting plays a crucial part in motion pictures, computer games, Augmented Reality (AR) and Virtual Reality (VR) applications. There are, however, many constraints when capturing and representing real-world lights for rendering. In particular, dimensionality plays a significant role, yet existing industry-standard methods are inadequate for capturing light across the three spatial, two angular and one temporal dimension simultaneously. Image Based Lighting (IBL) techniques address temporality by capturing the two angular and the temporal dimension simultaneously. The Incident Light Fields (ILF) technique, on the other hand, can capture complex spatially varying real-world light incident on a static scene, covering all five spatial and angular dimensions. However, any changes in the positions or the radiometric properties of direct light sources in the scene over time invalidate the captured ILF due to the subsequent changes in the indirect lighting. In a dynamically varying lighting condition, the ILF needs to be recaptured with each change in the lighting, which is infeasible in most real-world situations. This thesis proposes a novel technique called "Dynamic Change Propagation" (DCP) that can simulate any changes made to the direct light and propagate the effects to the indirect lighting recorded in a captured ILF. Evaluations show an average RMSE of 0.034 with absolute percentage errors of 6.8% for light source movement simulations, and an RMSE of 0.013 with absolute percentage errors of 3.4% for intensity change simulations. In addition to the DCP technique, this thesis proposes a novel "Temporal Incident Light Field" (Temporal ILF) technique which records the changes in the light sources over time and utilizes the DCP technique to simulate those changes in the originally recorded static ILF, thus capturing all six (spatial, angular and temporal) dimensions. To the best of our knowledge, Temporal ILF is the first method which can record high-fidelity real-world light over all six spatial, angular and temporal dimensions simultaneously. The introduction of the DCP and Temporal ILF techniques in this thesis offers new ways of rendering with spatio-temporally variant high-fidelity real-world light.
115

Efficient and adaptable high dynamic range compression

Hatchett, Jonathan January 2017 (has links)
High dynamic range (HDR) imaging techniques enable the full range of light in a scene to be captured, transmitted and displayed, eliminating under- and over-exposed regions. Storing the full range of light typically requires 32 bits of information per colour channel, four times the 8 bits required for low dynamic range (LDR) data. If HDR is to fulfil its potential in a wide variety of applications such as live broadcast and interactive remote gaming, fast, efficient compression is required to provide HDR video without major changes to existing communications infrastructure. A number of methods have so far been proposed for HDR video compression; however, they rely on computationally expensive transformations to either split the video into multiple 8-bit streams or convert it to a perceptually uniform domain. This thesis addresses the question of whether high-quality HDR video compression can be achieved in a computationally efficient manner by introducing a number of novel techniques. Initially, the power-law functions used for LDR video are extended to HDR data to provide a straightforward and efficient solution to HDR compression. The Power Transfer Function (PTF) is computationally inexpensive, and an objective evaluation shows that it provides improved compression quality at a thirtieth of the computational cost of other leading methods while maintaining equivalent perceived quality. Subsequently, information about the content and the ambient environment at the display is used to adapt the compression method. A subjective evaluation involving 40 participants demonstrates that the necessary peak luminance of content depends on the ambient illumination at the display, and an objective evaluation confirms that optimal compression is affected by the peak luminance. An adaptive extension to PTF, Adaptive PTF (PTFa), is proposed, using iterative optimisation to calculate the ideal compression curve and gaining 2.88 VDP over non-adaptive PTF. Finally, the computational performance of PTFa is improved by four orders of magnitude through deep learning, enabling real-time compression with little decrease in quality. Predictive PTF (PTFp) utilises a model to predict the compression curve based on the content, display and ambient environment. This thesis demonstrates a fast, efficient method of general HDR video compression, which is extended to provide a fast, adaptive solution for content-specific compression.
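As a rough sketch of the power-law idea behind PTF, the following applies a gamma-style encoding to normalised HDR luminance before integer quantisation and inverts it on decode. The exponent and bit depth are illustrative assumptions; the thesis's actual PTF parameters and colour handling are not reproduced here.

```python
import numpy as np

def ptf_encode(hdr, peak, gamma=2.4, bits=10):
    """Encode linear HDR values into integer code values with a power-law curve.

    hdr:  array of linear scene-referred values (arbitrary units).
    peak: value mapped to the top code value.
    gamma, bits: illustrative assumptions, not the thesis's exact parameters.
    """
    normalised = np.clip(hdr / peak, 0.0, 1.0)
    encoded = normalised ** (1.0 / gamma)            # cheap power-law transfer
    return np.round(encoded * (2 ** bits - 1)).astype(np.uint16)

def ptf_decode(codes, peak, gamma=2.4, bits=10):
    """Invert the encoding back to approximate linear HDR values."""
    normalised = codes.astype(np.float64) / (2 ** bits - 1)
    return (normalised ** gamma) * peak

# Round-trip example on a synthetic HDR frame with a 4000-unit peak.
frame = np.random.default_rng(1).uniform(0.0, 4000.0, size=(4, 4))
codes = ptf_encode(frame, peak=4000.0)
print(np.max(np.abs(ptf_decode(codes, peak=4000.0) - frame)))
```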
116

Automatic error recovery for LR parsers in theory and practice

Dain, Julia Anne January 1989 (has links)
This thesis argues the need for good syntax error handling schemes in language translation systems such as compilers, and for the automatic incorporation of such schemes into parser-generators. Syntax errors are studied in a theoretical framework and practical methods for handling syntax errors are presented. The theoretical framework consists of a model for syntax errors based on the concept of a minimum prefix-defined error correction, a sentence obtainable from an erroneous string by performing edit operations at prefix-defined (parser-defined) errors. It is shown that for an arbitrary context-free language, it is undecidable whether a better than arbitrary choice of edit operations can be made at a prefix-defined error. For common programming languages, it is shown that minimum-distance errors and prefix-defined errors do not necessarily coincide, and that there exists an infinite number of programs that differ in a single symbol only; sets of equivalent insertions are exhibited. Two methods for syntax error recovery are presented. The methods are language independent and suitable for automatic generation. The first method consists of two stages, local repair followed if necessary by phrase-level repair. The second method consists of a single stage in which a locally minimum-distance repair is computed. Both methods are developed for use in the practical LR parser-generator yacc, requiring no additional specifications from the user. A scheme for the automatic generation of diagnostic messages in terms of the source input is presented. Performance of the methods in practice is evaluated using a formal method based on minimum-distance and prefix-defined error correction. The methods compare favourably with existing methods for error recovery.
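A hedged sketch of the local-repair stage of the first method: at a prefix-defined error, candidate single-token edits (insertions, deletions, substitutions) are tried, and a repair is accepted if the parser can then consume a few more tokens. The `parses_prefix` predicate and the check-ahead length are hypothetical stand-ins for the parser-generator machinery described in the thesis.

```python
from itertools import chain

def local_repair(tokens, error_pos, vocabulary, parses_prefix, check_ahead=3):
    """Try single-token edits at a prefix-defined error position.

    tokens:        list of input tokens.
    error_pos:     index at which the LR parser detected the error.
    vocabulary:    candidate terminal symbols for insertion/substitution.
    parses_prefix: hypothetical predicate: does the parser accept this prefix?
    Returns the first repaired token list that parses a few tokens past the
    error, or None if no single-token repair is found (phrase-level repair
    would then take over).
    """
    prefix, suffix = tokens[:error_pos], tokens[error_pos:]
    candidates = chain(
        (prefix + [t] + suffix for t in vocabulary),                 # insertions
        [prefix + suffix[1:]] if suffix else [],                     # deletion
        (prefix + [t] + suffix[1:] for t in vocabulary if suffix),   # substitutions
    )
    for repaired in candidates:
        if parses_prefix(repaired[:error_pos + 1 + check_ahead]):
            return repaired
    return None
```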
117

Adaptive wavelet image compression

Rajpoot, Nasir Mahmood January 2001 (has links)
In recent years, there has been an explosive increase in the amount of digital image data. The requirements for its storage and communication can be reduced considerably by compressing the data while maintaining their visual quality. The work in this thesis is concerned with the compression of still images using fixed and adaptive wavelet transforms. The wavelet transform is a suitable candidate for representing an image in a compression system, due to its being an efficient representation, having an inherent multiresolution nature, and possessing a self-similar structure which lends itself to efficient quantization strategies using zerotrees. The properties of wavelet transforms are studied from a compression viewpoint. A novel augmented zerotree wavelet image coding algorithm is presented whose compression performance is comparable to the best wavelet coding results published to date. It is demonstrated that a wavelet image coder performs much better on images consisting of smooth regions than on relatively complex images. The need thus arises to explore the wavelet bases whose time-frequency tiling is adapted to a given signal, in such a way that the resulting waveforms resemble closely those present in the signal and consequently result in a sparse representation, suitable for compression purposes. Various issues related to a generalized wavelet basis adapted to the signal or image contents, the so-called best wavelet packet basis, and its selection are addressed. A new method for wavelet packet basis selection is presented, which aims to unite the basis selection process with quantization strategy to achieve better compression performance. A general zerotree structure for any arbitrary wavelet packet basis, termed the compatible zerotree structure, is presented. The new basis selection method is applied to compatible zerotree quantization to obtain a progressive wavelet packet coder, which shows significant coding gains over its wavelet counterpart on test images of diverse nature.
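To make the multiresolution structure concrete, here is a minimal one-level 2-D Haar decomposition in NumPy; a zerotree coder then exploits the parent-child relationship between a coefficient in a coarse subband and the co-located coefficients at finer scales. The Haar filter is only the simplest illustrative choice, not the basis used in the thesis.

```python
import numpy as np

def haar2d_level(image: np.ndarray):
    """One level of a 2-D Haar wavelet transform.

    Returns (LL, LH, HL, HH): the approximation subband plus three detail
    subbands, each a quarter of the input size. Image dimensions are assumed
    to be even.
    """
    a = image[0::2, :] + image[1::2, :]      # sums of row pairs (unnormalised)
    d = image[0::2, :] - image[1::2, :]      # differences of row pairs
    LL = (a[:, 0::2] + a[:, 1::2]) / 4.0     # 2x2 block means
    LH = (a[:, 0::2] - a[:, 1::2]) / 4.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 4.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return LL, LH, HL, HH

# Smooth regions push energy into LL, which is what makes zerotree coding effective.
img = np.outer(np.linspace(0, 255, 8), np.ones(8))
LL, LH, HL, HH = haar2d_level(img)
print(np.abs(LL).sum(), np.abs(HH).sum())
```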
118

The centralization of scientific computation in Britain, 1925-1955

Croarken, Mary G. January 1985 (has links)
This study examines the organization of scientific computation in Britain over the period 1925-1955. At the beginning of the twentieth century most scientific computation was performed by individuals using logarithm tables and slide rules. By the late 1920s desk calculators and accounting machines had become common computing tools. This thesis looks at the adoption of mechanized computing methods by scientists and traces the centralization of computing effort which subsequently took place. Chapter 1 identifies nine criteria which are used to analyse the individual computing centres discussed in the thesis, and form a basis for the study. Chapter 1 also looks at scientific computation at the beginning of the twentieth century and gives relevant background information for the remainder of the thesis. The bulk of the thesis, chapters 2 to 6, describes the computing centres which emerged during 1925-1955. The description begins by looking at L.J. Comrie's work at the Nautical Almanac Office in the late 1920s and goes on to consider the Scientific Computing Service, the Cambridge Mathematical Laboratory, the Admiralty Computing Service and the NPL Mathematics Division. The NPL Mathematics Division is of particular importance as it was set up, in 1945, to act as a national computing centre and represents the pinnacle of centralized computing in Britain. Similar events in the United States and Europe are described in chapter 7 and are compared and contrasted with centralized computation in Britain. The Unesco International Computation Centre is also described in chapter 7 and some conclusions about the way in which computation in Britain was centralized are given.
119

The minimal continuous semantics of the lambda-calculus

Welch, Peter Hug January 1974 (has links)
A semantics of the λ-calculus is presented which is different from any of the lattice models of Scott so far analysed, and from the term models of Morris and Barendregt. The original motivation was to do for λ-expressions what Scott had done for flow-diagrams, namely construct a 'syntactic' inverse limit lattice, E∞, in which to represent them. A further motivation was to abstract out the essential notion ("continuous semantics") behind the theorem that Wadsworth proved concerning some of Scott's models, namely that the meaning of a λ-expression can be found as the (continuous) limit of its approximate reductions. That this idea is relevant to E∞ can be seen since the coordinates of the image of a λ-expression in E∞ form a subset of its approximate reductions. To establish the basic fact of β-modelship about E∞, it has to be shown that Wadsworth's theorem does indeed apply - i.e. that the E∞-coordinates provide a sufficiently complete subset of all the possible approximate reductions. Translating this back to the λ-calculus gives an algorithm ("i'th reductions") for producing β-reductions which must be proven 'correct' - i.e. that it goes sufficiently far in all cases: this notion is christened "weak completeness". The i'th reductions are generalised to a non-deterministic evaluation mechanism called "inside-out reductions", which behaves in almost the opposite manner to Church's "standard reductions". This generalisation is not too drastic since it is easy to show that a weak completeness result for one implies the same for the other. The weak completeness of inside-out reductions is established. The E∞-semantics is a 'pure' β-model in that the only η-reductions modelled are those for which there are equivalent β-reductions - otherwise they are not even comparable. Further, λ-expressions with a normal form are maximal and isolated in E∞, unsolvable expressions are ⊥, the fixed-point combinators {Yi | i > 0} are equivalenced, and the model itself is substitutive, normal, solvable and implies Morris' "extensional equivalence". Finally, it is the minimal continuous semantics in the sense that Wadsworth's theorem is true in another semantics if and only if it is continuously derivable from E∞.
120

Colour image quantisation and coding for optimal perception

McColl, Roderick William January 1991 (has links)
Once a digital image is processed in some way and the reconstruction is compared to the original, the final arbiter of reconstruction quality is the human to whom the images are presented. The research presented here is concerned with the development of schemes for the quantisation of colour images and for the encoding of colour images for transmission, with the goal of minimising the perceived image distortion rather than minimising a traditional error signal statistic. In order to quantise colour images with minimum perceived distortion, a colour space is sought in which Euclidean distances correspond linearly to perceived colour difference. The response of the visual system to colour and colour difference is investigated. A new quantisation scheme is developed and implemented to achieve a colour image compression ratio of approximately 6:1. Three variations on the basic quantiser algorithm are considered and results of applying each variation to three test images are presented. Two-component encoding of colour images for low bit-rate transmission is investigated. A new method of encoding the contents of the image regions following contour extraction is developed. Rather than using parametric surface descriptions, a quad-tree is constructed and a simple measure of perceived image contrast threshold is used to determine the transmitted data. Arithmetic entropy coding is used to discard statistical redundancy in the signal. A colour wash process recreates the colour in each region. Implementation details are presented and several examples are given to illustrate differing contrast thresholds with compression rates of up to 50:1. An analysis of the textures in certain regions of the test images leads to the development of an algorithm to synthesise the appearance of the textures following extraction of a small block which may be repeated across the region, leading to dramatic compression rates in some instances.
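A generic illustration of the underlying idea: quantise in a colour space where Euclidean distance better matches perceived colour difference. The sketch below converts to CIELAB and uses k-means as a stand-in palette builder; the thesis develops its own quantisation scheme, which is not reproduced here.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb
from sklearn.cluster import KMeans

def quantise_lab(rgb_image: np.ndarray, n_colours: int = 32) -> np.ndarray:
    """Quantise an RGB image (floats in [0, 1]) using a palette chosen in CIELAB.

    k-means here is only a stand-in for a perceptually driven quantiser:
    clustering in CIELAB means palette errors approximate perceived differences.
    """
    lab = rgb2lab(rgb_image).reshape(-1, 3)
    km = KMeans(n_clusters=n_colours, n_init=4, random_state=0).fit(lab)
    palette = km.cluster_centers_                    # palette entries in CIELAB
    quantised_lab = palette[km.labels_].reshape(rgb_image.shape)
    return np.clip(lab2rgb(quantised_lab), 0.0, 1.0)

# Example on a small synthetic image.
img = np.random.default_rng(2).uniform(size=(64, 64, 3))
out = quantise_lab(img, n_colours=16)
print(out.shape, out.dtype)
```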
