31

Computing chromatic adaptation

Süsstrunk, Sabine January 2005 (has links)
No description available.
32

Computer image processing with application to chemical engineering

Bishop, Nicholas E. January 1972 (has links)
A literature survey covers a wide range of picture processing topics from the general problem of manipulating digitised images to the specific task of analysing the shape of objects within an image field. There follows a discussion and development of theory relating to this latter task. A number of shape analysis techniques are inapplicable or computationally untenable when applied to objects containing concavities. A method is proposed and implemented whereby any object may be divided into convex components, the algebraic sum of which constitutes the original. These components may be related by a tree structure. It is observed that properties based on integral measurements, e.g. area, are less susceptible to quantisation errors than those based on linear and derivative measurements such as diameters and slopes. A set of moments invariant with respect to size, position and orientation is derived and applied to the study of the above convex components. An outline of possible further developments is given.
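The size-, position- and orientation-invariant moments mentioned here are in the spirit of Hu's classical moment invariants. The following sketch is an illustrative reconstruction, not the thesis's own derivation: it computes the first two Hu invariants of a binary shape from central moments normalised for translation and scale.

```python
import numpy as np

def hu_invariants(image):
    """Compute the first two Hu moment invariants of a binary image.

    These are invariant to translation, scale and rotation, matching
    the kind of integral shape measures the abstract describes.
    """
    ys, xs = np.nonzero(image)
    m00 = len(xs)                      # area (zeroth moment)
    xbar, ybar = xs.mean(), ys.mean()  # centroid removes translation

    def mu(p, q):                      # central moment mu_pq
        return ((xs - xbar) ** p * (ys - ybar) ** q).sum()

    def eta(p, q):                     # scale-normalised moment eta_pq
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    # First two Hu invariants (rotation-invariant combinations)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# A filled rectangle yields the same invariants at any position or scale
img = np.zeros((64, 64), dtype=np.uint8)
img[10:30, 20:40] = 1
print(hu_invariants(img))
```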
33

Implementation of computer visualisation in UK planning

An, Kyungjin January 2012 (has links)
Within the processes of public consultation and development management, planners are required to consider spatial information, appreciate spatial transformations and future scenarios. In the past, conventional media such as maps, plans, illustrations, sections, and physical models have been used. These traditional visualisations are highly abstract, sometimes difficult for lay people to understand, and inflexible in terms of the range of scenarios which can be considered. Yet due to technical advances and falling costs, the potential for computer-based visualisation has much improved, and it has been increasingly adopted within the planning process. Despite the growth in this field, insufficient consideration has been given to the possible weaknesses of computerised visualisations. Reflecting this lack of research, this study critically evaluates the use and potential of computerised visualisation within this process. The research is divided into two components: case study analysis and reflections of the author following his involvement in the design and use of visualisations in a series of planning applications; and in-depth interviews with experienced practitioners in the field. Based on a critical review of existing literature, this research explores in particular the issues of credibility, realism and costs of production. The research findings illustrate the importance of the credibility of visualisations, a topic given insufficient consideration within the academic literature. Whereas the realism of visualisations has been the focus of much previous research, the results of the case studies and interviews with practitioners undertaken in this research suggest a photo-realistic level of detail may not be required as long as the observer considers the visualisations to be a credible reflection of the underlying reality. Although visualisations will always be a simplification of reality and their level of realism is subjective, there is still potential for developing guidelines or protocols for image production based on commonly agreed standards. In the absence of such guidelines there is a danger that scepticism about the credibility of computer visualisations will prevent the approach being used to its full potential. These findings suggest there needs to be a balance between scientific protocols and artistic licence in the production of computer visualisations. In order to be sufficiently credible for use in decision making within the planning process, the production of computer visualisations needs to follow a clear methodology and scientific protocols set out in good practice guidance published by professional bodies and governmental organisations.
34

Improved haptic interaction for large workspace, multi-sensory, dynamic virtual environments

Barrow, Alastair January 2010 (has links)
Virtual Reality (VR) is a rapidly advancing scientific field which enables humans to experience environments other than that which they physically inhabit. Humans use all their senses to interact in the real world and there should be no difference when using VR. This thesis explores the current state-of-the-art in Multi-Sensory Virtual Reality (MSVR) and presents new techniques for improving the level of interaction and realism in touch-enabled MSVR. It is shown that, of the three senses commonly included in MSVR (vision, audition and haptics, i.e. touch), haptics is the least well represented. Further, it is observed that haptic interaction is particularly lacking in two areas: natural object manipulation and large workspace interaction. Object manipulation is limited by the number of contact points a haptic device can provide and there are both hardware and software challenges related to this. It is also limited by the realism of simulated object motion, which is more complex for haptics than for purely visual-auditory simulations. A novel haptic rendering algorithm, called the xFCA, has been designed to improve multi-finger manipulation of arbitrarily shaped objects. Also, a software platform known as MUSI has been developed which integrates the xFCA into a dynamic rigid-body simulator to allow the natural manipulation of virtual objects. The challenges in the development of MUSI, along with its advantages and limitations, are discussed. Two new approaches to increasing the workspace of haptic devices have been investigated. The first, a novel haptic rendering technique which provides force feedback related to velocity, is applied to a virtual shopping trolley. The second, a novel method of chaining devices together, is used to create a multi-finger haptic interface for both large and fast movements. Finally, both systems have been integrated into an MSVR simulator and the results of this are also discussed.
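The thesis-specific algorithms (xFCA, MUSI) are not detailed in the abstract, but the idea of force feedback tied to velocity can be illustrated generically. The sketch below is an assumption-laden toy: a viscous-drag force model of the sort a large-workspace device might render for the virtual shopping trolley, with made-up constants and function names, not the thesis's actual rendering technique.

```python
import numpy as np

DAMPING = 2.5     # N·s/m, hypothetical viscous coefficient
MAX_FORCE = 8.0   # N, clamp to a typical desktop haptic device limit

def velocity_feedback_force(velocity):
    """Oppose the hand's velocity with a viscous drag force,
    saturated so the device is never commanded beyond its limit."""
    force = -DAMPING * np.asarray(velocity, dtype=float)
    norm = np.linalg.norm(force)
    if norm > MAX_FORCE:          # saturate to protect the device
        force *= MAX_FORCE / norm
    return force

# One step of a (hypothetical) 1 kHz haptic servo loop
hand_velocity = np.array([0.4, 0.0, -0.1])   # m/s, from device encoders
print(velocity_feedback_force(hand_velocity))
```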
35

Extensions of the hit-or-miss transform for feature detection in noisy images and a novel design tool for estimating its parameters

Murray, Paul January 2012 (has links)
The work presented in this thesis focuses on extending a transform from Mathematical Morphology, known as the Hit-or-Miss transform (HMT), in order to make it more robust for detecting features of interest in the presence of noise in digital images. The extension that is described here requires that a single parameter is determined for correct functionality. A novel design tool which allows this parameter to be accurately estimated is proposed as part of this work. An efficient method for computing the extended transform is also presented. The HMT is a well-known morphological transform that is capable of identifying features in digital images. When image features contain noise, texture or some other distortion, the HMT may fail. Various researchers have extended the HMT in different ways to make it more robust to noise. The most successful, and most recent, extensions of the HMT for noise robustness use rank-order operators in place of standard morphological erosions and dilations. A major issue with most of these methods is that no technique is provided for calculating the parameters that are introduced to generalise the HMT, and, in most cases, these parameters are determined empirically. In this thesis, a new conceptual interpretation of the HMT is presented which uses percentage occupancy (PO) functions to implement the erosion and dilation operators of the HMT. When implemented in this way, the strictness of these PO functions can easily be relaxed in order to allow slacker fitting of the structuring elements. Relaxing the strict conditions of the transform is shown to improve the performance of the routine when processing noisy data. This thesis also introduces a novel design tool which is derived directly from the operators that are used to implement the aforementioned PO functions. This design tool can be used to determine a suitable value for the only parameter in the proposed extension of the HMT. Further, it can be used to estimate parameters for other generalisations of the HMT that have been described in the literature in order to improve their noise robustness. The power of the proposed technique is demonstrated and tested using sets of very noisy images. Further, a number of comparisons are performed in order to validate the method that is introduced in this work against the most recent extensions of the HMT. One drawback with this method is that a direct implementation of the technique is computationally expensive. However, it is possible to implement the proposed method using rank-order filters in place of the percentage occupancy functions. Rank-order filters are used in a multitude of image processing tasks. Their application can range from simple pre-processing tasks which aim to reduce or remove noise, to more complex problems where such filters can be used in combination to detect and segment image features. There is, therefore, a need to develop fast algorithms to compute the output of this class of filter in general. A number of methods for efficiently computing the output of specific rank-order filters have been presented over the years. For example, numerous fast algorithms exist that can be used for calculating the output of the median filter. Fast algorithms for calculating morphological erosions and dilations - which, like the median filter, are a special case of the more general rank-order filter - have also been proposed.
In this thesis, these techniques are extended and combined such that it is possible to efficiently compute any rank, using any arbitrarily shaped window, making it possible to quickly compute the output of any rank order filter. The fast algorithm which is described is compared to an optimised technique for computing the output of this class of filter, and significant gains in speed are demonstrated when using the proposed technique. Further, it is shown that this efficient filtering algorithm can be used to produce an extremely fast implementation of the generalised HMT that is described in this work. The fast generalised HMT is compared with a number of other extensions and generalisations of the HMT that have been proposed in the literature over the years.
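The classical HMT marks pixels where a foreground structuring element fits entirely inside the object and a background element fits entirely in the background. The sketch below illustrates the percentage-occupancy relaxation described above, with a single `occupancy` parameter standing in for the parameter the design tool estimates; the thesis's exact formulation may differ from this simplified variant.

```python
import numpy as np
from scipy.ndimage import correlate

def relaxed_hmt(image, fg_se, bg_se, occupancy=0.8):
    """Hit-or-Miss transform relaxed by a percentage-occupancy threshold.

    `image` is binary (0/1); `fg_se`/`bg_se` are binary structuring
    elements for foreground and background. With occupancy=1.0 this
    reduces to the strict classical HMT; lower values tolerate noise.
    """
    img = image.astype(float)
    # Fraction of foreground SE positions covered by object pixels
    fg_frac = correlate(img, fg_se.astype(float), mode='constant') / fg_se.sum()
    # Fraction of background SE positions covered by background pixels
    bg_frac = correlate(1.0 - img, bg_se.astype(float), mode='constant') / bg_se.sum()
    return (fg_frac >= occupancy) & (bg_frac >= occupancy)

# Example: detect a 3x3 bright blob on a dark ring, tolerating 20% noise
fg = np.ones((3, 3), dtype=np.uint8)
bg = np.pad(np.zeros((3, 3), dtype=np.uint8), 1, constant_values=1)
noisy = (np.random.rand(64, 64) < 0.05).astype(np.uint8)
noisy[20:23, 20:23] = 1
print(np.argwhere(relaxed_hmt(noisy, fg, bg, occupancy=0.8)))
```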
36

Real-time people tracking in a camera network

Limprasert, Wasit January 2012 (has links)
Visual tracking is a fundamental key to the recognition and analysis of human behaviour. In this thesis we present an approach to track several subjects using multiple cameras in real time. The tracking framework employs a numerical Bayesian estimator, also known as a particle filter, which has been developed for parallel implementation on a Graphics Processing Unit (GPU). In order to integrate multiple cameras into a single tracking unit we represent the human body by a parametric ellipsoid in a 3D world. The elliptical boundary can be projected rapidly, several hundred times per subject per frame, onto any image for comparison with the image data within a likelihood model. Adding variables to encode visibility and persistence into the state vector, we tackle the problems of distraction and short-period occlusion. However, subjects may also disappear for longer periods due to blind spots between cameras' fields of view. To recognise a desired subject after such a long period, we add coloured texture to the ellipsoid surface, which is learnt and retained during the tracking process. This texture signature improves the recall rate from 60% to 70-80% when compared to state-only data association. Compared to a standard Central Processing Unit (CPU) implementation, there is a significant speed-up ratio.
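A minimal, generic predict-weight-resample step conveys the numerical Bayesian estimator (particle filter) described above. The GPU parallelisation and ellipsoid projection likelihood are not reproduced here; the constant-position motion model and the toy Gaussian likelihood are placeholder assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, motion_std, likelihood):
    """One predict-weight-resample step over a set of 3D state hypotheses."""
    # Predict: diffuse each hypothesis with Gaussian motion noise
    particles = particles + np.random.normal(0, motion_std, particles.shape)
    # Update: weight each hypothesis by how well it explains the images
    weights = weights * likelihood(particles)
    weights /= weights.sum()
    # Resample: systematic resampling, proportional to weight
    n = len(weights)
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)

# Toy usage: track a static target at (1, 2, 0) with a Gaussian likelihood
target = np.array([1.0, 2.0, 0.0])
lik = lambda p: np.exp(-0.5 * np.sum((p - target) ** 2, axis=1))
parts = np.random.randn(500, 3)
w = np.full(500, 1.0 / 500)
for _ in range(20):
    parts, w = particle_filter_step(parts, w, 0.05, lik)
print(parts.mean(axis=0))   # converges near the target
```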
37

Design considerations for an interactive computer graphic facility

Corbett, Christopher January 1978 (has links)
No description available.
38

Intelligent side information generation in distributed video coding

Akinola, Mobolaji January 2015 (has links)
Distributed video coding (DVC) reverses the traditional coding paradigm of complex encoders allied with basic decoding to one where the computational cost is largely incurred by the decoder. This is attractive as the proven theoretical work of Wyner-Ziv (WZ) and Slepian-Wolf (SW) shows that the performance of such a system should be exactly the same as that of a conventional coder. Despite the solid theoretical foundations, current DVC qualitative and quantitative performance falls short of existing conventional coders and there remain crucial limitations. A key constraint governing DVC performance is the quality of side information (SI), a coarse representation of original video frames which are not available at the decoder. Techniques to generate SI have usually been based on linear motion compensated temporal interpolation (LMCTI), though these do not always produce satisfactory SI quality, especially in sequences exhibiting non-linear motion. This thesis presents an intelligent higher order piecewise trajectory temporal interpolation (HOPTTI) framework for SI generation with original contributions that afford better SI quality in comparison to existing LMCTI-based approaches. The major elements in this framework are: (i) a cubic trajectory interpolation algorithm that significantly improves the accuracy of motion vector estimation; (ii) an adaptive overlapped block motion compensation (AOBMC) model which reduces both blocking and overlapping artefacts in the SI emanating from the block matching algorithm; (iii) the development of an empirical mode switching algorithm; and (iv) an intelligent switching mechanism to construct SI by automatically selecting the best macroblock from the intermediate SI generated by the HOPTTI and AOBMC algorithms. Rigorous analysis and evaluation confirms that significant quantitative and perceptual improvements in SI quality are achieved with the new framework.
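The contrast between linear MCTI and higher-order trajectory interpolation can be shown with a toy example: fitting a cubic through a block's positions in four decoded key frames and evaluating it at the missing (Wyner-Ziv) frame. This is an assumed, simplified reading of the idea, not the HOPTTI algorithm itself.

```python
import numpy as np

def cubic_trajectory_si_position(track_times, track_positions, t_si):
    """Estimate where a block lies in the missing frame by fitting a
    cubic trajectory through its positions in four key frames.

    Linear MCTI assumes constant velocity between two frames; a cubic
    fit can follow accelerating or curved (non-linear) motion.
    """
    coeffs_x = np.polyfit(track_times, track_positions[:, 0], 3)
    coeffs_y = np.polyfit(track_times, track_positions[:, 1], 3)
    return np.array([np.polyval(coeffs_x, t_si), np.polyval(coeffs_y, t_si)])

# A block accelerating to the right, observed in four decoded key frames
times = np.array([0, 1, 3, 4])   # frame indices (frame 2 is the WZ frame)
pos = np.array([[10, 50], [12, 50], [22, 51], [30, 52]], dtype=float)
print(cubic_trajectory_si_position(times, pos, t_si=2))  # ~ (16.0, 50.3)
```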
39

Context-based video coding

Vigars, Richard George January 2015 (has links)
Although the mainstream of video coding technology continues to improve and iterate on previous generations, it seems clear that consumer demands on video content will continue to outstrip the savings made by better codecs. This is, in part, because mainstream codecs are rooted in an established paradigm that uses residual coding to maximise PSNR at a given bit rate. However, it is well known that PSNR as a metric for visual quality does not correlate well with viewers' subjective opinions. In recent years, research into residual-less approaches to video coding has become popular. The aim is to achieve the best possible perceptual quality, irrespective of the PSNR with respect to the original. This allows the use of more advanced motion models, tuned to specific content within the video. This thesis proposes such an approach. Specifically, the motion of rigidly textured, planar regions is modelled using a perspective model, so that the decoder can interpolate these regions directly from reference frames. Prior knowledge of the scene is employed to condition the motion estimation process, in the form of keyframe models marked up under supervision. The motion estimation algorithm is able to compute planar motion parameters independently of the motion of foreground objects, and is thus able to facilitate the detection of non-conforming regions. These algorithms are integrated with a host codec, which codes non-planar regions as normal. A subjective trial shows that this hybrid codec is able to achieve significant bit rate savings over the host codec, while maintaining quality.
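The perspective motion model for a planar region corresponds to a homography between frames, so the decoder can synthesise the region by warping the reference frame with no coded residual. The sketch below illustrates this with OpenCV; the hand-picked point correspondences are stand-ins for the thesis's conditioned motion estimation, not its actual method.

```python
import numpy as np
import cv2  # opencv-python

def interpolate_planar_region(reference_frame, src_quad, dst_quad):
    """Reconstruct a rigidly textured planar region in the current frame
    by warping the reference frame under a perspective (homography)
    motion model -- no residual is coded for the region."""
    H, _ = cv2.findHomography(src_quad, dst_quad)  # 3x3 perspective model
    h, w = reference_frame.shape[:2]
    return cv2.warpPerspective(reference_frame, H, (w, h))

# Toy example: a plane seen 10 px to the right with slight perspective skew
ref = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
src = np.float32([[50, 50], [200, 50], [200, 180], [50, 180]])
dst = np.float32([[60, 52], [210, 48], [212, 182], [58, 178]])
print(interpolate_planar_region(ref, src, dst).shape)
```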
40

Three-dimensional imaging and analysis of the morphology of oral structures from co-ordinate data

Jovanovski, Vladimir January 1999 (has links)
No description available.
