1

Gaze location prediction and enhanced error resilience

Cheng, Qin January 2014
The sensitivity of the human visual system decreases dramatically with increasing distance from the fixation location in a video frame. Accurate prediction of a viewer's gaze location has the potential to improve bit allocation, rate control, error resilience and quality evaluation in video compression. Commercially, delivery of football video content is of great interest due to the very high number of consumers. In this thesis we propose a gaze location prediction system for high definition broadcast football video. The proposed system uses knowledge about the context, extracted through analysis of a gaze tracking study that we performed, in order to build a suitable prior map. We further classify the complex context into different categories through shot classification, thus allowing our model to pre-learn the task pertinence of each object category and build the prior map automatically. We thus avoid the limitation of assigning the viewers a specific task, allowing our gaze prediction system to work under free-viewing conditions. Bayesian integration of bottom-up features and top-down priors is finally applied to predict the gaze locations. Results show that the prediction performance of the proposed model is better than that of other top-down models which we adapted to this context. The next part of this thesis focuses on enhancement of error resilience in the video transmission chain. Video transmission over error-prone channels can suffer from packet losses when channel conditions are not favourable. As a result, the distortion/quality of the decoded video at the receiver often differs from that of the encoded video at the transmitter. Accurate estimation of the end-to-end distortion (the distortion due to compression and packet loss after decoder error concealment) at the encoder/transmitter can lead to more efficient and effective application of error resilience (e.g. selective intra coding, forward error correction, etc.). The proposed end-to-end distortion estimation model incorporates a probabilistic estimation of the distortion introduced by advanced error concealment methods, which are often used by decoders to mitigate the effect of packet loss. The proposed model offers significant improvements in estimation accuracy relative to existing models that only consider previous-frame copy as the concealment strategy of the decoder. The final goal is to foveate the end-to-end distortion model using the predicted gaze locations to provide the optimal subjective quality of the decoded video.
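The Bayesian integration step described above can be pictured as a pointwise fusion of two probability maps over the frame. Below is a minimal sketch of that generic step, assuming both maps are non-negative 2-D arrays; the function names and the simple product-and-normalise fusion are illustrative, not the thesis's exact formulation (which additionally builds context-specific prior maps from the gaze-tracking study).

```python
import numpy as np

def fuse_maps(bottom_up, top_down_prior, eps=1e-12):
    """Bayesian fusion: treat the bottom-up saliency map as a likelihood
    and the context-derived map as a prior; the posterior over gaze
    locations is their normalised pointwise product."""
    likelihood = bottom_up / (bottom_up.sum() + eps)
    prior = top_down_prior / (top_down_prior.sum() + eps)
    posterior = likelihood * prior
    return posterior / (posterior.sum() + eps)

def predict_gaze(posterior):
    """Most probable gaze location as (row, col)."""
    return np.unravel_index(np.argmax(posterior), posterior.shape)
```

In this reading, the shot classification stage selects which prior map applies to the current frame before the fusion step runs.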
2

Parallel two pass motion estimation in video compression

Wu, Yunsong January 2008
No description available.
3

Wide area coverage planning and dimensioning in densified SFN DVB-H networks

Zhang, Chunhui January 2006
No description available.
4

Electron-optical developments on a new cathode ray tube

Zollman, Peter Martin January 1962
An account is given of the writer's contribution to the development of the Gabor-N.R.D.C. flat television tube. The author's work had two main objectives: 1) the development of a novel electron lens, the electrostatic collimator, with a view to replacing and eliminating from the tube its only non-electrostatic component: the magnetic collimator; 2) the study and introduction of printed-circuit-type electron lenses, i.e. lenses having multipotential boundaries, as distinct from the more common equipotential-boundary lenses. By combining 1) and 2), attempts were made to develop a new, purely electrostatic lens, the integrated reversor-collimator. Printed-circuit lenses were also introduced as electric picture alignment controls. The states of the project at the beginning and at the end of the writer's work are analysed in the Introduction and Conclusion of this thesis, respectively.
5

An exploration of design strategies and methods in the development of digital interactive television for older people

Rice, Mark David January 2009
Amongst a changing digital landscape, the proliferation and diversification of technology in the home has meant that many underlying principles taken from the workplace now require new perspectives, in order to accommodate the private and discursive practices associated with domestic living. This raises significant issues in the elicitation of reliable and appropriate feedback from older adults, who have not grown up with the same familiarity and understanding of present-day user interfaces as younger generations. These issues concern not only the development of ill-defined technologies (and their potential functions) more suitable to this age group, but also those people who lack the experience and prior knowledge to easily identify, understand or discuss the potential uses of new systems. This thesis contributes to the challenges of engaging reluctant and inexperienced older people in the development of new and emerging domestic technologies, so that applications are more appropriately designed for this widely diverse and heterogeneous user group. Focusing on the digital interactive television (DITV) platform, five interrelated studies are presented within the requirements gathering and early evaluation phases. As a starting point, having identified the constraints of traditional interviews and focus groups, the research explores a series of methods and techniques that aim to bridge disparities in conceptual thinking by allowing older users to understand the potential utility of digital technology. Using visually creative ways to articulate and self-generate ideas, these solutions are first proposed through the use of Forum Theatre, and later refined through a small set of paper prototyping sessions. As an outcome of early research findings, a more sustained series of high-fidelity prototypes was developed to investigate more meaningful navigation approaches in support of social application areas. The results illustrate important limitations in observing and evaluating user behaviour in an attempt to identify the potential for more tangible interactive concepts based on the theme of continuity. Drawing on these conclusions, and having successfully demonstrated strengths in the methods applied, this thesis argues that further research is required to establish a holistic framework for working with older adults. A number of key areas for future research have been identified, including the possibilities for building on the interface concepts developed using alternative state-of-the-art devices.
6

Complexity management of H.264/AVC video compression

Kannangara, Chaminda Sampath January 2006
The H.264/AVC video coding standard offers significantly improved compression efficiency and flexibility compared to previous standards. However, the high computational complexity of H.264/AVC is a problem for codecs running on low-power handheld devices and general purpose computers. This thesis presents new techniques to reduce, control and manage the computational complexity of an H.264/AVC codec. A new complexity reduction algorithm for H.264/AVC is developed. This algorithm predicts "skipped" macroblocks prior to motion estimation by estimating a Lagrange rate-distortion cost function. Complexity savings are achieved by not processing the macroblocks that are predicted as "skipped". The Lagrange multiplier is adaptively modelled as a function of the quantisation parameter and video sequence statistics. Simulation results show that this algorithm achieves significant complexity savings with a negligible loss in rate-distortion performance. The complexity reduction algorithm is further developed to achieve complexity-scalable control of the encoding process. The Lagrangian cost estimation is extended to incorporate computational complexity. A target level of complexity is maintained by using a feedback algorithm to update the Lagrange multiplier associated with complexity. Results indicate that scalable complexity control of the encoding process can be achieved whilst maintaining near optimal complexity-rate-distortion performance. A complexity management framework is proposed for maximising the perceptual quality of coded video in a real-time, processing-power constrained environment. A real-time frame-level control algorithm and a per-frame complexity control algorithm are combined in order to manage the encoding process such that a high frame rate is maintained without significantly losing frame quality. Subjective evaluations show that the managed complexity approach results in higher perceptual quality compared to a reference encoder that drops frames in computationally constrained situations. These novel algorithms are likely to be useful in implementing real-time H.264/AVC standard encoders in computationally constrained environments such as low-power mobile devices and general purpose computers.
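As a rough illustration of the skip-prediction idea, the sketch below estimates the Lagrangian cost of coding a macroblock as SKIP (whose rate is close to zero, so the cost reduces to the prediction distortion) and bypasses motion estimation when that cost falls below a λ-scaled threshold. The λ(QP) relation is the widely quoted H.264 reference-software model; the thesis adapts λ further using sequence statistics, and the threshold and function names here are hypothetical.

```python
import numpy as np

def h264_lambda(qp):
    # Common H.264 reference-software model of the Lagrange multiplier;
    # the thesis refines this adaptively from video sequence statistics.
    return 0.85 * 2.0 ** ((qp - 12) / 3.0)

def predict_skip(curr_mb, skip_pred_mb, qp, threshold):
    """Predict SKIP before motion estimation.

    curr_mb / skip_pred_mb: 16x16 luma blocks (current block and its
    SKIP-mode prediction). With rate ~ 0 bits, J_skip = D + lambda * R
    reduces to the distortion D of the skip prediction; the lambda-scaled
    threshold stands in for the full Lagrangian comparison.
    """
    d = np.sum((curr_mb.astype(np.int64) - skip_pred_mb.astype(np.int64)) ** 2)
    return d < threshold * h264_lambda(qp)  # True => skip ME for this MB
```

Macroblocks that fail the test proceed through the normal motion-estimation path, so a misprediction costs rate-distortion performance rather than correctness.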
7

Magic box : the future of television in the digital age

Kaushal, Rakesh January 2006
This thesis explores the recent shift to digital television in order to gauge the full importance of the development. It will examine the range of central issues associated with the technology and explain its significance on a number of levels. The project begins by outlining why television has become a core social and civic resource before reviewing the angles from which it has been studied. The second chapter details the methods that have defined the project and the steps involved within the research process. The history of the medium is then detailed to show the actors and organisations responsible for its development and the ideological values they have drawn upon. Digitalisation is then outlined so that the technological differences with analogue are made clear. A chapter on theory follows which attempts to place these insights into a framework so that the shift and its overall importance can be understood. Government policy is next considered as the thesis highlights the political plans that have been devised for digital television and the objectives set out for it. A content study then attempts to compare the programming patterns of the current television system with those of the pre-multichannel era. This chapter aims to point out any significant differences within the content profiles of the two systems. The thesis concludes by drawing all of this together to show the consequences of a shift to digital broadcasting and the ideas that have directed this change.
8

Concurrency in auditory displays for connected television

Hinde, Alistair F. January 2016
Many television experiences depend on users being both willing and able to visually attend to screen-based information. Auditory displays offer an alternative method for presenting this information and could benefit all users. This thesis explores how this may be achieved through the design and evaluation of auditory displays involving varying degrees of concurrency for two television use cases: menu navigation and presenting related content alongside a television show. The first study, on the navigation of auditory menus, looked at onset asynchrony and word length in the presentation of spoken menus. The effects of these on task duration, accuracy and workload were considered. Onset asynchrony and word length both caused significant effects on task duration and accuracy, while workload was only affected by onset asynchrony. An optimum asynchrony was identified, which was the same for both long and short words, but better performance was obtained with the shorter words, which no longer overlapped at that asynchrony. The second experiment investigated how disruption, workload, and preference are affected when presenting additional content accompanying a television programme. The content took the form of sound from different spatial locations or text on a smartphone, and the programme's soundtrack was either modified or left unaltered. Leaving the soundtrack unaltered or muting it negatively impacted user experience. Removing the speech from the television programme and presenting the secondary content as sound from a smartphone was the best auditory approach. This was found to compare well with the textual presentation, resulting in less visual disruption and imposing a similar workload. Additionally, the thesis reviews the state-of-the-art in television experiences and auditory displays. The human auditory system is introduced and important factors in the concurrent presentation of speech are highlighted. Conclusions about the utility of concurrency within auditory displays for television are made and areas for further work are identified.
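For concreteness, the sketch below shows what "onset asynchrony" means for the menu stimuli: each spoken item starts a fixed interval after the previous one, so items overlap whenever a word is longer than the asynchrony. It assumes the words are already loaded as mono sample arrays; all names are illustrative rather than taken from the thesis.

```python
import numpy as np

def mix_menu(words, asynchrony_s, sample_rate=48000):
    """Mix spoken menu items with a fixed onset asynchrony.

    words: list of 1-D NumPy sample arrays, one per spoken menu item.
    Items overlap whenever a word's duration exceeds the asynchrony.
    """
    step = int(round(asynchrony_s * sample_rate))
    total = max(i * step + len(w) for i, w in enumerate(words))
    mix = np.zeros(total)
    for i, w in enumerate(words):
        mix[i * step : i * step + len(w)] += w  # overlap-add each word
    return mix
```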
9

Content-based motion compensation and its application to video compression

Servais, Marc January 2006
Content-based approaches to motion compensation offer the advantage of being able to adapt to the spatial and temporal characteristics of a scene. Three such motion compensation techniques are described in detail, with one of the methods being integrated into a video codec. The first approach operates by performing spatio-temporal segmentation of a frame. A split and merge approach is then used to ensure that motion characteristics are relatively homogeneous within each region. Region shape information is coded (by approximating the boundaries with polygons) and a triangular mesh is generated within each region. Translational and affine motion estimation are then performed on each triangle within the mesh. This approach offers an improvement in quality when compared to a regular mesh of the same size. However, it is difficult to control the number of triangles, since this depends on the segmentation and polygon approximation stages. As a result, this approach is difficult to integrate into a rate-distortion framework. The second method involves the use of variable-size blocks, rather than a triangular mesh. Once again, a frame is first segmented into regions of homogeneous motion, which are then approximated with polygons. A grid of blocks is created in each region, with the block size inversely proportional to the motion compensation error for that region. This ensures that regions with complex motion are populated by smaller blocks. Following this, bi-directional translational and affine motion parameters are estimated for each block. In contrast to the mesh-based approach, this method allows the number of blocks to be easily controlled. Nevertheless, the number and shape of regions remain very sensitive to the segmentation parameters used. The third technique also uses variable-size blocks, but the spatio-temporal segmentation stage is replaced with a simpler and more robust binary block partitioning process. If a particular block does not allow for accurate motion compensation, then it is split into two using the horizontal or vertical line that achieves the maximum reduction in motion compensation error. Starting with the entire frame as one block, the splitting process is repeated until a large enough binary tree of blocks is obtained. This method causes partitioning to occur along motion boundaries, thus substantially reducing blocking artifacts compared to regular block matching. In addition, small blocks are placed in regions of complex motion, while large blocks cover areas of uniform motion. The proposed technique provides significant gains in picture quality when compared to fixed size block matching at the same total rate. The binary partition tree method has been integrated into a hybrid video codec. (The codec also has the option of using fixed-size blocks or H.264/AVC variable-size blocks.) Results indicate that the binary partition tree method of motion compensation leads to improved rate-distortion performance over the state-of-the-art H.264/AVC variable-size block matching. This advantage is most evident at low bit-rates, and also in the case of bi-directionally predicted frames. Keywords: motion estimation, motion compensation, video coding, video compression, content-based, variable-size block matching, binary partition tree.
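The binary partitioning step lends itself to a short sketch: for a given block, try every horizontal and vertical cut, motion-compensate each candidate child independently, and keep the cut with the largest error reduction. The snippet below is a deliberately slow illustration using exhaustive SAD search over a small window; the thesis's actual motion estimation, candidate evaluation and stopping rule are more sophisticated, and all names here are illustrative.

```python
import numpy as np

def block_sad(frame, ref, y, x, h, w, search=4):
    """Best SAD for one block over a small full-search window; giving
    each child its own motion vector is what makes splitting pay off."""
    cur = frame[y:y+h, x:x+w].astype(np.int64)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > ref.shape[0] or xx + w > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            sad = np.abs(cur - ref[yy:yy+h, xx:xx+w].astype(np.int64)).sum()
            if best is None or sad < best:
                best = sad
    return best

def split_block(frame, ref, y, x, h, w):
    """Best horizontal or vertical cut of block (y, x, h, w) by SAD gain."""
    parent = block_sad(frame, ref, y, x, h, w)
    best_gain, children = -1, None
    for cut in range(1, h):  # horizontal cuts
        e = (block_sad(frame, ref, y, x, cut, w) +
             block_sad(frame, ref, y + cut, x, h - cut, w))
        if parent - e > best_gain:
            best_gain, children = parent - e, [(y, x, cut, w), (y + cut, x, h - cut, w)]
    for cut in range(1, w):  # vertical cuts
        e = (block_sad(frame, ref, y, x, h, cut) +
             block_sad(frame, ref, y, x + cut, h, w - cut))
        if parent - e > best_gain:
            best_gain, children = parent - e, [(y, x, h, cut), (y, x + cut, h, w - cut)]
    return best_gain, children
```

Starting with the whole frame as one block and repeatedly splitting the leaf with the largest gain until the tree reaches the desired size reproduces the growth process the abstract describes.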
10

Depth-map-assisted texture and depth map super-resolution

Jin, Z. January 2015
With the development of video technology, high definition video and 3D video applications are becoming increasingly accessible to customers. The interactive and vivid 3D video experience of realistic scenes relies greatly on the amount and quality of the texture and depth map data. However, due to the limitations of video capturing hardware and transmission bandwidth, transmitted video has to be compressed, which degrades, in general, the received video quality. This means that it is hard to meet users' requirements for high definition and visual experience; it also limits the development of future applications. Therefore, image/video super-resolution techniques have been proposed to address this issue. Image super-resolution aims to reconstruct a high resolution image from single or multiple low resolution images captured of the same scene under different conditions. Based on the image type that needs to be super-resolved, image super-resolution includes texture and depth image super-resolution. Classified by implementation method, there are three main categories: interpolation-based, reconstruction-based and learning-based super-resolution algorithms. This thesis focuses on exploiting depth data in interpolation-based super-resolution algorithms for texture video and depth maps. Two novel texture super-resolution algorithms and one depth super-resolution algorithm are proposed as the main contributions of this thesis. The first texture super-resolution algorithm is carried out in the Mixed Resolution (MR) multiview video system, where at least one of the views is captured at Low Resolution (LR), while the others are captured at Full Resolution (FR). In order to reduce visual discomfort and adapt the MR video format for free-viewpoint television, the low resolution views are super-resolved to the target full resolution by the proposed virtual-view-assisted super-resolution algorithm. The inter-view similarity is used to determine whether to fill the missing pixels in the super-resolved frame with virtual view pixels or with spatially interpolated pixels. The decision mechanism is steered by the texture characteristics of the neighbors of each missing pixel. Thus, the proposed method can recover the details in regions with edges while maintaining good quality in smooth areas by properly exploiting the high quality virtual view pixels and the directional correlation of pixels. The second texture super-resolution algorithm is based on the Multiview Video plus Depth (MVD) system, which consists of textures and the associated per-pixel depth data. In order to further reduce the transmitted data and the quality degradation of the received video, a systematic framework to downsample the original MVD data and later super-resolve the LR views is proposed. At the encoder side, the rows of the two adjacent views are downsampled in an interlacing and complementary fashion, whereas, at the decoder side, the discarded pixels are recovered by fusing the virtual view pixels with the directionally interpolated pixels from the complementary downsampled views. Consequently, with the assistance of virtual views, the proposed approach can effectively achieve these two goals. From the previous two works, we can observe that depth data has great potential to be used in 3D video enhancement. However, due to the low spatial resolution of the depth images generated by Time-of-Flight (ToF) depth cameras, their applications have been limited. Hence, in the last contribution of this thesis, a planar-surface-based depth map super-resolution approach is presented, which interpolates depth images by exploiting the equation of each detected planar surface. Both quantitative and qualitative experimental results demonstrate the effectiveness and robustness of the proposed approach over benchmark methods.
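The last contribution admits a short sketch: once a planar surface has been detected in the low-resolution depth map, its plane equation determines depth at every high-resolution pixel lying on that surface. The least-squares fit and the function names below are illustrative stand-ins for the thesis's planar-surface detection, assuming depth is modelled as z = px·x + py·y + p0 over pixel coordinates and that xs, ys, zs are 1-D NumPy arrays.

```python
import numpy as np

def fit_plane(xs, ys, zs):
    """Least-squares plane z = px*x + py*y + p0 through the LR depth
    samples belonging to one detected planar surface."""
    A = np.column_stack([xs, ys, np.ones_like(xs, dtype=float)])
    (px, py, p0), *_ = np.linalg.lstsq(A, zs, rcond=None)
    return px, py, p0

def interpolate_plane(plane, height, width):
    """Evaluate the plane at every HR pixel to interpolate the depth map."""
    px, py, p0 = plane
    ys, xs = np.mgrid[0:height, 0:width]
    return px * xs + py * ys + p0
```

In practice the LR sample coordinates would first be mapped onto the HR grid, and only the pixels inside each detected surface would take that plane's value.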
