281
Characterization of quantization noise in oversampled analog to digital converters. Multanen, Eric W. 01 January 1992 (has links)
The analog-to-digital converter (ADC) samples a continuous analog signal and produces a stream of digital words that approximate the analog signal. The conversion process introduces noise into the digital signal. In the case of an ideal ADC, where all other noise sources are ignored, only the noise due to the quantization process remains. The resolution of the ADC is defined by the number of bits in the digital output word, and the amount of quantization noise is directly related to that resolution. Reducing the quantization noise therefore results in higher effective resolution.
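As background, and not drawn from the thesis itself, a minimal sketch of the textbook figures for an ideal quantizer driven by a full-scale sine wave: the quantization-noise-limited SNR of an N-bit converter is 6.02·N + 1.76 dB, and oversampling by a ratio OSR followed by filtering to the signal band adds 10·log10(OSR) dB, roughly half a bit of effective resolution per doubling of the sample rate.

```python
import math

def ideal_adc_snr_db(bits: int, oversampling_ratio: float = 1.0) -> float:
    """Quantization-noise-limited SNR of an ideal ADC for a full-scale sine:
    6.02*bits + 1.76 dB, plus 10*log10(OSR) dB once the oversampled,
    out-of-band portion of the noise is filtered away."""
    return 6.02 * bits + 1.76 + 10.0 * math.log10(oversampling_ratio)

def effective_bits(snr_db: float) -> float:
    """Invert the SNR formula to get the effective number of bits (ENOB)."""
    return (snr_db - 1.76) / 6.02

if __name__ == "__main__":
    for osr in (1, 4, 16, 64):
        snr = ideal_adc_snr_db(bits=12, oversampling_ratio=osr)
        print(f"OSR={osr:3d}: SNR={snr:6.2f} dB  ENOB={effective_bits(snr):5.2f} bits")
```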
282
Scalable video coding by stream morphing. Macnicol, James Roy. January 2002 (has links) (PDF)
"October 2002 (Revised May 2003)"--T.p. Includes bibliographical references (leaves 256-264).
283
Creation and spatial partitioning of mip-mappable geometry images. Domanski, Luke, University of Western Sydney, College of Health and Science, School of Computing and Mathematics January 2007 (has links)
A Geometry Image (GIM) describes a regular polygonal surface mesh using a standard 2D image format without the need for explicit connectivity information. Like many regular or semi-regular surface representations, GIMs lend themselves well to a number of processing tasks performed in computer graphics. It has been suggested that GIMs could provide improvements within real-time rendering pipelines through straightforward localised surface processing and simple mip-map based level-of-detail. The simplicity of such algorithms in the case of GIMs makes them highly amenable to user-transparent implementation in graphics hardware or programming libraries, shifting implementation responsibility away from the application programmer and reducing the processing load on the CPU. However, these topics have received limited attention in the literature.

This thesis examines a number of issues regarding mip-mapping and localised processing of GIMs. In particular, it focuses on the creation of mip-mappable multi-chart GIMs and on how to spatially partition and cull GIMs so that mip-mapping and localised processing can be performed effectively. These are important processing tasks that occur before rendering takes place, but they influence how mip-mapping and localised processing can be implemented and utilised during rendering. Solutions that take such influences into account are therefore likely to facilitate simple and effective algorithms for mip-mapping and localised processing of GIMs that are amenable to hardware implementation. The topics discussed in this thesis will form a basis for future work on low-level geometric mip-mapping and localised processing of GIMs in real-time graphics pipelines.

With respect to creating mip-mappable GIMs, the thesis presents a method for automatic generation of polycube parameter domains and surface mappings that can be used to create multi-chart GIMs with square or rectangular charts. As will be discussed, these GIMs provide particular advantages for mip-mapping compared to multi-chart GIMs with irregularly shaped charts. The method casts the polycube generation problem as a coarse topology-preserving voxelisation of the surface that simultaneously aligns the surface with the voxel set boundary. This process produces both the polycube and an initial surface-to-polycube mapping. Theorems for piecewise construction of well-composed voxel sets are also presented; these facilitate a piecewise implementation of the polycube generation algorithm and support the topological guarantees it provides. This method improves on previous methods for polycube generation, which require significant user interaction.

For spatial partitioning of GIMs, the thesis introduces the concept of locality masks: bit masks that partition parameter space. The method stores a 2D bit mask at each spatial node which identifies the set of GIM samples inside the node's spatial range. Like GIMs, locality masks support mip-mapping through simple image down-sampling. By unifying the masks of all nodes that pass a spatial query, processing can be performed globally on the unculled set of primitives rather than on a node-by-node basis, promoting a more optimised order in which to perform localised processing. Locality masks are also well suited to compression and provide a bandwidth-efficient method of transferring a list of indexed rendering primitives.
The locality mask method is compared with other methods of partitioning and culling GIMs, and their suitability for rendering and other tasks is analysed. / Doctor of Philosophy (PhD)
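To make the locality-mask idea concrete, here is a minimal illustrative sketch under assumptions of my own (the GIM is an H×W×3 array of vertex positions, spatial nodes are axis-aligned boxes, mask dimensions are even, and a coarser mip level keeps a sample if any of its 2×2 children is kept); the thesis's actual mask construction and down-sampling rule may differ.

```python
import numpy as np

def make_locality_mask(gim_positions: np.ndarray, node_min, node_max) -> np.ndarray:
    """2D bit mask over the GIM parameter grid marking the samples whose 3D
    position lies inside a spatial node's axis-aligned box.
    gim_positions: (H, W, 3) geometry image of vertex positions."""
    lo, hi = np.asarray(node_min), np.asarray(node_max)
    return np.all((gim_positions >= lo) & (gim_positions <= hi), axis=-1)

def union_masks(masks):
    """Union the masks of every node that passes a spatial (e.g. frustum)
    query, so later processing runs once over the unculled sample set
    instead of node by node."""
    out = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        out |= m
    return out

def downsample_mask(mask: np.ndarray) -> np.ndarray:
    """One mip level down: a coarse sample is kept if any of its 2x2 children
    is kept (OR pooling), mirroring image down-sampling of the GIM itself.
    Assumes even (typically power-of-two) mask dimensions."""
    h, w = mask.shape
    return mask.reshape(h // 2, 2, w // 2, 2).any(axis=(1, 3))
```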
284
Natural feature extraction as a front end for simultaneous localization and mapping. Kiang, Kai-Ming, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2006 (has links)
This thesis is concerned with algorithms for finding natural features that are then used for simultaneous localisation and mapping, commonly known as SLAM in navigation theory. The task involves capturing raw sensory inputs, extracting features from these inputs, and using the features for mapping and localising during navigation. The ability to extract natural features allows automatons such as robots to be sent into environments that no human being has previously explored, working in a way similar to how humans understand and remember where they have been. In extracting natural features from images, the way that features are represented and matched is a critical issue, since the computation involved can be wasted if the wrong method is chosen. While there are many techniques capable of matching pre-defined objects correctly, few of them can be used for real-time navigation in an unexplored environment, intelligently deciding what constitutes a relevant feature in the images. Normally, feature analysis that extracts relevant features from an image is a two-step process: first interest points are selected, then these points are represented based on local region properties. A novel technique is presented in this thesis for extracting a compact set of natural features that is robust enough for navigation purposes. The technique involves a three-step approach. The first step selects interest points as extrema of the difference of Gaussians (DOG). The second step applies Textural Feature Analysis (TFA) to the local regions around the interest points. The third step selects distinctive features using Distinctness Analysis (DA), based mainly on the probability of occurrence of the extracted features. The additional DA step yields a significant improvement in processing speed over previous methods. Moreover, TFA/DA has been applied in a SLAM configuration operating in an underwater environment, where texture can be rich in natural features. The results demonstrate improved loop-closure ability compared to traditional SLAM methods, suggesting that real-time navigation in unexplored environments using natural features is now a more plausible option.
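A rough sketch of the three-step pipeline described above. The DOG step follows the standard single-scale construction; the TFA and DA steps here are simplified stand-ins of my own (patch statistics and a rarity ranking), since the thesis's actual textural and distinctness analyses are not specified in this abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_interest_points(image, sigmas=(1.0, 1.6, 2.6, 4.2), thresh=0.02):
    """Step 1: interest points as local extrema of difference-of-Gaussians
    (per-scale only; a full implementation also checks extrema across scale)."""
    blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
    points = []
    for fine, coarse in zip(blurred, blurred[1:]):
        d = coarse - fine
        ext = ((d == maximum_filter(d, size=3)) |
               (d == minimum_filter(d, size=3))) & (np.abs(d) > thresh)
        points.extend(zip(*np.nonzero(ext)))
    return points

def textural_descriptor(image, point, radius=8):
    """Step 2 (simplified stand-in for TFA): basic texture statistics of the
    patch around an interest point."""
    r, c = point
    patch = image[max(r - radius, 0):r + radius, max(c - radius, 0):c + radius]
    return np.array([patch.mean(), patch.std()])

def distinct_features(descriptors, keep_fraction=0.2):
    """Step 3 (simplified stand-in for DA): keep the rarest descriptors,
    approximated here by distance from the population mean."""
    D = np.asarray(descriptors, dtype=float)
    rarity = np.linalg.norm(D - D.mean(axis=0), axis=1)
    order = np.argsort(-rarity)
    return order[: max(1, int(keep_fraction * len(order)))]
```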
285
Hardware optimization of JPEG2000. Gupta, Amit Kumar, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2006 (has links)
The key algorithms of JPEG2000, the new image compression standard, have high computational complexity and thus present challenges for efficient implementation. This has led to research on the hardware optimization of JPEG2000 for its efficient realization. Fortunately, the growth in microelectronics over the last century allows us to realize dedicated ASIC solutions as well as hardware/software FPGA-based solutions for complex algorithms such as JPEG2000. But an efficient implementation within hard constraints of area and throughput demands investigation of the key dependencies within the JPEG2000 system. This work presents algorithms and VLSI architectures to realize a high-performance JPEG2000 compression system. The embedded block coding algorithm which lies at the heart of a JPEG2000 compression system is a main contributor to its complexity. This work first concentrates on algorithms to realize a low-cost, high-throughput block coder (BC) system. For this purpose, a bit-plane coder architecture capable of concurrent symbol processing is presented. Further, an optimal two-sub-bank memory and an efficient buffer architecture are designed to keep the hardware cost low. The proposed overall BC system presents the highest figure of merit (FOM), in terms of throughput versus hardware cost, in comparison to existing BC solutions. This work also investigates the challenges involved in the efficient integration of the BC with the overall JPEG2000 system. A novel low-cost distortion estimation approach with near-optimal performance is proposed, which is necessary for accurate rate-control performance of JPEG2000. Additionally, low-bandwidth data storage and transfer techniques are proposed for efficient transfer of subband samples to the BC. Simulation results show that the proposed techniques require approximately four times less bandwidth than existing architectures. In addition, an efficient high-throughput block decoder architecture based on the proposed selective sample-skipping algorithm is presented. The proposed architectures are designed and analyzed on both ASIC and FPGA platforms. Thus, the proposed algorithms, architectures and BC integration strategies are useful for realizing a dedicated ASIC JPEG2000 system as well as a hardware/software FPGA-based JPEG2000 solution. Overall, this work presents algorithms and architectures to realize a high-performance JPEG2000 system without imposing any restrictions in terms of coding modes or block size for the BC system.
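For context on why the block coder dominates the cost (a background illustration only, not the thesis's architecture): the JPEG2000 Tier-1 coder processes each code-block of quantized wavelet coefficients bit-plane by bit-plane, with up to three coding passes per magnitude plane. A minimal sketch of that decomposition:

```python
import numpy as np

def bit_planes(codeblock: np.ndarray):
    """Split a code-block of quantized wavelet coefficients into a sign mask
    and magnitude bit-planes, the unit of work of the JPEG2000 block coder.
    Each magnitude plane is scanned in up to three coding passes, so blocks
    with many significant planes dominate Tier-1 run time."""
    signs = codeblock < 0
    mags = np.abs(codeblock).astype(np.uint32)
    n_planes = int(mags.max()).bit_length()
    planes = [((mags >> p) & 1).astype(np.uint8) for p in reversed(range(n_planes))]
    return signs, planes

if __name__ == "__main__":
    block = np.random.randint(-128, 128, size=(64, 64))
    signs, planes = bit_planes(block)
    print(f"{len(planes)} magnitude bit-planes -> up to {3 * len(planes)} coding passes")
```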
286
An historical survey of technology used in the production and presentation of music in the 20th Century. Lubin, Tom, University of Western Sydney, Faculty of Humanities and Social Sciences January 1997 (has links)
This paper explores the historical progression of the technological development of records and radio and its impact on popular music. It also covers the production technologies that create recorded music; the development of records, cassettes and CDs; and areas of reproduction associated with popular music, including the sound technologies of radio, film, television, background music, and the juke-box. This paper is not a cultural or social study, but primarily an historical account of media technology in music production and delivery. Certain social and cultural consequences and issues are included as background and sidebars to the primary topic. The technology of live performance has been omitted because it alone represents a body of material large enough for an entire paper. Western society now travels through a sea of music emanating from countless hidden sources, and such music delivery systems provide a continuous musical score for most people's personal histories. Sound, fragments of sound, and the very processes by which sound is created and manipulated have become products and commodities. The technology has allowed anyone to participate in the creation and hearing of music. This paper traces the history of the various technologies that, in so many respects, have provided a catalyst for that which is created, and the means by which music is listened to in the 20th Century. With rare exception, each new invention, delivery system, or process has had both supporters and detractors. Throughout this paper, both the positive and negative effects of these developments are explored. / Master of Arts (Hons)
287
An extended Mumford-Shah model and improved region merging algorithm for image segmentation. Tao, Trevor January 2005 (has links)
In this thesis we extend the Mumford-Shah model and propose a new region merging algorithm for image segmentation. The segmentation problem is to determine an optimal partition of an image into constituent regions such that individual regions are homogeneous within and adjacent regions have contrasting properties. By optimal, we mean one that minimizes a particular energy functional. In region merging, the image is initially divided into a very fine grid, with each pixel being a separate region. Regions are then recursively merged until it is no longer possible to decrease the energy functional. In 1994, Koepfler, Lopez and Morel developed a region merging algorithm for segmenting an image. They consider the piecewise constant Mumford-Shah model, where the energy functional consists of two terms, accuracy versus complexity, with the trade-off controlled by a scale parameter. They show that one can efficiently generate a hierarchy of segmentations from coarse to fine. This algorithm is complemented by a sound theoretical analysis of the piecewise constant model, due to Morel and Solimini. The primary motivation for extending the Mumford-Shah model stems from the fact that this model is only suitable for "cartoon" images, where each region is uncontaminated by any form of noise. Other shortcomings also need to be addressed. In the algorithm of Koepfler et al., it is difficult to determine the order in which the regions are merged, and a "schedule" is required to determine the number and fineness of segmentations in the hierarchy. Both of these difficulties hamper the theoretical analysis of Koepfler's algorithm. There is no definite method for selecting the "optimal" value of the scale parameter itself. Furthermore, the mathematical analysis is not well understood for more complex models. None of these issues are convincingly answered in the literature. This thesis aims to provide some answers to the above shortcomings by introducing new techniques for region merging algorithms and a better understanding of the theoretical analysis of both the mathematics and the algorithm's performance. A review of general segmentation techniques is provided early in this thesis. Also discussed is the development of an "extended" model to account for white noise contamination of images, and an improvement of Koepfler's original algorithm which eliminates the need for a schedule. The work of Morel and Solimini is generalized to the extended model. Also considered is an application to textured images and the issue of selecting the value of the scale parameter. / Thesis (Ph.D.)--School of Mathematical Sciences, 2005.
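For readers unfamiliar with the piecewise constant model referred to above, here is a minimal sketch, my own illustration rather than the thesis's or Koepfler's exact algorithm, of the standard merging test: merging two adjacent regions removes their shared boundary from the complexity term but increases the fidelity term by a weighted squared difference of the region means, so a merge is accepted only when the first effect outweighs the second.

```python
def merge_gain(n1, mean1, n2, mean2, shared_boundary, lam):
    """Energy change if two adjacent regions merge under the piecewise
    constant model: the fidelity term grows by n1*n2/(n1+n2)*(mean1-mean2)^2,
    the boundary term shrinks by lam * shared_boundary.  Positive gain means
    merging lowers the total energy."""
    fidelity_increase = n1 * n2 / (n1 + n2) * (mean1 - mean2) ** 2
    return lam * shared_boundary - fidelity_increase

def greedy_merge(regions, adjacency, lam):
    """Minimal sketch: repeatedly merge the adjacent pair with the largest
    positive gain until no merge lowers the energy.
    regions:   id -> (pixel_count, mean)
    adjacency: frozenset({id_a, id_b}) -> shared boundary length"""
    while True:
        best, best_gain = None, 0.0
        for pair, boundary in adjacency.items():
            a, b = tuple(pair)
            if a not in regions or b not in regions:
                continue  # one side was already absorbed by an earlier merge
            g = merge_gain(*regions[a], *regions[b], boundary, lam)
            if g > best_gain:
                best, best_gain = (a, b), g
        if best is None:
            return regions
        a, b = best
        na, ma = regions[a]
        nb, mb = regions[b]
        regions[a] = (na + nb, (na * ma + nb * mb) / (na + nb))
        del regions[b]
        # a full implementation would also re-route b's adjacencies to a
```

Sweeping the scale parameter lam from small to large then yields the coarse-to-fine hierarchy of segmentations mentioned in the abstract.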
288
Object highlighting : real-time boundary detection using a Bayesian network. Jia, Jin 12 April 2004 (has links)
Image segmentation continues to be a fundamental problem in computer vision and image understanding. In this thesis, we present a Bayesian network that we use for object boundary detection, in which the MPE (most probable explanation) before any evidence can produce multiple non-overlapping, non-self-intersecting closed contours, and the MPE with evidence, where one or more connected boundary points are provided, produces a single non-self-intersecting closed contour that accurately defines an object's boundary. We also present a near-linear-time algorithm that determines the MPE by computing the minimum-path spanning tree of a weighted, planar graph and finding the excluded edge (i.e., an edge not in the spanning tree) that forms the most probable loop. This efficient algorithm allows for real-time feedback in an interactive environment in which every mouse movement produces a recomputation of the MPE based on the new evidence (i.e., the new cursor position) and displays the corresponding closed loop. We call this interface "object highlighting", since the boundaries of various objects and sub-objects appear and disappear as the mouse cursor moves around within an image. / Graduation date: 2004
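A rough sketch of the graph computation the abstract describes, under assumptions of my own: edge weights are taken as -log of a boundary probability so that probable boundaries form short paths, and the cost of the loop through a non-tree edge is approximated by the two tree distances plus the edge weight (a full implementation would subtract the shared prefix of the two tree paths).

```python
import heapq

def shortest_path_tree(graph, seed):
    """Dijkstra from the evidence point; graph maps node -> [(neighbour, weight)],
    with weights assumed to be -log(boundary probability)."""
    dist, parent = {seed: 0.0}, {seed: None}
    heap = [(0.0, seed)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, parent

def most_probable_loop(graph, seed):
    """Among edges excluded from the spanning tree, pick the one whose loop
    through the tree has the lowest approximate cost; that cycle is reported
    as the closed contour through the evidence point."""
    dist, parent = shortest_path_tree(graph, seed)
    tree_edges = {frozenset((v, p)) for v, p in parent.items() if p is not None}
    best_edge, best_cost = None, float("inf")
    for u in graph:
        for v, w in graph[u]:
            e = frozenset((u, v))
            if e in tree_edges or u not in dist or v not in dist:
                continue
            cost = dist[u] + dist[v] + w   # ignores any shared tree-path prefix
            if cost < best_cost:
                best_edge, best_cost = (u, v), cost
    return best_edge, best_cost, parent
```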
289
Advanced image segmentation and data clustering concepts applied to digital image sequences featuring the response of biological materials to toxic agents. Roussel, Nicolas 27 March 2003 (has links)
Image segmentation is the process by which an image is divided into a number of regions. The regions are to be homogeneous with respect to some property, and the definition of homogeneity depends mainly on the expected patterns of the objects of interest. The algorithms designed to perform this task can be divided into two main families: splitting algorithms and merging algorithms. The latter comprises seeded region growing algorithms, which provide the basis for our work.

Seeded region growing methods, such as marker-initiated watershed segmentation, depend principally on the quality and relevance of the initial seeds. In situations where the image contains a variety of aggregated objects of different shapes, finding reliable initial seeds can be a very complex task.

This thesis describes a versatile approach for finding initial seeds in images featuring objects distinguishable by their structural and intensity profiles. The approach involves the use of hierarchical trees containing various information about the objects in the image. These trees can be searched for specific patterns to generate the initial seeds required to perform a reliable region growing process. Segmentation results are shown in this thesis.

The above image segmentation scheme has been applied to detect isolated living cells in a sequence of frames and to monitor their behavior through time. The tissues used in these studies are isolated from the scales of Betta splendens fish. Since the isolated cells, or chromatophores, are sensitive to various kinds of toxic agents, the creation of a cell-based toxin detector was suggested. The operation of such a sensor depends on efficient segmentation of cell images and extraction of pertinent visual features.

Our ultimate objective is to model and classify the observed cell behavior in order to detect and recognize biological or chemical agents affecting the cells. Some possible modelling and classification approaches are presented in this thesis. / Graduation date: 2003
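The thesis's contribution lies in finding the seeds via hierarchical trees; once seeds are available, the growing stage they feed is the classic seeded region growing loop, sketched below in simplified form (single-channel image, 4-connectivity, similarity measured against the running region mean; these are assumptions of the illustration, not details taken from the thesis).

```python
import heapq
import numpy as np

def seeded_region_growing(image: np.ndarray, seeds):
    """Grow one region per seed: unlabelled pixels adjacent to a region are
    claimed in order of |intensity - running region mean|.
    seeds: list of (row, col), one per region; labels start at 1."""
    H, W = image.shape
    labels = np.zeros((H, W), dtype=int)
    sums, counts, heap = {}, {}, []

    def push_neighbours(r, c, k):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W and labels[rr, cc] == 0:
                prio = abs(float(image[rr, cc]) - sums[k] / counts[k])
                heapq.heappush(heap, (prio, rr, cc, k))

    for k, (r, c) in enumerate(seeds, start=1):
        labels[r, c] = k
        sums[k], counts[k] = float(image[r, c]), 1
    for k, (r, c) in enumerate(seeds, start=1):
        push_neighbours(r, c, k)

    while heap:
        _, r, c, k = heapq.heappop(heap)
        if labels[r, c] != 0:
            continue                      # already claimed by some region
        labels[r, c] = k
        sums[k] += float(image[r, c])
        counts[k] += 1
        push_neighbours(r, c, k)
    return labels
```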
290
Optimum quantization for the adaptive loops in MDFE. Parthasarathy, Priya 27 February 1997 (has links)
Multi-level decision feedback equalization (MDFE) is a sampled signal processing technique for data recovery from magnetic recording channels that use the 2/3(1,7) run-length-limited code. The key adaptive feedback loops in MDFE are those which perform timing recovery, gain recovery, DC offset detection, and adaptive equalization of the feedback equalizer. The algorithms used by these adaptive loops are derived from the channel error, which is the deviation of the equalized signal from its ideal value. It is advantageous to convert this error signal to a digital value using a flash analog-to-digital converter (flash ADC) to simplify the implementation of the adaptive loops.
In this thesis, a scheme for placing the thresholds of the flash ADC is presented. The threshold placement is optimized based on the steady-state probability density function (pdf) of the signal to be quantized. The resolution constraints imposed by this quantization scheme on the adaptive loops have been characterized. As the steady-state assumption for the signal to be quantized is not valid during the transient state of the adaptive loops, the loop transients under this quantization scheme have been analyzed through simulations. The conditions under which the channel can recover from a set of start-up errors and converge successfully to steady state have been specified. The steady-state channel performance, with the noise introduced by the iterative nature of the adaptive loops together with this quantization scheme, has also been verified. / Graduation date: 1997
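As an illustration of pdf-driven threshold placement, and only as a stand-in since the thesis's actual placement rule is not given in this abstract, here is a Lloyd-Max style design that places flash-ADC decision thresholds from samples of the steady-state error signal (the Gaussian error model in the example is likewise an assumption).

```python
import numpy as np

def lloyd_max_thresholds(error_samples: np.ndarray, n_levels: int, iters: int = 100):
    """Place quantizer thresholds from samples of the (steady-state) error pdf:
    reconstruction levels are the conditional means of each decision interval,
    and thresholds sit midway between adjacent levels.  A generic pdf-driven
    rule, shown only as an illustration of the idea."""
    x = np.sort(error_samples.astype(float))
    levels = np.quantile(x, (np.arange(n_levels) + 0.5) / n_levels)  # initial guess
    for _ in range(iters):
        thresholds = 0.5 * (levels[:-1] + levels[1:])
        bins = np.digitize(x, thresholds)
        for k in range(n_levels):
            members = x[bins == k]
            if members.size:
                levels[k] = members.mean()
    thresholds = 0.5 * (levels[:-1] + levels[1:])
    return thresholds, levels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    err = rng.normal(0.0, 1.0, 50_000)   # assumed Gaussian-like steady-state error pdf
    th, lv = lloyd_max_thresholds(err, n_levels=8)
    print("thresholds:", np.round(th, 3))
```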