101 |
A tight frame algorithm in image inpainting. Cheng, Kei Tsi Daniel. January 2007.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 45-49). Abstracts in English and Chinese.
Contents:
Abstract --- p.i
Acknowledgement --- p.iii
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Background Knowledge --- p.6
Chapter 2.1 --- Image Restoration using Total Variation Norm --- p.6
Chapter 2.2 --- An Example of Tight Frame system --- p.10
Chapter 2.3 --- Sparse and compressed representation --- p.13
Chapter 2.4 --- Existence of minimizer in convex analysis --- p.16
Chapter 3 --- Tight Frame Based Minimization --- p.18
Chapter 3.1 --- Tight Frames --- p.18
Chapter 3.2 --- Minimization Problems and Algorithms --- p.19
Chapter 3.3 --- Other Minimization Problems --- p.22
Chapter 4 --- Algorithm from minimization problem 3 --- p.24
Chapter 5 --- Algorithm from minimization problem 4 --- p.28
Chapter 6 --- Convergence of Algorithm 2 --- p.31
Chapter 6.1 --- Inner Iteration --- p.31
Chapter 6.2 --- Outer Iteration --- p.33
Chapter 6.2.1 --- Existence of minimizer --- p.33
Chapter 7 --- Numerical Results --- p.37
Chapter 8 --- Conclusion --- p.44
|
102 |
Parallel approximate string matching applied to occluded object recognition. Smith, David. 01 January 1987.
This thesis develops an algorithm for approximate string matching and applies it to the problem of partially occluded object recognition. The algorithm measures the similarity of differing strings by scanning for matching substrings between strings. The length and number of matching substrings determines the amount of similarity. A classification algorithm is developed using the approximate string matching algorithm for the identification and classification of objects. A previously developed method of shape description is used for object representation.
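The similarity measure described above, which scores strings by the length and number of their matching substrings, can be illustrated with a short sketch. This is only a hedged illustration using a greedy longest-common-substring decomposition; the thesis's actual scanning scheme and scoring details are not specified here, and the function names are invented.

```python
def longest_common_substring(a, b):
    """Find the longest substring common to a and b (dynamic programming
    over common-suffix lengths)."""
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]

def substring_similarity(a, b, min_len=2):
    """Similarity from matching substrings: greedily extract common
    substrings, longest first, and normalize the matched length."""
    if not a or not b:
        return 0.0
    matched, a2, b2 = 0, a, b
    while True:
        s = longest_common_substring(a2, b2)
        if len(s) < min_len:
            break
        matched += len(s)
        a2 = a2.replace(s, "\0", 1)  # consume the match with a sentinel
        b2 = b2.replace(s, "\1", 1)  # different sentinel: no false rematch
    return matched / max(len(a), len(b))
```

Identical strings score 1.0, disjoint strings score 0.0, and strings sharing long substrings score in between, which matches the intuition of classifying occluded shapes by their intact boundary segments.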
|
103 |
Digital image processing using local segmentation. Seemann, Torsten, 1973-. January 2002.
Abstract not available
|
104 |
Digital processing of shallow seismic refraction data with the convolution section. Palmer, Derecke, School of Geology, UNSW. January 2001.
The refraction convolution section (RCS) is a simple and efficient method for full trace processing of shallow seismic refraction data. It facilitates improved interpretation of shallow seismic refraction data through the convenient use of amplitudes as well as traveltimes. The RCS is generated by the convolution of forward and reverse shot records. The convolution operation effectively adds the first arrival traveltimes of each pair of forward and reverse traces and produces a measure of the depth to the refracting interface in units of time, which is equivalent to the time-depth function of the generalized reciprocal method (GRM). The convolution operation also multiplies the amplitudes of first arrival signals. This operation compensates for the large effects of geometric spreading to a very good first approximation, with the result that the convolved amplitude is essentially proportional to the square of the head coefficient. The head coefficient is approximately proportional to the ratio of the specific acoustic impedances in the upper layer and in the refractor, where there is a reasonable contrast between the specific acoustic impedances in the layers. The RCS can also include a separation between each pair of forward and reverse traces in order to accommodate the offset distance in a manner similar to the XY spacing of the GRM. Lateral variations in the near-surface soil layers can affect amplitudes, thereby causing 'amplitude statics'. Increases in the thickness of the surface soil layer correlate with increases in refraction amplitudes. These increases are adequately described and corrected with the transmission coefficients of the Zoeppritz equations. The minimum amplitudes, rather than an average, should be used where it is not possible to map the near surface layers. The use of amplitudes with 3D data effectively improves the spatial resolution by almost an order of magnitude.
Amplitudes provide a measure of refractor wavespeeds at each detector, whereas the analysis of traveltimes provides a measure over several detectors, commonly a minimum of six. The ratio of amplitudes obtained with different shot azimuths provides a detailed qualitative measure of azimuthal anisotropy. Dip filtering of the RCS removes 'cross-convolution' artifacts and provides a convenient approach to the study of later events. The RCS facilitates the stacking of refraction data in a manner similar to the CMP methods of reflection seismology. It can improve signal-to-noise ratios.
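The two key properties of the convolution operation described above, that traveltimes add and that first-arrival amplitudes multiply, can be checked on idealized spike traces. This is a minimal numpy sketch under assumed parameters (sample interval, arrival times, amplitudes), not a treatment of real shot records:

```python
import numpy as np

dt = 0.001  # sample interval in seconds (assumed)
n = 512     # samples per trace (assumed)

def spike_trace(t_arrival, amplitude):
    """Idealized trace: a single first-arrival spike."""
    tr = np.zeros(n)
    tr[int(round(t_arrival / dt))] = amplitude
    return tr

forward = spike_trace(0.120, 0.8)   # forward shot: arrival at 120 ms
reverse = spike_trace(0.150, 0.5)   # reverse shot: arrival at 150 ms

rcs = np.convolve(forward, reverse)  # one trace of the convolution section
k = int(np.argmax(np.abs(rcs)))

print(round(k * dt, 3))  # ~0.27 s: the forward and reverse traveltimes add
print(rcs[k])            # 0.4: the first-arrival amplitudes multiply
```

The peak of the convolved trace lands at the sum of the two traveltimes (analogous to the GRM time-depth) with an amplitude equal to the product of the two first-arrival amplitudes.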
|
105 |
The detection of 2D image features using local energy. Robbins, Benjamin John. January 1996.
Accurate detection and localization of two dimensional (2D) image features (or 'key-points') is important for vision tasks such as structure from motion, stereo matching, and line labeling. 2D image features are ideal for these vision tasks because they are high in information and yet they occur sparsely in typical images. Several methods for the detection of 2D image features have already been developed. However, it is difficult to assess the performance of these methods because no one has produced an adequate definition of corners that encompasses all types of 2D luminance variations that make up 2D image features. The fact that there does not exist a consensus on the definition of 2D image features is not surprising given the confusion surrounding the definition of 1D image features. The general perception of 1D image features has been that they correspond to 'edges' in an image and so are points where the intensity gradient in some direction is a local maximum. The Sobel [68], Canny [7] and Marr-Hildreth [37] operators all use this model of 1D features, either implicitly or explicitly. However, other profiles in an image also make up valid 1D features, such as spike and roof profiles, as well as combinations of all these feature types. Spike and roof profiles can also be found by looking for points where the rate of change of the intensity gradient is locally maximal, as Canny did in defining a 'roof-detector' in much the same way he developed his 'edge-detector'. While this allows the detection of a wider variety of 1D feature profiles, it comes no closer to the goal of unifying these different feature types under an encompassing definition of 1D features. The introduction of the local energy model of image features by Morrone and Owens [45] in 1987 provided a unified definition of 1D image features for the first time. 
They postulated that image features correspond to points in an image where there is maximal phase congruency in the frequency domain representation of the image. That is, image features correspond to points of maximal order in the phase domain of the image signal. These points of maximal phase congruency correspond to step-edge, roof, and ramp intensity profiles, and combinations thereof. They also correspond to the Mach bands perceived by humans in trapezoidal feature profiles. This thesis extends the notion of phase congruency to 2D image features. As 1D image features correspond to points of maximal 1D order in the phase domain of the image signal, this thesis contends that 2D image features correspond to maximal 2D order in this domain. These points of maximal 2D phase congruency include all the different types of 2D image features, including grey-level corners, line terminations, blobs, and a variety of junctions. Early attempts at 2D feature detection were simple 'corner detectors' based on a model of a grey-level corner in much the same way that early 1D feature detectors were based on a model of step-edges. Some recent attempts have included more complex models of 2D features, although this is basically a more complex a priori judgement of the types of luminance profiles that are to be labeled as 2D features. This thesis develops the 2D local energy feature detector based on a new, unified definition of 2D image features that marks points of locally maximum 2D order in the phase domain representation of the image as 2D image features. The performance of an implementation of 2D local energy is assessed, and compared to several existing methods of 2D feature detection. This thesis also shows that in contrast to most other methods of 2D feature detection, 2D local energy is an idempotent operator. The extension of phase congruency to 2D image features also unifies the detection of image features. 
1D and 2D image features correspond to 1D and 2D order in the phase domain representation of the image, respectively. This definition imposes a hierarchy of image features, with 2D image features being a subset of 1D image features. This ordering of image features has been implied ever since 1D features were used as candidate points for 2D feature detection by Kitchen [28] and others. Local energy enables the extraction of both 1D and 2D image features in a consistent manner; 2D image features are extracted from the 1D image features using the same operations that are used to extract 1D image features from the input image. The consistent approach to the detection of image features presented in this thesis allows the hierarchy of primitive image features to be naturally extended to higher order image features. These higher order image features can then also be extracted from higher order image data using the same hierarchical approach. This thesis shows how local energy can be naturally extended to the detection of 1D (surface) and higher order image features in 3D data sets. Results are presented for the detection of 1D image features in 3D confocal microscope images, showing superior performance to the 3D extension of the Sobel operator [74].
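The 1D local energy computation that underlies this approach can be sketched with an FFT-based analytic signal: local energy is the magnitude of the signal plus i times its Hilbert transform, and it peaks at points of high phase congruency. This is a generic single-scale sketch (no bandpass filter bank, parameters assumed), not the implementation evaluated in the thesis:

```python
import numpy as np

def local_energy(signal):
    """1D local energy: magnitude of the analytic signal of the
    mean-removed input, via an FFT-based Hilbert transform."""
    s = np.asarray(signal, dtype=float)
    s = s - s.mean()
    n = len(s)
    spec = np.fft.fft(s)
    # Standard analytic-signal filter: keep DC and Nyquist, double
    # positive frequencies, zero negative frequencies.
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

# A 'roof' (triangle) profile: exactly the kind of non-step feature the
# local energy model unifies with edges. Energy should peak at the apex.
i = np.arange(256)
roof = np.clip(1.0 - np.abs(i - 128) / 32.0, 0.0, None)
e = local_energy(roof)
peak = int(np.argmax(e))
print(peak)  # at/near the apex (sample 128)
```

The same maximum would appear for a step or ramp placed at sample 128, which is the point of the phase congruency definition: one detector for all 1D feature types.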
|
106 |
High resolution digital imaging of bacterial cells. Siebold, William A. 02 April 2001.
The most abundant clone found in ribosomal RNA clone libraries
obtained from the world's oceans belongs to the SAR11 phylogenetic group of
environmental marine bacteria. Imaging and counting SAR11 bacterial cells in situ
has been an important research objective for the past decade. This objective has
been especially challenging due to the extremely small size, and hypothetically, the
low abundance of ribosomes contained by the cells. To facilitate the imaging of
small dim oligotrophic bacterial cells, digital imaging technology featuring very small
pixel size, high quantum yield scientific grade CCD chips was integrated with the
use of multiple oligonucleotide probes on cells mounted on a non-fluorescing solid
substrate.
Research into the composition of bacterioplankton populations in natural
marine systems follows a two-fold path. Increasing the culturability of microbes
found in the natural environment is one research path. Identifying and enumerating
the relative fractions of microorganisms in situ by culture-independent methods is
another. The accumulation and systematic comparison of ribosomal RNA clones
from the marine environment has resulted in a philosophical shift in marine
microbiology away from dependence upon cultured strains and toward
investigations of in situ molecular signals.
The design and use of oligonucleotide DNA probes targeting rRNA
have matured along with the growth in size and complexity of the public sequence
databases. Hybridizing a fluorescently labeled oligonucleotide probe to an rRNA
target inside an intact cell provides both phylogenetic and morphological
information (a technique called Fluorescence in situ Hybridization (FISH)).
This research develops the protocols necessary to acquire and analyze
digital images of marine bacterial cells. Experiments were conducted with Bermuda
Atlantic Time Series (BATS) environmental samples obtained during cruise BV21
(1998) and B138 (2000). The behavior of the SAR11⁴*Cy3 probe set when
hybridized to bacterial cells from these samples was investigated to determine the
optimal hybridization reaction conditions. The challenges of bacterial cell counting
after cell transfer from PCTE membrane to treated microslides were addressed.
Experiments with aged Oregon Coast seawater were performed to investigate the
protocol used to transfer cells from membrane to microslides, and examined the
distribution of cells and the statistics of counting cells using traditional
epifluorescence microscopy and image analysis techniques. / Graduation date: 2002
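The counting step at the end of such a protocol reduces, in image-analysis terms, to thresholding the acquired frame and counting connected bright regions. A toy sketch on a synthetic frame; the threshold, the synthetic "cells", and the use of 4-connectivity are assumptions for illustration, not the thesis's protocol:

```python
import numpy as np

def count_cells(image, threshold):
    """Count connected bright regions (4-connectivity) above threshold,
    a toy stand-in for counting fluorescently labeled cells."""
    mask = image > threshold
    seen = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                count += 1                    # new region found
                stack = [(r, c)]
                seen[r, c] = True
                while stack:                  # flood-fill the region
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

# Synthetic frame: three bright "cells" on a dark background.
frame = np.zeros((32, 32))
frame[4:6, 4:6] = 1.0
frame[15:18, 20:23] = 0.8
frame[25, 10] = 0.9
print(count_cells(frame, 0.5))  # 3
```

Real counts of small, dim cells hinge on the imaging side (pixel size, quantum yield, probe brightness) precisely because the thresholding step fails when signal barely exceeds background.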
|
107 |
Efficient digital predistortion techniques for power amplifier linearization. Zhuo, Min. 14 September 2000.
The importance of spectral efficiency in mobile communications often requires the use of
non-constant-envelope linear digital modulation schemes. These modulation techniques
carry signal information in both magnitude and phase; thus they must be linearly amplified
to avoid nonlinear signal distortion, which is not correctable in a typical receiver.
A second difficulty in utilizing these modulation formats is that nonlinear amplification
generates out-of-band power (spectral regrowth). Therefore, to achieve both high energy
efficiency and spectral efficiency, some form of linearization must be used to compensate
for the nonlinearity of power amplifiers. One powerful technique that is amenable to
monolithic integration is digital signal predistortion. Most predistorters try to achieve
the inverse nonlinear characteristic of the High Power Amplifier (HPA). In this thesis a new
multi-stage digital adaptive signal predistorter is presented. The scheme is developed
from the direct iterative method with low memory requirement proposed by Cavers [1]
in combination with the multi-stage predistortion proposed by Stonick [2]. To make
the predistorter more compact a very simple and fast method called the complementary
method is proposed. The complementary method has prominent advantages over other
digital predistorters in terms of stability of the algorithm, complexity of the algorithm
and computational load. / Graduation date: 2001
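The core idea of predistortion, driving the amplifier with a signal shaped by the inverse of its nonlinear characteristic, can be sketched for a memoryless AM/AM model. The cubic compression model and the simple fixed-point inversion below are assumptions for illustration; they are not the multi-stage adaptive scheme of the thesis:

```python
import numpy as np

def pa(x):
    """Toy memoryless amplifier with cubic gain compression
    (an assumed model, not a measured HPA characteristic)."""
    return x - 0.1 * x**3

def predistort(x, iters=50):
    """Fixed-point iteration solving pa(p) = x, so that the cascade
    pa(predistort(x)) is approximately linear."""
    p = np.array(x, dtype=float)
    for _ in range(iters):
        p = p + (x - pa(p))  # push the residual back into the input
    return p

x = np.linspace(0.0, 0.8, 9)   # input envelope levels (below saturation)
p = predistort(x)
err = float(np.max(np.abs(pa(p) - x)))
print(err)  # tiny residual: the cascade is nearly linear
```

Note that the predistorter expands the envelope (p > x at high levels) to cancel the amplifier's compression; removing the distortion in-band is what suppresses the out-of-band spectral regrowth discussed above.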
|
108 |
Interpolation-based digital quadrature frequency synthesizer. Larson, Ryan John. 05 June 2000.
Traditionally, sinusoidal signal generation has been implemented with purely analog circuits such as phase-locked loops. The alternative of using a digital system to perform this signal generation has previously been unattractive due to limitations in clock frequency and size. However, recent advancements in sub-micron fabrication techniques have made the digital alternative tractable. The advantages of a digitally implemented signal frequency synthesizer include finer control of output frequency, reduced frequency drift due to part degradation over time, and faster response time for frequency change.
Digital frequency synthesis has previously been realized using the Tierney, Rader, and Gold phase accumulator architecture. This method utilizes a variable-increment digital integrator whose output addresses a read-only memory; the memory then generates a quantized amplitude value. This thesis presents an alternative method for digital frequency synthesis based on circular interpolation and compares it to the performance of a comparable phase-accumulator structure for varying bit accuracies of phase. The comparison of transistor count and required die size for each method reveals a lower requirement of both resources in the case of the new circle interpolator. Evaluation of the discrete-time spectral purity of synthesized signals also demonstrates less out-of-band noise in the new design. Finally, analysis of energy efficiency shows the new design to be generally favorable compared to the reference design. / Graduation date: 2001
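The Tierney, Rader, and Gold phase accumulator referenced above can be sketched in a few lines: an overflow-wrapping accumulator whose truncated high bits address a sine ROM of quantized amplitudes. The bit widths here are arbitrary illustrative choices, not the values studied in the thesis:

```python
import math

PHASE_BITS = 24   # accumulator width (assumed)
TABLE_BITS = 8    # ROM address width (assumed)
AMP_BITS = 10     # quantized amplitude width (assumed)

# Sine ROM: quantized amplitude values indexed by truncated phase.
rom = [round((2**(AMP_BITS - 1) - 1) * math.sin(2 * math.pi * i / 2**TABLE_BITS))
       for i in range(2**TABLE_BITS)]

def dds(increment, n_samples):
    """Phase-accumulator synthesis:
    output frequency = (increment / 2**PHASE_BITS) * f_clock."""
    acc, out = 0, []
    for _ in range(n_samples):
        out.append(rom[acc >> (PHASE_BITS - TABLE_BITS)])  # truncate phase
        acc = (acc + increment) & (2**PHASE_BITS - 1)      # wrap on overflow
    return out

# Increment chosen so one full sine period spans 64 clock cycles.
samples = dds(2**PHASE_BITS // 64, 64)
```

The fine frequency control noted in the abstract falls out of the wide accumulator: frequency resolution is f_clock / 2**PHASE_BITS regardless of the ROM size, while phase truncation to TABLE_BITS is what introduces the spurious spectral content that the circular-interpolation alternative targets.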
|
109 |
A comparison of two types of zero-crossing FM demodulators for wireless receivers. McNeal, Jeff D. 11 February 1998.
This thesis presents a comparison of two novel demodulators. The first is a basic zero-crossing demodulator, as introduced by Beards; the second is an approach proposed by Hovin. The two demodulators are compared to each other and to the conventional method of demodulation. / Graduation date: 1998
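The principle shared by zero-crossing demodulators, estimating instantaneous frequency from the rate of zero crossings, can be sketched on a pure tone. This illustrates only the counting principle, not the Beards or Hovin architectures:

```python
import math

def zero_crossing_freq(samples, fs):
    """Estimate the frequency of a tone from its zero-crossing count:
    a sinusoid crosses zero twice per cycle."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    duration = (len(samples) - 1) / fs
    return crossings / (2 * duration)

fs = 48000.0   # sample rate in Hz (assumed)
f = 1000.0     # tone frequency in Hz (assumed)
tone = [math.sin(2 * math.pi * f * n / fs) for n in range(4800)]  # 0.1 s
print(zero_crossing_freq(tone, fs))  # close to 1000 Hz
```

In an FM receiver the same count is taken over short sliding windows, so the crossing rate tracks the modulating signal; the small bias visible here (the final partial half-cycle is uncounted) is one of the effects a practical design must manage.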
|
110 |
A high-performance, low power and memory-efficient VLD for MPEG applications. Zhang, Haowei. 14 January 1997.
An extremely important area that has enabled or will enable many of the
digital video services and applications such as VideoCD, DVD, DVC, HDTV, video
conferencing, and DSS is digital video compression. The great success of digital video
compression is mainly due to two factors: the state of the art in very large scale
integrated circuit (VLSI) technology, and a considerable body of knowledge accumulated over
the last several decades in applying video compression algorithms such as discrete
cosine transform (DCT), motion estimation (ME), motion compensation (MC) and
entropy coding techniques. The MPEG (Moving Pictures Expert Group) standard
reflects the second factor. In this thesis, MPEG standards are discussed thoroughly
and interpreted, and a VLSI chip implementation (CMOS 0.35 μm technology and 3
layer metal) of a variable length decoder (VLD) for MPEG applications is developed.
The VLD developed here achieves high performance by using a parallel and pipeline
architecture. Furthermore, MPEG bitstream patterns are carefully analyzed in order
to drastically improve VLD memory efficiency. Finally, a special clock scheme is
applied to reduce the chip's power consumption. / Graduation date: 1998
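The serial principle behind a variable length decoder, matching prefix-free codewords bit by bit against the stream, can be sketched with a toy code table. The table below is invented for illustration and is not one of the MPEG Huffman tables; the thesis's parallel, pipelined hardware exists precisely because this bit-serial loop is too slow for real-time bitstreams:

```python
# Toy prefix-free VLC table (NOT an actual MPEG Huffman table).
CODE_TABLE = {
    "1": "A",
    "01": "B",
    "001": "C",
    "000": "D",
}
MAX_CODE_LEN = max(len(c) for c in CODE_TABLE)

def vld(bits):
    """Decode a bitstream by growing a buffer one bit at a time until it
    matches a codeword; prefix-freeness makes the match unambiguous."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in CODE_TABLE:
            out.append(CODE_TABLE[buf])
            buf = ""
        elif len(buf) > MAX_CODE_LEN:
            raise ValueError("invalid bitstream")
    return out

print(vld("101001000"))  # ['A', 'B', 'C', 'D']
```

Because short codewords are assigned to frequent symbols, the average output rate per input bit depends on the bitstream's symbol statistics, which is why analyzing MPEG bitstream patterns, as the thesis does, pays off directly in memory efficiency.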
|