161 |
An extended Mumford-Shah model and improved region merging algorithm for image segmentation / Tao, Trevor. January 2005
In this thesis we extend the Mumford-Shah model and propose a new region merging algorithm for image segmentation. The segmentation problem is to determine an optimal partition of an image into constituent regions such that individual regions are homogeneous within and adjacent regions have contrasting properties. By optimal, we mean a partition that minimizes a particular energy functional. In region merging, the image is initially divided into a very fine grid, with each pixel being a separate region. Regions are then recursively merged until it is no longer possible to decrease the energy functional.

In 1994, Koepfler, Lopez and Morel developed a region merging algorithm for segmenting an image. They considered the piecewise constant Mumford-Shah model, where the energy functional consists of two terms, accuracy versus complexity, with the trade-off controlled by a scale parameter. They showed that one can efficiently generate a hierarchy of segmentations from coarse to fine. This algorithm is complemented by a sound theoretical analysis of the piecewise constant model, due to Morel and Solimini.

The primary motivation for extending the Mumford-Shah model stems from the fact that this model is only suitable for "cartoon" images, where each region is uncontaminated by any form of noise. Other shortcomings also need to be addressed. In the algorithm of Koepfler et al., it is difficult to determine the order in which the regions are merged, and a "schedule" is required to determine the number and fineness of segmentations in the hierarchy. Both of these difficulties complicate the theoretical analysis of Koepfler's algorithm. There is no definite method for selecting the "optimal" value of the scale parameter itself. Furthermore, the mathematical analysis is not well understood for more complex models. None of these issues is convincingly addressed in the literature.

This thesis aims to address the above shortcomings by introducing new techniques for region merging algorithms and a better understanding of the theoretical analysis of both the mathematics and the algorithm's performance. A review of general segmentation techniques is provided early in the thesis. Also discussed are the development of an "extended" model to account for white noise contamination of images, and an improvement of Koepfler's original algorithm which eliminates the need for a schedule. The work of Morel and Solimini is generalized to the extended model. Also considered are an application to textured images and the issue of selecting the value of the scale parameter. / Thesis (Ph.D.)--School of Mathematical Sciences, 2005.
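For readers unfamiliar with the piecewise constant model, the classical Koepfler-Lopez-Morel merge test weighs the fidelity cost of replacing two adjacent regions by their union against the boundary length removed, scaled by the scale parameter. The sketch below illustrates that standard test only; the function and argument names are illustrative, and the thesis's extended model and improved merging order are not reproduced here.

```python
def should_merge(size1, mean1, size2, mean2, shared_boundary_len, scale_lambda):
    """Piecewise constant Mumford-Shah merge test (Koepfler-Lopez-Morel style).

    Merging adjacent regions R1 and R2 raises the fidelity term by
        |R1| * |R2| / (|R1| + |R2|) * (mean1 - mean2) ** 2
    and lowers the complexity term by scale_lambda times the length of the
    boundary they share; the merge is accepted when the energy does not increase.
    """
    fidelity_increase = size1 * size2 / (size1 + size2) * (mean1 - mean2) ** 2
    return fidelity_increase <= scale_lambda * shared_boundary_len


def merged_stats(size1, mean1, size2, mean2):
    """Size and mean of the merged region, used to update statistics after a merge."""
    size = size1 + size2
    return size, (size1 * mean1 + size2 * mean2) / size
```

Raising scale_lambda makes boundary length relatively more expensive, so more merges are accepted; sweeping the parameter is what produces the coarse-to-fine hierarchy of segmentations mentioned above.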
162 |
Object highlighting: real-time boundary detection using a Bayesian network / Jia, Jin. 12 April 2004
Image segmentation continues to be a fundamental problem in computer vision and
image understanding. In this thesis, we present a Bayesian network for object boundary detection. Before any evidence is supplied, its MPE (most probable explanation) can produce multiple non-overlapping, non-self-intersecting closed contours; when evidence in the form of one or more connected boundary points is provided, the MPE produces a single non-self-intersecting closed contour that accurately defines an object's boundary. We also present a near-linear-time
algorithm that determines the MPE by computing the minimum-path spanning tree
of a weighted, planar graph and finding the excluded edge (i.e., an edge not in the
spanning tree) that forms the most probable loop. This efficient algorithm allows for
real-time feedback in an interactive environment in which every mouse movement
produces a recomputation of the MPE based on the new evidence (i.e., the new
cursor position) and displays the corresponding closed loop. We call this interface
"object highlighting" since the boundary of various objects and sub-objects appear
and disappear as the mouse cursor moves around within an image. / Graduation date: 2004
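The MPE computation described above, a minimum-path (shortest-path) spanning tree followed by a search over excluded edges for the most probable loop, can be illustrated with the generic sketch below. It assumes edge weights equal to the negative log of an edge's boundary probability and uses networkx for the shortest-path tree; the function name and graph attributes are illustrative, not the author's implementation, and no planarity-specific speed-ups are included.

```python
import math
import networkx as nx

def most_probable_loop(graph, seed):
    """Return the most probable closed contour as a node cycle [u, ..., v, u].

    Assumes an undirected, connected graph whose edges carry a 'prob'
    attribute in (0, 1]; weights are -log(prob), so the cheapest fundamental
    cycle of the shortest-path spanning tree is the most probable loop.
    """
    for u, v, data in graph.edges(data=True):
        data["weight"] = -math.log(data["prob"])

    # Minimum-path (shortest-path) spanning tree rooted at the seed point.
    dist, paths = nx.single_source_dijkstra(graph, seed, weight="weight")
    parent = {seed: None}
    for node, path in paths.items():
        if len(path) > 1:
            parent[node] = path[-2]
    tree_edges = {frozenset((n, p)) for n, p in parent.items() if p is not None}

    def tree_path(u, v):
        """Tree path from u to v through their lowest common ancestor."""
        up, x = [], u
        while x is not None:
            up.append(x)
            x = parent[x]
        seen = set(up)
        down, x = [], v
        while x not in seen:
            down.append(x)
            x = parent[x]
        return up[: up.index(x) + 1] + list(reversed(down))

    best_cost, best_loop = float("inf"), None
    for u, v, data in graph.edges(data=True):
        if frozenset((u, v)) in tree_edges or u not in parent or v not in parent:
            continue  # skip tree edges and edges touching unreachable nodes
        cycle = tree_path(u, v) + [u]  # close the loop with the excluded edge
        cost = sum(graph[a][b]["weight"] for a, b in zip(cycle, cycle[1:]))
        if cost < best_cost:
            best_cost, best_loop = cost, cycle
    return best_loop
```

In an interactive setting, the seed would be updated from the cursor position and the search rerun, which is the recomputation loop the abstract describes.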
163 |
Advanced image segmentation and data clustering concepts applied to digital image sequences featuring the response of biological materials to toxic agents / Roussel, Nicolas. 27 March 2003
Image segmentation is the process by which an image is divided into a number of
regions. The regions are to be homogeneous with respect to some property. The definition
of homogeneity depends mainly on the expected patterns of the objects of interest. The
algorithms designed to perform this task can be divided into two main families: splitting
algorithms and merging algorithms. The latter family comprises the seeded region growing
algorithms that provide the basis for our work.
Seeded region growing methods, such as marker-initiated watershed segmentation,
depend principally on the quality and relevance of the initial seeds. In situations where
the image contains a variety of aggregated objects of different shapes, finding reliable
initial seeds can be a very complex task.
This thesis describes a versatile approach for finding initial seeds on images featuring
objects distinguishable by their structural and intensity profiles. This approach
involves the use of hierarchical trees containing various kinds of information about the objects in
the image. These trees can be searched for specific patterns to generate the initial seeds
required to perform a reliable region growing process. Segmentation results are shown
in this thesis.
The above image segmentation scheme has been applied to detect isolated living
cells in a sequence of frames and to monitor their behavior over time. The tissues
used for these studies are isolated from the scales of the fish Betta splendens.
Since the isolated cells, or chromatophores, are sensitive to various kinds of toxic agents,
the creation of a cell-based toxin detector was proposed. The operation of such a sensor
depends on efficient segmentation of the cell images and extraction of pertinent visual features.
Our ultimate objective is to model and classify the observed cell behavior in order
to detect and recognize biological or chemical agents affecting the cells. Some possible
modeling and classification approaches are presented in this thesis. / Graduation date: 2003
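As a generic illustration of the seeded region growing step described in this abstract (not the author's hierarchical-tree seed selection or marker-based watershed), the sketch below grows labelled regions from given seed pixels in the spirit of Adams and Bischof's SRG, using numpy and a priority queue; all names and the 4-connectivity choice are illustrative.

```python
import heapq
import numpy as np

def seeded_region_growing(image, seeds):
    """Grow labelled regions from seeds on a 2-D grayscale image.

    `seeds` maps an integer label (> 0) to a list of (row, col) seed pixels.
    Unlabelled pixels are absorbed one at a time by the adjacent region whose
    mean intensity (at the time the pixel was queued) is closest to the pixel
    value -- an Adams-Bischof style priority ordering.
    """
    labels = np.zeros(image.shape, dtype=int)
    total = {lab: 0.0 for lab in seeds}
    count = {lab: 0 for lab in seeds}
    heap = []  # entries: (priority, (row, col), candidate label)

    def enqueue_neighbours(r, c, lab):
        mean = total[lab] / count[lab]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-connectivity
            rr, cc = r + dr, c + dc
            if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] \
                    and labels[rr, cc] == 0:
                heapq.heappush(heap, (abs(float(image[rr, cc]) - mean), (rr, cc), lab))

    for lab, pixels in seeds.items():
        for r, c in pixels:
            labels[r, c] = lab
            total[lab] += float(image[r, c])
            count[lab] += 1
    for lab, pixels in seeds.items():
        for r, c in pixels:
            enqueue_neighbours(r, c, lab)

    while heap:
        _, (r, c), lab = heapq.heappop(heap)
        if labels[r, c] != 0:
            continue                      # already claimed by some region
        labels[r, c] = lab
        total[lab] += float(image[r, c])
        count[lab] += 1
        enqueue_neighbours(r, c, lab)
    return labels
```

The quality of the result hinges entirely on the seeds supplied, which is the motivation for the thesis's tree-based seed search.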
164 |
Digital shape classification using local and global shape descriptors / Lin, Cong. January 2011
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
165 |
Adaptive video defogging based on background modeling / Yuk, Shun-cho, Jacky, 郁順祖. January 2013
The performance of intelligent video surveillance systems is always degraded under complicated scenarios such as dynamically changing backgrounds and extremely bad weather. Dynamically changing backgrounds make foreground/background segmentation, which is often the first step in vision-based algorithms, unreliable. Bad weather, such as fog, not only degrades the visual quality of the monitoring videos, but also seriously affects the accuracy of vision-based algorithms.
In this thesis, a fast and robust texture-based background modeling technique is first presented for tackling the problem of foreground/background segmentation under dynamic backgrounds. An adaptive multi-modal framework is proposed which uses a novel texture feature known as scale invariant local states (SILS) to model an image pixel. A pattern-less probabilistic measurement (PLPM) is also derived to estimate the probability of a pixel being background from its SILS. Experimental results show that texture-based background modeling is more robust than illumination-based approaches under dynamic backgrounds and lighting changes. Furthermore, the proposed background modeling technique can run much faster than the existing state-of-the-art texture-based method, without sacrificing the output quality.
Two fast adaptive defogging techniques, namely 1) foreground decremental preconditioned conjugate gradient (FDPCG) and 2) adaptive guided image filtering, are next introduced for removing the foggy effects from video scenes. These two methods allow the estimation of the background transmissions to converge over consecutive video frames; the video sequences are then background-defogged using the background transmission map. Results show that foreground/background segmentation can be improved dramatically with such background-defogged video frames. With reliable foreground/background segmentation results, the foreground transmissions can then be recovered by the proposed 1) foreground incremental preconditioned conjugate gradient (FIPCG) or 2) on-demand guided image filtering. Experimental results show that the proposed methods can effectively improve the visual quality of surveillance videos under heavy fog and bad weather. Compared with state-of-the-art image defogging methods, the proposed methods are shown to be much more efficient. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
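For context, defogging methods of this family generally invert the standard atmospheric scattering model I = J t + A (1 - t) once a transmission map t and the airlight A have been estimated. The sketch below shows only that generic inversion step, with illustrative names; the thesis's FDPCG/FIPCG solvers and guided-filtering estimators of the transmission map are not reproduced.

```python
import numpy as np

def recover_radiance(hazy, transmission, airlight, t_min=0.1):
    """Invert I = J * t + A * (1 - t) to recover the scene radiance J.

    hazy:         H x W x 3 float image scaled to [0, 1]
    transmission: H x W transmission map estimated elsewhere
                  (e.g. per background pixel over consecutive frames)
    airlight:     length-3 vector A of global atmospheric light
    t_min:        lower bound on t, which avoids amplifying noise in dense fog
    """
    t = np.clip(transmission, t_min, 1.0)[..., np.newaxis]
    radiance = (hazy - airlight) / t + airlight
    return np.clip(radiance, 0.0, 1.0)

# Illustrative usage on one frame, assuming the maps come from a background model:
# clear = recover_radiance(frame, t_map, airlight=np.array([0.92, 0.92, 0.95]))
```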
166 |
Active binocular vision: phase-based registration and optimal foveation / Monaco, James Peter. 28 August 2008
Active binocular vision systems are powerful tools in machine vision. With a virtually unlimited field of view, they have access to huge amounts of information, yet are able to confine their resources to specific regions of interest. Since they can dynamically interact with the environment, they are able to successfully address problems that are ill-posed for passive systems. A primary goal of an active binocular vision system is to ascertain depth information. Since such systems employ two cameras and are able to sample a scene from two distinct vantage points, they are well suited to this task.

The depth recovery process is composed of two interrelated components: image registration and sampling. Image registration is the process of determining corresponding points between the stereo images. Once points in the images have been matched, 3D information can be recovered via triangulation. Image sampling determines how the image is discretized and represented. Image registration and sampling are highly interdependent. The choice of sampling scheme can profoundly impact the accuracy and complexity of the registration process. In many situations, particular registration algorithms are simply incompatible with some sampling schemes.

In this dissertation we address both registration and sampling in the context of stereopsis for active binocular vision systems. Throughout this work, contributions in each area are made with an eye toward their eventual integration into a cohesive registration procedure appropriate for active binocular vision systems. The actual synthesis is a daunting task that is beyond the scope of this single dissertation. The focus of this work is to analyze both registration and sampling carefully, establishing a solid foundation for their future integration.

One of the most successful approaches to image registration is phase-differencing. Phase-differencing algorithms provide a fast, powerful means for depth recovery. Unfortunately, phase-differencing techniques suffer from two significant impediments: phase nonlinearities and neglect of multispectral information. This dissertation uses the amenable properties of white noise images to analytically quantify the behavior of phase in these regions of nonlinearity. The improved understanding gained from this analysis enables us to create a new, more effective method for identifying these regions based on the second derivative of phase. We also suggest a novel approach that combines our method of nonlinear phase detection with strategies from both phase-differencing and local correlation. This hybrid approach retains the advantageous properties of phase-differencing while incorporating the multispectral aspects of local correlation.

The task of registration is greatly simplified if the camera geometry is known and the search for corresponding points can be restricted to epipolar lines. Unfortunately, computation of epipolar lines for an active system requires calibration, which can be both highly complex and inaccurate. While it is possible to register images without calibration information, such unconstrained algorithms are usually time-consuming and prone to error. In this dissertation we propose a compromise. Even without instantaneous knowledge of the system geometry, we can restrict the region of correspondence by imposing limits on the possible range of configurations and, as a result, confine our search for matching points to what we refer to as epipolar spaces.
For each point in one image, we define the corresponding epipolar space in the other image as the union of all associated epipolar lines over all possible system geometries. Epipolar spaces eliminate the need for calibration at the cost of an increased search region. Since the average size of a search space is directly related to the accuracy and efficiency of any registration algorithm, it is essential to mitigate the increase. The major contribution of this dissertation is the derivation of an optimal nonuniform sampling that minimizes the average area per epipolar space. / text
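As background for the phase-differencing registration discussed above, the basic single-scale, one-dimensional form of the technique divides the wrapped phase difference between Gabor-filtered left and right scanlines by the filter's centre frequency. The sketch below illustrates that textbook form only, with illustrative names and parameters; it is not the dissertation's hybrid phase/correlation method.

```python
import numpy as np

def gabor_response(signal, wavelength=8.0, sigma=6.0):
    """Complex Gabor response of a 1-D signal (same-length convolution)."""
    omega = 2.0 * np.pi / wavelength
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)
    return np.convolve(signal, kernel, mode="same"), omega

def phase_difference_disparity(left_line, right_line, wavelength=8.0):
    """Per-pixel disparity along a scanline from local phase differences.

    disparity ~ wrapped(phase_left - phase_right) / centre_frequency, which is
    only valid while the true shift is within half a wavelength (the wrap limit).
    """
    resp_l, omega = gabor_response(left_line, wavelength)
    resp_r, _ = gabor_response(right_line, wavelength)
    phase_diff = np.angle(resp_l * np.conj(resp_r))   # wrapped to (-pi, pi]
    return phase_diff / omega

# Synthetic check: a 3-pixel shift of a random texture is roughly recovered
# away from the borders.
rng = np.random.default_rng(0)
line = rng.standard_normal(256)
estimate = phase_difference_disparity(line, np.roll(line, 3))
print(np.median(estimate[64:192]))   # approximately 3 for this example
```

The phase nonlinearities discussed in the abstract are precisely the regions where this simple division breaks down.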
167 |
Image communication system design based on the structural similarity index / Channappayya, Sumohana S., 1977-. 28 August 2008
The amount of digital image and video content being generated and shared has grown explosively in the recent past. The primary goal of image and video communication systems is to achieve the best possible visual quality under a given rate constraint and channel conditions. In this dissertation, the focus is limited to image communication systems. In order to optimize the components of the communication system to maximize perceptual quality, it is important to use a good measure of quality. Even though this fact has long been recognized, the mean squared error (MSE), which is not the best measure of perceptual quality, has been a popular choice in the design of various components of an image communication system. Recent developments in the field of image quality assessment (IQA) have produced powerful new algorithms, including the structural similarity (SSIM) index, the visual information fidelity (VIF) criterion, and the visual signal-to-noise ratio (VSNR). The SSIM index is considered in this dissertation. I demonstrate that optimizing image processing algorithms for the SSIM index does indeed improve the perceptual quality of the processed images. All comparisons in this dissertation are made against appropriate MSE-optimal equivalents.

First, an SSIM-optimal linear estimator is derived and applied to the problem of image denoising. An algorithm for SSIM-optimal linear equalization is then developed and applied to the problem of image restoration. Following the development of these linear solutions, I address the problem of SSIM-optimal soft thresholding, which is a nonlinear technique. The estimation, equalization, and soft-thresholding results all show a gain in visual quality compared to their MSE-optimal counterparts. These solutions are typically used at the receiver of an image communication system. On the transmitter side of the system, bounds on the SSIM index as a function of the rate allocated to a uniform quantizer are derived.
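For reference, the SSIM index mentioned above compares local luminance, contrast, and structure between a reference and a distorted image. The sketch below is a common simplified single-scale implementation using uniform local windows and the usual C1 = (0.01 L)^2, C2 = (0.03 L)^2 constants; the function name and window size are illustrative, and the original formulation uses an 11x11 Gaussian window instead.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_index(x, y, data_range=255.0, win=8):
    """Mean single-scale SSIM between two grayscale images of equal size."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)

    mu_x = uniform_filter(x, win)
    mu_y = uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x ** 2
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y

    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return float(ssim_map.mean())
```

An SSIM-optimal design then maximizes this quantity (or a local version of it) in place of minimizing the MSE.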
168 |
Enhanced Hough transforms for image processing / Tu, Chunling. January 2014
D. Tech. Electrical Engineering
169 |
Photoresist modeling for 365 nm and 257 nm laser photomask lithography and multi-analyte biosensors indexed through shape recognition / Rathsack, Benjamen Michael. 04 April 2011
Not available / text
170 |
Robust estimation methods for image matching / Feng, Chunlin, 馮淳林. January 2004
published_or_final_version / abstract / toc / Electrical and Electronic Engineering / Master / Master of Philosophy