281 |
Topics in living cell multiphoton laser scanning microscopy (MPLSM) image analysis. Zhang, Weimin, 30 October 2006 (has links)
Multiphoton laser scanning microscopy (MPLSM) is an advanced fluorescence imaging technology which produces less noisy microscope images and minimizes damage to living tissue. The MPLSM images in this research show dehydroergosterol (DHE, a fluorescent sterol whose behavior closely mimics that of cholesterol in lipoproteins and membranes) in the plasma membrane region of living cells. The objective is to use statistical image analysis methods to describe how cholesterol is distributed on a living cell's membrane. The statistical image analysis methods applied in this research include image segmentation/classification and spatial analysis. For image segmentation, we design a supervised learning method that combines a smoothing technique with rank statistics. This approach is especially useful when only very limited information is available about the classes we want to segment. We also apply unsupervised learning methods to the image data. For spatial analysis, we explore the spatial correlation of the segmented data with a Monte Carlo test. Our research shows that the distribution of DHE exhibits a spatially aggregated pattern. We fit two aggregated point pattern models to the data: an area-interaction process model and a Poisson cluster process model. For the area-interaction process model, we design algorithms for the maximum pseudo-likelihood estimator and the Monte Carlo maximum likelihood estimator under a lattice data setting. For the Poisson cluster process, parameters are estimated with a method for implicit statistical models. A group of simulation studies shows that the Monte Carlo maximum likelihood estimation method produces consistent parameter estimates. Goodness-of-fit tests show that neither model can be rejected. We propose to use the area-interaction process model in further research.
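The Monte Carlo test for spatial aggregation can be illustrated with a small sketch. This is not the thesis's code: it assumes a unit-square observation window and a mean nearest-neighbour distance statistic, and it flags clustering when the observed value ranks low among patterns simulated under complete spatial randomness (CSR).

```python
import numpy as np

def mean_nn_distance(pts):
    """Mean nearest-neighbour distance of a 2D point pattern."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    return d.min(axis=1).mean()

def monte_carlo_csr_test(pts, n_sim=199, rng=None):
    """One-sided Monte Carlo test: a small mean nearest-neighbour
    distance relative to CSR simulations indicates aggregation."""
    rng = np.random.default_rng(rng)
    observed = mean_nn_distance(pts)
    n = len(pts)
    sims = np.array([mean_nn_distance(rng.random((n, 2)))
                     for _ in range(n_sim)])
    # Monte Carlo p-value: rank of the observed statistic
    p_value = (1 + np.sum(sims <= observed)) / (n_sim + 1)
    return observed, p_value
```

A pattern of points scattered tightly around a few cluster centres, as with the aggregated DHE distribution, yields a small p-value, while a uniform pattern does not.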
|
282 |
Statistical and geometric methods for visual tracking with occlusion handling and target reacquisition. Lee, Jehoon, 17 January 2012 (has links)
Computer vision is the science that studies how machines understand scenes and automatically make decisions based on meaningful information extracted from an image or from multi-dimensional data of the scene, much as human vision does. One common and well-studied field of computer vision is visual tracking, a challenging and active research area in the computer vision community. Visual tracking is the task of continuously estimating the pose of an object of interest against the background in consecutive frames of an image sequence. It is a ubiquitous task and a fundamental technology of computer vision, providing the low-level information used by high-level applications such as visual navigation, human-computer interaction, and surveillance systems.
The focus of the research in this thesis is visual tracking and its applications. More specifically, the objective of this research is to design a reliable tracking algorithm for a deformable object that is robust to clutter and capable of occlusion handling and target reacquisition in realistic tracking scenarios, using statistical and geometric methods. To this end, the approaches developed in this thesis make extensive use of region-based active contours and particle filters in a variational framework. In addition, to deal with occlusion and target reacquisition problems, we exploit the benefits of coupling 2D and 3D information of an image and an object.
In this thesis, first, we present an approach for tracking a moving object based on 3D range information in stereoscopic temporal imagery by combining particle filtering and geometric active contours. Range information is weighted by the proposed Gaussian weighting scheme to improve the segmentation achieved by active contours. In addition, this work presents an on-line shape learning method based on principal component analysis to reacquire an object in the event that it disappears from the field of view and reappears later. Second, we propose an approach to jointly track a rigid object in a 2D image sequence and estimate its pose in 3D space. In this work, we take advantage of a known 3D model of the object and employ particle filtering to generate and propagate the translation and rotation parameters in a decoupled manner. Moreover, to continuously track the object in the presence of occlusions, we propose an occlusion detection and handling scheme based on controlling the degree of dependence between the predictions and measurements of the system. Third, we introduce a fast level-set-based algorithm applicable to real-time applications. In this algorithm, a contour-based tracker is improved in terms of computational complexity, and the tracker performs real-time curve evolution for detecting multiple windows. Lastly, we deal with rapid human motion in the context of object segmentation and visual tracking. Specifically, we introduce a model-free and marker-less approach for human body tracking based on a dynamic color model and geometric information of a human body from a monocular video sequence. The contributions of this thesis are summarized as follows:
1. A reliable algorithm to track deformable objects in a sequence consisting of 3D range data by combining particle filtering and statistics-based active contour models.
2. An effective handling scheme, based on the object's 2D shape information, for the challenging situation in which the tracked object completely leaves the image domain during tracking.
3. A robust 2D-3D pose tracking algorithm using a 3D shape prior and particle filters on SE(3).
4. An occlusion handling scheme based on the degree of trust between predictions and measurements of the tracking system, which is controlled in an online fashion.
5. Fast level-set-based active contour models applicable to real-time object detection.
6. A model-free and marker-less approach for tracking rapid human motion based on a dynamic color model and geometric information of a human body.
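The interplay of prediction, measurement, and occlusion-dependent trust described above can be sketched with a toy bootstrap particle filter. This is a 1D simplification, not the thesis's tracker: the motion model, noise levels, and the 10x noise inflation during occlusion are illustrative assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, rng,
                         process_noise=0.5, meas_noise=1.0, occluded=False):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    # predict: propagate particles through a random-walk motion model
    particles = particles + rng.normal(0.0, process_noise, size=particles.shape)
    # update: weight by measurement likelihood; during an occlusion the
    # measurement noise is inflated so the prediction is trusted more
    sigma = meas_noise * (10.0 if occluded else 1.0)
    weights = weights * np.exp(-0.5 * ((particles - measurement) / sigma) ** 2)
    weights = weights / weights.sum()
    # systematic resampling to avoid weight degeneracy
    cum = np.cumsum(weights)
    cum[-1] = 1.0
    idx = np.searchsorted(cum, (rng.random() + np.arange(len(weights))) / len(weights))
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 2.0, 500)
weights = np.full(500, 1.0 / 500)
true_pos = 0.0
for _ in range(20):
    true_pos += 0.3                      # target drifts; measurement is exact here
    particles, weights = particle_filter_step(particles, weights, true_pos, rng)
estimate = particles.mean()
```

In the full tracker the state is a contour or an SE(3) pose rather than a scalar, but the trust-control mechanism is the same: the likelihood is flattened when an occlusion is detected.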
|
283 |
Assessment of Grapevine Vigour Using Image Processing / Tillämpning av bildbehandlingsmetoder inom vinindustrin. Bjurström, Håkan; Svensson, Jon, January 2002 (has links)
This Master's thesis studies the possibility of using image processing as a tool to facilitate vine management, in particular shoot counting and assessment of the grapevine canopy. Both are areas where manual inspection is done today. The thesis presents methods of capturing images and segmenting different parts of a vine. It also presents and evaluates different approaches to shoot counting. Within canopy assessment, the emphasis is on methods to estimate canopy density. Other possible assessment areas are also discussed, such as canopy colour and the measurement of canopy gaps and fruit exposure. An example of a vine assessment system is given.
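One simple way to estimate canopy density from a segmented image is the fraction of pixels classified as foliage. The sketch below is a hypothetical stand-in for the thesis's methods: it uses an excess-green index with an assumed threshold of 20, not the segmentation actually evaluated in the work.

```python
import numpy as np

def canopy_density(rgb):
    """Fraction of pixels classified as canopy using a simple
    excess-green index (2G - R - B). The threshold of 20 is an
    illustrative assumption, not a calibrated value."""
    rgb = rgb.astype(float)
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
    return float((exg > 20).mean())
```

On a real vineyard image the density estimate would then be tracked across the season or compared between canopy management treatments.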
|
284 |
Ventricle slice detection in MRI images using Hough Transform and Object Matching techniques. Thakkar, Chintan, 1 June 2006 (has links)
The determination of the center slice, defined as a slice through the lateral ventricles in the axial plane in a volume of MR images, is important to the segmentation of the image into its anatomical parts. The center or ventricle slice in a set of MR images is recognized by the shape of the ventricles in the axial plane, as depicted by the cerebrospinal fluid in the image. Currently, no technique exists to detect this slice, and the purpose of this thesis is to find a slice through the lateral ventricles in the axial plane from a volume of MRI brain scan slices. Several methodologies are discussed in the thesis, the Hough Transform and Object Matching using deformable templates being the primary ones. It is shown, in the test cases used, that these algorithms used together provided results with almost 80 percent accuracy. However, a simple method to spatially calculate the center slice is also competitive in accuracy.
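The voting idea behind the Hough transform can be shown with a minimal sketch for circles of a known radius, a rough analogue of detecting the rounded ventricle cross-sections; the thesis's actual templates are more complex, and the fixed radius here is an illustrative assumption.

```python
import numpy as np

def hough_circle(edge_points, radius, shape, n_angles=360):
    """Accumulate votes for circle centres of a known radius: each
    edge point votes for every centre that would place a circle of
    that radius through it. The peak of the accumulator is the most
    supported centre."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc
```

In practice the edge points come from an edge detector applied to the MR slice, and the accumulator is searched over a range of radii rather than a single one.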
|
285 |
λ-connectedness and its application to image segmentation, recognition and reconstruction. Chen, Li, January 2001 (has links)
Seismic layer segmentation, oil-gas boundary surface recognition, and 3D volume data reconstruction are three important tasks in three-dimensional seismic image processing. Geophysical and geological parameters and properties are known to exhibit progressive changes within a layer; however, sudden changes can also occur between two layers. λ-connectedness was proposed to describe such a phenomenon. Based on graph theory, λ-connectedness describes the relationship among pixels in an image. It is proved that λ-connectedness is an equivalence relation, so it can be used to partition an image into different classes and hence to perform image segmentation. Using random graph theory and the λ-connectivity of the image, the length of a path in a λ-connected set can be estimated. In addition, normal λ-connected subsets preserve every path that is λ-connected within the subsets. An O(n log n) time algorithm is designed for normal λ-connected segmentation. The techniques developed are used to find objects in 2D/3D seismic images. A question frequently asked is how to find the interface between two layers or the boundary surfaces of an oil-gas reserve; this is equivalent to deciding whether a λ-connected set is an interface or a surface. The problem raised is how to recognize a surface in digital spaces. λ-connectedness is a natural and intuitive way of describing digital surfaces and digital manifolds. Fast algorithms are designed to recognize whether an arbitrary set is a digital surface. Furthermore, a classification theorem of simple surface points is deduced: there are only six classes of simple surface points in 3D digital spaces. Our definition has been proved to be equivalent to Morgenthaler-Rosenfeld's definition of digital surfaces under direct adjacency. Reconstruction of a surface and of a data volume is important to seismic data processing.
Given a set of guiding pixels, the problem of generating a λ-connected surface (a subset of the image) is an inverse problem to λ-connected segmentation. In order to simplify the fitting algorithm, gradual variation, a concept equivalent to λ-connectedness, is used to preserve the continuity of the fitted surface. The key theorem, the necessary and sufficient condition for gradually varied interpolation, has been mathematically proven. A random gradually varied surface fitting is designed, and other theoretical aspects are investigated. These concepts are used to successfully reconstruct 3D volumes of real seismic data. This thesis proposes λ-connectedness and its applications to seismic data processing. It is also used for other problems, such as ionogram scaling and object tracking, and has the potential to become a general technique in image processing and computer vision. Concepts and knowledge from several areas of mathematics, such as set theory, fuzzy set theory, graph theory, numerical analysis, topology, discrete geometry, computational complexity, and algorithm design and analysis, have been applied in this thesis.
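A much-simplified sketch of λ-connected segmentation follows. It assumes a pairwise degree of similarity 1 - |a - b| / R between 4-neighbours (R being the image range) and grows components where that similarity is at least λ; the thesis's path-based λ-connectivity and normal segmentation are richer than this.

```python
from collections import deque
import numpy as np

def lambda_segment(img, lam):
    """Partition a grayscale image into components in which adjacent
    pixels have degree of similarity >= lam. A simplified sketch of
    lambda-connected segmentation, using only direct 4-adjacency."""
    img = img.astype(float)
    R = float(img.max() - img.min()) or 1.0
    labels = np.full(img.shape, -1, dtype=int)
    current = 0
    for start in zip(*np.nonzero(labels < 0)):
        if labels[start] >= 0:
            continue                      # already claimed by an earlier seed
        labels[start] = current
        queue = deque([start])
        while queue:                      # breadth-first region growing
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and labels[ny, nx] < 0
                        and 1 - abs(img[y, x] - img[ny, nx]) / R >= lam):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
        current += 1
    return labels
```

Raising λ demands stronger similarity and so splits the image into more, more homogeneous components; λ = 0 merges everything.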
|
286 |
Image Segmentation With Improved Region Modeling. Ersoy, Ozan, 1 December 2004 (has links) (PDF)
Image segmentation is an important research area in digital image processing with several applications in vision-guided autonomous robotics, product quality inspection, medical diagnosis, the analysis of remotely sensed images, etc. The aim of image segmentation can be defined as partitioning an image into homogeneous regions in terms of the features of pixels extracted from the image.
Image segmentation methods can be classified into four main categories: 1) clustering methods, 2) region-based methods, 3) hybrid methods, and 4) Bayesian methods. In this thesis, major image segmentation methods belonging to the first three categories are examined and tested on typical images. Moreover, improvements are proposed to the well-known Recursive Shortest-Spanning Tree (RSST) algorithm. The improvements aim to model each region better during the merging stage; namely, the grayscale histogram, the joint histogram and homogeneous texture are used for better region modeling.
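The merging stage that the improvements target can be sketched as greedy region merging in the spirit of RSST. This toy models each region by its mean intensity and merges the closest adjacent pair until k regions remain; the thesis replaces the mean with histogram and texture models, and a real RSST operates on the pixel adjacency graph of an image.

```python
import heapq

def rsst_merge(values, adjacency, k):
    """Greedily merge adjacent regions with the closest mean
    intensities until k regions remain. Stale heap entries are
    re-pushed with their updated cost (lazy deletion)."""
    mean = {i: float(v) for i, v in enumerate(values)}
    size = {i: 1 for i in mean}
    parent = {i: i for i in mean}        # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    heap = [(abs(mean[a] - mean[b]), a, b) for a, b in adjacency]
    heapq.heapify(heap)
    regions = len(mean)
    while regions > k and heap:
        cost, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue                     # already merged
        if cost != abs(mean[ra] - mean[rb]):
            # stale entry: re-push with the current region-model cost
            heapq.heappush(heap, (abs(mean[ra] - mean[rb]), ra, rb))
            continue
        parent[rb] = ra                  # merge rb into ra, update its model
        mean[ra] = (mean[ra] * size[ra] + mean[rb] * size[rb]) / (size[ra] + size[rb])
        size[ra] += size[rb]
        regions -= 1
    return [find(i) for i in range(len(values))]
```

Swapping the mean for a histogram distance changes only the cost function, which is exactly where the thesis's improved region models plug in.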
|
287 |
Automatic Image Segmentation of Healthy and Atelectatic Lungs in Computed Tomography / Automatische Bildsegmentierung von gesunden und atelektatischen Lungen in computertomographischen Bildern. Cuevas, Luis Maximiliano, 22 July 2010 (links) (PDF)
Computed tomography (CT) has become a standard in pulmonary imaging, allowing the analysis of diseases such as lung nodules, emphysema and embolism. The improved spatial and temporal resolution brings a dramatic increase in the amount of data that must be stored and processed. This has motivated the development of computer-aided diagnosis (CAD) systems that relieve the physician of the tedious task of manually delineating the boundaries of the structures of interest in such a large number of images, a pre-processing step known as image segmentation. Apart from being impractical, manual segmentation is prone to high intra- and inter-observer variability.
Automatic segmentation of lungs with atelectasis poses a challenge because, in CT images, they have texture and gray levels similar to the surrounding tissue. Consequently, the available graphical information is not sufficient to distinguish the boundary of the lung.
The present work aims to close the existing gap left by the segmentation of atelectatic lungs in volume CT data. A-priori anatomical knowledge plays a key role in the achievement of this goal.
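To see why atelectasis is hard, consider the classical density-based approach that works for healthy lungs: threshold the low-density (air-filled) voxels and discard those connected to the image border, which belong to the air outside the body. The sketch below is that baseline, with an assumed threshold of -400 HU, not the thesis's method; an atelectatic lung at soft-tissue density simply never passes the threshold.

```python
from collections import deque
import numpy as np

def lung_mask(hu_slice, air_threshold=-400):
    """Threshold-based lung extraction on one CT slice: keep
    low-density regions that are NOT connected to the image border
    (border-connected air is outside the body)."""
    air = hu_slice < air_threshold
    h, w = air.shape
    outside = np.zeros_like(air)
    # seed the flood fill with every air pixel on the image border
    queue = deque([(y, x) for y in range(h) for x in range(w)
                   if air[y, x] and (y in (0, h - 1) or x in (0, w - 1))])
    for y, x in queue:
        outside[y, x] = True
    while queue:                          # flood-fill the outside air
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and air[ny, nx] and not outside[ny, nx]:
                outside[ny, nx] = True
                queue.append((ny, nx))
    return air & ~outside
```

Since collapsed lung tissue is not "air" in this sense, a method for atelectatic lungs must bring in anatomical priors, which is the gap the thesis addresses.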
|
288 |
Irregularly sampled image restoration and interpolation. Facciolo Furlan, Gabriele, 3 March 2011 (has links)
The generation of urban digital elevation models from satellite images using stereo reconstruction techniques poses several challenges due to its precision requirements. In this thesis we study three problems related to the reconstruction of urban models from stereo images in a low-baseline configuration. They were motivated by the MISS project, launched by CNES (Centre National d'Etudes Spatiales) in order to develop a low-baseline acquisition model.
The first problem is the restoration of irregularly sampled images and image fusion using a band-limited interpolation model. A novel restoration algorithm is proposed which incorporates the image formation model as a set of local constraints and uses a family of regularizers that allows the spectral behavior of the solution to be controlled. Secondly, the problem of interpolating sparsely sampled images is addressed using a self-similarity prior. The related problem of image inpainting is also considered, and a novel framework for exemplar-based image inpainting is proposed; this framework is then extended to the interpolation of sparsely sampled images. The third problem is the regularization and interpolation of digital elevation models under geometric restrictions taken from a reference image. For this problem three regularization models are studied: an anisotropic minimal-surface regularizer, the anisotropic total variation, and a new piecewise-affine interpolation algorithm.
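The idea of restoring irregular samples under a band-limited model with spectral control can be shown in one dimension. This toy is not the thesis's algorithm: it fits a truncated Fourier series by regularized least squares, with a Tikhonov penalty that grows with frequency so the regularizer damps the spectrum of the solution.

```python
import numpy as np

def bandlimited_restore(x, y, n_freq=5, reg=1e-3, grid=None):
    """Restore a periodic 1-D signal on [0, 1] from irregular samples
    (x, y) by fitting a truncated Fourier series. The penalty reg*k^2
    on frequency k controls the spectral decay of the solution."""
    def design(t):
        cols = [np.ones_like(t)]
        for k in range(1, n_freq + 1):
            cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
        return np.stack(cols, axis=1)
    A = design(np.asarray(x, float))
    # diagonal Tikhonov penalty: heavier damping at higher frequencies
    penalty = np.diag([0.0] + [reg * k**2
                               for k in range(1, n_freq + 1) for _ in (0, 1)])
    coef = np.linalg.solve(A.T @ A + penalty, A.T @ np.asarray(y, float))
    grid = np.linspace(0, 1, 100) if grid is None else np.asarray(grid, float)
    return design(grid) @ coef
```

The thesis works in 2D with local constraints from the image formation model, but the role of the regularizer, trading data fidelity against spectral behavior, is the same.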
|
289 |
Pixel and patch based texture synthesis using image segmentation. Tran, Minh Tue, January 2010 (has links)
[Truncated abstract] Texture exists all around us and serves as an important visual cue for the human visual system. Captured within an image, we identify texture by its recognisable visual pattern. It carries extensive information and plays an important role in our interpretation of a visual scene. The subject of this thesis is texture synthesis, which is defined as the creation of a new texture that shares the fundamental visual characteristics of an existing texture, such that the new image and the original are perceptually similar. Textures are used in computer graphics, computer-aided design, image processing and visualisation to produce realistic recreations of what we see in the world. For example, the texture on an object communicates its shape and surface properties in a 3D scene. Humans can discriminate between two textures and decide on their similarity in an instant, yet achieving this algorithmically is not a simple process. Textures range in complexity, and developing an approach that consistently synthesises this immense range is a difficult problem to solve, which motivates this research. Typically, texture synthesis methods aim to replicate texture by transferring the recognisable repeated patterns from the sample texture to the synthesised output. Feature transferal can be achieved by matching pixels or patches from the sample to the output. As a result, two main approaches, pixel-based and patch-based, have established themselves in the active field of texture synthesis. This thesis contributes to the present knowledge by introducing two novel texture synthesis methods. Both methods use image segmentation to improve synthesis results. ... The sample is segmented and the boundaries of the middle patch are confined to follow segment boundaries. This prevents texture features from being cut off prematurely, a common artifact of patch-based results, and eliminates the need for the patch boundary comparisons that most other patch-based synthesis methods employ.
Since no user input is required, this method is simple and straightforward to run. The tiling of pre-computed tile pairs allows outputs that are large relative to the sample size to be generated quickly. Output results show great success for textures with stochastic and semi-stochastic clustered features, but future work is needed to handle more highly structured textures. Lastly, these two texture synthesis methods are applied to the areas of image restoration and image replacement. These two areas of image processing involve replacing parts of an image with synthesised texture and are often referred to as constrained texture synthesis. Images can contain a large amount of complex information, so replacing parts of an image while maintaining image fidelity is a difficult problem to solve. The texture synthesis approaches and constrained synthesis implementations proposed in this thesis achieve successful results comparable with present methods.
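The pixel-based family the abstract refers to can be illustrated with a one-dimensional Efros-Leung-style toy: each new value is copied from the sample position whose preceding neighbourhood best matches the end of the output so far. This deterministic first-best-match sketch is an assumption for illustration, not the thesis's segmentation-guided method.

```python
import numpy as np

def synthesize_1d(sample, out_len, window=3):
    """Pixel-based texture synthesis in 1D: grow the output by
    matching its trailing `window` values against every window of
    the sample and copying the value that follows the best match."""
    sample = np.asarray(sample, float)
    out = list(sample[:window])                      # seed with the sample start
    while len(out) < out_len:
        context = np.array(out[-window:])
        # squared distances between the context and every sample window
        cands = np.array([sample[i:i + window]
                          for i in range(len(sample) - window)])
        d = ((cands - context) ** 2).sum(axis=1)
        best = int(np.argmin(d))                     # ties: first best match
        out.append(sample[best + window])
    return np.array(out[:out_len])
```

Real pixel-based synthesisers work in 2D with causal neighbourhoods and sample randomly among near-best matches to avoid verbatim copying; segmentation, as in the thesis, further constrains where features may be broken.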
|
290 |
Vehicle position detection using digital video signal processing [Εύρεση θέσης αυτοκινήτου με ψηφιακή επεξεργασία σήματος βίντεο]. Παγώνης, Μελέτιος, 4 May 2011 (has links)
The goal of this thesis is to study, develop and partially implement methods for detecting the position of a vehicle. Particular emphasis is given to the study and analysis of optical flow, which is considered fundamental compared to the other methods. Finally, a method for image segmentation is also analysed.
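The optical-flow component the abstract emphasises can be sketched with the Lucas-Kanade least-squares solution of the brightness-constancy equation Ix*u + Iy*v = -It over a patch. This single-patch toy is an illustration, not the thesis's implementation.

```python
import numpy as np

def lucas_kanade_patch(frame1, frame2):
    """Estimate one translational flow vector (u, v) for a whole
    patch by solving Ix*u + Iy*v = -It in the least-squares sense.
    Spatial gradients use central differences; the one-pixel border
    is cropped to discard wrap-around from np.roll."""
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    Ix = (np.roll(f1, -1, axis=1) - np.roll(f1, 1, axis=1))[1:-1, 1:-1] / 2
    Iy = (np.roll(f1, -1, axis=0) - np.roll(f1, 1, axis=0))[1:-1, 1:-1] / 2
    It = (f2 - f1)[1:-1, 1:-1]
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

For vehicle detection, such flow vectors are computed over many small windows, and regions whose motion disagrees with the background flow indicate a moving vehicle.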
|