  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
381

Simultaneous Bottom-up/top-down Processing In Early And Mid Level Vision

Erdem, Mehmet Erkut 01 November 2008 (has links) (PDF)
The prevalent view in computer vision since Marr is that visual perception is a data-driven bottom-up process. In this view, image data is processed in a feed-forward fashion in which a sequence of independent visual modules transforms simple low-level cues into more complex abstract perceptual units. Over the years, a variety of techniques have been developed using this paradigm. Yet an important realization is that low-level visual cues are generally so ambiguous that they can render purely bottom-up methods quite unsuccessful. These ambiguities cannot be resolved without taking high-level contextual information into account. In this thesis, we explore different ways of enriching early and mid-level computer vision modules with a capacity to extract and use contextual knowledge. Mainly, we integrate low-level image features with contextual information within unified formulations in which bottom-up and top-down processing take place simultaneously.
382

Image Segmentation Based On Variational Techniques

Altinoklu, Metin Burak 01 February 2009 (has links) (PDF)
In this thesis, image segmentation methods based on the Mumford–Shah variational approach have been studied. An optimum point of the Mumford–Shah functional consists of a piecewise smooth approximate image together with a set of edge curves; by computing it, an image can be decomposed into regions. This approximate image is smooth inside regions but is allowed to be discontinuous across region boundaries. Unfortunately, because of the irregularity of the Mumford–Shah functional, it cannot be used directly for image segmentation. There are, however, several approaches to approximating it. In the first approach, suggested by Ambrosio and Tortorelli, the functional is regularized in a special way; the regularized (Ambrosio–Tortorelli) functional is Gamma-convergent to the Mumford–Shah functional. In the second approach, the Mumford–Shah functional is minimized in two steps: first, the edge set is held constant and the resulting functional is minimized; second, the edge set is updated using level set methods. This second approximation is known as the Chan–Vese method. In both approaches, the resulting partial differential equations (the Euler–Lagrange equations of the associated functionals) are solved by finite difference methods. In this study, both approaches are implemented in a MATLAB environment, and the overall performance of the algorithms is investigated through computer simulations over a series of images ranging from simple to complex.
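As a concrete illustration of the second approach, the data term of the piecewise-constant (Chan–Vese) model, with the contour-length penalty omitted, reduces to alternating between assigning each pixel to the closer of two region means and updating those means. A minimal sketch of that simplified iteration (the function name and the omission of the length term are our own choices, not the thesis implementation):

```python
import numpy as np

def chan_vese_means(img, iters=20):
    # Simplified piecewise-constant Chan-Vese data term (length penalty
    # omitted): alternately assign each pixel to the closer of two region
    # means c1, c2, then update the means from the current partition.
    c1, c2 = img.min(), img.max()
    for _ in range(iters):
        mask = np.abs(img - c1) < np.abs(img - c2)
        if mask.all() or (~mask).all():
            break  # degenerate partition: one region is empty
        c1, c2 = img[mask].mean(), img[~mask].mean()
    return mask, c1, c2
```

With the length term restored, the update of the partition would be driven by a level set evolution rather than this pointwise assignment.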
383

Geometric statistically based methods for the segmentation and registration of medical imagery

Gao, Yi 22 December 2010 (has links)
Medical image analysis aims at developing techniques to extract information from medical images. Among its many sub-fields, image registration and segmentation are two important topics. In this report, we present four pieces of work, addressing different problems as well as coupling them into a unified framework of shape-based image segmentation. Specifically: 1. We link image registration with point set registration, and propose a globally optimal diffeomorphic registration technique for point sets. 2. We propose an image segmentation technique which incorporates robust statistics of the image and the evolution of multiple contours; the method is therefore able to extract multiple targets from the image simultaneously. 3. By combining image registration, statistical learning, and image segmentation, we present a shape-based method which utilizes not only the image information but also the shape knowledge. 4. A multi-scale shape representation based on the wavelet transformation is proposed. In particular, the shape is represented by wavelet coefficients in a hierarchical way in order to decompose the shape variance into multiple scales. Furthermore, statistical shape learning and shape-based segmentation are performed under this multi-scale shape representation framework.
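The hierarchical wavelet coefficients mentioned in point 4 can be illustrated with a single-level Haar transform of a 1D shape signature: the coarse averages capture global shape while the detail coefficients capture finer-scale variation. This is a hedged sketch of the general idea, not the thesis's actual multiscale shape representation:

```python
import numpy as np

def haar_decompose(signal):
    # One-level orthonormal Haar transform of a 1D shape signature
    # (length must be even): pairwise averages give the coarse-scale
    # approximation, pairwise differences give the detail coefficients.
    s = np.asarray(signal, dtype=float)
    avg = (s[0::2] + s[1::2]) / np.sqrt(2)
    det = (s[0::2] - s[1::2]) / np.sqrt(2)
    return avg, det
```

Applying the transform recursively to the averages yields the hierarchical, scale-by-scale decomposition on which per-scale statistics can then be learned.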
384

Structural priors for multiobject semi-automatic segmentation of three-dimensional medical images via clustering and graph cut algorithms

Kéchichian, Razmig 02 July 2013 (has links) (PDF)
We develop a generic Graph Cut-based semiautomatic multiobject image segmentation method principally for use in routine medical applications, ranging from tasks involving few objects in 2D images to fairly complex near whole-body 3D image segmentation. The flexible formulation of the method allows its straightforward adaptation to a given application. In particular, the graph-based vicinity prior model we propose, defined as shortest-path pairwise constraints on the object adjacency graph, can be easily reformulated to account for the spatial relationships between objects in a given problem instance. The segmentation algorithm can be tailored to the runtime requirements of the application and the online storage capacities of the computing platform by an efficient and controllable Voronoi tessellation clustering of the input image, which achieves a good balance between cluster compactness and boundary adherence criteria. Comprehensive qualitative and quantitative evaluation and comparison with the standard Potts model confirm that the vicinity prior model brings significant improvements in the correct segmentation of distinct objects of identical intensity, the accurate placement of object boundaries and the robustness of segmentation with respect to clustering resolution. Comparative evaluation of the clustering method against competing ones confirms its benefits in terms of runtime and quality of the produced partitions. Importantly, compared to voxel segmentation, the clustering step improves both the overall runtime and the memory footprint of the segmentation process by up to an order of magnitude, virtually without compromising segmentation quality.
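For reference, the standard Potts model against which the vicinity prior is compared assigns, on top of per-site data costs, a constant penalty to every pair of adjacent sites carrying different labels. A minimal sketch of that energy (function name and data layout are illustrative, not taken from the thesis):

```python
def potts_energy(labels, unary, adj, lam=1.0):
    # Unary data term: cost of assigning site i its label labels[i].
    data = sum(unary[i][labels[i]] for i in range(len(labels)))
    # Potts pairwise term: constant penalty lam for each adjacent pair
    # with different labels, regardless of which two labels they are.
    smooth = lam * sum(1 for i, j in adj if labels[i] != labels[j])
    return data + smooth
```

The vicinity prior replaces the constant penalty with label-pair-dependent costs derived from shortest paths on the object adjacency graph, which is what lets it separate adjacent objects of identical intensity.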
385

A contribution to mouth structure segmentation in images towards automatic mouth gesture recognition

Gómez-Mendoza, Juan Bernardo 15 May 2012 (has links) (PDF)
This document presents a series of elements for approaching the task of segmenting mouth structures in facial images, focused in particular on frames from video sequences. Each stage is treated separately in a different Chapter, starting from image pre-processing and going up to segmentation labeling post-processing, discussing the technique selection and development in every case. The methodological approach suggests the use of a color-based pixel classification strategy as the basis of the mouth structure segmentation scheme, complemented by smart pre-processing and later label refinement. The main contribution of this work, along with the segmentation methodology itself, is the development of a color-independent label refinement technique. The technique, which resembles a linear low-pass filter in the segmentation labeling space followed by a nonlinear selection operation, improves the image labeling iteratively by filling small gaps and eliminating spurious regions resulting from the prior pixel classification stage. Results presented in this document suggest that the refiner is complementary to image pre-processing, hence achieving a cumulative effect on segmentation quality. Finally, the segmentation methodology, comprising input color transformation, pre-processing, pixel classification and label refinement, is put to the test on mouth gesture detection in images intended to command three degrees of freedom of an endoscope holder.
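The refinement idea, a linear low-pass filter in the labeling space followed by a nonlinear selection, can be sketched as smoothing one-hot label maps with a box filter and re-taking the per-pixel argmax. This is a hypothetical sketch; the kernel size and iteration count here are our own choices, not the thesis's:

```python
import numpy as np

def refine_labels(labels, n_classes, iters=3):
    # Linear low-pass step: smooth each class's one-hot indicator map with
    # a 3x3 box filter. Nonlinear selection step: re-assign each pixel to
    # the class with the highest smoothed score. Iterating fills small
    # gaps and removes spurious isolated pixels.
    lab = labels.copy()
    H, W = lab.shape
    for _ in range(iters):
        onehot = np.stack([(lab == k).astype(float) for k in range(n_classes)])
        smooth = np.zeros_like(onehot)
        for k in range(n_classes):
            p = np.pad(onehot[k], 1, mode='edge')
            # 3x3 box filter expressed as the mean of nine shifted views
            smooth[k] = sum(p[i:i + H, j:j + W]
                            for i in range(3) for j in range(3)) / 9.0
        lab = smooth.argmax(axis=0)
    return lab
```

A single-pixel spurious region surrounded by another class is outvoted by its neighborhood after one iteration, which is exactly the gap-filling behavior described above.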
386

Contributions to Mean Shift filtering and segmentation : Application to MRI ischemic data

Li, Ting 04 April 2012 (has links) (PDF)
Medical studies increasingly use multi-modality imaging, producing multidimensional data that bring additional information but are also challenging to process and interpret. As an example, for predicting salvageable tissue, ischemic studies using combinations of multiple MRI modalities (DWI, PWI) have produced more conclusive results than studies using a single modality. However, the multi-modality approach requires more advanced algorithms to perform otherwise routine image processing tasks such as filtering, segmentation and clustering. A robust method for addressing these problems is Mean Shift, which is based on feature space analysis and non-parametric kernel density estimation and can be used for multi-dimensional filtering, segmentation and clustering. In this thesis, we sought to optimize the Mean Shift process by analyzing the factors that influence it and optimizing its parameters. We examine the effect of noise on processing the feature space and how Mean Shift can be tuned for optimal de-noising and reduced blurring. The large success of Mean Shift is mainly due to the intuitive tuning of its bandwidth parameters, which describe the scale at which features are analyzed. Building on univariate Plug-In (PI) bandwidth selectors for kernel density estimation, we propose a bandwidth matrix estimation method based on multivariate PI for Mean Shift filtering, and we study the interest of using diagonal and full bandwidth matrices in experiments on synthesized and natural images. We also propose a new, automatic volume-based segmentation framework which combines Mean Shift filtering, Region Growing segmentation and Probability Map optimization. The framework is developed using synthesized MRI images as test data and yielded a perfect segmentation, with DICE similarity values reaching the highest possible value of 1.
Testing is then extended to real MRI data obtained from animals and patients, with the aim of predicting the evolution of the ischemic penumbra several days after the onset of ischemia using only information from the very first scan. The results are an average DICE of 0.8 for the animal MRI scans and 0.53 for the patient MRI scans; the reference images in both cases were manually segmented by a team of expert medical staff. In addition, the most relevant combination of parameters for the MRI modalities is determined.
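The core Mean Shift iteration with a flat kernel — repeatedly moving each point to the mean of the data points lying within one bandwidth of it until it settles on a density mode — can be sketched in 1D as follows. The bandwidth and data here are illustrative; the thesis works with multivariate Plug-In bandwidth matrices rather than a scalar bandwidth:

```python
import numpy as np

def mean_shift_1d(points, bandwidth=1.0, iters=100):
    # Flat-kernel mean shift: each point repeatedly moves to the mean of
    # the original data points within `bandwidth` of its current position,
    # so points converge to the modes of the underlying density; points
    # sharing a mode belong to the same cluster.
    pts = np.asarray(points, dtype=float)
    x = pts.copy()
    for _ in range(iters):
        new = np.array([pts[np.abs(pts - xi) <= bandwidth].mean() for xi in x])
        if np.allclose(new, x):
            break  # all points have converged to a mode
        x = new
    return x
```

The bandwidth controls the scale of the analysis: too small and every point becomes its own mode, too large and distinct structures merge, which is why principled bandwidth selection matters.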
387

Bayesian Spatial Modeling of Complex and High Dimensional Data

Konomi, Bledar December 2011 (has links)
The main objective of this dissertation is to apply Bayesian modeling to different complex and high-dimensional spatial data sets. I develop Bayesian hierarchical spatial models for both the observed locations and the observation variable. Throughout this dissertation I carry out inference on the posterior distributions using Markov chain Monte Carlo, developing computational strategies that reduce the computational cost. I start with a "high level" image analysis by modeling the pixels with a Gaussian process and the objects with a marked point process. The proposed method is an automatic image segmentation and classification procedure which simultaneously detects the boundaries and classifies the objects in the image into one of the predetermined shape families. Next, I turn my attention to piecewise non-stationary Gaussian process models and their computational challenges for very large data sets. I simultaneously model the non-stationarity and reduce the computational cost by using the technique of full-scale approximation, and I demonstrate the proposed reduction technique on Total Ozone Mapping Spectrometer (TOMS) data. Furthermore, I extend the reduction method for non-stationary Gaussian process models to a dynamic partition of the space by using a modified Treed Gaussian Model. This modification is based on the use of a non-stationary function and the full-scale approximation. The proposed model can deal with piecewise non-stationary geostatistical data with unknown partitions. Finally, I apply the method to the TOMS data to explore the non-stationary nature of the data.
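The spatial models above rest on Gaussian process covariance matrices, whose dense n-by-n structure (and O(n^3) factorization cost) is precisely what full-scale approximation is designed to tame. As a hedged illustration, here is the dense stationary covariance that would otherwise have to be factorized; the squared-exponential kernel is a common generic choice, not necessarily the one used in the dissertation:

```python
import numpy as np

def sq_exp_cov(x, ell=1.0, sigma2=1.0):
    # Dense squared-exponential covariance matrix for 1D locations x:
    # K[i, j] = sigma2 * exp(-0.5 * ((x_i - x_j) / ell)^2).
    # Nearby locations are highly correlated; correlation decays with
    # distance at a rate set by the length scale ell.
    x = np.asarray(x, dtype=float)
    d = x[:, None] - x[None, :]
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)
```

Full-scale approximation replaces this dense K with the sum of a low-rank term (capturing long-range dependence) and a sparse or block-diagonal residual term (capturing short-range dependence), so that likelihood evaluations scale to large n.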
388

A Multidimensional Filtering Framework with Applications to Local Structure Analysis and Image Enhancement

Svensson, Björn January 2008 (has links)
Filtering is a fundamental operation in image science in general and in medical image science in particular. The most central applications are image enhancement, registration, segmentation and feature extraction. Even though these applications involve non-linear processing, a majority of the available methodologies rely on initial estimates produced by linear filters. Linear filtering is a well-established cornerstone of signal processing, reflected by the overwhelming amount of literature on finite impulse response filters and their design. Standard techniques for multidimensional filtering are computationally intense. This leads either to a long computation time or to a performance loss caused by approximations made in order to increase computational efficiency. This dissertation presents a framework for the realization of efficient multidimensional filters. A weighted least squares design criterion ensures preservation of performance, and the two techniques called filter networks and sub-filter sequences significantly reduce the computational demand. A filter network is a realization of a set of filters, decomposed into a structure of sparse sub-filters each with a low number of coefficients. Sparsity is here a key property for reducing the number of floating point operations required for filtering. The network structure is also important for efficiency, since it determines how the sub-filters contribute to several output nodes, allowing the reduction or elimination of redundant computations. Filter networks, which are the main contribution of this dissertation, have many potential applications. The primary target of the research presented here has been local structure analysis and image enhancement. A filter network realization for local structure analysis in 3D shows a computational gain, in terms of multiplications required, which can exceed a factor of 70 compared to standard convolution.
For comparison, this filter network requires approximately the same amount of multiplications per signal sample as a single 2D filter. These results are purely algorithmic and are not in conflict with the use of hardware acceleration techniques such as parallel processing or graphics processing units (GPU). To get a flavor of the computation time required, a prototype implementation which makes use of filter networks carries out image enhancement in 3D, involving the computation of 16 filter responses, at an approximate speed of 1MVoxel/s on a standard PC.
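The sub-filter-sequence idea can be illustrated with its simplest instance, a separable 2D filter: applying a length-k kernel as two 1D passes costs about 2k multiplications per pixel instead of k^2 for direct 2D convolution. This is a generic sketch of that cost reduction; actual filter networks decompose non-separable filter sets far more aggressively:

```python
import numpy as np

def separable_filter(img, k1d):
    # A separable 2D filter (outer product of k1d with itself) applied as
    # a sequence of two 1D sub-filters: first along rows, then along
    # columns. Cost per pixel: 2*len(k1d) multiplications instead of
    # len(k1d)**2 for direct 2D convolution.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k1d, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k1d, mode='same'), 0, tmp)
```

For a 3x3 box kernel this already saves 9 - 6 = 3 multiplications per pixel; for the large 3D kernels used in local structure analysis, chaining sparse sub-filters is where gains on the order of the reported factor of 70 come from.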
389

A General System for Supervised Biomedical Image Segmentation

Chen, Cheng 15 March 2013 (has links)
Image segmentation is important, with applications to several problems in biology and medicine. While extensively researched, current segmentation methods generally perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe a system that, with few modifications, can be used in a variety of image segmentation problems. The system is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. In summary, we make several innovations: (1) a general framework for such a system is proposed, in which rotations and variations of intensity neighborhoods across scales are modeled, and a multi-scale classification framework is utilized to segment unknown images; (2) a fast algorithm for training data selection and pixel classification is presented, in which a majority-voting-based criterion is proposed for selecting a small subset from the raw training set; combined with a 1-nearest neighbor (1-NN) classifier, this algorithm provides decent classification accuracy within reasonable computational complexity; (3) a general deformable model for the optimization of segmented regions is proposed, which takes the decision values from the preceding pixel classification process as input and optimizes the segmented regions in a partial differential equation (PDE) framework. We show that the performance of this system in several different biomedical applications, such as tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than several algorithms specifically designed for each of these applications. In addition, we describe another general segmentation system for biomedical applications where a strong prior on shape is available (e.g. cells, nuclei).
The idea is based on template matching and supervised learning, and we show examples of segmenting cells and nuclei from microscopy images. The method uses examples selected by a user to build a statistical model which captures the texture and shape variations of the nuclear structures in a given data set to be segmented. Segmentation of subsequent, unlabeled images is then performed by finding the model instance that best matches (in the normalized cross correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting cells and nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across all of the imaging modalities studied. The results also demonstrate that, relative to several existing methods, the template-based method we propose is more robust: it better handles variations in illumination, variations in texture across imaging modalities, and cluttered cells and nuclei, while providing smoother and more accurate segmentation borders.
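The normalized cross-correlation matching that drives the template-based system can be sketched as a brute-force sliding-window search; a score of 1.0 means a perfect match up to an affine intensity change. This is an illustration only — the actual system matches learned model instances built from user-selected examples, not a single fixed template:

```python
import numpy as np

def ncc(patch, template):
    # Normalized cross-correlation: mean-subtract both windows, then take
    # the cosine of the angle between them. Invariant to additive and
    # multiplicative intensity changes.
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return (p * t).sum() / denom if denom else 0.0

def best_match(image, template):
    # Slide the template over every valid window and return the top-left
    # corner of the window with the highest NCC score (brute force).
    H, W = image.shape
    h, w = template.shape
    scores = np.array([[ncc(image[i:i + h, j:j + w], template)
                        for j in range(W - w + 1)] for i in range(H - h + 1)])
    return np.unravel_index(scores.argmax(), scores.shape)
```

Production implementations compute the same scores in the Fourier domain or with integral images, since the brute-force search is quadratic in both image and template size.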
390

On continuous maximum flow image segmentation algorithm

Marak, Laszlo 28 March 2012 (has links) (PDF)
In recent years, with the advance of computing equipment and image acquisition techniques, the sizes, dimensions and content of acquired images have increased considerably. Unfortunately, as time passes there is a steadily increasing gap between the classical and parallel programming paradigms and their actual performance on modern computer hardware. In this thesis we consider in depth one particular algorithm, the continuous maximum flow computation. We review in detail why this algorithm is useful and interesting, and we propose efficient and portable implementations on various architectures. We also examine how it performs in terms of segmentation quality on some recent problems of materials science and nano-scale biology.
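For intuition, the discrete counterpart of continuous maximum flow is the classical max-flow/min-cut computation, where the minimum cut separating a source (object seed) from a sink (background seed) gives the segmentation boundary. A minimal Edmonds–Karp sketch on a dictionary-of-capacities graph (illustrative only; the thesis's continuous formulation is solved with PDE schemes on parallel hardware, not with augmenting paths):

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly find a shortest augmenting path from s to t
    # by BFS in the residual graph and push the bottleneck capacity along
    # it. By max-flow/min-cut duality the result equals the min cut value.
    r = dict(cap)                      # residual capacities
    for (u, v) in list(cap):
        r.setdefault((v, u), 0)        # reverse edges start empty
    nodes = {n for e in r for n in e}
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:   # BFS over positive residual edges
            u = q.popleft()
            for v in nodes:
                if r.get((u, v), 0) > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:            # no augmenting path left: done
            return total
        path, v = [], t                # recover the path found by BFS
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(r[e] for e in path)    # bottleneck capacity on the path
        for e in path:
            r[e] -= b
            r[(e[1], e[0])] += b
        total += b
```

The continuous formulation replaces this combinatorial search with a flow field defined at every point of the image domain, which is what makes it amenable to regular, highly parallel PDE updates.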
