1 |
Segmentation of Bones in 3D CT Images. Krčah, Marcel, January 2011 (has links)
Accurate and automatic segmentation techniques that do not require any explicit prior model have been of high interest in the medical community. We propose a fully automatic method for segmenting the femur from 3D Computed Tomography scans, based on the graph-cut segmentation framework and a bone boundary enhancement filter that analyzes second-order local structure. The presented algorithm is evaluated in large-scale experiments conducted on 197 CT volumes and compared with three other automatic bone segmentation methods. Out of the four tested approaches, the proposed algorithm achieved the most accurate results and segmented the femur correctly in 81% of the cases.
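The abstract only names the second-order boundary enhancement step. As a hedged illustration of the kind of filter involved (not the thesis's exact filter: the scale sigma and the sheetness score below are illustrative assumptions), a Hessian-based enhancement of sheet-like bone boundaries can be sketched in Python:

```python
# Hypothetical sketch of a Hessian-based bone-boundary enhancement filter.
# The scale sigma and the sheetness score are illustrative assumptions, not
# the exact filter used in the thesis.
import numpy as np
from scipy import ndimage

def hessian_sheetness(volume, sigma=1.0, eps=1e-6):
    """Return a per-voxel score that is large on bright sheet-like structures."""
    v = volume.astype(np.float64)
    # Second-order Gaussian derivatives give the Hessian at scale sigma.
    H = np.empty(v.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            H[..., i, j] = ndimage.gaussian_filter(v, sigma, order=order)
    # Eigenvalues sorted by magnitude: |l1| <= |l2| <= |l3|.
    lam = np.linalg.eigvalsh(H)
    idx = np.argsort(np.abs(lam), axis=-1)
    lam = np.take_along_axis(lam, idx, axis=-1)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    # A bright sheet has one large negative eigenvalue and two small ones.
    score = np.abs(l3) * np.exp(-np.abs(l2) / (np.abs(l3) + eps)) \
                       * np.exp(-np.abs(l1) / (np.abs(l3) + eps))
    score[l3 > 0] = 0.0  # keep only bright-on-dark boundaries
    return score
```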
|
2 |
Saliency Cut: an Automatic Approach for Video Object Segmentation Based on Saliency Energy Minimization. January 2013 (has links)
abstract: Video object segmentation (VOS) is an important task in computer vision with many applications, such as video editing, object tracking, and object-based encoding. Unlike image object segmentation, video object segmentation must consider both spatial and temporal coherence of the object. Despite extensive previous work, the problem remains challenging. Usually, the foreground object in a video draws more human attention, i.e., it is salient. In this thesis we tackle the problem from the perspective of saliency, where saliency means a certain subset of visual information selected by a visual system (human or machine). We present a novel unsupervised method for video object segmentation that considers both low-level vision cues and high-level motion cues. In our model, video object segmentation is formulated as a unified energy minimization problem and solved in polynomial time by employing the min-cut algorithm. Specifically, our energy function comprises a unary term and a pairwise interaction term, where the unary term measures region saliency and the interaction term smooths the mutual effects between object saliency and motion saliency. Object saliency is computed in the spatial domain from each discrete frame using multi-scale context features, e.g., color histograms, gradients, and graph-based manifold ranking. Meanwhile, motion saliency is calculated in the temporal domain by extracting phase information from the video. In the experimental section of this thesis, the proposed method is evaluated on several benchmark datasets. On the MSRA 1000 dataset the results demonstrate that our spatial object saliency detection is superior to state-of-the-art methods. Moreover, our temporal motion saliency detector achieves better performance than existing motion detection approaches on the UCF Sports action analysis dataset and the Weizmann dataset. Finally, we present strong empirical results and a quantitative evaluation of our approach on two benchmark video object segmentation datasets. / Dissertation/Thesis / M.S. Computer Science 2013
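The abstract describes the standard reduction behind this formulation: a binary energy made of unary (saliency) and pairwise (smoothness) terms is mapped onto an s-t graph whose minimum cut yields the optimal labeling. A minimal sketch of that mapping, assuming placeholder weights and a networkx max-flow solver rather than the thesis's actual energy, could look like this:

```python
# Illustrative s-t graph construction for a binary unary + pairwise energy.
# The node costs, neighbourhood and smoothness penalty below are placeholders,
# not the saliency energy actually used in the thesis.
import networkx as nx

def segment_min_cut(unary_fg, unary_bg, edges, smoothness):
    """unary_fg[i]/unary_bg[i]: cost of labeling node i foreground/background;
    edges: iterable of (i, j) neighbour pairs; smoothness: pairwise penalty."""
    G = nx.DiGraph()
    for i, (cf, cb) in enumerate(zip(unary_fg, unary_bg)):
        G.add_edge("s", i, capacity=cb)  # cut if i is labeled background
        G.add_edge(i, "t", capacity=cf)  # cut if i is labeled foreground
    for i, j in edges:
        # Symmetric Potts-style smoothness: pay when neighbours disagree.
        G.add_edge(i, j, capacity=smoothness)
        G.add_edge(j, i, capacity=smoothness)
    cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
    labels = {i: 1 if i in source_side else 0 for i in range(len(unary_fg))}
    return cut_value, labels

# Tiny usage example: a four-node chain, two salient nodes and two background.
value, labels = segment_min_cut(
    unary_fg=[0.1, 0.2, 0.9, 0.8],  # low cost -> node prefers foreground
    unary_bg=[0.9, 0.8, 0.1, 0.2],
    edges=[(0, 1), (1, 2), (2, 3)],
    smoothness=0.3)
```

In the thesis, the unary costs would come from the combined object and motion saliency and the pairwise weights from spatio-temporal neighbourhoods; the toy call above only exercises the graph construction.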
|
3 |
Interactive Part Selection for Mesh and Point Models Using Hierarchical Graph-cut Partitioning. Brown, Steven W., 16 June 2008 (has links) (PDF)
This thesis presents a method for interactive part selection for mesh and point set surface models that combines scribble-based selection methods with hierarchically accelerated graph-cut segmentation. Using graph-cut segmentation to determine optimal intuitive part boundaries enables easy part selection on complex geometries and allows for a simple, scribble-based interface that focuses on selecting within visible parts instead of precisely defining part boundaries that may be in difficult or occluded regions. Hierarchical acceleration is used to maintain interactive speeds with large models and to determine connectivity when extending the technique to point set models.
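As a hedged sketch of how scribble input typically enters such a graph-cut formulation (the thesis's exact weighting scheme is not reproduced here), vertices covered by a scribble can be pinned as hard seeds by giving them terminal costs that dominate every other weight in the graph:

```python
# Hypothetical conversion of scribbles into hard seeds for a graph cut.
# INF only has to dominate every other capacity in the graph.
INF = 1e9

def apply_scribbles(unary_fg, unary_bg, fg_seeds, bg_seeds):
    """fg_seeds / bg_seeds: indices of vertices covered by the user's scribbles."""
    for i in fg_seeds:
        unary_fg[i] = 0.0   # free to label as part of the selection...
        unary_bg[i] = INF   # ...prohibitively expensive to label otherwise
    for i in bg_seeds:
        unary_fg[i] = INF
        unary_bg[i] = 0.0
    return unary_fg, unary_bg
```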
|
4 |
Hierarchical Logcut: A Fast And Efficient Way Of Energy Minimization Via Graph Cuts. Kulkarni, Gaurav, 06 1900 (has links) (PDF)
Graph cuts have emerged as an important combinatorial optimization tool for many problems in vision. Many computer vision problems are discrete labeling problems. For example, in stereopsis the labels represent disparity, and in image restoration the labels correspond to image intensities. Finding a good labeling involves optimizing an energy function. In computer vision, energy functions for discrete labeling problems can be elegantly formulated through Markov Random Field (MRF) based modeling, and graph cut algorithms have been found to efficiently optimize a wide class of such energy functions.
The main contribution of this thesis lies in developing an efficient combinatorial optimization algorithm that can be applied to a wide class of energy functions. Generally, graph cut algorithms deal sequentially with each label in the labeling problem at hand, so their time complexity increases linearly with the number of labels. Our algorithm finds a labeling in logarithmic time complexity without compromising the quality of the solution.
In our work, we present an improved Logcut algorithm [24]. The Logcut algorithm finds the individual bit values in the integer representation of the labels. It has logarithmic time complexity, but requires training over a data set. Our improved Logcut (Heuristic Logcut, or H-Logcut) algorithm eliminates the need for training and obtains results comparable to the original Logcut algorithm.
The original Logcut algorithm cannot be initialized with a known labeling. We present a new algorithm, Sequential Bit Plane Correction (SBPC), which overcomes this drawback. SBPC starts from a known labeling and corrects each bit of a label individually; it too has logarithmic time complexity. SBPC in combination with the H-Logcut algorithm further improves the rate of convergence and the quality of the results.
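A hedged sketch of the general bit-plane idea behind Logcut and SBPC, solving one binary graph-cut subproblem per bit of the label from the most to the least significant bit, is given below; `binary_graph_cut` is a placeholder for any binary fusion-style solver and is an assumption, not the thesis's routine.

```python
# Illustrative bit-plane correction loop: O(log L) binary subproblems for L labels.
# `binary_graph_cut(energy, proposal_a, proposal_b)` is assumed to return, per
# pixel, True where proposal_b is preferred; it is not the thesis's solver.
import numpy as np

def sequential_bit_plane_correction(energy, init_labels, num_bits, binary_graph_cut):
    labels = np.asarray(init_labels).copy()
    for bit in reversed(range(num_bits)):   # most significant bit first
        mask = 1 << bit
        proposal_a = labels & ~mask         # current labeling with this bit = 0
        proposal_b = labels | mask          # current labeling with this bit = 1
        choose_b = binary_graph_cut(energy, proposal_a, proposal_b)
        labels = np.where(choose_b, proposal_b, proposal_a)
    return labels
```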
Finally, a hierarchical approach to graph cut optimization is used to further improve the rate of convergence of our algorithm. Generally, in a hierarchical approach a solution is first computed at a coarser level and its result is then used to initialize the algorithm at a finer level. Here we present a novel way of initializing the finer level through the fusion move [25]. The SBPC and H-Logcut algorithms are extended to accommodate the hierarchical approach. It is found that this approach drastically improves the rate of convergence and attains a very low-energy labeling.
The effectiveness of our approach is demonstrated on stereopsis. It is found that the algorithm significantly outperforms all existing algorithms in terms of both the quality of the solution and the rate of convergence.
|
5 |
Computational models for structural analysis of retinal images. Kaba, Djibril, January 2014 (has links)
The evaluation of retinal structures has been of great interest because it could be used as a non-intrusive diagnostic tool in modern ophthalmology to detect many important eye diseases as well as cardiovascular disorders. A variety of retinal image analysis tools have been developed to assist ophthalmologists and eye disease experts by reducing the time required for eye screening, optimising costs and providing efficient disease treatment and management systems. A key component of these tools is the segmentation and quantification of retinal structures. However, imaging artefacts such as noise, intensity inhomogeneity and the overlapping tissue of retinal structures can significantly degrade the performance of these automated image analysis tools. This thesis aims to provide robust and reliable automated retinal image analysis techniques to allow for early detection of various retinal and other diseases. In particular, several innovative segmentation methods have been proposed: two for retinal vessel network segmentation, two for optic disc segmentation and one for retinal nerve fibre layer detection. First, three pre-processing operations are combined in the segmentation method to remove noise and enhance the appearance of the blood vessels in the image, and a Mixture of Gaussians is used to extract the blood vessel tree. Second, a graph cut segmentation approach is introduced, which incorporates the mechanism of vector flux into the graph formulation to allow for the segmentation of very narrow blood vessels. Third, the optic disc segmentation is performed using two alternative methods: the Markov random field image reconstruction approach detects the optic disc by removing the blood vessels from the optic disc area, while the graph cut with compensation factor method achieves this using prior information about the blood vessels. Fourth, the boundaries of the retinal nerve fibre layer (RNFL) are detected by adapting a graph cut segmentation technique that includes a kernel-induced space and a continuous multiplier-based max-flow algorithm. Our retinal blood vessel segmentation methods, based on the Mixture of Gaussians and on graph cut, achieved average accuracies of 94.33% and 94.27%, respectively. Our optic disc segmentation methods, based on the Markov random field and on the compensation factor, achieved average sensitivities of 92.85% and 85.70%, respectively. These results, obtained on several public datasets and compared with existing methods, show that the proposed methods are robust and efficient in segmenting retinal structures such as the blood vessels and the optic disc.
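As a hedged illustration of the Mixture-of-Gaussians step only (the single-channel feature, pre-processing and two-component model are simplifying assumptions, not the thesis's exact pipeline), vessel and background intensities can be separated with scikit-learn:

```python
# Illustrative two-component Gaussian mixture over enhanced pixel intensities.
# Assumes vessels appear as the brighter component of the enhanced image.
import numpy as np
from sklearn.mixture import GaussianMixture

def vessel_mask_from_mog(enhanced_image):
    x = enhanced_image.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    components = gmm.fit_predict(x)
    vessel_component = int(np.argmax(gmm.means_.ravel()))
    return (components == vessel_component).reshape(enhanced_image.shape)
```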
|
6 |
Core Issues in Graph Based Perceptual Organization: Spectral Cut Measures, Learning. Soundararajan, Padmanabhan, 29 March 2004 (has links)
Grouping is a vital precursor to object recognition. The complexity of the object recognition process can be reduced to a large extent by using a front-end grouping process. In this dissertation, a grouping framework based on spectral methods for graphs is used. The objects are segmented from the background by means of an associated learning process that decides on the relative importance of the basic salient relationships such as proximity, parallelism, continuity, junctions and common region. While much of the previous research has focused on using simple relationships like similarity, proximity, continuity and junctions, this work differentiates itself by using all the relationships listed above. The parameters of the grouping process are cast as probabilistic specifications of Bayesian networks that need to be learned; the learning is accomplished by a team of stochastic learning automata.
One of the stages in the grouping process is graph partitioning. There is a variety of cut measures on which partitioning can be based, and different measures give different partitioning results. This work looks at three popular cut measures, namely the minimum, average and normalized cuts. Theoretical and empirical insight into the nature of these partitioning measures in terms of the underlying image statistics is provided. In particular, the questions addressed are as follows: For what kinds of image statistics would optimizing a measure, irrespective of the particular algorithm used, result in correct partitioning? Is the quality of the groups significantly different for each cut measure? Are there classes of images for which grouping by partitioning is not suitable? Does a recursive bi-partitioning strategy separate out the groups corresponding to K objects from each other?
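For reference, these measures are commonly written as follows for a partition of the vertex set V into A and B, where cut(A, B) is the total weight of edges crossing the partition and assoc(A, V) the total weight of edges incident to A; the minimum cut measure minimizes cut(A, B) itself. These are the standard textbook forms, and the dissertation's exact normalizations may differ in detail.

```latex
\[
\mathrm{cut}(A,B) = \sum_{u \in A,\, v \in B} w(u,v), \qquad
\mathrm{assoc}(A,V) = \sum_{u \in A,\, v \in V} w(u,v),
\]
\[
\mathrm{AvgCut}(A,B) = \frac{\mathrm{cut}(A,B)}{|A|} + \frac{\mathrm{cut}(A,B)}{|B|}, \qquad
\mathrm{NCut}(A,B) = \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)} + \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)}.
\]
```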
The major conclusion is that optimization of none of the above three measures is guaranteed to result in the correct partitioning of K objects, in the strict stochastic order sense, for all image statistics. Qualitatively speaking, under very restrictive conditions, when the average inter-object feature affinity is very weak compared with the average intra-object feature affinity, the minimum cut measure is optimal. The average cut measure is optimal for graphs whose partition width is less than the mode of the distribution of all possible partition widths. The normalized cut measure is optimal for a more restrictive subclass of graphs whose partition width is less than the mode of the partition width distribution and whose inter-object links are six times weaker than the intra-object links. The learning framework described in the first part of the work is used to empirically evaluate the cut measures. Rigorous empirical evaluation on 100 real images indicates that, in practice, the quality of the groups generated using minimum, average or normalized cuts is statistically equivalent for object recognition, i.e. the best, the mean, and the variation of the qualities are statistically equivalent. Another conclusion is that for certain image classes, such as aerial images and scenes with man-made objects in man-made surroundings, the performance of grouping by partitioning is the worst, irrespective of the cut measure.
|
7 |
Automated Building Detection From Satellite Images By Using Shadow Information As An Object Invariant. Baris, Yuksel, 01 October 2012 (has links) (PDF)
Apart from classical pattern recognition techniques applied to automated building detection in satellite images, a robust building detection methodology is proposed in which self-supervision data can be automatically extracted from the image by using shadow and its direction as an invariant for the building object. In this methodology, first the vegetation, water and shadow regions are detected from a given satellite image, and local directional fuzzy landscapes representing the likely presence of buildings are generated from the shadow regions using the direction of illumination obtained from the image metadata. For each landscape, foreground (building) and background pixels are automatically determined and a bipartitioning is obtained using a graph-based algorithm, GrabCut. Finally, the local results are merged to obtain the final building detection result. Considering the performance evaluation results, this approach can be seen as a proof of concept that shadow is an invariant for the building object and that promising detection results can be obtained even when only a single invariant is used.
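As a hedged sketch of the GrabCut bipartitioning step using OpenCV (the thresholds that turn the fuzzy landscape into sure/probable seeds are placeholders, not the values used in the thesis):

```python
# Illustrative mask-initialised GrabCut run for one local fuzzy landscape.
# image_bgr: 8-bit 3-channel image; landscape: per-pixel building likelihood in [0, 1].
# The thresholds below are placeholder assumptions.
import cv2
import numpy as np

def grabcut_landscape(image_bgr, landscape, fg_thresh=0.8, bg_thresh=0.1):
    mask = np.full(landscape.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    mask[landscape > bg_thresh] = cv2.GC_PR_FGD   # plausible building pixels
    mask[landscape > fg_thresh] = cv2.GC_FGD      # confident building seeds
    mask[landscape <= 0] = cv2.GC_BGD             # outside the landscape
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
```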
|
8 |
FPGA Implementation Of Graph Cut Method For Real Time Stereo Matching. Saglik Ozsarac, Havva, 01 September 2010 (has links) (PDF)
The present graph cut methods cannot be used directly for real-time stereo matching applications because of their recursive structure. The graph cut method is modified to remove its recursive structure, making it suitable for real-time FPGA (Field Programmable Gate Array) implementation. The modified method is first tested in MATLAB on several data sets, and the results are compared with those of previous studies. Although the disparity results of the modified method are not better than those of other methods, its computation time performance is better. Secondly, the FPGA simulation is performed using real data sets. Finally, the modified method is implemented on an FPGA with two PAL cameras at 25 Hz. The computation time of the implementation is 40 ms, which is suitable for real-time applications.
|
9 |
Left ventricle segmentation methods based on the graph cut algorithm for magnetic resonance and echocardiographic images. Bernier, Michaël, January 2017 (has links)
Echocardiography and magnetic resonance imaging are both non-invasive techniques used clinically to diagnose or monitor heart disease. The former measures the delay between the emission and reception of ultrasound waves travelling through the body, while the latter measures an electromagnetic signal generated by the hydrogen protons present in the human body. The acquisitions produced by these two imaging modalities are fundamentally different, but both contain information about the structures of the human heart. Segmenting the left ventricle consists of delineating the inner walls of the heart muscle, the myocardium, in order to compute various clinical metrics useful for the diagnosis and follow-up of different heart diseases, such as the amount of blood pumped with each heartbeat. Following an infarction or another condition, both the performance and the shape of the heart are affected. Imaging of the left ventricle is used to help cardiologists make the right diagnoses. However, manually tracing the left ventricle takes expert cardiologists a non-negligible amount of time, hence the interest in a fast and reliable automated segmentation method.
This thesis deals with the segmentation of the left ventricle. Most existing methods are specific to a single imaging modality. The method proposed in this document can quickly process acquisitions from both modalities with a segmentation accuracy equivalent to an expert's manual tracing. To achieve this, it operates in an anatomical space, thereby inducing an implicit shape prior. The Graph Cut algorithm, combined with strategies such as probability maps and regional convex hulls, generates results that match (or, in the majority of cases, surpass) the state of the art at the time this thesis was written. The performance of the proposed method relative to the state of the art was demonstrated in an international challenge. It is also exhaustively validated on three complete databases by comparison with the manual tracings of two experts and the automated tracings of the Syngovia software. This research is a collaborative project with the Université de Bourgogne, France.
|
10 |
Live Surface. Armstrong, Christopher J., 24 February 2007 (has links) (PDF)
Live Surface allows users to segment and render complex surfaces from 3D image volumes at interactive (sub-second) rates using a novel Cascading Graph Cut (CGC). Live Surface consists of two phases: (1) preprocessing, in which a complete 3D watershed hierarchy is generated and all catchment basin surfaces are tracked; and (2) user interaction, in which, with each mouse movement, the 3D object is selected and rendered in real time. Real-time segmentation is accomplished by cascading through the 3D watershed hierarchy from the top, applying graph cut successively at each level only to catchment basins bordering the segmented surface from the previous level. CGC allows the entire image volume to be segmented an order of magnitude faster than existing techniques that make use of graph cut. OpenGL rendering provides for display and update of the segmented surface at interactive rates. The user selects objects by tagging voxels with either foreground (object) or background seeds. Seeds can be placed on image cross-sections or directly on the 3D rendered surface. Interaction with the rendered surface improves the user's ability to steer the segmentation, augmenting or subtracting from the current selection. Segmentation and rendering, combined, are accomplished in about 0.5 seconds, allowing 3D surfaces to be displayed and updated dynamically as each additional seed is deposited. The immediate feedback of Live Surface allows for the segmentation of 3D image volumes with an interaction paradigm similar to the Live Wire (Intelligent Scissors) tool used in 2D images.
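A hedged sketch of the cascading idea (the hierarchy data structures and the seeded `binary_graph_cut` solver below are assumptions, not the thesis's implementation): at each level, only the child basins of regions touching the boundary found at the coarser level are re-solved, while all other basins inherit their parent's label.

```python
# Illustrative coarse-to-fine cascade through a watershed hierarchy.
# children[k] maps each region at level k to its child basins at level k+1,
# adjacency[k] lists neighbouring region pairs at level k, and
# binary_graph_cut(regions, adjacency, seeds) -> {region: 0/1} is a placeholder
# for any seeded binary solver. All of these interfaces are assumptions.
def cascading_graph_cut(top_regions, children, adjacency, seeds, binary_graph_cut):
    labels = binary_graph_cut(top_regions, adjacency[0], seeds)
    for level in range(1, len(children) + 1):
        # A parent is "on the boundary" if some neighbour received the other label.
        boundary = {r for (a, b) in adjacency[level - 1]
                    if labels[a] != labels[b] for r in (a, b)}
        # Children inherit their parent's label by default ...
        child_labels = {c: labels[p]
                        for p, kids in children[level - 1].items() for c in kids}
        # ... and only children of boundary parents are re-solved at this level.
        active = [c for p in boundary for c in children[level - 1][p]]
        child_labels.update(binary_graph_cut(active, adjacency[level], seeds))
        labels = child_labels
    return labels
```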
|