
From interactive to semantic image segmentation

Gulshan, Varun. January 2011.
This thesis investigates two well-defined problems in image segmentation, viz. interactive and semantic image segmentation. Interactive segmentation involves assisting a user in cutting objects out of an image, whereas semantic segmentation involves partitioning the pixels of an image into object categories. We investigate various models and energy formulations for both problems. To improve the performance of interactive systems, low-level texture features are introduced as a replacement for the more commonly used RGB features. To quantify the improvement obtained by using these texture features, two annotated image datasets are introduced (one consisting of natural images, the other of camouflaged objects). A significant improvement in performance is observed when using texture features for monochrome images and images containing camouflaged objects. We also explore adding mid-level cues such as shape constraints to interactive segmentation by introducing the idea of geodesic star convexity, which extends the existing star convexity prior in two important ways: (i) it allows for multiple star centres, as opposed to the single star of the original prior, and (ii) it generalises the shape constraint by allowing geodesic paths instead of Euclidean rays. Global minima of our energy function can be obtained subject to these new constraints. We also introduce geodesic forests, which exploit the structure of shortest paths in implementing the extended constraints. These extensions to star convexity allow us to use such constraints in a practical segmentation system. This system is evaluated by means of a "robot user" that measures the amount of interaction required in a precise way, and it is shown that shape constraints reduce user effort significantly compared to existing interactive systems. We also introduce a new and harder dataset which augments the existing GrabCut dataset with more realistic images and ground truth taken from the PASCAL VOC segmentation challenge. In the latter part of the thesis, we bring in object-category-level information in order to make interactive segmentation tasks easier, and move towards fully automated semantic segmentation. An algorithm is presented that automatically segments humans from cluttered images given their bounding boxes: a top-down segmentation of the human is obtained using classifiers trained to predict segmentation masks from local HOG descriptors, and these masks are then combined with bottom-up image information in a local GrabCut-like procedure. This algorithm is later fully automated to segment humans without requiring a bounding box, and is quantitatively compared with other semantic segmentation methods. We also introduce a novel way to acquire large quantities of segmented training data relatively effortlessly using the Kinect. In the final part of this work, we explore various semantic segmentation methods based on learning from bottom-up super-pixelisations. Different methods of combining multiple super-pixelisations are discussed and quantitatively evaluated on two segmentation datasets. We observe that simple combinations of independently trained classifiers on single super-pixelisations perform almost as well as complex methods based on jointly learning across multiple super-pixelisations. We also explore CRF-based formulations for semantic segmentation, and introduce a novel visual-words-based description of object boundaries into the energy formulation. The object appearance and boundary parameters are trained jointly using structured output learning methods, and the benefit of adding pairwise terms is quantified on two different datasets.
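A minimal sketch of the geodesic-distance computation underlying geodesic star convexity: Dijkstra's algorithm over the pixel grid, run from several star centres at once, with edge costs that grow with intensity change so that shortest paths bend around strong image edges. The function name, the 4-connectivity, and the cost-mixing parameter `gamma` are illustrative assumptions, not the thesis implementation.

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, gamma=1.0):
    """Multi-source Dijkstra on the pixel grid. `seeds` is a list of
    (row, col) star centres; the step cost mixes unit length with the
    local intensity difference, giving image-aware geodesic paths."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    pq = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                step = 1.0 + gamma * abs(float(image[nr, nc]) - float(image[r, c]))
                if d + step < dist[nr, nc]:
                    dist[nr, nc] = d + step
                    heapq.heappush(pq, (d + step, nr, nc))
    return dist

img = np.random.rand(64, 64)  # stand-in image
d = geodesic_distance(img, seeds=[(10, 10), (50, 40)])  # multiple star centres
```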

Three-dimensional geometric image analysis for interventional electrophysiology

McManigle, John E. January 2014.
Improvements in imaging hardware, computational power, and algorithmic design are driving advances in interventional medical imaging. Here we lay the groundwork for more effective use of machine learning and image registration in clinical electrophysiology. To identify atrial fibrosis from image data, we registered the electroanatomic map (EAM) data of atrial fibrillation (AF) patients undergoing pulmonary vein isolation (PVI) with MR (n = 16) or CT (n = 18) images. The relationship between image features and bipolar voltage was evaluated using single-parameter regression and random forest models. The random forest performed significantly better than regression, identifying fibrosis with an area under the receiver operating characteristic curve (AUC) of 0.746 (MR) and 0.977 (CT). This is the first evaluation of voltage prediction using image data. Next, we compared the character of native atrial fibrosis with ablation scar in MR images. Fourteen AF patients undergoing repeat PVI were recruited. EAM data from their first PVI were registered to the MR images acquired before the first PVI ('pre-operative') and before the second PVI ('post-operative' with respect to the first PVI). Non-ablation map points had similar characteristics in the two images, while ablation points exhibited higher intensity and more heterogeneity in post-operative images: ablation scar is more strongly enhancing and more heterogeneous than native fibrosis. Finally, we addressed myocardial measurement in 3-D echocardiograms. The circular Hough transform was modified with a feature asymmetry filter, epicardial edges, and a search constraint. Manual and Hough measurements were compared in 5641 slices from 3-D images. The enhanced Hough algorithm was more accurate than the unmodified version (Dice coefficient 0.77 vs. 0.58). This method promises utility in segmentation-assisted cross-modality registration. By improving the information that can be extracted from medical images and the ease with which that information can be accessed, this work will contribute to the advancing integration of imaging in electrophysiology.
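As a hedged illustration of the voltage-prediction step, the sketch below trains a random forest on per-point image features and scores it with the area under the ROC curve, mirroring the reported AUC figures. The data, feature columns, and labelling rule are synthetic stand-ins, not the thesis pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical data: rows are EAM points projected onto the image,
# columns are local image features (intensity, texture, thickness...).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000)) > 1.0  # 1 = low voltage (fibrosis)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")  # the thesis reports 0.746 (MR) and 0.977 (CT)
```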

Tumour vessel structural analysis and its application in image analysis

Wang, Po. January 2010.
Abnormal vascular structure has been identified as one of the major characteristics of tumours. In this thesis, we carry out quantitative analysis of different tumour vascular structures and investigate the relationship between vascular structure and its transport efficiency. We first study segmentation methods to extract binary vessel representations from microscope images, and find that local phase-hysteresis thresholding is able to segment vessel objects from noisy microscope images. We also study methods to extract the centre lines of segmented vessel objects, a process termed skeletonization. We modified the conventional thinning method to regularize the extremely asymmetrical structures found in the segmented vessel objects, and found this method capable of producing vessel skeletons with satisfactory accuracy. We have developed software for 3D vessel structural analysis, consisting of four major parts: image segmentation, vessel skeletonization, skeleton modification and structure quantification. The software implements the local phase-hysteresis thresholding and structure regularization-thinning methods, and a GUI enables users to alter the skeleton structures based on their subjective judgement. Radius and inter-branch length quantification can be conducted on the segmentation and skeletonization results. The accuracy of the segmentation, skeletonization and quantification methods has been tested on several synthesized datasets. The change in tumour vascular structure after drug treatment was then investigated: we proposed metrics to quantify tumour vascular geometry and statistically analysed the effect of the tested drugs on normalizing tumour vascular structure. Finally, we developed a spatio-temporal model to simulate the delivery of oxygen and 3-[18F]fluoro-1-(2-nitro-1-imidazolyl)-2-propanol (Fmiso), the hypoxia tracer that produces the PET signal in Fmiso PET scanning. This model is based on compartmental models, but also considers the spatial diffusion of oxygen and Fmiso. We validated our model on in vitro spheroid data and simulated the oxygen and Fmiso distributions on the segmented vessel images. We contend that the tumour Fmiso distribution (as observed in Fmiso PET imaging) is caused by the abnormal tumour vascular structure, which in turn arises from the tumour angiogenesis process. We outline a modelling framework to investigate the relationships between tumour angiogenesis, vessel structure and Fmiso distribution, which will be the focus of our future work.
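A minimal sketch of hysteresis thresholding followed by skeletonization using scikit-image, in the spirit of the segmentation and centre-line steps described above. The stand-in image, the Gaussian feature map, and the threshold values are assumptions; the thesis pairs hysteresis with local phase features and uses a modified thinning method rather than the library routine.

```python
import numpy as np
from skimage.filters import apply_hysteresis_threshold, gaussian
from skimage.morphology import skeletonize

# Stand-in for a noisy microscope slice; the thesis uses a local-phase
# feature map, approximated here by a smoothed intensity map.
image = np.random.rand(256, 256)
feature = gaussian(image, sigma=2)

# Hysteresis: keep weak responses only when connected to strong ones,
# suppressing isolated noise while preserving faint vessel segments.
mask = apply_hysteresis_threshold(feature, low=0.45, high=0.55)

# Centre-line extraction (skeletonization) of the binary vessel mask.
skeleton = skeletonize(mask)
print(mask.sum(), skeleton.sum())
```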

MRI image analysis for abdominal and pelvic endometriosis

Chi, Wenjun. January 2012.
Endometriosis is an oestrogen-dependent gynaecological condition defined as the presence of endometrial tissue outside the uterine cavity. The condition is predominantly found in women in their reproductive years, and is associated with significant pelvic and abdominal chronic pain and infertility. A recent study suggests that the disease affects approximately 33% of women. Currently, surgical intervention, often laparoscopic surgery, is the gold standard for diagnosing the disease, and it remains an effective and common treatment for all stages of endometriosis. Magnetic resonance imaging (MRI) of the patient is performed before surgery in order to locate any endometriosis lesions and to determine whether a multidisciplinary surgical team meeting is required. In this dissertation, our goal is to use image processing techniques to aid surgical planning. Specifically, we aim to improve the quality of the existing images, and to automatically detect bladder endometriosis lesions in MR images as a form of bladder wall thickening. One of the main problems posed by abdominal MRI is the sparse, anisotropic frequency sampling process: the resulting images consist of thick slices with gaps between them. We have devised a method to fuse multi-view MRI consisting of axial/transverse, sagittal and coronal scans, in an attempt to restore an isotropic, densely sampled frequency plane in the fused image. The proposed fusion method is steerable and can fuse component images in any orientation. To achieve this, we apply the Riesz transform for image decomposition and reconstruction in the frequency domain, and we propose an adaptive rule to fuse multiple Riesz components of images in different orientations. The adaptive fusion is parameterised and switches between combining frequency components via the mean rule and the maximum rule, which is effectively a trade-off between smoothing the intrinsically noisy images and retaining the sharp delineation of features. We first validate the method on simulated images and compare it with a fusion scheme based on the discrete wavelet transform; the results show that the proposed method is better in both accuracy and computational time. Improvements of fused clinical images over the unfused raw images are also illustrated. For the segmentation of the bladder wall, we investigate the level set approach. While traditional gradient-based feature detection is prone to intensity non-uniformity, we present a novel way to compute phase congruency as a reliable feature representation. To avoid the phase-wrapping problem of inverse trigonometric functions, we devise a mathematically elegant and efficient way to combine multi-scale image features via geometric algebra. Compared with the original phase congruency, the proposed method is more robust against noise and hence more suitable for clinical data. To address the practical issues in segmenting the bladder wall, we propose two coupled level set frameworks that utilise information from two different MRI sequences of the same patient - the T2- and T1-weighted images. The results demonstrate a dramatic decrease in the number of failed segmentations compared with using a single kind of image. The resulting automated segmentations are finally validated by comparison with manual segmentations performed in 2D.
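The adaptive fusion rule can be illustrated with a plain FFT stand-in: per frequency, blend the mean of the view coefficients (noise smoothing) with the largest-magnitude coefficient (feature retention). This is only a sketch under that simplification; the thesis decomposes with the Riesz transform rather than a plain FFT, and the `beta` parameter and function name are invented for illustration.

```python
import numpy as np

def fuse_views(views, beta=0.5):
    """Fuse co-registered views in the frequency domain. beta in [0, 1]
    trades smoothing (mean rule) against feature retention (max rule).
    Illustrative sketch only."""
    spectra = np.stack([np.fft.fft2(v) for v in views])
    mean_coeff = spectra.mean(axis=0)
    idx = np.abs(spectra).argmax(axis=0)              # strongest view per frequency
    max_coeff = np.take_along_axis(spectra, idx[None], axis=0)[0]
    fused = (1 - beta) * mean_coeff + beta * max_coeff
    return np.fft.ifft2(fused).real

axial, sagittal = np.random.rand(2, 128, 128)  # stand-ins for registered scans
fused = fuse_views([axial, sagittal], beta=0.7)
```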

Computer-assisted volumetric tumour assessment for the evaluation of patient response in malignant pleural mesothelioma

Chen, Mitchell. January 2011.
Malignant pleural mesothelioma (MPM) is an aggressive tumour that is almost always associated with prior exposure to asbestos. Currently responsible for over 47,000 deaths worldwide each year and rising, it poses a serious threat to global public health. Many clinical studies of MPM, including its diagnosis, prognostic planning, and the evaluation of treatments, necessitate the accurate quantification of tumours based on medical image scans, primarily computed tomography (CT). Current clinical best practice applies the MPM-adapted Response Evaluation Criteria in Solid Tumours (MPM-RECIST) scheme, which provides a uni-dimensional measure of the tumour's size. However, the low CT contrast between the tumour and surrounding tissues, the extensive elongated growth pattern characteristic of MPM, and, as a consequence, the pronounced partial volume effect collectively contribute to the significant intra- and inter-observer variations in MPM-RECIST values seen in clinical practice, which in turn greatly affect clinical judgement and outcome. In this thesis, we present a novel computer-assisted approach to evaluating MPM patient response to treatment, based on volumetric tumour assessment (VTA), i.e. the volumetric segmentation of tumours on CT. We have developed a 3D segmentation routine based on the Random Walk (RW) segmentation framework of L. Grady, which is notable for its good performance in handling weak tissue boundaries and its ability to segment arbitrary shapes given appropriately placed initialisation points. Results also show its benefit in computation time compared to other candidate methods such as level sets. We have also added a boundary enhancement regulariser, inspired by anisotropic diffusion, to improve RW performance on smooth MPM boundaries. To reduce the required level of user supervision, we developed a registration-assisted segmentation option. Finally, we achieved effective and highly manoeuvrable partial volume correction by applying reverse diffusion-based interpolation. To assess its clinical utility, we applied our method to a set of 48 CT studies from a group of 15 MPM patients and compared the findings to the MPM-RECIST observations made by a clinical specialist. The correlations confirm the utility of our algorithm for assessing MPM treatment response. Furthermore, our 3D algorithm found applications in monitoring patient quality of life and in palliative care planning. For example, segmented aerated lungs demonstrated very good correlation with the VTA-derived patient responses, suggesting their use in assessing the pulmonary function impairment caused by the disease. Likewise, segmented fluids highlight sites of pleural effusion and may potentially assist in intra-pleural fluid drainage planning. Throughout this thesis, to meet the demands of probabilistic analyses of data, we have used the Non-Parametric Windows (NPW) probability density estimator. NPW outperforms the histogram in terms of smoothness and the kernel density estimator in terms of parameter setting, and it preserves signal properties such as the order of occurrence and the band-limitedness of the sample, which are important for tissue reconstruction from discrete image data. We have also worked on extending this estimator to the analysis of vector-valued quantities, which are essential for multi-feature studies involving values such as image colour, texture, heterogeneity and entropy.
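The Random Walk framework the thesis builds on is available in scikit-image, so a hedged sketch of the basic seeded step looks as follows. The stand-in image, seed placement and `beta` value are illustrative, and the thesis adds a boundary-enhancement regulariser and registration assistance not shown here.

```python
import numpy as np
from skimage.segmentation import random_walker

# Hypothetical CT slice with user-placed initialisation points (seeds):
# label 1 = tumour, label 2 = background, 0 = unlabelled.
ct_slice = np.random.rand(128, 128)        # stand-in for a CT slice
labels = np.zeros_like(ct_slice, dtype=int)
labels[64, 64] = 1                         # seed inside the suspected tumour
labels[5, 5] = 2                           # seed in surrounding tissue

# Random Walk segmentation (Grady): each unlabelled pixel takes the
# label its random walker reaches first with highest probability;
# beta controls sensitivity to intensity edges.
seg = random_walker(ct_slice, labels, beta=130, mode='bf')
print(np.unique(seg))
```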

Segmentation and analysis of vascular networks

Allen, K. E. January 2010.
From a clinical perspective, retinal vascular segmentation and analysis are important tasks in quantifying the progression of vascular disease for such prevalent pathologies as diabetic retinopathy, arteriolosclerosis and hypertension. Combined with the emergence of inexpensive digital imaging, retinal fundus images are becoming increasingly available through public databases, fuelling interest in retinal vessel research. Vessel segmentation is a challenging task that must fulfil many requirements: accurate segmentation of both normal and pathological vessels; extraction of vessels of different sizes, from large high-contrast to small low-contrast; minimal user interaction; low computational requirements; and the potential for application across different imaging modalities. We demonstrate a novel and significant improvement to an emerging stochastic vessel segmentation technique, particle filtering, offering better performance at vascular bifurcations and greater extensibility. An alternative deterministic approach is also presented, in the form of a framework utilising morphological Tramline filtering and non-parametric windows pdf estimation. Results of the deterministic algorithm on retinal images match those of state-of-the-art unsupervised methods in terms of pixel accuracy. In analysing retinal vascular networks, an important initial step is to distinguish between arteries and veins in order to proceed with pathological metrics such as branching angle, diameter, length and arteriole-to-venule diameter ratio. Practical difficulties include the lack of intensity and textural differences between arteries and veins in all but the largest vessels, and the obstruction of vessels and connectivity by low contrast or other vessels. To this end, an innovative Markov chain Monte Carlo Metropolis-Hastings framework is formulated for the separation of vessel trees, and is applied to both synthetic and retinal image data with promising results.
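A minimal sketch of the Metropolis-Hastings machinery behind such artery/vein separation: propose a local relabelling, then accept or reject it according to the energy change. The toy energy, proposal, and chain of vessel segments are invented stand-ins; the thesis formulates the moves over vessel trees rather than a simple chain.

```python
import math
import random

def metropolis_hastings(init, energy, propose, n_iter=10000, temp=1.0):
    """Generic Metropolis-Hastings: accept a proposed configuration with
    probability min(1, exp(-(E_new - E_old)/T)). `propose` is assumed
    symmetric for this acceptance rule to be valid."""
    state, e = init, energy(init)
    for _ in range(n_iter):
        cand = propose(state)
        e_cand = energy(cand)
        if e_cand <= e or random.random() < math.exp(-(e_cand - e) / temp):
            state, e = cand, e_cand
    return state

# Toy usage: labels for 10 vessel segments (0 = artery, 1 = vein);
# the stand-in energy penalises adjacent segments sharing a label.
energy = lambda s: sum(a == b for a, b in zip(s, s[1:]))
def propose(s):
    s = list(s)
    s[random.randrange(len(s))] ^= 1  # flip one segment's label
    return s

print(metropolis_hastings([0] * 10, energy, propose))
```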

Segmentation of heterogeneous document images: an approach based on machine learning, connected components analysis, and texture analysis

Bonakdar Sakhi, Omid. 6 December 2012.
Document page segmentation is one of the most crucial steps in document image analysis. It ideally aims to explain the full structure of any document page, distinguishing text zones, graphics, photographs, halftones, figures, tables, etc. Although several attempts have been made to date at achieving correct page segmentation, many difficulties remain. The leader of the project within which this PhD work was funded (*) uses a complete processing chain in which page segmentation mistakes are corrected manually by human operators. Aside from the costs this represents, it demands the tuning of a large number of parameters; moreover, some segmentation mistakes occasionally escape the operators' vigilance. Current automated page segmentation methods perform acceptably on clean printed documents, but they often fail to separate regions in handwritten documents, when the document layout structure is loosely defined, or when side notes are present inside the page. Moreover, tables and advertisements bring additional challenges for region segmentation algorithms. Our method addresses these problems. It is divided into four parts:
1. Unlike most popular page segmentation methods, we first separate the text and graphics components of the page using a boosted decision tree classifier.
2. The separated text and graphics components are used, among other features, to separate columns of text in a two-dimensional conditional random field framework.
3. A text line detection method based on piecewise projection profiles is then applied to detect text lines with respect to text region boundaries.
4. Finally, a new paragraph detection method, trained on common models of paragraphs, is applied to the text lines to find paragraphs based on the geometric appearance of the text lines and their indentation.
Our contribution over existing work lies essentially in the use, or adaptation, of algorithms borrowed from the machine learning literature to solve difficult cases. We demonstrate a number of improvements: in separating text columns when one is situated very close to another; in preventing the contents of a table cell from being merged with the contents of adjacent cells; and in preventing regions inside a frame from being merged with surrounding text regions, especially side notes, even when the latter are written in a font similar to that of the text body. Quantitative assessment, and comparison of the performance of our method with competing algorithms using widely acknowledged metrics and evaluation methodologies, is also provided to a large extent.
(*) This PhD thesis was funded by the Conseil Général de Seine-Saint-Denis, through the FUI6 project Demat-Factory, led by Safig SA.
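A hedged sketch of step 1 above, separating text from graphics components with boosted decision trees; the connected-component features and the labelling rule are synthetic stand-ins for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical connected-component features (height, width, aspect
# ratio, ink density, ...) with labels 0 = text, 1 = graphics.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # stand-in labelling rule

# Boosted decision trees classify each component as text or graphics
# before any column or line analysis takes place.
clf = GradientBoostingClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:5]))
```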

Active contour segmentation based on alpha-divergences: application to the segmentation of medical and biomedical images

Meziou, Leïla Ikram. 28 November 2013.
In the particular field of computer-aided diagnosis, the segmentation of regions of interest, usually corresponding to organs, is still a challenging issue, mainly because of the variety of acquisition modalities whose characteristics (corrupting noise, for instance) differ widely. In this context, this PhD work introduces an original histogram-based active contour segmentation using the alpha-divergence family as the similarity measure: the energy driving the evolution equation relies on a similarity measure between the grey-level probability densities of the regions inside and outside the contour during the iterative segmentation process. The key points of the method are twofold: (i) the flexibility of alpha-divergences, whose intrinsic metric can be parametrized via the alpha value and thus adapted to the statistical distributions of the different regions of the image; and (ii) the ability of alpha-divergences to embed standard distances such as the Kullback-Leibler and Hellinger divergences, which makes them an interesting unifying tool. We first propose a supervised version of the approach, in which the iterative segmentation process derives from the (variational) minimization of the alpha-divergence between the current probability density function and a reference defined a priori, for instance manually. We then focus on an unsupervised version of the method, which removes the need to define references: in that case, the alpha-divergence between the probability density functions of the inner and outer regions defined by the active contour is maximized. In addition, we propose an optimization scheme for the alpha parameter, jointly with the minimization or maximization of the divergence, in order to adapt the divergence iteratively to the statistics of the processed data. Experimentally, a comparative study of the different segmentation schemes is provided: first on noisy and textured synthetic images, then on natural images. Finally, we focus on different kinds of biomedical images (cellular confocal microscopy) and medical ones (X-ray radiography, MRI) in the context of computer-aided diagnosis. In each case, the contribution of alpha-divergences is discussed.
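For concreteness, one common parameterisation of the alpha-divergence between two discrete distributions, together with its Kullback-Leibler limits, can be sketched as follows. The histogram inputs are synthetic stand-ins, and the thesis may use an equivalent variant of the formula.

```python
import numpy as np

def alpha_divergence(p, q, alpha, eps=1e-12):
    """D_a(p||q) = (sum p^a q^(1-a) - 1) / (a(a-1)) for a not in {0, 1}.
    alpha -> 1 recovers KL(p||q); alpha -> 0 recovers KL(q||p);
    alpha = 0.5 is, up to scale, the squared Hellinger distance."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()          # normalise to distributions
    if abs(alpha - 1.0) < 1e-9:
        return float(np.sum(p * np.log(p / q)))   # KL limit
    if abs(alpha) < 1e-9:
        return float(np.sum(q * np.log(q / p)))   # reverse-KL limit
    return float((np.sum(p**alpha * q**(1 - alpha)) - 1) / (alpha * (alpha - 1)))

# Grey-level histograms of inner/outer regions (hypothetical stand-ins):
inner = np.histogram(np.random.normal(0.3, 0.1, 1000), bins=32, range=(0, 1))[0]
outer = np.histogram(np.random.normal(0.7, 0.1, 1000), bins=32, range=(0, 1))[0]
print(alpha_divergence(inner, outer, alpha=0.5))
```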

Graph-based variational optimization and applications in computer vision

Couprie, Camille. 10 October 2011.
Many computer vision applications, such as image filtering, segmentation and stereovision, can be formulated as optimization problems. Recently, discrete, convex, globally optimal methods have received a lot of attention. The "graph cuts" method, widely used in computer vision, is based on solving a discrete maximum-flow problem, but its solutions suffer from metrication artefacts: segmented contours are blocky in areas where contour information is lacking. In the first part of this work, we develop a discrete yet isotropic energy minimization formulation for the continuous maximum-flow problem that prevents metrication errors. This new convex formulation leads to a provably globally optimal solution, and the interior point method employed optimizes the problem faster than existing continuous methods, with guaranteed convergence. The formulation is then efficiently extended to image restoration: using a constrained formulation and a parallel proximal algorithm, the method restores (denoises, deblurs, fuses) images rapidly and preserves contrast better than the classical total variation method. It is also adapted and extended to multi-label problems, showing improvements over existing methods. In the second part of this work, we introduce a framework that generalizes several state-of-the-art graph-based segmentation algorithms, namely graph cuts, random walker, shortest paths, and watershed. This generalization allowed us to exhibit a new case, for which we developed a globally optimal optimization method named "power watershed". Our power watershed algorithm computes a unique global solution to multi-labelling problems, producing a unique watershed cut that is less prone to leaking than the classical watershed, and it is very fast. Expressing the watershed as an optimization problem opens the way to applications beyond image segmentation, for example image filtering by optimizing a non-convex L0-norm energy, stereovision, and the fast reconstruction of smooth surfaces delimiting objects from noisy 3D point clouds.
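The unifying energy behind this framework can be written as a sum over graph edges of w_ij^p * |x_i - x_j|^q; a minimal sketch of evaluating it (not of the optimizer) is given below, with q = 1 recovering graph cuts, q = 2 the random walker, and the limit p -> infinity with finite q the power watershed. The toy graph is invented for illustration.

```python
import numpy as np

def pw_energy(x, edges, weights, p, q):
    """Unified segmentation energy: sum over edges of w_ij^p * |x_i - x_j|^q,
    where x holds per-node label potentials in [0, 1]. Sketch of the
    energy only; the power watershed optimizer is not reproduced here."""
    i, j = edges[:, 0], edges[:, 1]
    return float(np.sum(weights**p * np.abs(x[i] - x[j])**q))

# Toy 4-node chain with one weak edge; the cut sits on the weak edge.
edges = np.array([[0, 1], [1, 2], [2, 3]])
weights = np.array([1.0, 0.1, 1.0])
x = np.array([0.0, 0.0, 1.0, 1.0])
print(pw_energy(x, edges, weights, p=2, q=2))
```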

Parallelization of the watershed transform in weighted graphs on multicore architectures

Braham, Yosra. 24 November 2018.
Our work contributes to the parallelization of the watershed transform, in particular watershed cuts, a notion of watershed introduced in the framework of edge-weighted graphs. We review sequential watershed algorithms in order to motivate the choice of the algorithm studied here, the M-border kernel algorithm. The main objective of this thesis is to parallelize this algorithm in order to reduce its running time. First, we review prior work on the parallelization of the different types of watershed, in order to identify the issues this task raises and the solutions appropriate to our context. Second, we show that despite the locality of the algorithm's basic operation, the lowering of certain edges named M-border edges, its parallel execution raises a data dependency problem, especially at M-border edges that share a common non-minimum vertex. In this context, we propose three parallelization strategies that address this problem. The first strategy divides the initial graph into bands called partitions, processed in parallel by P processors. The second divides the edges of the initial graph alternately into subsets of independent edges. The third examines the vertices instead of the edges of the initial graph, while preserving the thinning paradigm on which the sequential algorithm is based; the set of non-minimum vertices adjacent to minimum ones is then processed in parallel. Finally, we study the parallelization of a segmentation technique based on the M-border kernel algorithm. This technique consists of three main steps: regional minima detection, vertex valuation, and M-border kernel computation. We analyse the data dependencies of each of these stages and propose parallel algorithms for each of them. To evaluate our contributions, we implemented the proposed parallel algorithms on a shared-memory multicore architecture. The results show a notable gain in execution time, reflected in speedup factors that increase with the number of processors, whatever the resolution of the input images.
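A hedged sketch of the second strategy, lowering independent edge subsets in parallel: each partition is built so that no two of its edges share a non-minimum vertex, so workers cannot race. The lowering rule shown (reduce an edge to the minimum altitude of its endpoints) is a simplified stand-in for the M-border kernel operation, and the partitioning itself is assumed precomputed.

```python
from concurrent.futures import ProcessPoolExecutor

def lower_partition(args):
    """Lower the M-border edges of one partition: each edge weight is
    reduced to the minimum altitude of its two endpoints (simplified
    stand-in for the M-border lowering step)."""
    edges, altitude = args
    return [(u, v, min(altitude[u], altitude[v])) for u, v in edges]

def parallel_lowering(partitions, altitude, workers=4):
    # Partitions contain no two edges sharing a non-minimum vertex,
    # which removes the data dependency identified above.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lower_partition, [(p, altitude) for p in partitions])
    return [e for part in results for e in part]

if __name__ == "__main__":
    altitude = {0: 3, 1: 1, 2: 4, 3: 0}                # toy vertex altitudes
    partitions = [[(0, 1), (2, 3)], [(1, 2)]]          # independent edge subsets
    print(parallel_lowering(partitions, altitude))
```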
