41 |
vU-net: edge detection in time-lapse fluorescence live cell images based on convolutional neural networks
Zhang, Xitong, 23 April 2018
Time-lapse fluorescence live cell imaging has been widely used to study dynamic processes in cell biology. As the initial step of image analysis, it is important to localize and segment cell edges with high accuracy. However, compared with immunofluorescence images, fluorescence live-cell images typically suffer from low contrast, noise and uneven illumination. Deep convolutional neural networks, which learn features directly from training images, have been applied successfully to natural image analysis problems, but the limited amount of training data prevents their routine use in fluorescence live-cell image analysis. In this thesis, by exploiting the temporal coherence in time-lapse movies together with a pre-trained VGG-16 [1] model, we demonstrate that a deep neural network trained on a limited number of image frames can segment entire time-lapse movies. We propose a novel framework, vU-net, which combines the strengths of VGG-16 [1] for feature extraction and U-net [2] for feature reconstruction, and we add an auxiliary convolutional block at the end of the architecture to enhance edge detection. We evaluate the framework using the Dice coefficient and the distance between the predicted edge and the ground truth on high-resolution images of an adhesion marker, paxillin, acquired with a Total Internal Reflection Fluorescence (TIRF) microscope. On difficult datasets: (i) the test Dice coefficient of vU-net is 3.2% higher than that of U-net trained on the same number of images; (ii) vU-net matches the best predictions of U-net using one third of the training images that U-net requires; (iii) vU-net produces more robust predictions than U-net. vU-net is therefore more practical than U-net for challenging live cell movies, since it requires a small training set while achieving accurate segmentation.
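As a hedged illustration of the evaluation metric mentioned above (not the author's code), a minimal sketch of the Dice coefficient between a predicted binary mask and a ground-truth mask follows; the toy arrays are assumptions.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1 = edge/foreground, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example; a real evaluation would use predicted and ground-truth edge maps.
pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
truth = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
print(dice_coefficient(pred, truth))  # 2*2 / (3 + 3) = 0.667
```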
|
42 |
Segmentação de imagens coloridas baseada na mistura de cores e redes neurais / Segmentation of color images based on color mixture and neural networks
Diego Rafael Moraes, 26 March 2018
Color Mixture is a technique for color image segmentation that builds an "Artificial Retina" based on the mixture of colors and quantizes the image by projecting all of its colors onto 256 planes in the RGB cube. It then traverses these planes with a Gaussian classifier to segment the image. The current approach, however, has limitations: the classifier can only solve binary problems. Inspired by the "Artificial Retina" of Color Mixture, this thesis defines a new "Artificial Retina" in which the Gaussian classifier is replaced by an artificial neural network for each of the 256 planes, with the goal of improving performance and extending the method to multiclass and multiscale problems. We call this new approach Neural Color Mixture. To validate the proposal, statistical analyses were carried out in two application areas. The first was human skin segmentation, where the results were compared with eight known methods on four datasets of different sizes; the segmentation accuracy of the proposed approach surpassed all of the compared methods. The second practical evaluation used satellite images, given their wide applicability in urban and rural areas.
To this end, we created and made available a database of satellite images extracted from Google Earth, covering ten different regions of the planet at four zoom scales (500 m, 1000 m, 1500 m and 2000 m), each containing at least four classes of interest: tree, soil, street and water. Four experiments compared our proposal with a multilayer perceptron neural network (ANN-MLP) and a Support Vector Machine (SVM), and again the proposal was superior. We conclude that the new approach can be used for multiclass and multiscale color image segmentation problems, and that it can likely be extended to other applications, since it includes a training phase in which it adapts to the problem at hand.
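A minimal sketch of the per-plane classification idea described above, assuming a hypothetical quantize_to_plane function as a stand-in for the Color Mixture projection of an RGB pixel onto one of the 256 planes (the actual projection is defined in the thesis); this is not the author's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def quantize_to_plane(rgb):
    # Hypothetical stand-in for the Color Mixture projection: here a simple
    # channel average is used to index one of 256 planes in the RGB cube.
    r, g, b = rgb
    return int(round((r + g + b) / 3.0))

def train_neural_color_mixture(pixels, labels):
    """Train one small MLP per RGB-cube plane, in the spirit of Neural Color Mixture.

    pixels: (N, 3) array of RGB values in [0, 255]; labels: (N,) class labels.
    """
    planes = {}
    plane_ids = np.array([quantize_to_plane(p) for p in pixels])
    for plane in np.unique(plane_ids):
        idx = plane_ids == plane
        if len(np.unique(labels[idx])) < 2:
            continue  # skip planes containing a single class
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
        clf.fit(pixels[idx], labels[idx])
        planes[plane] = clf
    return planes
```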
|
43 |
Application of image segmentation in inspection of welding: Practical research in MATLAB
Shen, Jiannan, January 2012
As one of the main joining methods in modern steel production, welding plays an important role in the economy and is widely applied in fields such as aviation, petroleum, chemicals, electricity and railways. The craft of welding can be improved in terms of welding tools, welding technology and welding inspection. Welding inspection, however, remains a complicated problem, so effective detection of internal welding defects in welded structures is important and worth further study. The main task of this thesis is to investigate the application of image segmentation to welding inspection. It introduces image enhancement techniques (image conversion and noise removal) and image segmentation techniques (thresholding, clustering, edge detection and region extraction). Using the MATLAB platform, it focuses on applying image segmentation to radiographic inspection of steel structures and examines how three segmentation methods, thresholding, clustering and edge detection, behave in this setting. Image segmentation proves more useful than image enhancement alone because: 1. Gray-scale based FCM clustering performs well; it groups pixels by grey level, revealing the position of defects within the grey-value hierarchy. 2. Canny edge detection is fast and performs well, giving detailed, smooth contours around edges and defects. 3. Image enhancement can only improve image quality (clarity and contrast) and provides no further information for detecting welding defects. This work arises from actual industrial needs and proves practical to some extent. It also points to future improvements, including identification of welding defects with neural networks and an improved clustering algorithm based on genetic algorithms. / Program: Magisterutbildning i informatik
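As a rough illustration of the kind of pipeline evaluated in the thesis (the original work was done in MATLAB, not Python), the sketch below applies Otsu thresholding and Canny edge detection to a radiographic weld image with OpenCV; the file names and parameter values are assumptions.

```python
import cv2

# Hypothetical input: a grayscale radiograph of a welded joint.
img = cv2.imread("weld_radiograph.png", cv2.IMREAD_GRAYSCALE)

# Light denoising before segmentation.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Threshold-based segmentation (Otsu selects the threshold automatically).
_, thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Edge-based segmentation with the Canny detector.
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("weld_threshold.png", thresh)
cv2.imwrite("weld_edges.png", edges)
```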
|
44 |
ScatterNet hybrid frameworks for deep learning
Singh, Amarjot, January 2019
Image understanding is the task of interpreting images by effectively solving the individual tasks of object recognition and semantic image segmentation. An image understanding system must have the capacity to distinguish between similar-looking image regions while remaining invariant to regions that have been altered by appearance-altering transformations. The fundamental challenge for any such system lies in this simultaneous requirement for both invariance and specificity. Many image understanding systems capture geometric properties such as shapes, textures, motion and 3D perspective projections using filtering, non-linear modulus, and pooling operations. Deep learning networks ignore these geometric considerations and instead compute descriptors with suitable invariance and stability to geometric transformations using (end-to-end) learned multi-layered network filters. In recent years these deep networks have come to dominate the previously separate research fields of machine learning, computer vision, natural language understanding and speech recognition. Despite their success, there remains a fundamental lack of understanding of how to design and optimize these networks, which makes them difficult to develop. Training them also requires large labeled datasets, which in many applications are not available. In this dissertation, we propose the ScatterNet Hybrid Framework for Deep Learning, inspired by the circuitry of the visual cortex. The framework uses a hand-crafted front-end, an unsupervised-learning-based middle section, and a supervised back-end to rapidly learn hierarchical features from unlabelled data. Each layer in the proposed framework is automatically optimized to produce a computationally efficient architecture. The term `Hybrid' is coined because the framework uses both unsupervised and supervised learning. We propose two hand-crafted front-ends that extract locally invariant features from the input signals. Next, two ScatterNet Hybrid Deep Learning (SHDL) networks (one generative and one deterministic) are introduced by combining the proposed front-ends with two unsupervised learning modules that learn hierarchical features. These hierarchical features are finally used by a supervised learning module to solve the task of either object recognition or semantic image segmentation. The proposed front-ends are also shown to improve the performance and learning of current deep supervised learning networks (VGG, NIN, ResNet) with reduced computing overhead.
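A schematic sketch of the hybrid idea (hand-crafted front-end, unsupervised middle, supervised back-end), using a Gabor filter bank with a modulus non-linearity, PCA and a linear SVM as simple stand-ins for the modules described above; this is an illustration of the three-stage structure, not the SHDL networks themselves.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def handcrafted_frontend(image):
    """Hand-crafted front-end: Gabor filter bank + modulus + pooling on a grayscale image."""
    feats = []
    for frequency in (0.1, 0.2, 0.4):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(image, frequency=frequency, theta=theta)
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())  # non-linear modulus, pooled
    return np.array(feats)

def build_hybrid(images, labels):
    """Hand-crafted features -> unsupervised PCA mid-section -> supervised linear back-end."""
    X = np.stack([handcrafted_frontend(im) for im in images])
    middle = PCA(n_components=min(8, X.shape[1])).fit(X)
    backend = LinearSVC().fit(middle.transform(X), labels)
    return middle, backend
```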
|
46 |
Graph-based segmentation of lymph nodes in CT data
Wang, Yao, 01 December 2010
The quantitative assessment of lymph node size plays an important role in the treatment of diseases such as cancer. In current clinical practice, lymph nodes are analyzed manually based on very rough measures of long and/or short axis length, which is error prone. In this paper we present a graph-based lymph node segmentation method that enables computer-aided three-dimensional (3D) assessment of lymph node size. Our method has been validated on 111 cases of enlarged lymph nodes imaged with X-ray computed tomography (CT). The mean unsigned surface positioning error was around 0.5 mm, the mean Hausdorff distance was under 3.26 mm, and the mean Dice coefficient was above 0.77. On average, our algorithm required 5.3 seconds to segment a lymph node.
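A hedged sketch of the surface-distance evaluation mentioned above, using SciPy's directed Hausdorff distance on two sets of boundary points in voxel coordinates; voxel-spacing handling and the exact metric definitions in the thesis may differ.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(surface_a, surface_b):
    """Symmetric Hausdorff distance between two surface point sets of shape (N, 3)."""
    d_ab = directed_hausdorff(surface_a, surface_b)[0]
    d_ba = directed_hausdorff(surface_b, surface_a)[0]
    return max(d_ab, d_ba)

def mean_surface_distance(surface_a, surface_b):
    """Unsigned mean distance from each point of surface_a to its nearest point on surface_b."""
    diffs = surface_a[:, None, :] - surface_b[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return dists.min(axis=1).mean()
```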
|
47 |
Learning object segmentation from video data
Ross, Michael G.; Kaelbling, Leslie Pack, 08 September 2003
This memo describes the initial results of a project to create a self-supervised algorithm for learning object segmentation from video data. Developmental psychology and computational experience have demonstrated that the motion segmentation of objects is a simpler, more primitive process than the detection of object boundaries by static image cues. Therefore, motion information provides a plausible supervision signal for learning the static boundary detection task and for evaluating performance on a test set. A video camera and previously developed background subtraction algorithms can automatically produce a large database of motion-segmented images for minimal cost. The purpose of this work is to use the information in such a database to learn how to detect the object boundaries in novel images using static information, such as color, texture, and shape. This work was funded in part by the Office of Naval Research contract #N00014-00-1-0298, in part by the Singapore-MIT Alliance agreement of 11/6/98, and in part by a National Science Foundation Graduate Student Fellowship.
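A minimal sketch of how motion segmentation can generate training masks automatically, using OpenCV's MOG2 background subtractor as a stand-in for the background-subtraction algorithms mentioned above; the video path and parameters are assumptions.

```python
import cv2

# Hypothetical input video from a static camera.
cap = cv2.VideoCapture("scene.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Foreground mask from motion: a free supervision signal for later
    # learning of static boundary detection from color, texture and shape.
    mask = subtractor.apply(frame)
    cv2.imwrite(f"frame_{frame_idx:05d}.png", frame)
    cv2.imwrite(f"mask_{frame_idx:05d}.png", mask)
    frame_idx += 1
cap.release()
```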
|
48 |
Low and Mid-level Shape Priors for Image Segmentation
Levinshtein, Alex, 15 February 2011
Perceptual grouping is essential to manage the complexity of real world scenes. We explore bottom-up grouping at three different levels. Starting from low-level grouping, we propose a novel method for oversegmenting an image into compact superpixels, reducing the complexity of many high-level tasks. Unlike most low-level segmentation techniques, our geometric flow formulation enables us to impose additional compactness constraints, resulting in a fast method with minimal undersegmentation. Our subsequent work utilizes compact superpixels to detect two important mid-level shape regularities, closure and symmetry. Unlike the majority of closure detection approaches, we transform the closure detection problem into one of finding a subset of superpixels whose collective boundary has strong edge support in the image. Building on superpixels, we define a closure cost which is a ratio of a novel learned boundary gap measure to area, and show how it can be globally minimized to recover a small set of promising shape hypotheses. In our final contribution, motivated by the success of shape skeletons, we recover and group symmetric parts without assuming prior figure-ground segmentation. Further exploiting superpixel compactness, superpixels are this time used as an approximation to deformable maximal discs that comprise a medial axis. A learned measure of affinity between neighboring superpixels and between symmetric parts enables the purely bottom-up recovery of a skeleton-like structure, facilitating indexing and generic object recognition in complex real images.
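A brief sketch of compact superpixel oversegmentation, using scikit-image's SLIC as a stand-in for the geometric-flow superpixels described above (the thesis uses its own formulation); the file name and parameter values are illustrative.

```python
from skimage import io
from skimage.segmentation import slic, mark_boundaries

# Hypothetical input image.
image = io.imread("scene.jpg")

# Oversegment into roughly 400 superpixels; higher `compactness` trades
# boundary adherence for more regular, compact regions.
labels = slic(image, n_segments=400, compactness=10.0, start_label=1)

# Save a visualization with superpixel boundaries overlaid.
io.imsave("superpixels.png", (mark_boundaries(image, labels) * 255).astype("uint8"))
```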
|
49 |
Sea-Ice Detection from RADARSAT Images by Gamma-based Bilateral Filtering
Xie, Si, January 2013
Spaceborne Synthetic Aperture Radar (SAR) is widely considered a powerful sensor for detecting sea ice. Unfortunately, sea-ice types in SAR images are difficult to interpret because of speckle noise, so SAR image denoising becomes a critical step in SAR sea-ice image processing and analysis. In this study, a two-phase approach is designed and implemented for SAR sea-ice image segmentation. In the first phase, a Gamma-based bilateral filter is introduced and applied for SAR image denoising in the local domain. It inherits the conventional bilateral filter's ability to smooth SAR sea-ice imagery while preserving edges, and enhances it by accounting for the homogeneity of local areas and the Gamma distribution of speckle noise. The Gamma-based bilateral filter outperforms other widely used filters, such as the Frost filter and the conventional bilateral filter. In the second phase, the K-means clustering algorithm, with optimized initial centroids, is adopted to obtain better segmentation results. The proposed approach is tested on both simulated and real SAR images and compared with several existing algorithms, including K-means, K-means on Frost-filtered images, and K-means on conventionally bilateral-filtered images. The F1 scores on the simulated data demonstrate the effectiveness and robustness of the proposed approach, whose overall accuracy remains above 90% as the noise variance ranges from 0.1 to 0.5. For the real SAR images, the proposed approach outperforms the others with an average overall accuracy of 95%.
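A hedged two-phase sketch in the spirit of the approach above, using OpenCV's conventional bilateral filter (not the Gamma-based variant proposed in the thesis) followed by K-means clustering of the filtered intensities; the file name and parameter values are assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical single-band SAR intensity image.
img = cv2.imread("sar_sea_ice.png", cv2.IMREAD_GRAYSCALE)

# Phase 1: edge-preserving speckle smoothing (conventional bilateral filter).
denoised = cv2.bilateralFilter(img, d=9, sigmaColor=50, sigmaSpace=50)

# Phase 2: K-means clustering of pixel intensities into two classes (e.g. ice / water).
pixels = denoised.reshape(-1, 1).astype(np.float32)
kmeans = KMeans(n_clusters=2, n_init=10).fit(pixels)
segmentation = kmeans.labels_.reshape(denoised.shape).astype(np.uint8) * 255

cv2.imwrite("sea_ice_segmentation.png", segmentation)
```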
|