381

A Methodology for Extracting Human Bodies from Still Images

Tsitsoulis, Athanasios January 2013 (has links)
No description available.
382

Atlas-based Segmentation of Temporal Bone Anatomy

Liang, Tong 28 July 2017 (has links)
No description available.
383

Simultaneous object detection and segmentation using top-down and bottom-up processing

Sharma, Vinay 07 January 2008 (has links)
No description available.
384

Unsupervised Segmentation of Time Series Data

Svensson, Martin January 2021 (has links)
In a modern vehicle system, the amount of time series data generated is large enough to qualify as big data. Many of the time series contain interesting patterns, either densely populated or scarcely distributed over the data. For engineers to review the data, segmentation is crucial for data reduction, which is why this thesis investigates unsupervised segmentation of time series. This report uses two different methods, Fast Low-cost Unipotent Semantic Segmentation (FLUSS) and Information Gain-based Temporal Segmentation (IGTS), which take different approaches: shape-based and statistical, respectively. The goal is to evaluate their strengths and weaknesses on tailored time series data that has properties suiting one or more of the models. The data is constructed from an open dataset, the cricket dataset, which contains labelled segments. These are concatenated to create datasets with specific properties. Evaluation metrics suitable for segmentation are discussed and evaluated. From the experiments it is clear that all models have strengths and weaknesses, so the outcome will depend on the data and model combination.  The shape-based model, FLUSS, cannot handle recurring events or regimes. However, linear transitions between regimes, e.g. A to B to C, give very good results if the regimes are not too similar. The statistical model, IGTS, yields a segmentation that is non-intuitive for humans, but could be a good way to reduce data in a preprocessing step. It has the ability to automatically reduce the number of segments to the optimal value based on entropy, which can be desirable or not depending on the goal.  Overall the methods delivered at worst the same result as the random segmentation model, and in every test one or more models beat this baseline. Unsupervised segmentation of time series is a difficult problem, and results will be highly dependent on the target data.
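The entropy-based idea behind IGTS can be sketched in a few lines: over a symbolized series, pick the split point that maximizes information gain (the drop in weighted entropy after splitting). This is an illustrative single-split version under equal-length weighting, not the thesis's implementation; the function names are invented.

```python
import numpy as np

def entropy(symbols):
    """Shannon entropy (in bits) of a symbolized series."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def best_split(symbols):
    """Return the split index with maximal information gain, i.e. the
    point where splitting most reduces length-weighted segment entropy."""
    n = len(symbols)
    h_all = entropy(symbols)
    best_t, best_gain = 1, -np.inf
    for t in range(1, n):
        gain = h_all - (t / n) * entropy(symbols[:t]) \
                     - ((n - t) / n) * entropy(symbols[t:])
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain
```

On a series that switches symbol exactly once, the split lands on the regime boundary, where both resulting segments are pure (zero entropy). A top-down IGTS-style procedure would apply this recursively and stop when the gain no longer justifies another segment.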
385

SAMPLS: A prompt engineering approach using Segment-Anything-Model for PLant Science research

Sivaramakrishnan, Upasana 30 May 2024 (has links)
Comparative anatomical studies of diverse plant species are vital for understanding changes in gene functions such as those involved in solute transport and hormone signaling in plant roots. The state-of-the-art method for confocal image analysis, PlantSeg, uses U-Net for cell wall segmentation. U-Net is a neural network model that requires training on a large number of manually labeled confocal images and lacks generalizability. In this research, we test a foundation model, the Segment Anything Model (SAM), to evaluate its zero-shot learning capability and whether prompt engineering can reduce the effort and time consumed in dataset annotation, facilitating a semi-automated training process. Our proposed method improved the detection rate of cells and reduced the error rate compared to state-of-the-art segmentation tools. We also estimated IoU scores between the proposed method and PlantSeg to reveal the trade-off between accuracy and detection rate for different qualities of data. By addressing the challenges specific to confocal images, our approach offers a robust solution for studying plant structure. Our findings demonstrate the efficiency of SAM in confocal image segmentation, showcasing its adaptability and performance compared to existing tools. Overall, our research highlights the potential of foundation models like SAM in specialized domains and underscores the importance of tailored approaches for achieving accurate semantic segmentation in confocal imaging. / Master of Science / Studying the anatomy of different plant species is crucial for understanding how genes work, especially those related to moving substances and signaling in plant roots. Scientists often use advanced techniques like confocal microscopy to examine plant tissues in detail. Traditional techniques like PlantSeg for automatically segmenting plant cells require a lot of computational resources and manual effort in preparing the dataset and training the model.
In this study, we develop a novel technique using the Segment-Anything-Model that can learn to identify cells without needing as much training data. We found that SAM performed better than other methods, detecting cells more accurately and making fewer mistakes. By comparing SAM with PlantSeg, we could see how well they worked with different types of images. Our results show that SAM is a reliable option for studying plant structures using confocal imaging. This research highlights the importance of using tailored approaches like SAM to get accurate results from complex images, offering a promising solution for plant scientists.
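One concrete form of prompt engineering for a SAM-style model is to replace hand-clicked points with a regular grid of foreground point prompts covering the image. A minimal sketch of that prompt-generation step (the helper name and stride are illustrative assumptions; the resulting (x, y) points and labels would be passed to the model's predictor):

```python
import numpy as np

def grid_point_prompts(height, width, stride):
    """Evenly spaced (x, y) point prompts covering an image, with
    label 1 marking each as a foreground prompt."""
    ys = np.arange(stride // 2, height, stride)
    xs = np.arange(stride // 2, width, stride)
    points = np.array([(x, y) for y in ys for x in xs])
    labels = np.ones(len(points), dtype=int)
    return points, labels
```

Each point then seeds one candidate mask; duplicate masks from prompts landing in the same cell can be de-duplicated afterwards, which is what makes the annotation step semi-automatic.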
386

COUNTING SORGHUM LEAVES FROM RGB IMAGES BY PANOPTIC SEGMENTATION

Ian Ostermann (15321589) 19 April 2023 (has links)
<p dir="ltr">Meeting the nutritional requirements of an increasing population in a changing climate is the foremost concern of agricultural research in recent years. A solution to some of the many questions posed by this existential threat is breeding crops that produce food more efficiently with respect to land and water use. A key aspect of this optimization is the geometry of plant physiology, such as canopy architecture, which, while rooted in the actual 3D structure of the organism, does not necessarily require such a representation to measure. Although deep learning is a powerful tool for answering phenotyping questions that do not require an explicit intermediate 3D representation, training a network traditionally requires a large number of hand-segmented ground truth images. To bypass the enormous time and expense of hand-labeling datasets, we utilized a procedural sorghum image pipeline from another student in our group that produces images similar enough to the ground truth images from the phenotyping facility that the network can be used directly on real data while training only on automatically generated data. The synthetic data was used to train a deep segmentation network to identify which pixels correspond to which leaves. The segmentations were then processed to find the number of leaves identified in each image for the leaf-counting task in high-throughput phenotyping. Overall, our method performs comparably with human annotation accuracy, correctly predicting within a 90% confidence interval of the true leaf count in 97% of images while being faster and cheaper. This helps to add another expensive-to-collect phenotypic trait to the list of those that can be collected automatically.</p>
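The final step described above, turning a predicted segmentation into a leaf count, reduces to counting distinct instance ids in the output map. A minimal sketch of that post-processing (the background-is-zero convention is an assumption):

```python
import numpy as np

def count_leaves(instance_map, background=0):
    """Count distinct leaf instances in a predicted instance-id map,
    ignoring the background label."""
    ids = np.unique(instance_map)
    return int(np.sum(ids != background))
```

Instance ids need not be consecutive, so counting unique non-background labels is more robust than taking the maximum id.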
387

Efficient hierarchical layered graph approach for multi-region segmentation / Abordagem eficiente baseada em grafo hierárquico em camadas para a segmentação de múltiplas regiões

Leon, Leissi Margarita Castaneda 15 March 2019 (has links)
Image segmentation refers to the process of partitioning an image into meaningful regions of interest (objects) by assigning distinct labels to their composing pixels. Images are usually composed of multiple objects with distinctive features, thus requiring distinct high-level priors for their appropriate modeling. In order to obtain a good segmentation result, the segmentation method must attend to all the individual priors of each object, as well as capture their inclusion/exclusion relations. However, many existing classical approaches do not combine structural information with different high-level priors for each object in a single energy optimization, and may consequently be inappropriate in this context. We propose a novel, efficient seed-based method for multi-object segmentation of images based on graphs, named Hierarchical Layered Oriented Image Foresting Transform (HLOIFT). It uses a tree of the relations between the image objects, with each object represented by a node. Each tree node may contain different individual high-level priors and defines a weighted digraph, called a layer. The layer graphs are then integrated into a hierarchical graph, considering the hierarchical relations of inclusion and exclusion. A single energy optimization is performed on the hierarchical layered weighted digraph, leading to globally optimal results satisfying all the high-level priors. Experimental evaluations of HLOIFT and its extensions on medical, natural, and synthetic images indicate promising results comparable to state-of-the-art methods, but with lower computational complexity. Compared to hierarchical segmentation by the min-cut/max-flow algorithm, our approach is less restrictive, leading to globally optimal results in more general scenarios, and has a better running time.
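The seed-based optimization at the core of Image Foresting Transform methods can be illustrated with a single-layer seeded forest on a 2D image: each pixel receives the label of the seed reachable along the path whose largest intensity step is smallest (a max-arc path cost). This is a simplified one-layer sketch; HLOIFT's hierarchical layers, orientation, and inclusion/exclusion constraints are not modeled here.

```python
import heapq
import numpy as np

def seeded_ift(image, seeds):
    """Seeded image foresting transform with a max-arc path cost.
    seeds: dict mapping (row, col) -> label."""
    h, w = image.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > cost[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                # path cost = largest absolute intensity step along the path
                nd = max(d, abs(float(image[nr, nc]) - float(image[r, c])))
                if nd < cost[nr, nc]:
                    cost[nr, nc] = nd
                    label[nr, nc] = label[r, c]
                    heapq.heappush(heap, (nd, nr, nc))
    return label
```

On a two-region image with one seed per region, every pixel is conquered by the seed on its own side of the intensity boundary, which is the behaviour a single HLOIFT layer builds on.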
388

Unraveling Complexity: Panoptic Segmentation in Cellular and Space Imagery

Emanuele Plebani (18403245) 03 June 2024 (has links)
<p dir="ltr">Advancements in machine learning, especially deep learning, have facilitated the creation of models capable of performing tasks previously thought impossible. This progress has opened new possibilities across diverse fields such as medical imaging and remote sensing. However, the performance of these models relies heavily on the availability of extensive labeled datasets.<br>Collecting large amounts of labeled data poses a significant financial burden, particularly in specialized fields like medical imaging and remote sensing, where annotation requires expert knowledge. To address this challenge, various methods have been developed to mitigate the necessity for labeled data or to leverage information contained in unlabeled data. These include self-supervised learning, few-shot learning, and semi-supervised learning. This dissertation centers on the application of semi-supervised learning to segmentation tasks.<br><br>We focus on panoptic segmentation, a task that combines semantic segmentation (assigning a class to each pixel) and instance segmentation (grouping pixels into different object instances). We choose two segmentation tasks in different domains: nerve segmentation in microscopic imaging and hyperspectral segmentation in satellite images from Mars.<br>Our study reveals that, while direct application of methods developed for natural images may yield low performance, targeted modifications or the development of robust models can provide satisfactory results, thereby unlocking new applications like machine-assisted annotation of new data.<br><br>This dissertation begins with a challenging panoptic segmentation problem in microscopic imaging, systematically exploring model architectures to improve generalization. Subsequently, it investigates how semi-supervised learning may mitigate the need for annotated data. It then moves to hyperspectral imaging, introducing a Hierarchical Bayesian model (HBM) to robustly classify single pixels.
Key contributions include developing a state-of-the-art U-Net model for nerve segmentation, improving the model's ability to segment different cellular structures, evaluating semi-supervised learning methods in the same setting, and proposing an HBM for hyperspectral segmentation. <br>The dissertation also provides a dataset of labeled CRISM pixels and mineral detections, and a software toolbox implementing the full HBM pipeline, to facilitate the development of new models.</p>
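A full hierarchical Bayesian model is beyond a short sketch, but its base layer, classifying a single pixel's spectrum from per-class Gaussian likelihoods via a posterior argmax, can be illustrated. This is a flat, diagonal-covariance, equal-prior simplification of the idea, not the dissertation's HBM; all names are invented.

```python
import numpy as np

def fit_gaussians(X, y):
    """Per-class mean and diagonal variance over pixel spectra.
    X: (n_pixels, n_bands), y: (n_pixels,) integer class labels."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[int(c)] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6)
    return params

def classify(x, params):
    """Maximum-a-posteriori class for one spectrum, assuming equal
    class priors and independent Gaussian bands."""
    def log_lik(mu, var):
        return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)
    return max(params, key=lambda c: log_lik(*params[c]))
```

A hierarchical version would tie the per-class parameters together through shared hyperpriors, which is what lends robustness when some classes have few labeled pixels.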
389

From interactive to semantic image segmentation

Gulshan, Varun January 2011 (has links)
This thesis investigates two well-defined problems in image segmentation, viz. interactive and semantic image segmentation. Interactive segmentation involves assisting a user in cutting out objects from an image, whereas semantic segmentation involves partitioning the pixels of an image into object categories. We investigate various models and energy formulations for both of these problems. In order to improve the performance of interactive systems, low-level texture features are introduced as a replacement for the more commonly used RGB features. To quantify the improvement obtained by using these texture features, two annotated image datasets are introduced (one consisting of natural images, the other of camouflaged objects). A significant improvement in performance is observed when using texture features for monochrome images and images containing camouflaged objects. We also explore adding mid-level cues such as shape constraints into interactive segmentation by introducing the idea of geodesic star convexity, which extends the existing notion of a star-convexity prior in two important ways: (i) it allows for multiple star centres as opposed to the single star of the original prior, and (ii) it generalises the shape constraint by allowing for geodesic paths as opposed to Euclidean rays. Global minima of our energy function can be obtained subject to these new constraints. We also introduce Geodesic Forests, which exploit the structure of shortest paths in implementing the extended constraints. These extensions to star convexity allow us to use such constraints in a practical segmentation system. This system is evaluated by means of a “robot user” to measure the amount of interaction required in a precise way, and it is shown that having shape constraints reduces user effort significantly compared to existing interactive systems.
We also introduce a new and harder dataset which augments the existing GrabCut dataset with more realistic images and ground truth taken from the PASCAL VOC segmentation challenge. In the latter part of the thesis, we bring in object-category-level information in order to make interactive segmentation tasks easier and move towards fully automated semantic segmentation. An algorithm to automatically segment humans from cluttered images given their bounding boxes is presented. A top-down segmentation of the human is obtained using classifiers trained to predict segmentation masks from local HOG descriptors. These masks are then combined with bottom-up image information in a local GrabCut-like procedure. This algorithm is later completely automated to segment humans without requiring a bounding box, and is quantitatively compared with other semantic segmentation methods. We also introduce a novel way to acquire large quantities of segmented training data relatively effortlessly using the Kinect. In the final part of this work, we explore various semantic segmentation methods based on learning from bottom-up super-pixelisations. Different methods of combining multiple super-pixelisations are discussed and quantitatively evaluated on two segmentation datasets. We observe that simple combinations of independently trained classifiers on single super-pixelisations perform almost as well as complex methods based on jointly learning across multiple super-pixelisations. We also explore CRF-based formulations for semantic segmentation, and introduce a novel visual-words-based object boundary description into the energy formulation. The object appearance and boundary parameters are trained jointly using structured output learning methods, and the benefit of adding pairwise terms is quantified on two different datasets.
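The "simple combination" finding above can be sketched concretely: project each independently trained classifier's per-superpixel class probabilities down to pixels, average across super-pixelisations, and take the per-pixel argmax. This is an illustrative toy under invented names and shapes, not the thesis's pipeline.

```python
import numpy as np

def pixel_probs(seg, sp_probs, n_classes):
    """Project per-superpixel class probabilities to a per-pixel map.
    seg: (H, W) superpixel-id map; sp_probs: {id: class prob vector}."""
    out = np.zeros(seg.shape + (n_classes,))
    for sp_id, p in sp_probs.items():
        out[seg == sp_id] = p
    return out

def combine(segs, sp_prob_list, n_classes):
    """Average pixel-level probabilities over several super-pixelisations
    and return the per-pixel argmax class labels."""
    maps = [pixel_probs(s, p, n_classes) for s, p in zip(segs, sp_prob_list)]
    return np.mean(maps, axis=0).argmax(axis=-1)
```

Because each classifier only votes through its own superpixel boundaries, averaging across differently shaped super-pixelisations softens boundary errors, which is one plausible reason the simple combination competes with joint learning.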
390

Pre-Attentive Segmentation in the Primary Visual Cortex

Li, Zhaoping 30 June 1998 (has links)
Stimuli outside classical receptive fields have been shown to exert significant influence over the activities of neurons in primary visual cortex. We propose that contextual influences are used for pre-attentive visual segmentation, in a new framework called segmentation without classification. This means that segmentation of an image into regions occurs without classification of features within a region or comparison of features between regions. This segmentation framework is simpler than previous computational approaches, making it implementable by V1 mechanisms, though higher-level visual mechanisms are needed to refine its output. However, it easily handles a class of segmentation problems that are tricky for conventional methods. The cortex computes global region boundaries by detecting the breakdown of homogeneity or translation invariance in the input, using local intra-cortical interactions mediated by the horizontal connections. The difference between contextual influences near and far from region boundaries makes neural activities near region boundaries higher than elsewhere, making boundaries more salient for perceptual pop-out. This proposal is implemented in a biologically based model of V1, and demonstrated using examples of texture segmentation and figure-ground segregation. The model performs segmentation in exactly the same neural circuit that solves the dual problem of the enhancement of contours, as is suggested by experimental observations. Its behavior is compared with psychophysical and physiological data on segmentation, contour enhancement, and contextual influences. We discuss the implications of segmentation without classification and the predictions of our V1 model, and relate it to other phenomena such as asymmetry in visual search.
