  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
381

Performance Evaluation of Lumen Segmentation in Ultrasound Images

Kadeby, Alexander January 2023 (has links)
Automatic segmentation of the lumen of carotid arteries in ultrasound images is a first step in providing preventive care for patients with atherosclerosis. To perform the segmentation, this paper introduces a model built around a thresholding algorithm. The model was tested with two different thresholding algorithms, Otsu and Sauvola, and then scored against professionally drawn masks. The scores were calculated with the Dice and Jaccard-Needham similarity measures as well as specificity, recall, and F1-score. The results showed promising mean and median similarity between the predictions and the masks. Future work includes either optimizing the current model or augmenting it to give an even better foundation for continued work on providing preventive care for atherosclerosis patients.
382

Pixel-level video understanding with efficient deep models

Hu, Ping 02 February 2024 (has links)
The ability to understand videos at the level of pixels plays a key role in a wide range of computer vision applications. For example, a robot or autonomous vehicle relies on classifying each pixel in the video stream into semantic categories to holistically understand the surrounding environment, and video editing software needs to exploit the spatiotemporal context of video pixels to generate various visual effects. Despite the great progress of Deep Learning (DL) techniques, applying DL-based vision models to process video pixels remains practically challenging, due to the high volume of video data and the compute-intensive design of DL approaches. In this thesis, we aim to design efficient and robust deep models for pixel-level video understanding of high-level semantics, mid-level grouping, and low-level interpolation. Toward this goal, in Part I, we address the semantic analysis of video pixels with the task of Video Semantic Segmentation (VSS), which aims to assign pixel-level semantic labels to video frames. We introduce methods that utilize temporal redundancy and context to efficiently recognize video pixels without sacrificing performance. Extensive experiments on various datasets demonstrate our methods' effectiveness and efficiency on both common GPUs and edge devices. Then, in Part II, we show that pixel-level motion patterns help to differentiate video objects from their background. In particular, we propose a fast and efficient contour-based algorithm to group and separate motion patterns for video objects. Furthermore, we present learning-based models to solve the tracking of objects across frames. We show that by explicitly separating the object segmentation and object tracking problems, our framework achieves efficiency during both training and inference. Finally, in Part III, we study the temporal interpolation of pixels given their spatial-temporal context. 
We show that intermediate video frames can be inferred via interpolation in a very efficient way, by introducing the many-to-many splatting framework that can quickly warp and fuse pixels at any number of arbitrary intermediate time steps. We also propose a dynamic refinement mechanism to further improve the interpolation quality by reducing redundant computation. Evaluation on various types of datasets shows that our method can interpolate videos with state-of-the-art quality and efficiency. To summarize, we discuss and propose efficient pipelines for pixel-level video understanding tasks across high-level semantics, mid-level grouping, and low-level interpolation. The proposed models can contribute to tackling a wide range of real-world video perception and understanding problems in future research.
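The core of the splatting idea — forward-warping frame-0 pixels to an arbitrary intermediate time t along their motion vectors — can be illustrated with a toy NumPy sketch (nearest-pixel splatting with average blending; the actual many-to-many framework uses bidirectional flows and learned fusion weights):

```python
import numpy as np

def splat_to_time(frame, flow, t):
    """Forward-splat frame-0 pixels to intermediate time t in [0, 1].

    frame: (H, W) grayscale; flow: (H, W, 2) displacement (dy, dx)
    from frame 0 to frame 1. Each pixel is moved t * flow and
    accumulated at the nearest target pixel; overlaps are averaged.
    """
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    ty = np.rint(ys + t * flow[..., 0]).astype(int)
    tx = np.rint(xs + t * flow[..., 1]).astype(int)
    valid = (ty >= 0) & (ty < H) & (tx >= 0) & (tx < W)
    acc = np.zeros((H, W))
    weight = np.zeros((H, W))
    np.add.at(acc, (ty[valid], tx[valid]), frame[valid])
    np.add.at(weight, (ty[valid], tx[valid]), 1.0)
    return np.where(weight > 0, acc / np.maximum(weight, 1), 0.0)
```

Because the target positions depend only on t, any number of intermediate frames can be produced from one flow estimate — the property the many-to-many framework exploits for efficiency.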
383

A Methodology for Extracting Human Bodies from Still Images

Tsitsoulis, Athanasios January 2013 (has links)
No description available.
384

Atlas-based Segmentation of Temporal Bone Anatomy

Liang, Tong 28 July 2017 (has links)
No description available.
385

Simultaneous object detection and segmentation using top-down and bottom-up processing

Sharma, Vinay 07 January 2008 (has links)
No description available.
386

Unsupervised Segmentation of Time Series Data

Svensson, Martin January 2021 (has links)
In a modern vehicle system, the time series generated are large enough to qualify as big data. Many of them contain interesting patterns, either densely populated or scarcely distributed over the data. For engineers to review the data, segmentation is crucial for data reduction, which is why this thesis investigates unsupervised segmentation of time series. This report uses two methods with different approaches: Fast Low-cost Unipotent Semantic Segmentation (FLUSS), which is shape-based, and Information Gain-based Temporal Segmentation (IGTS), which is statistical. The goal is to evaluate their strengths and weaknesses on tailored time series data with properties suiting one or more of the models. The data is constructed from an open dataset, the cricket dataset, which contains labelled segments; these are concatenated to create datasets with specific properties. Evaluation metrics suitable for segmentation are discussed and evaluated. The experiments make clear that all models have strengths and weaknesses, so the outcome depends on the combination of data and model. The shape-based model, FLUSS, cannot handle reoccurring events or regimes; however, linear transitions between regimes, e.g. A to B to C, give very good results if the regimes are not too similar. The statistical model, IGTS, yields a segmentation that is non-intuitive for humans, but could be a good way to reduce data in a preprocessing step. It can automatically reduce the number of segments to the optimal value based on entropy, which may or may not be desirable depending on the goal. Overall, the methods delivered at worst the same results as the random-segmentation model, and in every test at least one model beat this baseline. Unsupervised segmentation of time series is a difficult problem, and results will be highly dependent on the target data.
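IGTS's core criterion — cut the series where the split maximizes information gain, i.e. the drop in weighted entropy — can be shown with a single-cut toy version on a discretized series (pure standard library; the published IGTS applies this recursively to multivariate series):

```python
import math
from collections import Counter

def entropy(seq):
    """Shannon entropy of a discrete sequence, in bits."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def best_split(seq):
    """Return the cut index maximizing information gain: the reduction
    in length-weighted entropy when the series is split in two."""
    n = len(seq)
    base = entropy(seq)
    best_i, best_gain = None, -1.0
    for i in range(1, n):
        left, right = seq[:i], seq[i:]
        gain = base - (len(left) / n) * entropy(left) \
                    - (len(right) / n) * entropy(right)
        if gain > best_gain:
            best_i, best_gain = i, gain
    return best_i, best_gain
```

On a series that switches regime once, the maximal-gain cut falls exactly at the regime boundary, which is why the criterion needs no shape information — and also why its segment boundaries can look non-intuitive on data where the statistics, not the shapes, change.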
387

SAMPLS: A prompt engineering approach using Segment-Anything-Model for PLant Science research

Sivaramakrishnan, Upasana 30 May 2024 (has links)
Comparative anatomical studies of diverse plant species are vital for the understanding of changes in gene functions such as those involved in solute transport and hormone signaling in plant roots. The state-of-the-art method for confocal image analysis, PlantSeg, utilized U-Net for cell wall segmentation. U-Net is a neural network model that requires training on a large amount of manually labeled confocal images and lacks generalizability. In this research, we test a foundation model, the Segment Anything Model (SAM), to evaluate its zero-shot learning capability and whether prompt engineering can reduce the effort and time consumed in dataset annotation, facilitating a semi-automated training process. Our proposed method improved the detection rate of cells and reduced the error rate compared to state-of-the-art segmentation tools. We also estimated the IoU scores between the proposed method and PlantSeg to reveal the trade-off between accuracy and detection rate for data of different quality. By addressing the challenges specific to confocal images, our approach offers a robust solution for studying plant structure. Our findings demonstrate the efficiency of SAM in confocal image segmentation, showcasing its adaptability and performance compared to existing tools. Overall, our research highlights the potential of foundation models like SAM in specialized domains and underscores the importance of tailored approaches for achieving accurate semantic segmentation in confocal imaging. / Master of Science / Studying the anatomy of different plant species is crucial for understanding how genes work, especially those related to moving substances and signaling in plant roots. Scientists often use advanced techniques like confocal microscopy to examine plant tissues in detail. Traditional techniques like PlantSeg for automatically segmenting plant cells require a lot of computational resources and manual effort in preparing the dataset and training the model.
In this study, we develop a novel technique using Segment-Anything-Model that could learn to identify cells without needing as much training data. We found that SAM performed better than other methods, detecting cells more accurately and making fewer mistakes. By comparing SAM with PlantSeg, we could see how well they worked with different types of images. Our results show that SAM is a reliable option for studying plant structures using confocal imaging. This research highlights the importance of using tailored approaches like SAM to get accurate results from complex images, offering a promising solution for plant scientists.
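One common form of prompt engineering for SAM-style models is feeding the predictor a regular grid of foreground point prompts instead of hand-clicked ones. A small helper like the following (our own sketch, not from the thesis) builds such a grid in the (x, y) coords-plus-labels format the `segment_anything` predictor expects:

```python
import numpy as np

def grid_point_prompts(height, width, points_per_side=8):
    """Evenly spaced grid of (x, y) point prompts, one per cell center,
    each labeled 1 (foreground) for a SAM-style predictor."""
    xs = (np.arange(points_per_side) + 0.5) * width / points_per_side
    ys = (np.arange(points_per_side) + 0.5) * height / points_per_side
    gx, gy = np.meshgrid(xs, ys)
    coords = np.stack([gx.ravel(), gy.ravel()], axis=1)  # (N, 2) as (x, y)
    labels = np.ones(len(coords), dtype=int)             # 1 = foreground
    return coords, labels
```

Varying the grid density is one knob a semi-automated annotation pipeline can tune to trade detection rate against over-segmentation.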
388

COUNTING SORGHUM LEAVES FROM RGB IMAGES BY PANOPTIC SEGMENTATION

Ian Ostermann (15321589) 19 April 2023 (has links)
<p dir="ltr">Meeting the nutritional requirements of an increasing population in a changing climate is the foremost concern of agricultural research in recent years. A solution to some of the many questions posed by this existential threat is breeding crops that more efficiently produce food with respect to land and water use. Key to this optimization are geometric aspects of plant physiology, such as canopy architecture, that, while based in the actual 3D structure of the organism, do not necessarily require such a representation to measure. Although deep learning is a powerful tool to answer phenotyping questions that do not require an explicit intermediate 3D representation, training a network traditionally requires a large number of hand-segmented ground truth images. To bypass the enormous time and expense of hand-labeling datasets, we utilized a procedural sorghum image pipeline from another student in our group that produces images similar enough to the ground truth images from the phenotyping facility that the network can be directly used on real data while training only on automatically generated data. The synthetic data was used to train a deep segmentation network to identify which pixels correspond to which leaves. The segmentations were then processed to find the number of leaves identified in each image to use for the leaf-counting task in high-throughput phenotyping. Overall, our method performs comparably with human annotation accuracy by correctly predicting within a 90% confidence interval of the true leaf count in 97% of images while being faster and cheaper. This helps to add another expensive-to-collect phenotypic trait to the list of those that can be automatically collected.</p>
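Given an instance-labelled mask like the one the segmentation network produces, the leaf count itself is a small post-processing step (a sketch with an illustrative noise threshold, not the thesis' exact procedure):

```python
import numpy as np

def count_leaves(instance_mask, background=0, min_pixels=20):
    """Count leaf instances in an instance/panoptic label mask.

    Each leaf carries a distinct integer id; components smaller than
    min_pixels are treated as segmentation noise and ignored."""
    ids, sizes = np.unique(instance_mask, return_counts=True)
    keep = (ids != background) & (sizes >= min_pixels)
    return int(keep.sum())
```

The `min_pixels` filter stands in for whatever cleanup the real pipeline applies before counting; without some such filter, stray mispredicted pixels would inflate the leaf count.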
389

Efficient hierarchical layered graph approach for multi-region segmentation / Abordagem eficiente baseada em grafo hierárquico em camadas para a segmentação de múltiplas regiões

Leon, Leissi Margarita Castaneda 15 March 2019 (has links)
Image segmentation refers to the process of partitioning an image into meaningful regions of interest (objects) by assigning distinct labels to their composing pixels. Images are usually composed of multiple objects with distinctive features, thus requiring distinct high-level priors for their appropriate modeling. To obtain a good segmentation result, the segmentation method must satisfy all the individual priors of each object, as well as capture their inclusion/exclusion relations. However, many existing classical approaches do not combine structural information with different high-level priors for each object in a single energy optimization, and may therefore be inappropriate in this context. We propose a novel, efficient seed-based method for the multi-object segmentation of images based on graphs, named Hierarchical Layered Oriented Image Foresting Transform (HLOIFT). It uses a tree of the relations between the image objects, with each object represented by a node. Each tree node may contain different individual high-level priors and defines a weighted digraph, named a layer. The layer graphs are then integrated into a hierarchical graph, considering the hierarchical relations of inclusion and exclusion. A single energy optimization is performed on the hierarchical layered weighted digraph, leading to globally optimal results satisfying all the high-level priors. The experimental evaluations of HLOIFT and its extensions, on medical, natural, and synthetic images, indicate promising results comparable to the state-of-the-art methods, but with lower computational complexity. Compared to hierarchical segmentation by the min-cut/max-flow algorithm, our approach is less restrictive, leading to globally optimal results in more general scenarios, and has a better running time.
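At the heart of HLOIFT is the seed-based Image Foresting Transform on a single layer. A minimal single-layer version (our sketch, using the f_max path-cost on a 4-connected grid, without the hierarchical inclusion/exclusion constraints that distinguish the full method) looks like this:

```python
import heapq
import numpy as np

def ift_segment(image, seeds):
    """Single-layer Image Foresting Transform via Dijkstra.

    image: (H, W) intensities; seeds: dict (y, x) -> label.
    Each pixel receives the label of the seed reachable with the
    cheapest path, where a path costs the maximum absolute intensity
    difference along its arcs (the f_max path-cost function)."""
    H, W = image.shape
    cost = np.full((H, W), np.inf)
    label = np.zeros((H, W), dtype=int)
    heap = []
    for (y, x), lab in seeds.items():
        cost[y, x] = 0.0
        label[y, x] = lab
        heapq.heappush(heap, (0.0, y, x, lab))
    while heap:
        c, y, x, lab = heapq.heappop(heap)
        if c > cost[y, x]:
            continue  # stale entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                nc = max(c, abs(float(image[ny, nx]) - float(image[y, x])))
                if nc < cost[ny, nx]:
                    cost[ny, nx] = nc
                    label[ny, nx] = lab
                    heapq.heappush(heap, (nc, ny, nx, lab))
    return label
```

HLOIFT stacks one such layer per tree node and adds inter-layer arcs encoding inclusion/exclusion, so a single optimization over the combined digraph settles all objects at once.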
390

Unraveling Complexity: Panoptic Segmentation in Cellular and Space Imagery

Emanuele Plebani (18403245) 03 June 2024 (has links)
<p dir="ltr">Advancements in machine learning, especially deep learning, have facilitated the creation of models capable of performing tasks previously thought impossible. This progress has opened new possibilities across diverse fields such as medical imaging and remote sensing. However, the performance of these models relies heavily on the availability of extensive labeled datasets.<br>Collecting large amounts of labeled data poses a significant financial burden, particularly in specialized fields like medical imaging and remote sensing, where annotation requires expert knowledge. To address this challenge, various methods have been developed to mitigate the necessity for labeled data or leverage information contained in unlabeled data. These include self-supervised learning, few-shot learning, and semi-supervised learning. This dissertation centers on the application of semi-supervised learning in segmentation tasks.<br><br>We focus on panoptic segmentation, a task that combines semantic segmentation (assigning a class to each pixel) and instance segmentation (grouping pixels into different object instances). We choose two segmentation tasks in different domains: nerve segmentation in microscopic imaging and hyperspectral segmentation in satellite images from Mars.<br>Our study reveals that, while direct application of methods developed for natural images may yield low performance, targeted modifications or the development of robust models can provide satisfactory results, thereby unlocking new applications like machine-assisted annotation of new data.<br><br>This dissertation begins with a challenging panoptic segmentation problem in microscopic imaging, systematically exploring model architectures to improve generalization. Subsequently, it investigates how semi-supervised learning may mitigate the need for annotated data. It then moves to hyperspectral imaging, introducing a Hierarchical Bayesian model (HBM) to robustly classify single pixels.
Key contributions include developing a state-of-the-art U-Net model for nerve segmentation, improving the model's ability to segment different cellular structures, evaluating semi-supervised learning methods in the same setting, and proposing HBM for hyperspectral segmentation. <br>The dissertation also provides a dataset of labeled CRISM pixels and mineral detections, and a software toolbox implementing the full HBM pipeline, to facilitate the development of new models.</p>
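The per-pixel classification at the base of such a Bayesian model can be sketched as a Gaussian class-conditional posterior over spectral bands (a deliberately flat stand-in: the dissertation's Hierarchical Bayesian model additionally ties these parameters together across levels of a hierarchy):

```python
import numpy as np

def posterior(pixel, means, sigmas, priors):
    """Posterior class probabilities for one (hyper)spectral pixel under
    independent Gaussian class-conditional likelihoods per band.

    pixel: (B,) band values; means, sigmas: (K, B) per-class, per-band
    parameters; priors: (K,) prior class probabilities."""
    # per-class log-likelihood, summed over the B spectral bands
    log_lik = -0.5 * np.sum(((pixel - means) / sigmas) ** 2
                            + np.log(2 * np.pi * sigmas ** 2), axis=1)
    log_post = log_lik + np.log(priors)
    log_post -= log_post.max()  # stabilize before exponentiating
    p = np.exp(log_post)
    return p / p.sum()
```

Working in log space and subtracting the maximum before exponentiating keeps the computation stable even when likelihoods differ by hundreds of orders of magnitude, as they routinely do across many spectral bands.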
