1 |
Synthetic Image Generation Using GANs: Generating Class Specific Images of Bacterial Growth / Syntetisk bildgenerering med GANs. Mattila, Marianne. January 2021.
Mastitis is the most common disease affecting Swedish milk cows. Automatic image classification can be useful for quickly classifying the bacteria causing this inflammation, in turn making it possible to start treatment more quickly. However, training an automatic classifier relies on the availability of data. Data collection can be a slow process, and GANs are a promising way to generate synthetic data that adds plausible samples to an existing data set. The purpose of this thesis is to explore the usefulness of GANs for generating images of bacteria. This was done by reviewing existing literature on the subject, implementing a GAN, and evaluating the generated images. A cGAN capable of generating class-specific bacteria images was implemented, and improvements were made upon it. The images generated by the cGAN were evaluated using visual examination, rapid scene categorization, and an expert interview. While the cGAN was able to replicate certain features of the real images, it failed in crucial aspects such as symmetry and detail. It is possible that other GAN variants are better suited to the task. Lastly, the results highlight the challenges of evaluating GANs with current evaluation methods.
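For readers unfamiliar with the conditional setup, the sketch below shows one minimal way a class label can be injected into both the generator and discriminator of a cGAN, using PyTorch. The image size, number of bacteria classes, and fully connected architecture are illustrative assumptions, not the configuration used in the thesis.

```python
# A minimal sketch of a class-conditional GAN (cGAN) in PyTorch: the class
# label is embedded and concatenated with the generator's noise vector and
# with the discriminator's flattened image. All sizes below are assumed
# placeholders, not values from the thesis.
import torch
import torch.nn as nn

N_CLASSES, LATENT_DIM, IMG_SIZE = 5, 100, 64  # assumed values

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, IMG_SIZE * IMG_SIZE), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Concatenate the noise vector with the embedded class label.
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x).view(-1, 1, IMG_SIZE, IMG_SIZE)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(IMG_SIZE * IMG_SIZE + N_CLASSES, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, img, labels):
        # Condition the real/fake decision on the same class label.
        x = torch.cat([img.flatten(1), self.label_emb(labels)], dim=1)
        return self.net(x)

# Sampling class-specific images from an (untrained) generator:
gen = Generator()
z = torch.randn(8, LATENT_DIM)
labels = torch.randint(0, N_CLASSES, (8,))
fake_images = gen(z, labels)  # shape: (8, 1, 64, 64)
```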
|
2 |
Image Analysis For Plant Phenotyping. Enyu Cai (15533216). 17 May 2023.
Plant phenotyping focuses on the measurement of plant characteristics throughout the growing season, typically with the goal of evaluating genotypes for plant breeding and management practices related to nutrient applications. Estimating plant characteristics is important for finding the relationship between a plant's genetic data and its observable traits, which are also influenced by the environment and management practices. Recent machine learning approaches provide promising capabilities for high-throughput plant phenotyping using images. In this thesis, we focus on estimating plant traits for field-based crops using images captured by Unmanned Aerial Vehicles (UAVs). We propose a method for estimating plant centers by transferring an existing model to a new scenario using limited ground truth data. We describe the use of transfer learning to apply a model fine-tuned for a single field or a single type of plant to a varied set of similar crops and fields. We introduce a method for rapidly counting panicles using images acquired by UAVs, and we evaluate three different deep neural network structures for panicle counting and localization. We propose a method for sorghum flowering time estimation using multi-temporal panicle counting. We present an approach that uses synthetic training images from generative adversarial networks for data augmentation to enhance the performance of sorghum panicle detection and counting, and we reduce the amount of training data required for sorghum panicle detection via semi-supervised learning. We create synthetic sorghum and maize images using diffusion models. We propose a method for tomato plant segmentation by color correction and color space conversion, and we also introduce methods for detecting and classifying bacterial wilt in tomato plants from images.
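As an illustration of the kind of color correction and color space conversion used for plant segmentation, the sketch below applies a gray-world white balance followed by HSV thresholding with OpenCV. The hue, saturation, and value thresholds and the file names are hypothetical assumptions; the thesis's actual pipeline is not reproduced here.

```python
# A minimal sketch of color-space-based plant segmentation: correct colors
# with a simple gray-world white balance, convert to HSV, and threshold the
# green hue range. Threshold values are illustrative, not calibrated.
import cv2
import numpy as np

def gray_world_correction(bgr):
    # Scale each channel so its mean matches the overall mean (gray-world assumption).
    means = bgr.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / (means + 1e-6)
    return np.clip(bgr * gain, 0, 255).astype(np.uint8)

def segment_plant(bgr, hue_range=(35, 85), min_sat=40, min_val=40):
    corrected = gray_world_correction(bgr)
    hsv = cv2.cvtColor(corrected, cv2.COLOR_BGR2HSV)
    lower = np.array([hue_range[0], min_sat, min_val], dtype=np.uint8)
    upper = np.array([hue_range[1], 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Clean up small speckles with a morphological opening.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    image = cv2.imread("tomato_plot.jpg")  # hypothetical file name
    if image is not None:
        plant_mask = segment_plant(image)
        cv2.imwrite("plant_mask.png", plant_mask)
```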
|
3 |
MORP: Monocular Orientation Regression Pipeline. Gunderson, Jacob. 01 June 2024.
Orientation estimation of objects plays a pivotal role in robotics, self-driving cars, and augmented reality. Beyond mere position, accurately determining the orientation of objects is essential for constructing precise models of the physical world. While 2D object detection has made significant strides, the field of orientation estimation still faces several challenges. Our research addresses these hurdles by proposing an efficient pipeline which facilitates rapid creation of labeled training data and enables direct regression of object orientation from a single image. We start by creating a digital twin of a physical object using an iPhone, followed by generating synthetic images using the Unity game engine and domain randomization. Our deep learning model, trained exclusively on these synthetic images, demonstrates promising results in estimating the orientations of common objects. Notably, our model achieves a median geodesic distance error of 3.9 degrees and operates at a brisk 15 frames per second.
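The geodesic distance reported above is the standard angular error between a predicted and a ground-truth rotation. A small sketch of how that metric is typically computed is shown below; it is a textbook formula, not code from the pipeline itself.

```python
# Geodesic distance between two 3x3 rotation matrices: the angle of the
# relative rotation, commonly used to score orientation estimates.
import numpy as np

def geodesic_distance_deg(R_pred, R_true):
    """Angle (degrees) of the relative rotation R_pred^T @ R_true."""
    R_rel = R_pred.T @ R_true
    # trace(R) = 1 + 2*cos(theta) for a rotation matrix R.
    cos_theta = (np.trace(R_rel) - 1.0) / 2.0
    cos_theta = np.clip(cos_theta, -1.0, 1.0)  # guard against numerical drift
    return np.degrees(np.arccos(cos_theta))

# Example: a prediction that is off by a 5-degree rotation about the z-axis.
theta = np.radians(5.0)
R_true = np.eye(3)
R_pred = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
print(geodesic_distance_deg(R_pred, R_true))  # ~5.0
```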
|
4 |
Three-Dimensional Fluorescence Microscopy Image Synthesis and Analysis Using Machine Learning. Liming Wu (6622538). 07 February 2023.
Recent advances in fluorescence microscopy enable deeper cellular imaging in living tissues with near-infrared excitation light. High-quality fluorescence microscopy images provide useful information for analyzing biological structures and diagnosing diseases. Nuclei detection and segmentation are two fundamental steps in the quantitative analysis of microscopy images. However, existing machine learning-based approaches are hampered by three main challenges: (1) hand-annotated ground truth is difficult to obtain, especially for 3D volumes; (2) most object detection methods work only on 2D images and are difficult to extend to 3D volumes; and (3) segmentation-based approaches typically cannot distinguish different object instances without proper post-processing steps. In this thesis, we propose new methods for microscopy image analysis, including nuclei synthesis, detection, and segmentation. Due to the limited availability of manually annotated ground truth masks, we first describe how we generate 2D/3D synthetic microscopy images using SpCycleGAN and use them as a data augmentation technique for our detection and segmentation networks. For nuclei detection, we describe our RCNN-SliceNet for nuclei counting and centroid detection using a slice-and-cluster strategy. We then introduce our 3D CentroidNet for nuclei centroid estimation using a vector flow voting mechanism that does not require any post-processing steps. For nuclei segmentation, we first describe our EMR-CNN for nuclei instance segmentation using ensemble learning and a slice fusion strategy. We then present the 3D Nuclei Instance Segmentation Network (NISNet3D) for nuclei instance segmentation using a gradient vector field array. Extensive experiments on a variety of challenging microscopy volumes demonstrate that our approaches accurately detect and segment cell nuclei and outperform other compared methods. Finally, we describe the Distributed and Networked Analysis of Volumetric Image Data (DINAVID) system we developed for biologists to remotely analyze large microscopy volumes using machine learning.
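To make the slice-and-cluster idea concrete, the sketch below shows a simplified 2D-to-3D centroid detection scheme: per-slice detections are clustered across the z-stack so that each cluster yields one 3D nucleus centroid. The thresholding detector and DBSCAN parameters are placeholders; RCNN-SliceNet relies on a trained detection network rather than this toy detector.

```python
# A simplified sketch of slice-and-cluster centroid detection: run a 2D
# detector on each z-slice, then cluster detections across the volume so
# each cluster becomes one 3D centroid. The per-slice detector here is a
# toy intensity threshold standing in for a trained detection network.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_2d_centroids(slice_2d, threshold=0.5):
    # Placeholder detector: returns coordinates of pixels above a threshold.
    ys, xs = np.nonzero(slice_2d > threshold)
    return np.stack([ys, xs], axis=1) if len(ys) else np.empty((0, 2))

def slice_and_cluster(volume, eps=3.0, min_samples=2):
    """volume: (Z, Y, X) array. Returns an array of (z, y, x) centroids."""
    points = []
    for z, slice_2d in enumerate(volume):
        for y, x in detect_2d_centroids(slice_2d):
            points.append([z, y, x])
    if not points:
        return np.empty((0, 3))
    points = np.asarray(points, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    centroids = [points[labels == k].mean(axis=0)
                 for k in set(labels) if k != -1]  # -1 marks noise points
    return np.asarray(centroids)

# Toy volume with two bright blobs:
vol = np.zeros((10, 32, 32))
vol[3:6, 10:13, 10:13] = 1.0
vol[5:8, 20:23, 22:25] = 1.0
print(slice_and_cluster(vol))  # two 3D centroids, one per blob
```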
|
5 |
Live Cell Imaging Analysis Using Machine Learning and Synthetic Food Image Generation. Yue Han (18390447). 17 April 2024.
<p dir="ltr">Live cell imaging is a method to optically investigate living cells using microscopy images. It plays an increasingly important role in biomedical research as well as drug development. In this thesis, we focus on label-free mammalian cell tracking and label-free abnormally shaped nuclei segmentation of microscopy images. We propose a method to use a precomputed velocity field to enhance cell tracking performance. Additionally, we propose an ensemble method, Weighted Mask Fusion (WMF), combining the results of multiple segmentation models with shape analysis, to improve the final nuclei segmentation mask. We also propose an edge-aware Mask RCNN and introduce a hybrid architecture, an ensemble of CNNs and Swin-Transformer Edge Mask R-CNNs (HER-CNN), to accurately segment irregularly shaped nuclei of microscopy images. Our experiments indicate that our proposed method outperforms other existing methods for cell tracking and abnormally shaped nuclei segmentation.</p><p dir="ltr">While image-based dietary assessment methods reduce the time and labor required for nutrient analysis, the major challenge with deep learning-based approaches is that the performance is heavily dependent on the quality of the datasets. Challenges with food data include suffering from high intra-class variance and class imbalance. In this thesis, we present an effective clustering-based training framework named ClusDiff for generating high-quality and representative food images. From experiments, we showcase our method’s effectiveness in enhancing food image generation. Additionally, we conduct a study on the utilization of synthetic food images to address the class imbalance issue in long-tailed food classification.</p>
|
6 |
Segmentation and Deconvolution of Fluorescence Microscopy Volumes. Soonam Lee (6738881). 14 August 2019.
Recent advances in optical microscopy have enabled biologists to collect fluorescence microscopy volumes of cellular and subcellular structures in living tissue. This results in large volumetric datasets that require automated, image-processing-based quantification methods. To quantify biological structures, a first and fundamental step is segmentation. Yet the quantitative analysis of microscopy volumes is hampered by light diffraction, distortion created by lens aberrations in different directions, and the complex variation of biological structures. This thesis describes several proposed segmentation methods to identify various biological structures, such as nuclei or tubules, observed in fluorescence microscopy volumes. For nuclei segmentation, a multiscale edge detection method and a 3D active contour method with inhomogeneity correction are used. Our proposed 3D active contour method with inhomogeneity correction exploits the full 3D microscopy volume while addressing intensity inhomogeneity across the vertical and horizontal directions. For tubule segmentation, an ellipse-model fitting method for tubule boundaries and a convolutional neural network method with inhomogeneity correction are presented. More specifically, the ellipse fitting method uses a combination of adaptive and global thresholding, potentials, z-direction refinement, branch pruning, end point matching, and boundary fitting steps to delineate tubular objects. The deep learning-based method combines intensity inhomogeneity correction and data augmentation with a convolutional neural network architecture. Moreover, this thesis presents a new deconvolution method that improves microscopy image quality without knowledge of the 3D point spread function, using a spatially constrained cycle-consistent adversarial network. The results of the proposed methods are compared visually and numerically with other methods. Experimental results demonstrate that our proposed methods achieve better performance than other methods for nuclei/tubule segmentation as well as deconvolution.
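As a simple illustration of intensity inhomogeneity correction, the sketch below estimates a slowly varying background with a heavy Gaussian blur and divides it out slice by slice. The thesis integrates the correction into the active contour and CNN models themselves; this standalone flat-field-style correction, with an assumed sigma, is only meant to convey the idea.

```python
# A simple sketch of intensity inhomogeneity correction for a microscopy
# volume: estimate the slowly varying background per slice with a large
# Gaussian blur and divide it out, then rescale to [0, 1]. The sigma value
# is an assumption, not a parameter from the thesis.
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_inhomogeneity(volume, sigma=30.0, eps=1e-6):
    """volume: (Z, Y, X) array. Returns a corrected float volume in [0, 1]."""
    corrected = np.empty(volume.shape, dtype=float)
    for z, slice_2d in enumerate(volume):
        background = gaussian_filter(slice_2d.astype(float), sigma=sigma)
        corrected[z] = slice_2d / (background + eps)
    corrected -= corrected.min()
    return corrected / (corrected.max() + eps)

# Toy volume with a left-to-right illumination gradient:
z, y, x = 4, 64, 64
gradient = np.linspace(0.2, 1.0, x)[None, None, :]
volume = np.random.rand(z, y, x) * gradient
print(correct_inhomogeneity(volume).shape)  # (4, 64, 64)
```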
|