471.
Using Multiview Annotation to Annotate Multiple Images Simultaneously. Price, Timothy C., 01 June 2017.
For a system to learn an object recognition model, it needs many positive example images to learn from, so datasets of similar objects are built to train the model. These datasets are most useful when they are large, diverse, and annotated, but obtaining the images and creating the annotations is often slow and costly. We use a method that quickly captures many images of the same objects from different angles and then reconstructs those images into a 3D model. The 3D reconstruction links the different images of the same object together, and we use that information to annotate all of the captured images quickly and cheaply. The annotated images are then used to train the model.
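A minimal sketch of the annotation-propagation idea: once a structure-from-motion step has recovered a 3x4 projection matrix for each image, a single 3D bounding box labeled on the reconstruction can be projected into every view. The function names, and the assumption that camera matrices are available from an SfM tool, are illustrative rather than the thesis's exact pipeline.

```python
# Sketch: propagate one 3D box annotation to every camera view.
# Assumes structure-from-motion has produced a 3x4 projection matrix
# P_i per image (e.g., from an SfM tool); names are illustrative.
import numpy as np

def project_box(P, corners_3d):
    """Project 8 3D box corners (8x3) into one image; return a 2D bbox."""
    homog = np.hstack([corners_3d, np.ones((8, 1))])   # 8x4 homogeneous
    pix = (P @ homog.T).T                              # 8x3 image points
    pix = pix[:, :2] / pix[:, 2:3]                     # perspective divide
    x0, y0 = pix.min(axis=0)
    x1, y1 = pix.max(axis=0)
    return x0, y0, x1, y1

# One manual 3D annotation yields a 2D bounding box in every view:
# projections = {"img_000.jpg": P0, "img_001.jpg": P1, ...}
# boxes = {name: project_box(P, corners) for name, P in projections.items()}
```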
472.
A Semi-Automated Algorithm for Segmenting the Hippocampus in Patient and Control Populations. Muncy, Nathan McKay, 01 June 2016.
Calculating hippocampal volume from Magnetic Resonance (MR) images is an essential task in many studies of neurocognition in healthy and diseased populations. The "gold standard" method involves hand tracing, which is accurate but laborious, requiring expertly trained researchers and significant amounts of time; segmenting large datasets with the standard method is therefore impractical. Current automated pipelines are inaccurate at hippocampal demarcation and volumetry. We developed a semi-automated hippocampal segmentation pipeline based on the Advanced Normalization Tools (ANTs) suite of programs. We applied the pipeline to 70 participant scans (26 female) from groups that included participants diagnosed with autism spectrum disorder, healthy older adults (mean age 74), and healthy younger controls. Hippocampal segmentations obtained with the semi-automated pipeline more closely matched the segmentations of an expert rater than those obtained using FreeSurfer or produced by novice raters. Further, the pipeline performed best when it included manually placed landmarks and when it used a template generated from a heterogeneous sample (one spanning the full variability of group assignments) rather than a template generated from a more homogeneous sample (only individuals within a given age range or with a specific neuropsychiatric diagnosis). Additionally, the semi-automated pipeline required much less time (5 minutes per brain) than manual segmentation (30-60 minutes per brain) or FreeSurfer (8 hours per brain).
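For flavor, a minimal sketch of the core register-then-warp-labels step such a pipeline rests on, written against the ANTsPy bindings of the ANTs suite. The thesis pipeline adds manual landmarks and heterogeneous template construction; all file paths here are placeholders.

```python
# Sketch: template-based hippocampal label propagation with ANTsPy.
# Only the core register-then-warp step is shown; file paths are
# placeholders, not the study's data.
import ants
import numpy as np

template = ants.image_read("template_T1.nii.gz")
labels   = ants.image_read("template_hippocampus_labels.nii.gz")
subject  = ants.image_read("subject_T1.nii.gz")

# Deformable (SyN) registration of the template to the subject scan.
reg = ants.registration(fixed=subject, moving=template,
                        type_of_transform="SyN")

# Warp the template's hippocampus labels into subject space.
subject_labels = ants.apply_transforms(fixed=subject, moving=labels,
                                       transformlist=reg["fwdtransforms"],
                                       interpolator="nearestNeighbor")

# Volume = voxel count x voxel volume (mm^3).
vox_mm3 = np.prod(subject_labels.spacing)
volume_mm3 = (subject_labels.numpy() > 0).sum() * vox_mm3
```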
473.
Multi-scale convolutional neural networks for segmentation of pulmonary structures in computed tomography. Gerard, Sarah E., 01 December 2018.
Computed tomography (CT) is routinely used for diagnosing lung disease and developing treatment plans, producing images of intricate lung structure with submillimeter resolution. Automated segmentation of anatomical structures in such images is important for efficient processing in clinical and research settings. Convolutional neural networks (ConvNets) are largely successful at image segmentation, as they learn discriminative abstract features that yield generalizable predictions. However, hardware memory constraints do not allow deep networks to be trained on high-resolution volumetric CT images. Restricted by these constraints, current applications of ConvNets to volumetric medical images use a subset of the full image, limiting the network's capacity to learn informative global patterns. Local patterns, such as edges, are necessary for precise boundary localization, but they suffer from low specificity; global information can disambiguate structures that are locally similar.
The central thesis of this doctoral work is that both local and global information is important for segmentation of anatomical structures in medical images. A novel multi-scale ConvNet is proposed that divides the learning task across multiple networks; each network learns features over different ranges of scales. It is hypothesized that multi-scale ConvNets will lead to improved segmentation performance, as no compromise needs to be made between image resolution, image extent, and network depth. Three multi-scale models were designed to specifically target segmentation of three pulmonary structures: lungs, fissures, and lobes.
The proposed models were evaluated on diverse datasets and compared to architectures that do not use both local and global features. The lung model was evaluated on humans and three animal species; the results demonstrated that the multi-scale model outperformed single-scale models at different resolutions. The fissure model showed superior performance compared to both a traditional Hessian filter and a standard U-Net architecture that is limited in global extent.
The results demonstrated that multi-scale ConvNets improved pulmonary CT segmentation by incorporating both local and global features using multiple ConvNets within a constrained-memory system. Overall, the proposed pipeline achieved high accuracy and was robust to variations resulting from different imaging protocols, reconstruction kernels, scanners, lung volumes, and pathological alterations; demonstrating its potential for enabling high-throughput image analysis in clinical and research settings.
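As a rough illustration of the multi-scale idea (not the thesis's exact architectures), a two-pathway 3D network in PyTorch: one pathway sees a downsampled, large-extent context volume, the other a full-resolution local patch, so neither resolution nor extent has to be sacrificed. Layer sizes are illustrative.

```python
# Sketch: a two-pathway multi-scale segmentation net in the spirit of
# the thesis; exact architectures are not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                         nn.BatchNorm3d(cout), nn.ReLU(inplace=True))

class MultiScaleSeg(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.local_path = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.global_path = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.head = nn.Conv3d(64, n_classes, 1)

    def forward(self, patch, context):
        # patch:   full-resolution local crop, e.g. (B, 1, 64, 64, 64)
        # context: downsampled, large-extent view around the same center
        loc = self.local_path(patch)
        glo = self.global_path(context)
        # Resample global features onto the local patch grid and fuse.
        glo = F.interpolate(glo, size=loc.shape[2:], mode="trilinear",
                            align_corners=False)
        return self.head(torch.cat([loc, glo], dim=1))
```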
474.
Automated delineation and quantitative analysis of blood vessels in retinal fundus image. Xu, Xiayu, 01 May 2012.
Automated fundus image analysis plays an important role in the computer-aided diagnosis of ophthalmologic disorders. Many eye disorders, as well as cardiovascular disorders, are known to be related to retinal vasculature changes, and many studies have explored these relationships. However, most such studies are based on limited data obtained with manual or semi-automated methods, owing to the lack of automated techniques for measuring and analyzing the retinal vasculature. In this thesis, a fully automated retinal vessel width measurement technique is proposed. This novel method casts the accurate two-dimensional vessel boundary delineation problem as an optimal surface segmentation problem in three dimensions, which is in turn transformed into finding a minimum-cost closed set in a vertex-weighted geometric graph. The problem is modeled differently for straight vessel segments and for branch points, since the local conditions differ. Furthermore, many retinal image analysis tasks need the locations of the optic disc and fovea as prerequisite information, for example when analyzing the relationship between vessel width and distance to the optic disc. Hence, a simultaneous optic disc and fovea detection method is presented, which includes a two-step classification over three classes. The major contributions of this thesis are: 1) a fully automated vessel width measurement technique for retinal blood vessels, 2) a simultaneous optic disc and fovea detection method, 3) validation of the methods on multiple datasets, and 4) application of the proposed methods in multiple retinal vasculature analysis studies.
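To make the graph formulation concrete, a heavily simplified sketch: in 2D, with a vessel segment unrolled so each column crosses the boundary exactly once, the boundary is a minimum-cost, column-monotone path through a gradient-derived cost image. The thesis solves the full 3D optimal-surface problem as a minimum-cost closed set via an s-t cut; this dynamic program only illustrates the underlying principle.

```python
# Sketch: minimum-cost boundary as a column-monotone path (a 2D
# simplification of optimal surface segmentation, for illustration only).
import numpy as np

def min_cost_boundary(cost, max_jump=1):
    """cost: (rows, cols) array; returns one boundary row per column."""
    rows, cols = cost.shape
    acc = cost.astype(np.float64).copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]       # reachable rows in prior column
            back[r, c] = lo + int(np.argmin(prev))
            acc[r, c] += prev.min()
    # Trace the cheapest path back from the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]

# Vessel width per column = distance between the two recovered boundaries
# (each boundary found with its own gradient-based cost image).
```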
475.
Foreground Removal in a Multi-Camera System. Mortensen, Daniel T., 01 December 2019.
Traditionally, whiteboards have been used to brainstorm, teach, and convey ideas to others. However, distributing whiteboard content remotely can be challenging. To solve this problem, a multi-camera system was developed that can be scaled to broadcast an arbitrarily large writing surface while removing objects not related to the whiteboard content. Prior research has addressed combining multiple images, identifying and removing unrelated objects (also referred to as foreground) in a single image, and correcting for warping differences between camera frames. However, this is the first attempt to solve the problem with a multi-camera system.
The main components of this problem are stitching the input images together, identifying foreground material, and replacing the foreground information with the most recent background (desired) information. The problem thus divides into two parts: fusing multiple images into one cohesive frame, and detecting and removing foreground objects. For the first, homographic transformations create a mathematical mapping from each input image to the desired reference frame, and blending techniques then remove artifacts that remain after the perspective transform. For the second, statistical tests and modeling are used in conjunction with additional classification algorithms.
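A minimal sketch of those two components using OpenCV. The homography is assumed to come from matched points between each camera and the reference whiteboard plane, and the simple difference-based foreground test and its threshold stand in for the thesis's statistical models.

```python
# Sketch: homography warp to a shared whiteboard plane, plus a simple
# keep-the-last-background foreground remover. Thresholds are illustrative.
import cv2
import numpy as np

def warp_to_reference(frame, src_pts, dst_pts, out_size):
    """Map one camera's frame onto the shared whiteboard plane."""
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC)
    return cv2.warpPerspective(frame, H, out_size)

class WhiteboardModel:
    """Keep the most recent background (writing); mask out foreground."""
    def __init__(self, first_frame, diff_thresh=30):
        self.background = first_frame.astype(np.float32)
        self.thresh = diff_thresh

    def update(self, frame):
        diff = np.abs(frame.astype(np.float32) - self.background)
        fg = diff.max(axis=2) > self.thresh        # crude foreground test
        fg = cv2.dilate(fg.astype(np.uint8), None,
                        iterations=3).astype(bool)  # pad object borders
        # Only background pixels refresh the stored whiteboard content.
        self.background[~fg] = frame[~fg]
        return self.background.astype(np.uint8)
```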
476.
Segmentation of lung tissue in CT images with disease and pathology. Hua, Panfang, 01 December 2010.
Lung segmentation is an important first step for quantitative lung CT image analysis and computer-aided diagnosis. However, accurate and automated lung CT segmentation can be made difficult by the presence of abnormalities: many lung diseases change tissue density, producing intensity changes in the CT data, so intensity-only segmentation algorithms will not work for most pathological lungs. This thesis presents two automatic algorithms for pathological lung segmentation. One is based on the geodesic active contour; the other uses a graph search driven by a cost function combining intensity, gradient, boundary smoothness, and rib information. The methods were tested on several 3D thorax CT data sets with lung disease. Taking manual segmentation as the gold standard, we validated our methods by comparing our automatic results against Hu's method, using sensitivity, specificity, and Hausdorff distance as evaluation measures.
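A brief sketch of the geodesic-active-contour half of the approach, using scikit-image's morphological implementation as a stand-in for the thesis's own; the graph-search method with its rib-informed cost is not reproduced here, and parameters are illustrative.

```python
# Sketch: geodesic active contour for lung segmentation via scikit-image.
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def segment_lung_gac(ct_volume, init_mask, iterations=200):
    """Evolve an initial lung mask toward intensity edges in the CT."""
    # Edge-stopping image: values drop near strong gradients, so the
    # evolving contour slows and locks onto lung boundaries.
    gimage = inverse_gaussian_gradient(ct_volume.astype(np.float32),
                                       alpha=100.0, sigma=2.0)
    return morphological_geodesic_active_contour(
        gimage, iterations, init_level_set=init_mask,
        smoothing=2, balloon=1)  # balloon>0 inflates toward the borders
```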
477.
Novel use of video and image analysis in a video compression system. Stobaugh, John David, 01 May 2015.
As consumer demand grows for higher-quality video at lower bit-rates, so does the need for more sophisticated methods of compressing videos into manageable file sizes. This research attempts to address these concerns while maintaining reasonable encoding times. Modern segmentation and grouping analysis are combined with code vectorization techniques and other optimization paradigms to improve quality and performance within the next-generation coding standard, High Efficiency Video Coding (HEVC). On average, this work achieved a 50% decrease in encoder run-time with only marginal decreases in perceived quality.
478.
Background subtraction using ensembles of classifiers with an extended feature set. Klare, Brendan F., 30 June 2008.
The limitations of foreground segmentation in difficult environments using standard color-space features often result in poor performance during autonomous tracking. This work presents a new approach for classifying foreground and background pixels in image sequences by employing an ensemble of classifiers, each operating on a single feature type: the three RGB features, gradient magnitude and orientation features, and eight Haar features. Each of the thirteen per-feature classifiers implements a Mixture of Gaussians-based unsupervised background classification algorithm. The non-thresholded classification decision scores of the classifiers are fused by averaging their outputs into a single hypothesis. Results of the ensemble classifier on three separate and distinct data sets are compared, via ROC graphs, to using RGB features alone. The extended feature vector outperforms the RGB features on all three data sets and shows a large improvement on two of them; both are outdoor data sets, one with global illumination changes and the other with many local illumination changes. When using the entire feature set and operating at a 90% true positive rate, the per-pixel false alarm rate is reduced fivefold on one data set and sixfold on the other.
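A compact sketch of the fusion scheme: one background model per feature channel, with the non-thresholded scores averaged into a single hypothesis. For brevity, a single running Gaussian per feature replaces the Mixture of Gaussians used in the work, and the feature stack is assumed to be precomputed.

```python
# Sketch: per-feature background models whose non-thresholded scores are
# averaged. A full per-feature Mixture of Gaussians (as in the work)
# would replace the single running Gaussian used here.
import numpy as np

class FeatureBackgroundModel:
    """One background model per feature channel; scores fused by mean."""
    def __init__(self, first_feats, lr=0.02):
        # first_feats: (H, W, F) stack, e.g. RGB + gradient + Haar features.
        self.mean = first_feats.astype(np.float32)
        self.var = np.full_like(self.mean, 25.0)
        self.lr = lr

    def score(self, feats):
        """Mean normalized deviation across the F per-feature classifiers."""
        z = (feats - self.mean) ** 2 / (self.var + 1e-6)
        return z.mean(axis=2)           # fused, non-thresholded score

    def update(self, feats, fg_mask):
        bg = ~fg_mask                   # adapt only where background
        d = feats[bg] - self.mean[bg]
        self.mean[bg] += self.lr * d
        self.var[bg] += self.lr * (d ** 2 - self.var[bg])

# fg_mask = model.score(feats) > tau   # tau chosen from the ROC analysis
```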
479.
The Tip of the Blade: Self-Injury Among Early Adolescents. Alfonso, Moya L., 25 June 2007.
This study described self-injury within a general adolescent population, using secondary analysis of data gathered with the middle school Youth Risk Behavior Survey (YRBS) from 1,748 sixth- and eighth-grade students in eight middle schools in a large, southeastern county in Florida. A substantial percentage of students surveyed (28.4%) had tried self-injury. The prevalence of having ever tried self-injury did not vary by race or ethnicity, grade, school attended, or age, but did differ by gender. When controlling for all other variables in the multivariate model, including suicide, having ever tried self-injury was associated with peer self-injury, inhalant use, belief in possibilities, abnormal eating behaviors, and suicide scale scores. Youth who knew a friend who had self-injured, had used inhalants, had higher levels of abnormal eating behaviors, or had higher levels of suicidal tendencies were at increased risk for having tried self-injury, while youth with a high belief in their possibilities were at decreased risk. During the past month, most youth had never harmed themselves on purpose. Approximately 15% had harmed themselves one time; smaller proportions had harmed themselves more often, including two or three different times (5%), four or five different times (2%), and six or more different times (3%). The frequency of self-injury did not vary by gender, race or ethnicity, grade, or school attended. Almost half of students surveyed (46.8%) knew a friend who had harmed themselves on purpose. Peer self-injury showed multivariate relationships with gender, having ever been cyberbullied, having ever tried self-injury, grade level, and substance use: being female, having been cyberbullied, having tried self-injury, being in eighth grade, and higher levels of substance use all placed youth at increased risk of knowing a peer who had self-injured. Chi-squared Automatic Interaction Detection (CHAID) was used to identify segments of youth at greatest and least risk of self-injury, frequent self-injury, and knowing a friend who had harmed themselves on purpose (i.e., peer self-injury).
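For readers unfamiliar with CHAID, a small sketch of its core step: cross-tabulate each candidate predictor against the outcome and split on the one with the smallest chi-squared p-value. Full CHAID also merges categories and recurses on each segment; the variable names below are illustrative, not the actual YRBS item names.

```python
# Sketch: the chi-squared split selection at the heart of CHAID.
import pandas as pd
from scipy.stats import chi2_contingency

def best_chaid_split(df, outcome, predictors):
    """Return (predictor, p_value) for the most significant split."""
    best = (None, 1.0)
    for col in predictors:
        table = pd.crosstab(df[col], df[outcome])
        if table.shape[0] < 2 or table.shape[1] < 2:
            continue                     # nothing to split on
        _, p, _, _ = chi2_contingency(table)
        if p < best[1]:
            best = (col, p)
    return best

# e.g. best_chaid_split(yrbs, "ever_self_injured",
#                       ["gender", "grade", "peer_self_injury",
#                        "inhalant_use"])
```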
480.
Airway segmentation of the ex-vivo mouse lung volume using voxel based classification. Yavarna, Tarunashree, 01 December 2010.
Pulmonary disease spreads rapidly among humans and ranks as the third leading killer in the United States. Computed tomography (CT) scanning allows us to obtain detailed images of the pulmonary anatomy, including the airways. The complexity of the airway tree makes manual segmentation tedious, time-consuming, and variable across individuals. The resulting airway segmentation, whether produced manually or with the aid of computers, can then be used to measure airway geometry, study airway reactivity, and guide surgical interventions.
This thesis addresses these problems and presents a fully automated technique for segmenting the airway tree in three-dimensional (3-D) micro-CT images of the thorax of an ex-vivo mouse. The technique is a multi-step approach consisting of:
1. Calculation of features for individual voxels of the micro-CT image,
2. Selection of the best features for classification (from 1),
3. KNN classification of voxels using the best selected features (from 2), and
4. Region-growing segmentation of the KNN-classified probability image.
The KNN algorithm classifies the image voxels into airway and non-airway classes based on the image features; the resulting probability image is then processed with the region-growing segmentation algorithm to obtain the final segmentation. The segmented airway tree of the ex-vivo mouse lung volume can then be analyzed with a commercial software package to obtain measurements.
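A condensed sketch of the four steps above, assuming simple intensity and gradient voxel features in place of the thesis's full feature set; feature choices and thresholds are illustrative.

```python
# Sketch: voxel features -> KNN probability image -> region growing.
import numpy as np
from scipy import ndimage
from sklearn.neighbors import KNeighborsClassifier

def voxel_features(vol):
    """Stack simple per-voxel features (intensity, smoothed, gradient)."""
    feats = [vol,
             ndimage.gaussian_filter(vol, 1.0),
             ndimage.gaussian_gradient_magnitude(vol, 1.0)]
    return np.stack([f.ravel() for f in feats], axis=1)

def airway_probability(vol, labeled_idx, labels, k=15):
    """Fit KNN on labeled voxels; return a per-voxel airway probability."""
    X = voxel_features(vol)
    knn = KNeighborsClassifier(n_neighbors=k).fit(X[labeled_idx], labels)
    return knn.predict_proba(X)[:, 1].reshape(vol.shape)  # labels in {0,1}

def region_grow(prob, seed, thresh=0.5):
    """Keep the connected high-probability component containing the seed."""
    comp, _ = ndimage.label(prob > thresh)
    return comp == comp[seed]            # seed: (z, y, x) inside the trachea
```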