481. The Tip of the Blade: Self-Injury Among Early Adolescents
Alfonso, Moya L. (25 June 2007)
This study described self-injury within a general adolescent population through secondary analysis of data gathered with the middle school Youth Risk Behavior Survey (YRBS) from 1,748 sixth- and eighth-grade students in eight middle schools in a large southeastern county in Florida. A substantial percentage of students surveyed (28.4%) had tried self-injury. The prevalence of having ever tried self-injury did not vary by race or ethnicity, grade, school attended, or age, but did differ by gender. When controlling for all other variables in the multivariate model, including suicide, having ever tried self-injury was associated with peer self-injury, inhalant use, belief in possibilities, abnormal eating behaviors, and suicide scale scores. Youth who knew a friend who had self-injured, had used inhalants, had higher levels of abnormal eating behaviors, or had higher levels of suicidal tendencies were at increased risk of having tried self-injury, whereas youth with a high belief in their possibilities were at decreased risk.
During the past month, most youth had never harmed themselves on purpose. Approximately 15% had harmed themselves one time; smaller proportions had harmed themselves two or three different times (5%), four or five different times (2%), or six or more different times (3%). The frequency of self-injury did not vary by gender, race or ethnicity, grade, or school attended. Almost half of students surveyed (46.8%) knew a friend who had harmed themselves on purpose. Peer self-injury demonstrated multivariate relationships with gender, having ever been cyberbullied, having ever tried self-injury, grade level, and substance use: being female, having been cyberbullied, having tried self-injury, being in eighth grade, and higher levels of substance use placed youth at increased risk of knowing a peer who had self-injured. Chi-squared Automatic Interaction Detection (CHAID) was used to identify segments of youth at greatest and least risk of self-injury, frequent self-injury, and knowing a friend who had harmed themselves on purpose (i.e., peer self-injury).
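A minimal sketch of the kind of multivariable risk model described above, fit as a logistic regression; the survey file and column names are hypothetical placeholders rather than the study's actual YRBS variable coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative YRBS-style data frame; file and column names are placeholders.
df = pd.read_csv("yrbs_middle_school.csv")

# Multivariable logistic model: ever tried self-injury vs. reported correlates.
model = smf.logit(
    "tried_self_injury ~ peer_self_injury + inhalant_use + "
    "belief_in_possibilities + abnormal_eating + suicide_scale",
    data=df,
).fit()

# Odds ratios: values above 1 indicate increased risk, below 1 decreased risk.
print(np.exp(model.params))
```

Odds ratios above 1 would correspond to the increased-risk factors reported in the abstract, and below 1 to protective factors such as belief in possibilities.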
482. Airway segmentation of the ex-vivo mouse lung volume using voxel based classification
Yavarna, Tarunashree (01 December 2010)
Pulmonary disease spreads rapidly among humans and is the third leading cause of death in the United States. Computed tomography (CT) scanning allows us to obtain detailed images of the pulmonary anatomy, including the airways. The complexity of the airway tree makes manual segmentation tedious, time-consuming, and variable across individuals. The resulting airway segmentation, whether produced manually or with the aid of computers, can then be used to measure airway geometry, study airway reactivity, and guide surgical interventions.
This thesis addresses these problems and proposes a fully automated technique for segmenting the airway tree in three-dimensional (3-D) micro-CT images of the thorax of an ex-vivo mouse. The technique is a multi-step approach consisting of:
1. The feature calculation of individual voxels of the micro-CT image,
2. Selection of the best features for classification (obtained from 1),
3. KNN-classification of voxels by the best selected features (from 2) and
4. Region growing segmentation of the KNN classified probability image.
The KNN classification algorithm is used to classify the voxels of the image into airway and non-airway voxels based on the image features, and the resulting probability image is then processed with a region-growing segmentation algorithm to obtain the final segmentation. The segmented airway tree of the ex-vivo mouse lung volume can then be analyzed with a commercial software package to obtain airway measurements.
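A minimal sketch of steps 3 and 4, assuming the per-voxel features (step 1) and a selected feature subset (step 2) have already been computed and a seed voxel has been placed in the trachea; the classifier settings, flood-fill tolerance, and function name are illustrative rather than the thesis's actual parameters.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from skimage.segmentation import flood

# features: (n_voxels, n_features) array for the whole volume
# train_features / train_labels: labelled voxels (1 = airway, 0 = non-airway)
# seed: index tuple of a voxel known to lie inside the airway lumen
def segment_airway(features, train_features, train_labels, volume_shape, seed):
    # Step 3: KNN classification of voxels into airway / non-airway.
    knn = KNeighborsClassifier(n_neighbors=7)
    knn.fit(train_features, train_labels)
    airway_prob = knn.predict_proba(features)[:, 1].reshape(volume_shape)

    # Step 4: region growing on the probability image from the seed voxel,
    # keeping connected voxels whose probability stays close to the seed's.
    mask = flood(airway_prob, seed, tolerance=0.3)
    return airway_prob, mask
```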
483. Graph-based segmentation of lymph nodes in CT data
Wang, Yao (01 December 2010)
The quantitative assessment of lymph node size plays an important role in the treatment of diseases such as cancer. In current clinical practice, lymph nodes are analyzed manually based on very rough measures of long and/or short axis length, which is error-prone. In this paper we present a graph-based lymph node segmentation method to enable the computer-aided three-dimensional (3D) assessment of lymph node size. Our method has been validated on 111 cases of enlarged lymph nodes imaged with X-ray computed tomography (CT). The mean unsigned surface positioning error was around 0.5 mm, the mean Hausdorff distance was under 3.26 mm, and the mean Dice coefficient was above 0.77. On average, 5.3 seconds were required by our algorithm to segment a lymph node.
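For the validation metrics quoted above, a minimal sketch of how the Dice coefficient and Hausdorff distance could be computed from two binary 3-D masks (algorithm result vs. reference); the helper names and the isotropic-spacing assumption are illustrative, not the study's evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(seg, ref):
    # Overlap-based agreement between two binary masks (1.0 = identical).
    seg, ref = seg.astype(bool), ref.astype(bool)
    overlap = np.logical_and(seg, ref).sum()
    return 2.0 * overlap / (seg.sum() + ref.sum())

def hausdorff_distance(seg, ref, spacing=1.0):
    # Symmetric Hausdorff distance between the two voxel point sets,
    # assuming isotropic voxel spacing (in mm) for simplicity.
    seg_pts = np.argwhere(seg) * spacing
    ref_pts = np.argwhere(ref) * spacing
    return max(directed_hausdorff(seg_pts, ref_pts)[0],
               directed_hausdorff(ref_pts, seg_pts)[0])
```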
484. Multistructure segmentation of multimodal brain images using artificial neural networks
Kim, Eun Young (01 December 2009)
A method for simultaneously segmenting multiple anatomical brain structures from multimodal MR images has been developed. An artificial neural network (ANN) was trained on a set of feature vectors created by a combination of high-resolution registration methods, atlas-based spatial probability distributions, and a training set of 16 expert-traced data sets. The feature vectors were adapted to improve ANN segmentation performance: 1) a modified spatial location to exploit the structural symmetry of the human brain, 2) neighbors along the priors' descent for directional consistency, and 3) candidate vectors based on the priors for the segmentation of multiple structures. The trained neural network was then applied to 8 data sets, and the results were compared with expertly traced structures for validation. Several reliability metrics, including relative overlap, similarity index, and intraclass correlation of the ANN-generated segmentations against the manual traces, were similar to or higher than those of previously developed methods. The ANN provides a level of between-subject consistency and a time efficiency, relative to human labor, that allow it to be used for very large studies.
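A minimal sketch of the voxel-classification idea, using a generic multilayer perceptron on per-voxel feature vectors; the feature layout, network size, and function names are assumptions for illustration, not the trained ANN described in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# train_features: per-voxel vectors (e.g. spatial location, atlas prior
# probabilities, multimodal intensities); train_labels: structure indices
# with 0 reserved for background.
def train_multistructure_ann(train_features, train_labels):
    ann = MLPClassifier(hidden_layer_sizes=(60,), max_iter=500)
    ann.fit(train_features, train_labels)
    return ann

def segment_volume(ann, features, volume_shape):
    # Each voxel is assigned the structure with the highest classifier output.
    labels = ann.predict(features)
    return labels.reshape(volume_shape)
```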
485. A graph-based method for segmentation of tumors and lymph nodes in volumetric PET images
Van Tol, Markus Lane (01 December 2014)
For radiation treatment of cancer and image-based quantitative assessment of treatment response, target structures such as tumors and lymph nodes need to be segmented. In current clinical practice this is done manually, which is time-consuming and error-prone. To address this issue, a semi-automated graph-based segmentation approach was developed.
The approach was validated on 60 real datasets, each segmented manually and with the new algorithm by two users, and on 44 scans of a phantom dataset. The results showed a statistically significant improvement in intra- and interoperator consistency of the segmentations, a statistically significant improvement in segmentation speed, and reasonable accuracy against consensus images and phantoms. As such, the algorithm can be applied in cases that would otherwise require manual segmentation.
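The interoperator-consistency claim above could be checked with a paired nonparametric test on per-case agreement scores, as in the sketch below; the file names and the choice of the Wilcoxon signed-rank test are assumptions, not necessarily the statistics used in the thesis.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-case interoperator agreement (Dice between the two users'
# segmentations of the same lesion), once for manual tracing and once for the
# semi-automated method; the arrays and file names are placeholders.
dice_manual = np.load("interop_dice_manual.npy")        # shape: (n_cases,)
dice_algorithm = np.load("interop_dice_algorithm.npy")  # shape: (n_cases,)

# Paired test of whether the algorithm improves interoperator consistency.
stat, p_value = wilcoxon(dice_algorithm, dice_manual, alternative="greater")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```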
486. A combined machine-learning and graph-based framework for the 3-D automated segmentation of retinal structures in SD-OCT images
Antony, Bhavna Josephine (01 December 2013)
Spectral-domain optical coherence tomography (SD-OCT) is a non-invasive imaging modality that allows for the quantitative study of retinal structures. SD-OCT has begun to find widespread use in the diagnosis and management of various ocular diseases. While commercial scanners provide limited analysis of a small number of retinal layers, the automated segmentation of retinal layers and other structures within these volumetric images is quite a challenging problem, especially in the presence of disease-induced changes.
The incorporation of a priori information, ranging from qualitative assessments of the data to automatically learned features, can significantly improve the performance of automated methods. Here, a combined machine-learning and graph-theoretic approach is presented for the automated segmentation of retinal structures in SD-OCT images. Machine-learning approaches are used to learn textural features from a training set, which are then incorporated into the graph-theoretic approach. The impact of the learned features on the final segmentation accuracy of the graph-theoretic approach is carefully evaluated so as to avoid incorporating learned components that do not improve the method. The adaptability of this versatile combination of machine-learning and graph-theoretic approaches is demonstrated through the segmentation of retinal surfaces in images obtained from humans, mice, and canines.
In addition to this framework, a novel formulation of the graph-theoretic approach is described whereby surfaces with a disruption can be segmented. By incorporating the boundary of the "hole" into the feasibility definition of the set of surfaces, the final result consists of not only the surfaces but the boundary of the hole as well. Such a formulation can be used to model the neural canal opening (NCO) in SD-OCT images, which appears as a 3-D planar hole disrupting the surfaces in its vicinity. A machine-learning based approach was also used here to learn descriptive features of the NCO.
Thus, the major contributions of this work include 1) a method for the automated correction of axial artifacts in SD-OCT images, 2) a combined machine-learning and graph-theoretic framework for the segmentation of retinal surfaces in SD-OCT images (applied to humans, mice and canines), 3) a novel formulation of the graph-theoretic approach for the segmentation of multiple surfaces and their shared hole (applied to the segmentation of the neural canal opening), and 4) the investigation of textural markers that could precede structural and functional change in degenerative retinal diseases.
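A heavily simplified 2-D illustration of how learned costs and a graph search fit together: per-pixel boundary costs (for example, one minus a classifier's boundary probability) are searched for a single smooth surface, one row per column, by dynamic programming. This toy sketch stands in for the thesis's 3-D graph-theoretic formulation; the function name and smoothness constraint are assumptions.

```python
import numpy as np

def segment_surface(cost, max_jump=2):
    # cost: (n_rows, n_cols) per-pixel boundary cost for one B-scan.
    # max_jump: how far (in rows) the surface may move between columns.
    n_rows, n_cols = cost.shape
    dp = np.full((n_rows, n_cols), np.inf)
    back = np.zeros((n_rows, n_cols), dtype=int)
    dp[:, 0] = cost[:, 0]
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - max_jump), min(n_rows, r + max_jump + 1)
            prev = int(np.argmin(dp[lo:hi, c - 1])) + lo
            dp[r, c] = cost[r, c] + dp[prev, c - 1]
            back[r, c] = prev
    # Trace the minimum-cost surface back from the last column.
    surface = np.zeros(n_cols, dtype=int)
    surface[-1] = int(np.argmin(dp[:, -1]))
    for c in range(n_cols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface
```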
487. Medical imaging segmentation assessment via Bayesian approaches to fusion, accuracy and variability estimation with application to head and neck cancer
Ghattas, Andrew Emile (01 August 2017)
With the advancement of technology, medical imaging has become a fast-growing area of research. Some imaging questions, such as diagnosing a broken bone from a 2-D X-ray image, require little physician analysis. More complicated questions based on 3-D scans, such as computerized tomography (CT), can be much more difficult to answer; for example, estimating tumor growth to evaluate malignancy, which informs whether intervention is necessary. This requires careful delineation of the different structures in the image, for example the tumor versus normal tissue; this is referred to as segmentation. Currently, the gold standard of segmentation is for a radiologist to manually trace structure edges in the 3-D image, which can be extremely time-consuming; moreover, manual segmentation results can differ drastically between and even within radiologists. A more reproducible, less variable, and more time-efficient segmentation approach would drastically improve medical treatment. This potential, together with the continued increase in computing power, has led to computationally intensive semiautomated segmentation algorithms. Widespread use of segmentation algorithms is limited by the difficulty of validating their performance. Fusion models, such as STAPLE, have been proposed as a way to combine multiple segmentations into a consensus ground truth; this allows both manual and semiautomated segmentations to be evaluated against the consensus. Once a consensus ground truth is obtained, a multitude of approaches have been proposed for evaluating different aspects of segmentation performance: segmentation accuracy, and between- and within-reader variability.
The focus of this dissertation is threefold. First, a simulation-based tool is introduced to allow for the validation of fusion models. The simulation properties closely follow a real dataset, in order to ensure that they mimic reality. Second, a statistical hierarchical Bayesian fusion model is proposed in order to estimate a consensus ground truth within a robust statistical framework. The model is validated using the simulation tool and compared to other fusion models, including STAPLE. Additionally, the model is applied to real datasets, and the consensus ground truth estimates are compared across different fusion models. Third, a statistical hierarchical Bayesian performance model is proposed in order to estimate segmentation-method-specific accuracy and between- and within-reader variability. An extensive simulation study is performed to validate the model's parameter estimation and coverage properties. Additionally, the model is fit to a real data source and performance estimates are summarized.
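To make the STAPLE-style fusion mentioned above concrete, the sketch below gives a minimal expectation-maximization fusion for binary segmentations, estimating a per-voxel consensus along with each reader's sensitivity and specificity. It is a simplification of the classical STAPLE algorithm with assumed initializations and a fixed prevalence prior, not the hierarchical Bayesian fusion model developed in the dissertation.

```python
import numpy as np

def staple_binary(D, prior=None, n_iter=50):
    # D: (n_voxels, n_raters) matrix of 0/1 decisions, one column per reader.
    D = D.astype(float)
    n_voxels, n_raters = D.shape
    gamma = D.mean() if prior is None else prior  # fixed foreground prevalence
    p = np.full(n_raters, 0.9)                    # initial sensitivities
    q = np.full(n_raters, 0.9)                    # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground.
        a = gamma * np.prod(p**D * (1 - p)**(1 - D), axis=1)
        b = (1 - gamma) * np.prod(q**(1 - D) * (1 - q)**D, axis=1)
        w = a / (a + b + 1e-12)
        # M-step: update each reader's performance parameters.
        p = (w @ D) / w.sum()
        q = ((1 - w) @ (1 - D)) / (1 - w).sum()
    return w, p, q
```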
488. Inner-Shelf Bottom Boundary Layer Development and Sediment Suspension During Tropical Storm Isadore on the West Florida Shelf
Brodersen, Justin G. (18 June 2004)
Observations of the bottom boundary layer on the inner West Florida Shelf were made with a downward-looking pulse-coherent acoustic Doppler profiler throughout the passage of Tropical Storm Isadore during September 2002. The storm passed through the Gulf of Mexico roughly 780 km offshore of the Florida study site. Significant wave heights ranged from 0 m to 2.5 m within a span of eight days. The non-invasive, 5 cm resolution measurements of mean flow near the bed (bottom meter) were used to estimate bed shear velocity and bottom roughness using the standard log-layer approach, and the high-resolution data provided a unique opportunity to examine boundary layer structure. The calculated friction velocity due to currents (u*c) and apparent bottom roughness (z0) decreased considerably when velocity measurements closer to the bed were emphasized. This observation may be indicative of segmentation within the bottom boundary layer and has implications for the common practice of estimating bed shear stress from measurements taken more than a few tens of centimeters above the bed. Acoustic backscatter strength (ABS) was used as a proxy for sediment suspension in the water column, revealing no relationship between current parameters and sediment resuspension during the ten-day data set. When wave effects were included, following the work of Grant and Madsen and others, strong relationships between wave and wave-current parameters and ABS as a proxy for sediment resuspension were evident.
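A minimal sketch of the standard log-layer approach mentioned above: fitting the logarithmic velocity profile u(z) = (u*/kappa) ln(z/z0) to near-bed mean speeds yields the friction velocity from the slope and the roughness length from the intercept. The heights, speeds, and burst shown are illustrative values, not the study's data.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def log_layer_fit(z, u):
    # Straight-line fit of u against ln(z): slope = u*/kappa,
    # intercept = -(u*/kappa) ln(z0).
    slope, intercept = np.polyfit(np.log(z), u, 1)
    u_star = KAPPA * slope
    z0 = np.exp(-intercept / slope)
    return u_star, z0

# Example burst: heights above the bed (m) and mean current speeds (m/s).
z = np.array([0.10, 0.20, 0.35, 0.55, 0.80])
u = np.array([0.12, 0.15, 0.17, 0.19, 0.21])
u_star, z0 = log_layer_fit(z, u)
print(f"u* = {u_star:.4f} m/s, z0 = {z0:.4f} m")
```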
489. Demand analysis and privacy of floating car data
Camilo, Giancarlo (13 September 2019)
This thesis investigates two research problems in analyzing floating car data (FCD): automated segmentation and privacy. For the former, we design an automated segmentation method based on the social functions of an area to enhance existing traffic demand analysis. This segmentation is used to create an extension of the traditional origin-destination matrix that can represent origins of traffic demand. The methods are then combined for interactive visualization of traffic demand, using a floating car dataset from a ride-hailing application. For the latter, we investigate the properties of FCD that may lead to privacy leaks. We present an attack on a real-world taxi dataset, showing that FCD, even though anonymized, can potentially leak privacy.
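As a rough illustration of the origin-destination analysis described above, the sketch below counts trips between zones to form an OD matrix; the file and column names are placeholders, and the assignment of trips to zones (the social-function-based segmentation) is assumed to have been done already.

```python
import pandas as pd

# Toy origin-destination (OD) matrix from floating car trip records.
trips = pd.read_csv("fcd_trips.csv")  # columns: trip_id, origin_zone, dest_zone

od_matrix = (
    trips.groupby(["origin_zone", "dest_zone"])
         .size()
         .unstack(fill_value=0)   # rows: origin zones, columns: destination zones
)
print(od_matrix)
```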
490. Virtual image sensors to track human activity in a smart house
Tun, Min Han (January 2007)
With the advancement of computer technology, demand for more accurate and intelligent monitoring systems has also risen. Uses of computer vision and video analysis range from industrial inspection to surveillance. Object detection and segmentation are the first and most fundamental tasks in the analysis of dynamic scenes. Traditionally, this detection and segmentation are done through temporal differencing or statistical modelling methods. One of the most widely used background modelling and segmentation algorithms is the Mixture of Gaussians method developed by Stauffer and Grimson (1999). During the past decade many such algorithms have been developed, ranging from parametric to non-parametric. Many of them utilise pixel intensities to model the background, but some use texture properties such as Local Binary Patterns. These algorithms function quite well under normal environmental conditions, and each has its own set of advantages and shortcomings. However, they share two drawbacks. The first is the stationary object problem: when moving objects become stationary, they are merged into the background. The second is that of light changes: when rapid illumination changes occur in the environment, these background modelling algorithms produce large areas of false positives.
These algorithms are capable of adapting to the change; however, the quality of the segmentation is very poor during the adaptation phase. In this thesis, a framework to suppress these false positives is introduced. Image properties such as edges and textures are utilised to reduce the amount of false positives during the adaptation phase. The framework is built on the idea of sequential pattern recognition. In any background modelling algorithm, the importance of multiple image features as well as different spatial scales cannot be overlooked; failure to attend to these two factors makes it difficult to detect and reduce false alarms caused by rapid light change and other conditions. The use of edge features in false alarm suppression is also explored. Edges are somewhat more resistant to environmental changes in video scenes; the assumption here is that regardless of environmental changes, such as illumination change, the edges of objects should remain the same. The edge-based approach is tested on several videos containing rapid light changes and shows promising results. Texture is then used to analyse video images and remove false alarm regions. A texture gradient approach and Laws Texture Energy Measures are used to find and remove false positives, and Laws Texture Energy Measures are found to perform better than the gradient approach. The results of using edges, texture, and different combinations of the two in false positive suppression are also presented in this work. This false positive suppression framework is applied to a smart house scenario that uses cameras to model "virtual sensors" that detect interactions of occupants with devices. Results show that the accuracy of the virtual sensors, compared with the ground truth, is improved.
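As a rough sketch of the kind of pipeline discussed above, the snippet below runs OpenCV's Mixture-of-Gaussians background subtractor and applies a very crude edge-based check: if the edge content inside the detected foreground barely differs from the background model's edges, the detection is treated as a likely illumination-induced false positive. The video path, thresholds, and suppression rule are assumptions for illustration, not the thesis's framework.

```python
import cv2

cap = cv2.VideoCapture("smart_house.avi")              # placeholder video
mog = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg_mask = mog.apply(frame)                          # raw foreground mask
    bg_gray = cv2.cvtColor(mog.getBackgroundImage(), cv2.COLOR_BGR2GRAY)

    # Edge maps of the current frame and of the learned background model.
    frame_edges = cv2.Canny(gray, 100, 200)
    bg_edges = cv2.Canny(bg_gray, 100, 200)
    changed_edges = cv2.bitwise_and(cv2.absdiff(frame_edges, bg_edges), fg_mask)

    n_fg = cv2.countNonZero(fg_mask)
    if n_fg > 0 and cv2.countNonZero(changed_edges) / n_fg < 0.02:
        fg_mask[:] = 0   # suppress likely false alarm (e.g. rapid light change)

cap.release()
```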