481

Graph-based segmentation of lymph nodes in CT data

Wang, Yao 01 December 2010 (has links)
The quantitative assessment of lymph node size plays an important role in the treatment of diseases such as cancer. In current clinical practice, lymph nodes are analyzed manually based on very rough measures of long and/or short axis length, which is error-prone. In this paper we present a graph-based lymph node segmentation method that enables computer-aided three-dimensional (3D) assessment of lymph node size. Our method has been validated on 111 cases of enlarged lymph nodes imaged with X-ray computed tomography (CT). The mean unsigned surface positioning error was around 0.5 mm, the Hausdorff distance was under 3.26 mm, and the Dice coefficient was above 0.77. On average, our algorithm required 5.3 seconds to segment a lymph node.
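The reported validation metrics are standard overlap and distance measures. Below is a minimal sketch (not the thesis code) of how the Dice coefficient and a Hausdorff distance can be computed from two binary 3D masks; the `spacing` argument and the use of full voxel sets rather than extracted surfaces are simplifying assumptions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two boolean volumes of identical shape."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    return 2.0 * intersection / (seg.sum() + ref.sum())

def hausdorff_distance(seg: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric Hausdorff distance (in mm) between the voxel sets of two masks."""
    a = np.argwhere(seg) * np.asarray(spacing)   # voxel indices -> physical coordinates
    b = np.argwhere(ref) * np.asarray(spacing)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```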
482

Multistructure segmentation of multimodal brain images using artificial neural networks

Kim, Eun Young 01 December 2009 (has links)
A method for simultaneously segmenting multiple anatomical brain structures from multimodal MR images has been developed. An artificial neural network (ANN) was trained on feature vectors created from a combination of high-resolution registration methods, atlas-based spatial probability distributions, and a training set of 16 expert-traced data sets. The feature vectors were adapted to increase the performance of the ANN segmentation: 1) a modified spatial location exploiting the structural symmetry of the human brain, 2) neighbors along the priors' descent for directional consistency, and 3) candidate vectors based on the priors for the segmentation of multiple structures. The trained neural network was then applied to 8 data sets, and the results were compared with expertly traced structures for validation. Across several reliability metrics, including relative overlap, similarity index, and intraclass correlation against a manual trace, the ANN-generated segmentations are similar or superior to those of previously developed methods. The ANN provides a level of between-subject consistency and a time efficiency relative to human labor that allow it to be used for very large studies.
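As a rough illustration of the general approach (per-voxel feature vectors combining multimodal intensities, an atlas prior, and spatial location, classified by a feed-forward network), the following hedged sketch uses scikit-learn's MLPClassifier on synthetic toy volumes; the feature set, network size, and toy data are assumptions, not the dissertation's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def build_feature_vectors(t1, t2, prior, coords):
    """Stack per-voxel features: multimodal intensities, atlas prior, (x, y, z)."""
    return np.column_stack([t1.ravel(), t2.ravel(), prior.ravel(),
                            coords.reshape(3, -1).T])

# toy 8x8x8 volumes standing in for registered T1/T2 images and an atlas prior
shape = (8, 8, 8)
t1, t2 = np.random.rand(*shape), np.random.rand(*shape)
prior = np.random.rand(*shape)
coords = np.array(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
labels = (prior > 0.5).ravel().astype(int)      # stand-in for expert-traced labels

X = build_feature_vectors(t1, t2, prior, coords)
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500).fit(X, labels)
segmentation = clf.predict(X).reshape(shape)    # per-voxel structure labels
```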
483

A graph-based method for segmentation of tumors and lymph nodes in volumetric PET images

Van Tol, Markus Lane 01 December 2014 (has links)
For radiation treatment of cancer and image-based quantitative assessment of treatment response, target structures such as tumors and lymph nodes need to be segmented. In current clinical practice this is done manually, which is time-consuming and error-prone. To address this issue, a semi-automated graph-based segmentation approach was developed. It was validated on 60 real datasets, each segmented by two users both manually and with the new algorithm, and on 44 scans of a phantom dataset. The results showed a statistically significant improvement in intra- and inter-operator consistency of segmentations, a statistically significant improvement in segmentation speed, and reasonable accuracy against consensus images and phantoms. As such, the algorithm can be applied in cases that would otherwise require manual segmentation.
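The abstract does not detail the graph construction. As one simple instance of the seed-driven, graph-based family of methods it refers to, the sketch below performs a min-cut segmentation of a toy 2D image with networkx; the edge weights, neighbourhood, and seeds are illustrative assumptions, not the validated algorithm from the thesis.

```python
import numpy as np
import networkx as nx

def graph_cut_segment(image, fg_seeds, bg_seeds, sigma=0.1, hard=1e6):
    """Binary min-cut segmentation of a 2-D image from user seeds."""
    h, w = image.shape
    G = nx.DiGraph()
    src, sink = "S", "T"
    for y in range(h):
        for x in range(w):
            node = (y, x)
            # terminal links: hard constraints at the seed pixels
            if node in fg_seeds:
                G.add_edge(src, node, capacity=hard)
            if node in bg_seeds:
                G.add_edge(node, sink, capacity=hard)
            # neighbour links: cheap to cut between dissimilar intensities
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx_ = y + dy, x + dx
                if ny < h and nx_ < w:
                    wgt = float(np.exp(-((image[y, x] - image[ny, nx_]) ** 2) / (2 * sigma ** 2)))
                    G.add_edge(node, (ny, nx_), capacity=wgt)
                    G.add_edge((ny, nx_), node, capacity=wgt)
    _, (fg, _) = nx.minimum_cut(G, src, sink)
    mask = np.zeros_like(image, dtype=bool)
    for node in fg:
        if node != src:
            mask[node] = True
    return mask

img = np.zeros((6, 6)); img[2:5, 2:5] = 1.0       # bright square on dark background
mask = graph_cut_segment(img, fg_seeds={(3, 3)}, bg_seeds={(0, 0)})
```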
484

A combined machine-learning and graph-based framework for the 3-D automated segmentation of retinal structures in SD-OCT images

Antony, Bhavna Josephine 01 December 2013 (has links)
Spectral-domain optical coherence tomography (SD-OCT) is a non-invasive imaging modality that allows for the quantitative study of retinal structures. SD-OCT has begun to find widespread use in the diagnosis and management of various ocular diseases. While commercial scanners provide limited analysis of a small number of retinal layers, the automated segmentation of retinal layers and other structures within these volumetric images is quite a challenging problem, especially in the presence of disease-induced changes. The incorporation of a priori information, ranging from qualitative assessments of the data to automatically learned features, can significantly improve the performance of automated methods. Here, a combined machine-learning and graph-theoretic approach is presented for the automated segmentation of retinal structures in SD-OCT images. Machine-learning approaches are used to learn textural features from a training set, which are then incorporated into the graph-theoretic approach. The impact of the learned features on the final segmentation accuracy of the graph-theoretic approach is carefully evaluated so as to avoid incorporating learned components that do not improve the method. The adaptability of this versatile combination of machine-learning and graph-theoretic approaches is demonstrated through the segmentation of retinal surfaces in images obtained from humans, mice and canines. In addition to this framework, a novel formulation of the graph-theoretic approach is described whereby surfaces with a disruption can be segmented. By incorporating the boundary of the "hole" into the feasibility definition of the set of surfaces, the final result consists not only of the surfaces but of the boundary of the hole as well. Such a formulation can be used to model the neural canal opening (NCO) in SD-OCT images, which appears as a 3-D planar hole disrupting the surfaces in its vicinity. A machine-learning approach was also used to learn descriptive features of the NCO. Thus, the major contributions of this work include 1) a method for the automated correction of axial artifacts in SD-OCT images, 2) a combined machine-learning and graph-theoretic framework for the segmentation of retinal surfaces in SD-OCT images (applied to humans, mice and canines), 3) a novel formulation of the graph-theoretic approach for the segmentation of multiple surfaces and their shared hole (applied to the segmentation of the neural canal opening), and 4) the investigation of textural markers that could precede structural and functional change in degenerative retinal diseases.
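For intuition about surface segmentation under a smoothness constraint, the simplified sketch below finds a single minimum-cost surface (one row per column) in a 2D cost image by dynamic programming. The dissertation's graph-theoretic approach handles the full 3D, multi-surface problem with learned cost features; this 2D analogue is a stand-in for illustration only, with `delta` as an assumed smoothness parameter.

```python
import numpy as np

def segment_surface(cost: np.ndarray, delta: int = 1) -> np.ndarray:
    """Return one row index per column minimising total cost with |row change| <= delta."""
    rows, cols = cost.shape
    acc = cost.copy()                          # accumulated minimum cost
    back = np.zeros((rows, cols), dtype=int)   # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - delta), min(rows, r + delta + 1)
            prev = acc[lo:hi, c - 1]
            back[r, c] = lo + int(np.argmin(prev))
            acc[r, c] += prev.min()
    surface = np.empty(cols, dtype=int)
    surface[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface
```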
485

Medical imaging segmentation assessment via Bayesian approaches to fusion, accuracy and variability estimation with application to head and neck cancer

Ghattas, Andrew Emile 01 August 2017 (has links)
With the advancement of technology, medical imaging has become a fast-growing area of research. Some imaging questions require little physician analysis, such as diagnosing a broken bone from a 2-D X-ray image. More complicated questions, using 3-D scans such as computerized tomography (CT), can be much more difficult to answer; for example, estimating tumor growth to evaluate malignancy, which informs whether intervention is necessary. This requires careful delineation of different structures in the image, for example the tumor versus normal tissue; this is referred to as segmentation. Currently, the gold standard of segmentation is for a radiologist to manually trace structure edges in the 3-D image; however, this can be extremely time-consuming. Additionally, manual segmentation results can differ drastically between and even within radiologists. A more reproducible, less variable, and more time-efficient segmentation approach would drastically improve medical treatment. This potential, as well as the continued increase in computing power, has led to computationally intensive semiautomated segmentation algorithms. The widespread use of segmentation algorithms is limited by the difficulty of validating their performance. Fusion models, such as STAPLE, have been proposed as a way to combine multiple segmentations into a consensus ground truth; this allows both manual and semiautomated segmentations to be evaluated against the consensus ground truth. Once a consensus ground truth is obtained, a multitude of approaches have been proposed for evaluating different aspects of segmentation performance: segmentation accuracy and between- and within-reader variability. The focus of this dissertation is threefold. First, a simulation-based tool is introduced to allow for the validation of fusion models. The simulation properties closely follow a real dataset, in order to ensure that they mimic reality. Second, a statistical hierarchical Bayesian fusion model is proposed, in order to estimate a consensus ground truth within a robust statistical framework. The model is validated using the simulation tool and compared to other fusion models, including STAPLE. Additionally, the model is applied to real datasets and the consensus ground truth estimates are compared across different fusion models. Third, a statistical hierarchical Bayesian performance model is proposed in order to estimate segmentation-method-specific accuracy and between- and within-reader variability. An extensive simulation study is performed to validate the model's parameter estimation and coverage properties. Additionally, the model is fit to a real data source and performance estimates are summarized.
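STAPLE, mentioned above as a reference fusion model, estimates a consensus ground truth by expectation-maximization (Warfield et al.). A compact, hedged sketch of the binary-label case is shown below; the initialization values and the global foreground prior are simplifications, and this is not the hierarchical Bayesian model proposed in the dissertation.

```python
import numpy as np

def staple(D: np.ndarray, n_iter: int = 50):
    """D: (raters, voxels) binary array. Returns consensus probabilities W
    plus per-rater sensitivity p and specificity q."""
    R, N = D.shape
    W = D.mean(axis=0)                      # initial consensus = voxel-wise average
    prior = W.mean()                        # global foreground prior
    p = np.full(R, 0.9)                     # initial sensitivities
    q = np.full(R, 0.9)                     # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's sensitivity and specificity
        p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W, p, q
```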
486

Inner-Shelf Bottom Boundary Layer Development and Sediment Suspension During Tropical Storm Isadore on the West Florida Shelf.

Brodersen, Justin G 18 June 2004 (has links)
Observations of the bottom boundary layer on the inner West Florida Shelf were made with a downward-looking pulse-coherent acoustic Doppler profiler throughout the passage of Tropical Storm Isadore during September 2002. The storm passed through the Gulf of Mexico roughly 780 km offshore of the Florida study site. Significant wave heights ranged from 0 m to 2.5 m within a span of eight days. The non-invasive, 5 cm resolution measurements of near-bed (bottom meter) mean flows were used to estimate bed shear velocity and bottom roughness using the standard log-layer approach, and the high-resolution data provided a unique opportunity to examine boundary layer structure. The calculated friction velocity due to currents (u*c) and the apparent bottom roughness (z0) decreased considerably when velocity measurements closer to the bed were emphasized. This observation may be indicative of segmentation within the bottom boundary layer and has implications for the common practice of estimating bed shear stress from measurements taken more than a few tens of centimeters above the bed. Acoustic backscatter strength (ABS) was used as a proxy for sediment suspension in the water column, revealing no relationship between current parameters and sediment resuspension over the ten-day data set. When wave effects were included following the work of Grant and Madsen and others, strong relationships between wave and wave-current parameters and the ABS proxy for sediment resuspension were evident.
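The standard log-layer approach mentioned above fits the law of the wall, u(z) = (u*/κ) ln(z/z0), to near-bed mean velocities. A minimal sketch of such a fit is given below; the synthetic profile and chosen constants are illustrative only, not the thesis data or code.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def log_layer_fit(z, u):
    """Least-squares fit of u against ln(z); returns (u_star, z0)."""
    z, u = np.asarray(z, float), np.asarray(u, float)
    slope, intercept = np.polyfit(np.log(z), u, 1)   # u = slope*ln(z) + intercept
    u_star = KAPPA * slope
    z0 = np.exp(-intercept / slope)
    return u_star, z0

# toy profile: 5 cm bins over the bottom metre, illustrating the fit only
z = np.arange(0.05, 1.0, 0.05)
u = (0.02 / KAPPA) * np.log(z / 0.001)        # synthetic profile with u* = 0.02 m/s
print(log_layer_fit(z, u))                     # ~ (0.02, 0.001)
```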
487

Demand analysis and privacy of floating car data

Camilo, Giancarlo 13 September 2019 (has links)
This thesis investigates two research problems in analyzing floating car data (FCD): automated segmentation and privacy. For the former, we design an automated segmentation method based on the social functions of an area to enhance existing traffic demand analysis. This segmentation is used to create an extension of the traditional origin-destination matrix that can represent origins of traffic demand. The methods are then combined for interactive visualization of traffic demand, using a floating car dataset from a ride-hailing application. For the latter, we investigate the properties in FCD that may lead to privacy leaks. We present an attack on a real-world taxi dataset, showing that FCD, even though anonymized, can potentially leak privacy. / Graduate
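As a toy illustration of the origin-destination representation mentioned above, the sketch below counts trips between zones once each trip's endpoints have been mapped to segments; the zone assignment itself (the thesis's social-function-based segmentation) is assumed as given, and the data are invented.

```python
import numpy as np

def od_matrix(trips, n_zones):
    """trips: iterable of (origin_zone, destination_zone) pairs."""
    M = np.zeros((n_zones, n_zones), dtype=int)
    for o, d in trips:
        M[o, d] += 1                     # one trip from zone o to zone d
    return M

trips = [(0, 2), (0, 2), (1, 0), (2, 1)]  # toy trips between three zones
print(od_matrix(trips, n_zones=3))
```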
488

Virtual image sensors to track human activity in a smart house

Tun, Min Han January 2007 (has links)
With the advancement of computer technology, demand for more accurate and intelligent monitoring systems has also risen. The use of computer vision and video analysis ranges from industrial inspection to surveillance. Object detection and segmentation are the first and most fundamental tasks in the analysis of dynamic scenes. Traditionally, detection and segmentation have been done through temporal differencing or statistical modelling methods. One of the most widely used background modelling and segmentation algorithms is the Mixture of Gaussians method developed by Stauffer and Grimson (1999). During the past decade many such algorithms have been developed, ranging from parametric to non-parametric algorithms. Many of them utilise pixel intensities to model the background, but some use texture properties such as Local Binary Patterns. These algorithms function quite well under normal environmental conditions and each has its own set of advantages and shortcomings. However, they share two common drawbacks. The first is the stationary object problem: when moving objects become stationary, they are merged into the background. The second is the problem of light changes: when rapid illumination changes occur in the environment, these background modelling algorithms produce large areas of false positives. / These algorithms are capable of adapting to the change; however, the quality of the segmentation is very poor during the adaptation phase. In this thesis, a framework to suppress these false positives is introduced. Image properties such as edges and textures are utilised to reduce the number of false positives during the adaptation phase. The framework is built on the idea of sequential pattern recognition. In any background modelling algorithm, the importance of multiple image features as well as different spatial scales cannot be overlooked; failure to attend to these two factors makes it difficult to detect and reduce false alarms caused by rapid light changes and other conditions. The use of edge features in false alarm suppression is also explored. Edges are somewhat more resistant to environmental changes in video scenes; the assumption here is that regardless of environmental changes, such as changes in illumination, the edges of objects should remain the same. The edge-based approach is tested on several videos containing rapid light changes and shows promising results. Texture is then used to analyse video images and remove false alarm regions. A texture gradient approach and Laws Texture Energy Measures are used to find and remove false positives; the Laws Texture Energy Measures are found to perform better than the gradient approach. The results of using edges, texture and different combinations of the two in false positive suppression are also presented in this work. This false positive suppression framework is applied to a smart house scenario that uses cameras to model "virtual sensors" that detect interactions of occupants with devices. Results show that the accuracy of the virtual sensors, compared with the ground truth, is improved.
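Two of the ingredients named above can be sketched briefly: Mixture-of-Gaussians background subtraction (here via OpenCV's MOG2 implementation, which follows the Stauffer and Grimson lineage) and a Laws Texture Energy Measure. The parameters, window size, and tiny synthetic frames below are illustrative assumptions, not the thesis configuration.

```python
import numpy as np
import cv2
from scipy.ndimage import convolve, uniform_filter

# --- background subtraction: feed frames, get a foreground mask per frame ---
subtractor = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)
frames = (np.random.randint(0, 255, (120, 160), np.uint8) for _ in range(10))
masks = [subtractor.apply(f) for f in frames]

# --- Laws texture energy: convolve with a 2-D mask, then average local energy ---
L5 = np.array([1, 4, 6, 4, 1], float)          # "level" kernel
E5 = np.array([-1, -2, 0, 2, 1], float)        # "edge" kernel
mask_LE = np.outer(L5, E5)                      # one of the classic 5x5 Laws masks

def laws_energy(image, mask, window=15):
    """Texture energy map: local mean of |image convolved with mask| over a window."""
    filtered = convolve(image.astype(float), mask, mode="reflect")
    return uniform_filter(np.abs(filtered), size=window)

energy = laws_energy(masks[-1], mask_LE)        # e.g. to screen false-alarm regions
```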
489

Defining the green consumer: a legitimisation of the process of marketing products with lower environmental impacts

Said, David Michael, University of Western Sydney, Hawkesbury, Faculty of Health, Humanities and Social Ecology, School of Social Ecology January 1996 (has links)
Everything manufactured has an impact on the environment, whether by consuming non-renewable resources as raw materials, consuming energy, adding excess nutrients to soils and waterways, or generating greenhouse gases, wastes or pollutants. Many environmental critics believe that the most effective way to reduce this damage is to regulate, forcing manufacturers to produce and distribute goods with lower environmental impacts. Others believe that consumers should be educated to demand these improvements from manufacturers. The author of this thesis believes the most effective way to persuade the private sector to reduce the environmental impacts of its products would be to convince it that doing so would be profitable. At present, most Australian manufacturers do not believe this to be the case; otherwise there would be many more green products in the marketplace. Many marketers have a negative attitude to green marketing, while others who would like to investigate the potential of the green market lack the data to do so. The original research for this thesis takes the form of a commercial market segmentation study designed to analyse the green market and provide answers to the following questions: Which segment or segments of the Australian population are actual or potential green consumers? What are their motivations, attitudes and buying habits? What new products would they welcome in the future? The findings of the research are that at least 50 percent of the Australian market has made considerable behavioural adjustments for environmental reasons and would welcome greener products. Marketers can therefore ignore the green market only at the risk of ignoring the needs and wants of 50 percent of the population. Thus, the original research provides a map of the Australian green market which will legitimise the corporate decision to develop and promote greener products. / Master of Science (Hons) (Social Ecology)
490

Market segmentation and domestic electricity supply in Victoria

Sharam, Andrea, n/a January 2005 (has links)
If the observations of unregulated and recently deregulated essential services were to hold for electricity reform, we could expect to see market segmentation of household customers. This is a corporate strategy aimed at the acquisition of attractive customers and the avoidance of unattractive customers. It is a function of market relations and commodification. Some markets already segment and assign unattractive customers to 'residual' markets, 'sub-prime' markets or 'markets of last resort'. Residual markets tend to involve market abuse by suppliers because these customers lack market power. It is possible therefore to suggest that segmented markets are characterised by simultaneous competition and monopoly. The implications for the supply of essential services, such as electricity, are profound. This research sought to identify whether there is evidence of emerging segmentation of the domestic electricity market in Victoria. In practice, few essential services areas are completely deregulated. The history of segmentation in the US insurance and lending industries provides valuable insights into markets, market failure and social protections. Taking this history and the more recent experiences of reforms in the US, the UK and Australia, it has been possible to identify three models of social protection: 'universal service', a 'civil rights' model, and a 'market' model. The Victorian reforms reflect some elements of each of these. The social protections included in the reform package both encourage and present barriers to market segmentation. At the time of the research, some elements of the safety net arrangements and customer inertia (born out of negative attitudes to competition) have acted to inhibit segmentation. Customer inertia in its own right poses questions for the efficacy of competition policy. The key understanding that is gained from this research is that both civil rights and socioeconomic entitlements (social rights) are required to prevent markets in essential services acting upon and exacerbating inequality. This suggests that universal service, as a model of social protection, is most likely to ameliorate the impacts of inequality.
