  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Development of a spectral unmixing procedure using a genetic algorithm and spectral shape

Chowdhury, Subir January 2012
Spectral unmixing produces spatial abundance maps of endmembers, or ‘pure’ materials, using sub-pixel scale decomposition. It is particularly well suited to extracting a greater portion of the rich information content in hyperspectral data in support of real-world issues such as mineral exploration, resource management, agriculture and food security, pollution detection, and climate change. However, illumination or shading effects, signature variability, and noise are problematic. Least squares (LS) based spectral unmixing techniques such as Non-Negative Sum Less or Equal to One (NNSLO) depend on “shade” endmembers to deal with amplitude errors. Furthermore, the LS-based method does not consider amplitude errors in abundance constraint calculations and thus often leads to abundance errors. The Spectral Angle Constraint (SAC) reduces the amplitude errors, but the abundance errors remain because of its fully constrained condition. In this study, a Genetic Algorithm (GA) was adapted to resolve these issues using a series of iterative computations based on the Darwinian strategy of ‘survival of the fittest’ to improve the accuracy of abundance estimates. The developed GA uses a Spectral Angle Mapper (SAM) based fitness function to calculate abundances under a SAC-based weakly constrained condition. This was validated using two hyperspectral data sets: (i) a simulated hyperspectral dataset with embedded noise and illumination effects and (ii) AVIRIS data acquired over Cuprite, Nevada, USA. Results showed that the new GA-based unmixing method improved abundance estimation accuracy and was less sensitive to illumination effects and noise than existing spectral unmixing methods such as the SAC and NNSLO. For the synthetic data, the GA increased the average index of agreement between true and estimated abundances by 19.83% and 30.10% relative to the SAC and the NNSLO, respectively. For the real data, the GA improved the overall accuracy by 43.1% and 9.4% relative to the SAC and NNSLO, respectively. / xvi, 85 leaves : ill. (chiefly col.) ; 29 cm
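The core of such a GA is its fitness function. As an illustration only (the function names and the weak sum-to-one penalty below are assumptions, not the thesis's exact formulation), a SAM-based fitness for one pixel might look like this:

```python
import numpy as np

def sam_angle(observed, reconstructed):
    """Spectral angle (radians) between two spectra; insensitive to amplitude scaling."""
    cos_theta = np.dot(observed, reconstructed) / (
        np.linalg.norm(observed) * np.linalg.norm(reconstructed) + 1e-12)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def fitness(abundances, observed, endmembers, penalty_weight=0.1):
    """GA fitness for one pixel: lower is better (illustrative form).

    abundances : (k,) candidate abundance vector (one GA individual)
    endmembers : (k, bands) endmember signature matrix
    The soft penalty stands in for a weakly constrained sum-to-one condition.
    """
    reconstructed = abundances @ endmembers
    angle = sam_angle(observed, reconstructed)
    constraint = abs(abundances.sum() - 1.0)  # weak, not hard, constraint
    return angle + penalty_weight * constraint
```

Because the spectral angle is insensitive to amplitude scaling, a fitness of this form is naturally more robust to illumination and shading effects than a least-squares residual.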
2

Markov random fields based image and video processing. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 2010
In this dissertation, we propose three methods to solve the problems of interactive image segmentation, video completion, and image denoising, all formulated as MRF-based energy minimization problems. In our algorithms, MRF-based energy functions are designed to fit each problem, with techniques tailored to the characteristics of each task. Given these energy functions, different optimization schemes are proposed to find the optimal results in these applications. In interactive image segmentation, an iterative optimization framework is proposed, where in each iteration an MRF-based energy function incorporating an estimated initial probabilistic map of the image is optimized with a relaxed global optimal solution. In video completion, a well-defined MRF energy function involving both spatial and temporal coherence relationships is constructed based on the local motions calculated in the first step of the algorithm; a hierarchical belief propagation scheme is proposed to solve the problem efficiently. In image denoising, label relaxation based optimization of a Gaussian MRF energy is used to achieve the globally optimal closed-form solution. / Many problems in computer vision involve assigning each pixel a label representing some spatially varying quantity, such as image intensity in image denoising or an object index in image segmentation. Such quantities tend to be spatially piecewise smooth: they vary smoothly within object surfaces and change dramatically at object boundaries. In video processing, an additional temporal smoothness condition holds, as corresponding pixels in different frames should have similar labels. Markov random field (MRF) models provide a robust and unified framework for many image and video applications. The framework can be elegantly expressed as an MRF-based energy minimization problem in which two penalty terms are defined with different forms. Many approaches have been proposed to solve the MRF-based energy optimization problem, such as simulated annealing, iterated conditional modes, graph cuts, and belief propagation. / Promising results obtained by the proposed algorithms, with both quantitative and qualitative comparisons to state-of-the-art methods, demonstrate the effectiveness of our algorithms in these image and video processing applications. / Liu, Ming. / Adviser: Xiaoou Tang. / Source: Dissertation Abstracts International, Volume: 72-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 79-89). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
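As a minimal sketch of the kind of energy the abstract refers to, consider a pairwise MRF on a 4-connected grid with a unary data term and a smoothness term; the quadratic penalties here are illustrative assumptions, not the dissertation's formulation:

```python
import numpy as np

def mrf_energy(labels, observed, smoothness=1.0):
    """Pairwise MRF energy on a 4-connected image grid (illustrative).

    labels, observed : 2-D float arrays of the same shape.
    E(x) = sum_p D_p(x_p) + lambda * sum_{(p,q) in N} V(x_p, x_q),
    here with quadratic data and smoothness penalties.
    """
    data_term = np.sum((labels - observed) ** 2)
    # Pairwise terms over horizontal and vertical neighbours.
    smooth_h = np.sum((labels[:, 1:] - labels[:, :-1]) ** 2)
    smooth_v = np.sum((labels[1:, :] - labels[:-1, :]) ** 2)
    return data_term + smoothness * (smooth_h + smooth_v)
```

For a Gaussian MRF, as in the denoising case above, an energy of this form is quadratic in the labels, which is why a globally optimal closed-form solution exists.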
3

Wall shear patterns of a 50% asymmetric stenosis model using photochromic molecular flow visualization

Chin, David, 1982- January 2008
Photochromic molecular flow visualization is an in vitro experimental technique that uses high-speed image acquisition combined with an ultraviolet laser to capture instantaneous flow profiles. It is particularly adept at measuring near-wall velocities, which are necessary for accurate wall shear rate measurements. This thesis describes the implementation and validation of the technique at McGill. The system was used to investigate the wall shear rate patterns in an idealized 50% asymmetric stenosis model under steady flow at Reynolds numbers of 206, 99, and 50. A large recirculation zone with flow reattachment was seen downstream of the stenosis, with maximum shear values occurring slightly upstream of peak stenosis at Reynolds number 206. This information is vital to ongoing dynamic cell culture experiments aimed at understanding the progression of atherosclerosis.
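Since wall shear rate is the velocity gradient evaluated at the wall, near-wall accuracy is decisive. A hedged sketch of the usual estimation step (not the thesis's actual processing pipeline; the point count is an assumption) fits in a few lines:

```python
import numpy as np

def wall_shear_rate(y, u, n_points=4):
    """Estimate wall shear rate du/dy at the wall (y = 0) from a measured profile.

    y : distances from the wall (m); u : velocities (m/s).
    Fits a line through the n_points closest to the wall; the slope
    approximates the shear rate there. Returns the rate in 1/s.
    """
    y, u = np.asarray(y, float), np.asarray(u, float)
    order = np.argsort(y)
    y_near, u_near = y[order][:n_points], u[order][:n_points]
    slope, _ = np.polyfit(y_near, u_near, 1)  # linear fit near the wall
    return slope
```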
4

Wall shear patterns of a 50% asymmetric stenosis model using photochromic molecular flow visualization

Chin, David, 1982- January 2008
No description available.
5

Syntactic models with applications in image analysis

Evans, Fiona H January 2007
[Truncated abstract] The field of pattern recognition aims to develop algorithms and computer programs that can learn patterns from data, where learning encompasses the problems of recognition, representation, classification and prediction. Syntactic pattern recognition recognises that patterns may be hierarchically structured. Formal language theory is an example of a syntactic approach, and is used extensively in computer languages and speech processing. However, the underlying structure of language and speech is strictly one-dimensional. The application of syntactic pattern recognition to the analysis of images requires an extension of formal language theory. Thus, this thesis extends and generalises formal language theory to apply to data with possibly multi-dimensional underlying structure and also hierarchic structure ... As in the case of curves, shapes are modelled as a sequence of local relationships between the curves, and these are estimated using a training sample. Syntactic square detection was extremely successful, detecting 100% of squares in images containing only a single square, and over 50% of the squares in images containing ten squares highly likely to be partially or severely occluded. The detection and classification of polygons was successful, despite a tendency for occluded squares and rectangles to be confused. The algorithm also performed well on real images containing fish. The success of the syntactic approaches for detecting edges, detecting curves, and detecting, classifying and counting occluded shapes is evidence of the potential of syntactic models.
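To make the idea of "a sequence of local relationships" concrete, a toy version of syntactic square detection could describe a closed polygon by its signed turning angles and accept it when the sequence reads like a square. This sketch is illustrative only and far simpler than the grammar-based models the thesis develops:

```python
import numpy as np

def turning_angles(points):
    """Signed turning angle at each vertex of a closed polygon (radians)."""
    pts = np.asarray(points, dtype=float)
    vecs = np.roll(pts, -1, axis=0) - pts          # edge vectors, including closing edge
    angles = np.arctan2(vecs[:, 1], vecs[:, 0])    # heading of each edge
    turns = np.diff(np.concatenate([angles, angles[:1]]))  # cyclic differences
    return (turns + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]

def matches_square(points, tol=0.2):
    """Crude 'syntactic' test: four edges, each turn approximately +/- 90 degrees."""
    turns = turning_angles(points)
    return len(turns) == 4 and bool(np.all(np.abs(np.abs(turns) - np.pi / 2) < tol))

# Example: a unit square is accepted.
print(matches_square([[0, 0], [1, 0], [1, 1], [0, 1]]))  # True
```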
6

A probabilistic model to learn, detect, localize and classify patterns in arbitrary images /

Toews, Matthew. January 2008
No description available.
7

Measurement techniques to characterize bubble motion in swarms

Acuña Pérez, Claudio Abraham January 2007
No description available.
8

Hyperspectral Hypertemporal Feature Extraction Methods with Applications to Aquatic Invasives Target Detection

Mathur, Abhinav 13 May 2006
In this dissertation, methods are designed and validated for the use of hyperspectral hypertemporal remotely sensed data in target detection applications. Two new classes of methods are designed to optimize the selection of target detection features from spectro-temporal space data. The first class treats all elements of the spectro-temporal map as independent of each other; the second assumes the elements of the spectro-temporal map have some vicinal dependency among them. Methods designed for these two approaches include various stepwise selection methods, windowing approaches, and clustering techniques. These techniques are compared to more traditional feature extraction methods such as the Normalized Difference Vegetation Index (NDVI), spectral analysis, and Principal Component Analysis (PCA). The efficacy of the new methods is demonstrated within an aquatic invasive species detection application, namely discriminating waterhyacinth from other aquatic vegetation such as American lotus. These two aquatic plant species were chosen for testing the proposed methods because they have very similar physical characteristics and represent a real-life target detection problem. Overall classification accuracy estimates show that the proposed feature extraction methods offer a marked improvement over conventional methods. Along with improving accuracy, these methods can drastically reduce dimensionality while retaining the desired hyperspectral hypertemporal features. Furthermore, the feature set extracted using the newly developed methods provides information about the optimum subset of the hyperspectral hypertemporal data for a specific target detection application, making these methods useful tools for planning more intelligent data collection.
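Of the traditional baselines mentioned, NDVI is the simplest to state: (NIR - Red) / (NIR + Red), computed per pixel and per acquisition date. The band indices and cube layout in this sketch are assumptions for illustration:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red + 1e-12)

# A hyperspectral hypertemporal cube: (dates, bands, pixels). The red and
# near-infrared band indices are placeholders for the sensor actually used.
cube = np.random.rand(12, 120, 500)
RED_BAND, NIR_BAND = 30, 60  # assumed indices
ndvi_series = ndvi(cube[:, RED_BAND, :], cube[:, NIR_BAND, :])  # (dates, pixels)
```

A per-pixel NDVI time series like this collapses the 120-band spectral dimension to one value per date, which is exactly the kind of drastic dimensionality reduction the proposed spectro-temporal methods aim to improve upon.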
9

Applying statistical and syntactic pattern recognition techniques to the detection of fish in digital images

Hill, Evelyn June January 2004
This study is an attempt to simulate aspects of human visual perception by automating the detection of specific types of objects in digital images. The success of the methods attempted here was measured by how well the results of experiments corresponded to what a typical human's assessment of the data might be. The subject of the study was images of live fish taken underwater by digital video or digital still cameras. It is desirable to be able to automate the processing of such data for efficient stock assessment for fisheries management. In this study some well-known statistical pattern classification techniques were tested and new syntactical/structural pattern recognition techniques were developed. For testing of statistical pattern classification, the pixels belonging to fish were separated from the background pixels and the EM algorithm for Gaussian mixture models was used to locate clusters of pixels. The means and the covariance matrices for the components of the model were used to indicate the location, size and shape of the clusters. Because the number of components in the mixture is unknown, the EM algorithm has to be run a number of times with different numbers of components and the best model chosen using a model selection criterion. The AIC (Akaike Information Criterion) and the MDL (Minimum Description Length) were tested. The MDL was found to estimate the number of clusters of pixels more accurately than the AIC, which tended to overestimate cluster numbers. In order to reduce problems caused by initialisation of the EM algorithm (i.e. the starting positions and number of mixture components), the Dynamic Cluster Finding algorithm (DCF) was developed, based on the Dog-Rabbit strategy. This algorithm can produce an estimate of the locations and numbers of clusters of pixels. The Dog-Rabbit strategy is based on early studies of learning behaviour in neurons. The main difference between Dog-Rabbit and DCF is that DCF is based on a toroidal topology, which removes the tendency of cluster locators to migrate to the centre of mass of the data set and miss clusters near the edges of the image. In the second approach to the problem, data was extracted from the image using an edge detector. The edges from a reference object were compared with the edges from a new image to determine if the object occurred in the new image. In order to compare edges, the edge pixels were first assembled into curves using an UpWrite procedure; then the curves were smoothed by fitting parametric cubic polynomials. Finally the curves were converted to arrays of numbers representing the signed curvature of the curves at regular intervals. Sets of curves from different images can be compared by comparing the arrays of signed curvature values, as well as the relative orientations and locations of the curves. Discrepancy values were calculated to indicate how well curves and sets of curves matched the reference object. The total length of all matched curves was used to indicate what fraction of the reference object was found in the new image. The curve matching procedure gave results which corresponded well with what a human being might observe.
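A compact sketch of the model selection step described above, using BIC as a stand-in for the MDL score (the two coincide for this purpose up to a constant factor); the helper name and parameters are illustrative, not the thesis's code:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_num_components(X, max_k=8):
    """Select the number of GMM components with an MDL-style criterion.

    X : (n_samples, n_features) array, e.g. coordinates of foreground pixels.
    Fits a GMM for each candidate k and keeps the one with the lowest BIC;
    the study found MDL-style selection more reliable than AIC, which
    tended to overestimate cluster numbers.
    """
    scores = []
    for k in range(1, max_k + 1):
        gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(X)
        scores.append((gmm.bic(X), k))
    best_score, best_k = min(scores)
    return best_k
```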
10

Detecting informal buildings from high resolution quickbird satellite image, an application for insitu [sic.] upgrading of informal setellement [sic.] for Manzese area - Dar es Salaam, Tanzania.

Ezekia, Ibrahim S. K. January 2005
Documentation and formalization of informal settlements ("insitu", i.e. while people continue to live in the settlement) need an appropriate mapping and registration system for real property that can ultimately integrate an informal city into the formal city. For many years, extraction of geospatial data for informal settlement upgrading has relied on conventional mapping: manual plotting from aerial photographs and classical surveying methods that have proved slow because of manual operation, very expensive, and dependent on well-trained personnel. The use of high-resolution satellite imagery such as QuickBird, together with GIS tools, has recently been gaining popularity in various aspects of urban mapping and planning, opening up new opportunities for efficient management of the rapidly changing environment of informal settlements. This study was based on the Manzese informal area in the city of Dar es Salaam, Tanzania, for which the Ministry of Lands and Human Settlement Development is committed to developing strategic information and decision-making tools for upgrading informal areas using a digital database, orthophotos, and QuickBird satellite imagery. A simple prototype approach developed in this study, automatic detection and extraction of informal buildings and other urban features, is envisaged to simplify and speed up the process of land cover mapping for use by various governmental and private segments of society. The proposed method first tests the utility of the high-resolution QuickBird image for classifying 11 detailed classes of informal buildings and other urban features using different image classification methods, such as the box, maximum likelihood, and minimum distance classifiers, followed by segmentation and finally editing of feature outlines. The overall mapping accuracy achieved for detailed classification of urban land cover was 83%. The output demonstrates the potential of the proposed approach for urban feature extraction and updating. The study's constraints and recommendations for future work are also discussed. / Thesis (M.Env.Dev.)-University of KwaZulu-Natal, Pietermaritzburg, 2005.
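Of the classifiers named, the minimum distance classifier is the easiest to sketch: assign each pixel to the class whose training-derived mean spectrum is nearest. This illustrative version assumes Euclidean distance and is not the study's implementation:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class with the nearest mean spectrum.

    pixels      : (n, bands) array of pixel spectra
    class_means : (c, bands) array of per-class mean spectra from training areas
    Returns an (n,) array of class indices.
    """
    # Euclidean distance from every pixel to every class mean.
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(dists, axis=1)
```

The maximum likelihood classifier extends this idea by also using per-class covariance matrices, at the cost of needing enough training pixels per class to estimate them reliably.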
