31

Generalized Survey Propagation

Tu, Ronghui 09 May 2011 (has links)
Survey propagation (SP) has recently emerged as an efficient algorithm for solving classes of hard constraint-satisfaction problems (CSPs). Powerful as it is, SP remains a heuristic algorithm, and further understanding its algorithmic nature, improving its effectiveness and extending its applicability are highly desirable. Prior to the work in this thesis, Maneva et al. introduced a Markov Random Field (MRF) formalism for k-SAT problems, under which SP may be viewed as a special case of the well-known belief propagation (BP) algorithm. This result has sometimes been interpreted as meaning that “SP is BP”, and it allows a rigorous extension of SP to a “weighted” version, or a family of algorithms, for k-SAT problems. SP has also been generalized, in a non-weighted fashion, for solving non-binary CSPs. That generalization is, however, presented in the language of statistical physics and is somewhat difficult for a broader audience to access. This thesis generalizes SP both in terms of its applicability to non-binary problems and in terms of introducing “weights”, extending SP to a family of algorithms. Under a generic formulation of CSPs, we first present an understanding of non-weighted SP for arbitrary CSPs in terms of “probabilistic token passing” (PTP). We then show that this probabilistic interpretation of non-weighted SP makes it naturally generalizable to a weighted version, which we call weighted PTP. Another main contribution of this thesis is a disproof of the folk belief that “SP is BP”. We show that the fact that SP is a special case of BP for k-SAT problems is rather incidental: for more general CSPs, SP and generalized SP do not arise as special cases of BP. We also establish the conditions under which generalized SP does reduce to a special case of BP. To explore the benefit of generalizing SP to a wide family and to arbitrary, particularly non-binary, problems, we devised a simple weighted-PTP-based algorithm for solving 3-COL problems. Experimental results, compared against an existing non-weighted SP-based algorithm, reveal the potential performance gain that generalized SP may bring.
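For concreteness, the standard (non-weighted) SP update for k-SAT can be sketched as message passing on the clause-variable factor graph. The Python sketch below is illustrative only: the clause encoding, random initialisation and convergence threshold are assumptions, and it implements the classical SP equations rather than the weighted PTP family developed in the thesis.

```python
import random
from collections import defaultdict

def survey_propagation(clauses, max_iters=200, tol=1e-3, seed=0):
    """One possible implementation of the standard SP update for k-SAT.

    clauses: list of clauses; each clause is a list of (variable, sign)
             pairs, with sign=True meaning the positive literal.
    Returns the surveys eta[(a, i)] for every clause-variable edge.
    """
    rng = random.Random(seed)
    # occurrences[j] = list of (clause index, sign of j in that clause)
    occurrences = defaultdict(list)
    for a, clause in enumerate(clauses):
        for j, sign in clause:
            occurrences[j].append((a, sign))

    # Initialise surveys uniformly at random in (0, 1).
    eta = {(a, j): rng.random() for a, clause in enumerate(clauses) for j, _ in clause}

    for _ in range(max_iters):
        max_change = 0.0
        for a, clause in enumerate(clauses):
            for i, _ in clause:
                prod = 1.0
                for j, sign_aj in clause:
                    if j == i:
                        continue
                    same = opp = 1.0  # products of (1 - eta) over j's other clauses
                    for b, sign_bj in occurrences[j]:
                        if b == a:
                            continue
                        if sign_bj == sign_aj:
                            same *= 1.0 - eta[(b, j)]
                        else:
                            opp *= 1.0 - eta[(b, j)]
                    pi_u = (1.0 - opp) * same   # j pushed towards violating clause a
                    pi_s = (1.0 - same) * opp   # j pushed towards satisfying clause a
                    pi_0 = same * opp           # j unconstrained
                    denom = pi_u + pi_s + pi_0
                    prod *= pi_u / denom if denom > 0 else 0.0
                max_change = max(max_change, abs(prod - eta[(a, i)]))
                eta[(a, i)] = prod
        if max_change < tol:
            break
    return eta

# Toy 3-SAT instance: (x1 v x2 v ~x3) & (~x1 v x2 v x3) & (x1 v ~x2 v x3)
clauses = [[(1, True), (2, True), (3, False)],
           [(1, False), (2, True), (3, True)],
           [(1, True), (2, False), (3, True)]]
print(survey_propagation(clauses))
```

On a satisfiable instance the surveys typically converge and can then drive a decimation loop; only the fixed-point iteration is shown here.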
32

DNA microarray image processing based on advanced pattern recognition techniques / Επεξεργασία εικόνων μικροσυστοιχιών DNA με χρήση σύγχρονων μεθόδων ταξινόμησης προτύπων

Αθανασιάδης, Εμμανουήλ 26 August 2010 (has links)
In the present thesis, a novel gridding technique and two new segmentation methods for complementary DNA (cDNA) microarray images are proposed. More precisely, a new gridding method based on the continuous wavelet transform (CWT) was developed. Line profiles along the x and y axes were calculated, resulting in two different signals. These signals were independently processed by means of the CWT at 15 different levels, using the Daubechies-4 mother wavelet. The processed signals were summed point by point in order to suppress noise and enhance the differences between spots. Additionally, a wavelet-based hard-thresholding filter was applied to each signal to further reduce noise. Ten real microarray images were used to visually assess the performance of the gridding method. Each microarray image contained 4 sub-arrays, each sub-array 40×40 spots, i.e. 6,400 spots in total. According to our results, the accuracy of the algorithm was 98% over all 10 images and all spots. Additionally, processing time was less than 3 s for a 1024×1024, 16-bit microarray image, rendering the method a promising technique for efficient and fully automatic gridding. Following the gridding process, the Gaussian Mixture Model (GMM) and Fuzzy GMM (FGMM) algorithms were applied to each cell in order to discriminate foreground from background. In addition, a Markov random field (MRF) model, as well as a proposed wavelet-based MRF model (SMRF), were implemented. The segmentation abilities of all the algorithms were evaluated by means of the segmentation matching factor (SMF), the coefficient of determination (r2), and the concordance correlation (pc). Indirect accuracy performance was also tested on the experimental images by means of the Mean Absolute Error (MAE) and the Coefficient of Variation (CV); in this case, the results of the SPOT and SCANALYZE software packages were also evaluated. On the former metrics, SMRF attained the best SMF, r2, and pc scores (92.66%, 0.923, and 0.88, respectively), whereas on the latter it scored MAE = 497 and CV = 0.88. The results support the performance superiority of the SMRF algorithm in segmenting cDNA images. / In recent years, microarray technology has developed rapidly, enabling the qualitative and quantitative measurement of the expression of thousands of genes simultaneously in a single experiment. Microarray images on which hybridization of a DNA sample has taken place are widely used to extract reliable gene-expression results and to determine the mechanisms that control gene activation in an organism. Consequently, the development of suitable computational techniques for processing these images is decisive for obtaining correct and valid results. In this doctoral thesis, a new, fully automated gridding technique was developed in the first stage and two new segmentation techniques in the second stage. More specifically, a new gridding method was developed, based on the continuous wavelet transform (CWT), for automatically locating the spot centres as well as the boundaries between two consecutive spots.
Two new image segmentation methods were then developed for separating the spots from the background, based on Fuzzy Gaussian Mixture Models (FGMM) and on the combination of Markov Random Fields (MRF) with the wavelet transform (WT) (SMRF). To validate the proposed methods, both real microarray images and simulated images, generated according to a methodology proposed in the international literature, were created and used. Regarding gridding, visual inspection of every spot in all the real images produced two categories, depending on whether the grid lines touched a spot or not; the proposed methodology located the spots accurately in 98% of cases in all images. A comparison of the performance of GMM, FGMM, MRF and SMRF on the simulated images at different noise levels was carried out, and the results for all metrics, the segmentation matching factor (SMF), the coefficient of determination (r2) and the concordance correlation (pc), showed that SMRF recovers the true spot boundary more reliably, at both high and low signal-to-noise ratios. Indicative results at 1 dB SNR are, for SMRF, SMF = 92.66, r2 = 0.923 and pc = 0.88, followed by MRF (SMF = 92.15, r2 = 0.91, pc = 0.85), FGMM (SMF = 91.07, r2 = 0.92, pc = 0.86) and GMM (SMF = 90.73, r2 = 0.89, pc = 0.83). Results were then obtained on real microarray images; in this case too, SMRF outperformed the other classification algorithms, with mean MAE = 497 and CV = 0.88. Finally, the above metrics were also computed on results from two widely used and freely available microarray image-processing packages, SCANALYZE and SPOT, which use the Fixed Circle and Seeded Region Growing segmentation techniques, respectively. Here again the SMRF technique achieved better results: GMM achieved MAE = 1470 and CV = 1.29, FGMM MAE = 1430 and CV = 1.21, MRF MAE = 1215 and CV = 1.15, SMRF MAE = 497 and CV = 0.88, the Fixed Circle technique of SCANALYZE MAE = 503 and CV = 0.90, and the Seeded Region Growing technique of SPOT MAE = 1180 and CV = 0.93.
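The gridding step can be pictured as peak/valley detection on the two projection profiles. The sketch below is a loose approximation under stated assumptions: it replaces the thesis's CWT processing (Daubechies-4 at 15 levels) and wavelet hard thresholding with simple moving-average smoothing, and the spot-size and smoothing parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import find_peaks

def grid_boundaries(image, approx_spot_size=24, smooth=5):
    """Rough sketch of profile-based gridding for a cDNA microarray image.

    The thesis sums CWT-processed x/y line profiles; here a moving-average
    filter stands in for the wavelet denoising step.  Parameters are
    illustrative, not taken from the thesis.
    """
    # Project the image onto the x and y axes (line profiles).
    profile_x = image.sum(axis=0).astype(float)
    profile_y = image.sum(axis=1).astype(float)

    boundaries = []
    for profile in (profile_x, profile_y):
        smoothed = uniform_filter1d(profile, size=smooth)
        # Spot centres appear as peaks; boundaries between consecutive spots
        # appear as valleys, i.e. peaks of the negated profile.
        valleys, _ = find_peaks(-smoothed, distance=approx_spot_size // 2)
        boundaries.append(valleys)
    return boundaries  # [column boundaries, row boundaries]

# Usage on a synthetic image with a regular spot pattern.
rng = np.random.default_rng(0)
img = rng.poisson(20, size=(256, 256)).astype(float)
yy, xx = np.mgrid[0:256, 0:256]
img += 200 * (((xx % 24 - 12) ** 2 + (yy % 24 - 12) ** 2) < 36)  # bright spots
cols, rows = grid_boundaries(img)
print(len(cols), len(rows))
```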
33

"Segmentação de imagens e validação de classes por abordagem estocástica" / Image segmentation and class validation in a stochastic approach

Leandro Cavaleri Gerhardinger 13 April 2006 (has links)
A highly important step in automatic image analysis is segmentation, which seeks to divide an image into regions whose pixels exhibit a certain degree of similarity. One feature that provides similarity between pixels of the same region is texture, generally formed by a random combination of their intensities. Much work has been carried out to study unsupervised techniques for image segmentation based on stochastic models, defining textures as Markov random fields. A method with this approach that stands out is EM/MPM, an iterative algorithm that combines the EM technique for maximum-likelihood parameter estimation with MPM, used for segmentation by minimizing the number of misclassified pixels. This work developed a study of the modelling and implementation of the EM/MPM algorithm, together with its multiresolution approach. An initial parameter estimation by thresholding and a combination with the annealing algorithm were proposed. A study of class validation, that is, the search for the number of distinct regions in the image, was also carried out, presenting the main techniques found in the literature and proposing a new approach based on the grey-level distribution of the classes. Finally, an extension of the model was developed for segmenting meshes in two and three dimensions. / An important stage of the automatic image analysis process is segmentation, which aims to split an image into regions whose pixels exhibit a certain degree of similarity. Texture is known as an efficient feature that provides enough discriminant power to differentiate pixels from distinct regions; it is usually defined as a random combination of pixel intensities. A considerable amount of research has been done on non-supervised techniques for image segmentation based on stochastic models, in which texture is defined as a Markov random field. One important method in this category is EM/MPM, an iterative algorithm that combines the maximum-likelihood parameter-estimation method EM with the MPM segmentation algorithm, whose aim is to minimize the number of misclassified pixels in the image. This work has carried out a study on stochastic models for segmentation and presents an implementation of the EM/MPM algorithm, together with a multiresolution approach. A new threshold-based scheme for the estimation of initial parameters for the EM/MPM model has been proposed. This work also shows how to incorporate the concept of annealing into the current EM/MPM algorithm in order to improve segmentation. Additionally, a study on the class-validity problem (the search for the correct number of classes) has been done, covering the most important techniques available in the literature; as a consequence, a grey-level distribution-based approach has been devised. Finally, the work shows an extension of the traditional EM/MPM technique for segmenting 2D and 3D meshes.
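As a rough illustration of the EM/MPM idea, EM updates of Gaussian class parameters interleaved with an MPM labelling step driven by a Potts-style MRF prior, a minimal sketch is given below. The parallel label resampling, the 4-neighbour prior and the parameter values are simplifying assumptions, not the thesis's implementation (which also covers multiresolution and annealing variants).

```python
import numpy as np

def em_mpm(image, n_classes=2, beta=1.5, em_iters=10, gibbs_sweeps=20, seed=0):
    """Minimal EM/MPM-style segmentation sketch (assumptions: Gaussian class
    likelihoods, a Potts prior over a 4-neighbourhood, illustrative beta and
    iteration counts)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    labels = rng.integers(0, n_classes, size=(h, w))
    # Initial class parameters from evenly spaced quantiles of the intensities.
    means = np.quantile(image, np.linspace(0.2, 0.8, n_classes))
    variances = np.full(n_classes, image.var() / n_classes + 1e-6)

    def neighbour_counts(lab):
        """For every pixel, count 4-neighbours carrying each label."""
        counts = np.zeros((n_classes, h, w))
        for k in range(n_classes):
            same = (lab == k).astype(float)
            counts[k, 1:, :] += same[:-1, :]
            counts[k, :-1, :] += same[1:, :]
            counts[k, :, 1:] += same[:, :-1]
            counts[k, :, :-1] += same[:, 1:]
        return counts

    for _ in range(em_iters):
        marginal = np.zeros((n_classes, h, w))
        for _ in range(gibbs_sweeps):
            counts = neighbour_counts(labels)
            # log p(x | k) + beta * (#neighbours with label k), up to constants
            log_post = (-0.5 * (image - means[:, None, None]) ** 2
                        / variances[:, None, None]
                        - 0.5 * np.log(variances[:, None, None])
                        + beta * counts)
            post = np.exp(log_post - log_post.max(axis=0))
            post /= post.sum(axis=0)
            # Resample every pixel label (done in parallel here for brevity;
            # a pixel-by-pixel Gibbs sweep is the textbook version).
            cum = np.cumsum(post, axis=0)
            u = rng.random((h, w))
            labels = (u[None] > cum).sum(axis=0)
            marginal += post
        marginal /= gibbs_sweeps
        # M-step: update Gaussian parameters from the estimated marginals.
        for k in range(n_classes):
            wgt = marginal[k]
            means[k] = (wgt * image).sum() / wgt.sum()
            variances[k] = (wgt * (image - means[k]) ** 2).sum() / wgt.sum() + 1e-6
    # MPM decision: assign each pixel its most probable label.
    return marginal.argmax(axis=0)
```

Called on a noisy two-level image, the returned label map approximates the MPM (minimum expected misclassification) segmentation under the stated simplifications.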
34

Generalized Survey Propagation

Tu, Ronghui January 2011 (has links)
Survey propagation (SP) has recently emerged as an efficient algorithm for solving classes of hard constraint-satisfaction problems (CSPs). Powerful as it is, SP remains a heuristic algorithm, and further understanding its algorithmic nature, improving its effectiveness and extending its applicability are highly desirable. Prior to the work in this thesis, Maneva et al. introduced a Markov Random Field (MRF) formalism for k-SAT problems, under which SP may be viewed as a special case of the well-known belief propagation (BP) algorithm. This result has sometimes been interpreted as meaning that “SP is BP”, and it allows a rigorous extension of SP to a “weighted” version, or a family of algorithms, for k-SAT problems. SP has also been generalized, in a non-weighted fashion, for solving non-binary CSPs. That generalization is, however, presented in the language of statistical physics and is somewhat difficult for a broader audience to access. This thesis generalizes SP both in terms of its applicability to non-binary problems and in terms of introducing “weights”, extending SP to a family of algorithms. Under a generic formulation of CSPs, we first present an understanding of non-weighted SP for arbitrary CSPs in terms of “probabilistic token passing” (PTP). We then show that this probabilistic interpretation of non-weighted SP makes it naturally generalizable to a weighted version, which we call weighted PTP. Another main contribution of this thesis is a disproof of the folk belief that “SP is BP”. We show that the fact that SP is a special case of BP for k-SAT problems is rather incidental: for more general CSPs, SP and generalized SP do not arise as special cases of BP. We also establish the conditions under which generalized SP does reduce to a special case of BP. To explore the benefit of generalizing SP to a wide family and to arbitrary, particularly non-binary, problems, we devised a simple weighted-PTP-based algorithm for solving 3-COL problems. Experimental results, compared against an existing non-weighted SP-based algorithm, reveal the potential performance gain that generalized SP may bring.
35

Scaling Analytics via Approximate and Distributed Computing

Chakrabarti, Aniket 12 December 2017 (has links)
No description available.
36

Mapping and localization for extraterrestrial robotic explorations

Xu, Fengliang 01 December 2004 (has links)
No description available.
37

Integrative Modeling and Analysis of High-throughput Biological Data

Chen, Li 21 January 2011 (has links)
Computational biology is an interdisciplinary field that focuses on developing mathematical models and algorithms to interpret biological data so as to understand biological problems. With current high-throughput technology development, different types of biological data can be measured on a large scale, which calls for more sophisticated computational methods to analyze and interpret the data. In this dissertation research, we propose novel methods to integrate, model and analyze multiple types of biological data, including microarray gene expression data, protein-DNA interaction data and protein-protein interaction data. These methods will help improve our understanding of biological systems. First, we propose a knowledge-guided multi-scale independent component analysis (ICA) method for biomarker identification on time-course microarray data. Guided by a knowledge gene pool related to a specific disease under study, the method can determine disease-relevant biological components from ICA modes and then identify biologically meaningful markers related to the specific disease. We have applied the proposed method to yeast cell cycle microarray data and Rsf-1-induced ovarian cancer microarray data. The results show that our knowledge-guided ICA approach can extract biologically meaningful regulatory modes and outperform several baseline methods for biomarker identification. Second, we propose a novel method for transcriptional regulatory network identification by integrating gene expression data and protein-DNA binding data. The approach is built upon a multi-level analysis strategy designed to suppress false-positive predictions. With this strategy, a regulatory module becomes increasingly significant as more relevant gene sets are formed at finer levels. At each level, a two-stage support vector regression (SVR) method is utilized to reduce false-positive predictions by integrating binding motif information and gene expression data; a significance analysis procedure is then applied to assess the significance of each regulatory module. The resulting performance on simulation data and yeast cell cycle data shows that the multi-level SVR approach outperforms other existing methods in the identification of both regulators and their target genes. We have further applied the proposed method to breast cancer cell line data to identify condition-specific regulatory modules associated with estrogen treatment. Experimental results show that our method can identify biologically meaningful regulatory modules related to estrogen signaling and action in breast cancer. Third, we propose a bootstrapping Markov Random Field (MRF)-based method for subnetwork identification on microarray data by incorporating protein-protein interaction data. Methodologically, an MRF-based network score is first derived by considering the dependency among genes to increase the chance of selecting hub genes. A modified simulated annealing search algorithm is then utilized to find the optimal/suboptimal subnetworks with maximal network score. A bootstrapping scheme is finally implemented to generate confident subnetworks. Experimentally, we have compared the proposed method with other existing methods, and the resulting performance on simulation data shows that the bootstrapping MRF-based method outperforms other methods in identifying the ground-truth subnetwork and hub genes. We have then applied our method to breast cancer data to identify significant subnetworks associated with drug resistance.
The identified subnetworks not only show good reproducibility across different data sets, but also indicate several pathways and biological functions potentially associated with the development of breast cancer and drug resistance. In addition, we propose to develop network-constrained support vector machines (SVMs) for cancer classification and prediction, taking the network structure into account when constructing classification hyperplanes. The simulation study demonstrates the effectiveness of our proposed method. The study on the real microarray data sets shows that our network-constrained SVM, together with the bootstrapping MRF-based subnetwork identification approach, can achieve better classification performance than conventional biomarker selection approaches and SVMs. We believe that the research presented in this dissertation not only provides novel and effective methods to model and analyze different types of biological data; the extensive experiments on several real microarray data sets also show its potential to improve the understanding of biological mechanisms related to cancers by generating novel hypotheses for further study. / Ph. D.
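To illustrate the subnetwork-search component, the sketch below runs a plain simulated-annealing search over node subsets of a protein-protein interaction network. The aggregate z-score used here is a common stand-in (it is not the dissertation's MRF-based score), connectivity of candidate subnetworks is not enforced, and the cooling schedule is an assumption.

```python
import math
import random

def anneal_subnetwork(adj, z, seed_gene, n_steps=5000, t0=1.0, t_final=0.01, seed=0):
    """Simulated-annealing search for a high-scoring subnetwork.

    adj: dict mapping gene -> set of interacting genes (PPI network)
    z:   dict mapping gene -> differential-expression z-score
    The aggregate score sum(z)/sqrt(|A|) is a common stand-in; the
    dissertation's MRF-based network score is not reproduced here.
    """
    rng = random.Random(seed)

    def score(members):
        return sum(z[g] for g in members) / math.sqrt(len(members)) if members else -1e9

    current = {seed_gene}
    best, best_score = set(current), score(current)
    for step in range(n_steps):
        temperature = t0 * (t_final / t0) ** (step / n_steps)  # geometric cooling
        # Candidate move: toggle a gene on the boundary of the current subnetwork.
        # (Connectivity of the candidate is not enforced in this sketch.)
        boundary = {n for g in current for n in adj[g]} | current
        gene = rng.choice(sorted(boundary))
        candidate = set(current)
        if gene in candidate and len(candidate) > 1:
            candidate.remove(gene)
        else:
            candidate.add(gene)
        delta = score(candidate) - score(current)
        if delta > 0 or rng.random() < math.exp(delta / temperature):
            current = candidate
            if score(current) > best_score:
                best, best_score = set(current), score(current)
    return best, best_score

# Tiny toy network.
adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
z = {"A": 2.1, "B": 1.8, "C": 0.3, "D": -1.0}
print(anneal_subnetwork(adj, z, seed_gene="A"))
```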
38

Image analysis and representation for textile design classification

Jia, Wei January 2011 (has links)
A good image representation is vital for image comparison and classification; it may affect both classification accuracy and efficiency. The purpose of this thesis was to explore novel and appropriate image representations. Another aim was to investigate these representations for image classification. Finally, novel features were examined for improving image classification accuracy. The images of interest to this thesis were textile design images. The motivation for analysing textile design images is to help designers browse images, fuel their creativity, and improve their design efficiency. In recent years, the bag-of-words model has been shown to be a good basis for image representation, and there have been many attempts to go beyond this representation. Bag-of-words models have been used frequently in the classification of image data, due to their good performance and simplicity. “Words” in images can have different definitions and are obtained through steps of feature detection, feature description, and codeword calculation. The model represents an image as an orderless collection of local features. However, discarding the spatial relationships of local features limits the power of this model. This thesis exploited novel image representations, the bag of shapes and region label graphs models, which were based on the bag-of-words model. In both models, an image was represented by a collection of segmented regions, and each region was described by shape descriptors. In the latter model, graphs were constructed to capture the spatial information between groups of segmented regions, and graph features were calculated based on graph theory. Novel elements include the use of MRFs to extract printed designs and woven patterns from textile images, utilisation of the extractions to form bag of shapes models, and construction of region label graphs to capture the spatial information. The extraction of textile designs was formulated as a pixel labelling problem. Algorithms for MRF optimisation and re-estimation were described and evaluated. A method for quantitative evaluation was presented and used to compare the performance of MRFs optimised using alpha-expansion and iterated conditional modes (ICM), both with and without parameter re-estimation. The results were used in the formation of the bag of shapes and region label graphs models. The bag of shapes model was a collection of MRF-segmented regions, and the shape of each region was described with generic Fourier descriptors. Each image was represented as a bag of shapes. A simple yet competitive classification scheme based on nearest-neighbour class-based matching was used. Classification performance was compared to that obtained when using bags of SIFT features. To capture the spatial information, region label graphs were constructed to obtain graph features. Regions with the same label were treated as a group, and each group was associated uniquely with a vertex in an undirected, weighted graph. Each region group was represented as a bag of shape descriptors. Edges in the graph denoted either the extent to which the groups' regions were spatially adjacent or the dissimilarity of their respective bags of shapes. A series of unweighted graphs was obtained by removing edges in order of weight. Finally, an image was represented using its shape descriptors along with features derived from the chromatic numbers or domination numbers of the unweighted graphs and their complements. Linear SVM classifiers were used for classification.
Experiments were carried out on data from Liberty Art Fabrics, which consisted of more than 10,000 complex images, mainly of printed textile designs and woven patterns. The experimental data were classified manually into seven classes by assigning each image a text descriptor based on content or design type. The seven classes were floral, paisley, stripe, leaf, geometric, spot, and check. The results showed that reasonable and interesting regions were obtained from MRF segmentation, in which alpha-expansion with parameter re-estimation performed better than alpha-expansion without parameter re-estimation or ICM. This result is promising not only for textile CAD (computer-aided design), where it can support redesign of the textile image, but also for image representation. It was also found that the bag of shapes model based on MRF segmentation can achieve classification accuracy comparable to bags of SIFT features within the framework of nearest-neighbour class-based matching. Finally, the results indicated that incorporating graph features extracted by constructing region label graphs can improve the classification accuracy compared to both the bag of shapes and bag of SIFT models.
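One plausible reading of nearest-neighbour class-based matching for bag-of-shapes models is sketched below: a query image's shape descriptors are compared with the pooled descriptors of each class, and the class with the smallest average nearest-neighbour distance wins. The descriptor dimensionality and the Euclidean distance are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def classify_bag(query_bag, class_bags):
    """Nearest-neighbour class-based matching for bags of shape descriptors.

    query_bag:  (n_shapes, d) array of descriptors for one image
    class_bags: dict class_name -> (m_shapes, d) array pooled over that class
    """
    best_class, best_cost = None, np.inf
    for name, bag in class_bags.items():
        # Distance from every query shape to its nearest shape in this class.
        d = np.linalg.norm(query_bag[:, None, :] - bag[None, :, :], axis=-1)
        cost = d.min(axis=1).mean()
        if cost < best_cost:
            best_class, best_cost = name, cost
    return best_class

# Tiny illustration with random 10-dimensional descriptors.
rng = np.random.default_rng(0)
class_bags = {"floral": rng.normal(0, 1, (50, 10)),
              "stripe": rng.normal(3, 1, (50, 10))}
query = rng.normal(3, 1, (8, 10))
print(classify_bag(query, class_bags))   # expected: "stripe"
```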
39

Multitemporal Spaceborne Polarimetric SAR Data for Urban Land Cover Mapping

Niu, Xin January 2012 (has links)
Urban land cover mapping represents one of the most important remote sensing applications in the context of rapid global urbanization. In recent years, high resolution spaceborne Polarimetric Synthetic Aperture Radar (PolSAR) has been increasingly used for urban land-cover/land-use mapping, since more information can be obtained in multiple polarizations and the collection of such data is less influenced by solar illumination and weather conditions.  The overall objective of this research is to develop effective methods to extract accurate and detailed urban land cover information from spaceborne PolSAR data. Six RADARSAT-2 fine-beam polarimetric SAR and three RADARSAT-2 ultra-fine-beam SAR images were used. These data were acquired from June to September 2008 over the northern urban-rural fringe of the Greater Toronto Area, Canada. The major land-use/land-cover classes in this area include high-density residential areas, low-density residential areas, industrial and commercial areas, construction sites, roads, streets, parks, golf courses, forests, pasture, water and two types of agricultural crops. In this research, various polarimetric SAR parameters were evaluated for urban land cover mapping. They include the parameters from Pauli, Freeman and Cloude-Pottier decompositions, the coherency matrix, the intensities of each polarization and their logarithms.  Both object-based and pixel-based classification approaches were investigated. Through an object-based Support Vector Machine (SVM) and a rule-based approach, the efficiencies of various PolSAR features and multitemporal data combinations were evaluated. For the pixel-based approach, a contextual Stochastic Expectation-Maximization (SEM) algorithm was proposed. With an adaptive Markov Random Field (MRF) and a modified Multiscale Pappas Adaptive Clustering (MPAC), contextual information was exploited to improve the mapping results. To take full advantage of alternative PolSAR distribution models, a rule-based model selection approach was put forward and compared with a dictionary-based approach.  Moreover, the capability of multitemporal fine-beam PolSAR data was compared with that of multitemporal ultra-fine-beam C-HH SAR data. Texture analysis and a rule-based approach that exploits object features and spatial relationships were applied for further improvement. Using the proposed approaches, detailed urban land-cover classes and finer urban structures could be mapped with high accuracy, in contrast to most previous studies, which focused only on the extraction of urban extent or the mapping of very few urban classes. This is also one of the first comparisons of various PolSAR parameters for detailed urban mapping using an object-based approach. Unlike other multitemporal studies, the multitemporal analysis here focused on the significance of complementary information from both ascending and descending SAR data and on the temporal relationships in the data. Further, the proposed novel contextual analyses could effectively improve pixel-based classification accuracy and produce homogeneous results with preserved shape details, avoiding over-averaging. The proposed contextual SEM algorithm, one of the first to combine an adaptive MRF with the modified MPAC, was able to mitigate the degeneracy problem of traditional EM algorithms with fast convergence when dealing with many classes. This contextual SEM outperformed the contextual SVM in certain situations with regard to both accuracy and computation time.
By using such a contextual algorithm, the common PolSAR data distribution models, namely Wishart, G0p, Kp and KummerU, were compared for detailed urban mapping in terms of both mapping accuracy and time efficiency. In the comparisons, G0p, Kp and KummerU demonstrated better performance, with higher overall accuracies than Wishart. Nevertheless, the advantages of Wishart and the other models could also be effectively integrated by the proposed rule-based adaptive model selection, while only limited improvement could be observed with the dictionary-based selection, which has been applied in previous studies. The use of polarimetric SAR data for identifying various urban classes was then compared with the ultra-fine-beam C-HH SAR data. The grey-level co-occurrence matrix textures generated from the ultra-fine-beam C-HH SAR data were found to be more efficient than the corresponding PolSAR textures for separating urban from rural areas. An object-based and pixel-based fusion approach that combines ultra-fine-beam C-HH SAR texture data with PolSAR data was developed. In contrast to many other fusion approaches that have exploited pixel-based classification results to improve object-based classifications, the proposed rule-based fusion approach using object features and contextual information was able to extract several low-backscatter classes such as roads, streets and parks with reasonable accuracy.
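As background for the Wishart baseline mentioned above, a minimal sketch of the standard complex-Wishart distance classifier for coherency matrices is given below. It is not the thesis's contextual SEM algorithm, nor does it cover the G0p, Kp or KummerU models; the toy data at the end are purely illustrative.

```python
import numpy as np

def wishart_classify(coherency, class_centres):
    """Standard complex-Wishart distance classifier for PolSAR data:
    d_m(T) = ln|Sigma_m| + tr(Sigma_m^{-1} T), assign each pixel to argmin d_m.

    coherency:     (h, w, 3, 3) complex array of per-pixel coherency matrices T
    class_centres: (n_classes, 3, 3) complex array of class mean matrices Sigma_m
    """
    h, w = coherency.shape[:2]
    n_classes = class_centres.shape[0]
    distances = np.empty((n_classes, h, w))
    for m, sigma in enumerate(class_centres):
        inv_sigma = np.linalg.inv(sigma)
        log_det = np.log(np.abs(np.linalg.det(sigma)))
        # tr(Sigma^{-1} T) for every pixel via einsum over the matrix indices.
        trace = np.einsum('ij,xyji->xy', inv_sigma, coherency).real
        distances[m] = log_det + trace
    return distances.argmin(axis=0)

# Toy usage with two diagonal class centres.
rng = np.random.default_rng(0)
centres = np.stack([np.eye(3, dtype=complex), 4 * np.eye(3, dtype=complex)])
pixels = np.einsum('k,ij->kij', rng.uniform(0.5, 6, 16), np.eye(3, dtype=complex))
labels = wishart_classify(pixels.reshape(4, 4, 3, 3), centres)
print(labels)
```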
40

A New Look Into Image Classification: Bootstrap Approach

Ochilov, Shuhratchon January 2012 (has links)
Scene classification is performed on countless remote sensing images in support of operational activities. Automating this process is preferable, since manual pixel-level classification is not feasible for large scenes. However, developing such an algorithmic solution is a challenging task due to both scene complexities and sensor limitations. The objective is to develop efficient and accurate unsupervised methods for classification (i.e., assigning each pixel to an appropriate generic class) and for labeling (i.e., properly assigning true labels to each class). Unlike traditional approaches, the proposed bootstrap approach achieves classification and labeling without training data. Here, the full image is partitioned into subimages and the true classes found in each subimage are provided by the user. After these steps, the rest of the process is automatic. Each subimage is individually classified into regions, and then, using the joint information from all subimages and regions, the optimal configuration of labels is found by optimizing an objective function based on a Markov random field (MRF) model. The bootstrap approach has been successfully demonstrated with SAR sea-ice and lake-ice images, which represent challenging scenes used operationally for ship navigation, climate study, and ice-fraction estimation. Accuracy assessment is based on evaluation conducted by third-party experts. The bootstrap method is also demonstrated using synthetic and natural images. The impact of this technique is a repeatable and accurate methodology that generates classified maps faster than the standard methodology.
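The final MRF-based labelling step can be approximated, for illustration, by an ICM refinement of an initial label map under a simple data-plus-Potts objective. The sketch below makes several simplifying assumptions (a scalar squared-error data term, a 4-neighbourhood, an illustrative beta) and does not reproduce the joint subimage labelling of the bootstrap approach.

```python
import numpy as np

def icm_refine(features, labels, means, beta=1.0, sweeps=5):
    """Illustrative ICM refinement of a label map under a simple MRF objective:
    squared-error data term plus a Potts smoothness term over 4-neighbours."""
    h, w = labels.shape
    n_classes = means.shape[0]
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                neighbours = []
                if y > 0: neighbours.append(labels[y - 1, x])
                if y < h - 1: neighbours.append(labels[y + 1, x])
                if x > 0: neighbours.append(labels[y, x - 1])
                if x < w - 1: neighbours.append(labels[y, x + 1])
                costs = [(features[y, x] - means[k]) ** 2
                         + beta * sum(n != k for n in neighbours)
                         for k in range(n_classes)]
                labels[y, x] = int(np.argmin(costs))
    return labels

# Toy example: denoise a thresholded label map of a noisy square.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32), dtype=int); truth[8:24, 8:24] = 1
noisy = truth + rng.normal(0, 0.6, truth.shape)
init = (noisy > 0.5).astype(int)
print((icm_refine(noisy, init.copy(), means=np.array([0.0, 1.0])) == truth).mean())
```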
