  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Road Extraction From High Resolution Satellite Images Using Adaptive Boosting With Multi-resolution Analysis

Cinar, Umut 01 September 2012 (has links) (PDF)
Road extraction from satellite or aerial imagery is a popular topic in remote sensing, and many road extraction algorithms have been suggested by various researchers. However, the need for reliable remotely sensed road information persists, as no sufficiently robust road extraction algorithm yet exists. In this study, we explore the road extraction problem taking advantage of multi-resolution analysis and adaptive-boosting-based classifiers. That is, we propose a new road extraction algorithm exploiting both spectral and structural features of high-resolution multi-spectral satellite images. The proposed model is composed of three major components: feature extraction, classification and road detection. Well-known spectral band ratios are utilized to represent reflectance properties of the data, whereas a segmentation operation followed by an elongatedness scoring technique renders a structural evaluation of the road parts within the multi-resolution analysis framework. The extracted features are fed into the Adaptive Boosting (AdaBoost) learning procedure, and the learning method iteratively combines decision trees to acquire a classifier with high accuracy. The road network is identified from the probability map constructed by the classifier suggested by AdaBoost. The algorithm is designed to be modular in the sense of its extensibility; that is, new road descriptor features can be easily integrated into the existing model. The empirical evaluation suggests that the algorithm is capable of extracting the majority of the road network and shows promising performance results.
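The learning step the abstract describes, AdaBoost iteratively combining decision trees into a classifier whose outputs form a probability map, can be sketched with scikit-learn. The features below are random stand-ins for the thesis's band-ratio and elongatedness features, not its actual data:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Random stand-in for per-pixel feature vectors (e.g. spectral band
# ratios plus an elongatedness score); labels: 1 = road, 0 = background.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

# AdaBoost iteratively combines shallow decision trees (stumps by
# default) into a single strong classifier.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# Per-pixel road probabilities: the "probability map" from which the
# road network would then be extracted.
prob_map = clf.predict_proba(X)[:, 1]
```

Thresholding `prob_map` (or post-processing it for connectivity) would yield the extracted road mask.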
32

Multiple hypothesis testing and multiple outlier identification methods

Yin, Yaling 13 April 2010
Traditional multiple hypothesis testing procedures, such as that of Benjamini and Hochberg, fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this thesis it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses, as proposed by Black, gives a procedure with superior power.

Multiple hypothesis testing can also be applied to regression diagnostics. In this thesis, a Bayesian method is proposed to test multiple hypotheses, of which the i-th null and alternative hypotheses are that the i-th observation is not an outlier versus that it is, for i = 1, ..., m. In the proposed Bayesian model, it is assumed that outliers have a mean shift, where the proportion of outliers and the mean shift follow a Beta prior distribution and a normal prior distribution, respectively. It is proved in the thesis that for the proposed model, when there exists more than one outlier, the marginal distributions of the deletion residual of the i-th observation under both the null and alternative hypotheses are doubly noncentral t distributions.

The outlyingness of the i-th observation is measured by the marginal posterior probability that the i-th observation is an outlier given its deletion residual. An importance sampling method is proposed to calculate this probability. The method requires the computation of the density of the doubly noncentral F distribution, which is approximated using Patnaik's approximation. An algorithm is proposed in this thesis to examine the accuracy of Patnaik's approximation. The comparison of this algorithm's output with Patnaik's approximation shows that the latter can save massive computation time without losing much accuracy.

The proposed Bayesian multiple outlier identification procedure is applied to simulated data sets. Various simulation and prior parameters are used to study the sensitivity of the posteriors to the priors. The area under the ROC curve (AUC) is calculated for each combination of parameters. A factorial design analysis on AUC is carried out, with various simulation and prior parameters as factors. The resulting AUC values are high for the various selected parameters, indicating that the proposed method can identify the majority of outliers within tolerable errors. The results of the factorial design show that the priors do not have much effect on the marginal posterior probability as long as the sample size is not too small.

In this thesis, the proposed Bayesian procedure is also applied to a real data set obtained by Kanduc et al. in 2008. The proteomes of thirty viruses examined by Kanduc et al. are found to share a high number of pentapeptide overlaps with the human proteome. In a linear regression analysis of the level of viral overlap with the human proteome against the length of the viral proteome, Kanduc et al. report that among the thirty viruses, human T-lymphotropic virus 1, Rubella virus, and hepatitis C virus present relatively higher levels of overlap with the human proteome than the predicted level. The results obtained using the proposed procedure indicate that the four viruses with extremely large sizes (Human herpesvirus 4, Human herpesvirus 6, Variola virus, and Human herpesvirus 5) are more likely to be the outliers than the three reported viruses. The results with the four extreme viruses deleted confirm the claim of Kanduc et al.
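The fixed-error-rate step-up procedure of Benjamini and Hochberg discussed in this abstract can be sketched in a few lines of NumPy; the p-values below are illustrative, not from the thesis:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean rejection mask controlling FDR at level alpha.

    Benjamini-Hochberg step-up: sort the p-values, find the largest k
    with p_(k) <= alpha * k / m, and reject the k smallest p-values.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest sorted index meeting the bound
        reject[order[: k + 1]] = True
    return reject

# Small worked example: the three smallest p-values are rejected.
pvals = [0.001, 0.008, 0.012, 0.041, 0.20, 0.35, 0.6, 0.9]
mask = benjamini_hochberg(pvals, alpha=0.05)
```

Note that 0.041 survives even though it is below 0.05: it fails its step-up threshold of 0.05 * 4/8 = 0.025.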
34

Object Tracking For Surveillance Applications Using Thermal And Visible Band Video Data Fusion

Beyan, Cigdem 01 December 2010 (has links) (PDF)
Individual tracking of objects in video, such as people and the luggage they carry, is important for surveillance applications, as it enables the deduction of higher-level information and the timely detection of potential threats. However, this is a challenging problem, and many studies in the literature track a person and their belongings as a single object. In this thesis, we propose using thermal band video data in addition to visible band video data to track people and their belongings separately in indoor applications, using their heat signatures. For the object tracking step, an adaptive, fully automatic multi-object tracking system based on the mean-shift tracking method is proposed. Trackers are refreshed using foreground information to overcome problems that may occur due to changes in an object's size and shape, to handle occlusions and splits, and to detect newly emerging objects as well as objects that leave the scene. By using the trajectories of objects, the owners of objects are found and abandoned objects are detected to generate an alarm. Better tracking performance is also achieved compared to using a single modality, as the thermal reflections and halo effects that adversely affect tracking are eliminated by the complementary visible band data.
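The core mean-shift update that such trackers rely on can be illustrated as iterated local averaging. This toy sketch works on 2-D sample points rather than the histogram back-projections a real tracker would use; the data and bandwidth are invented for illustration:

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth, n_iter=50, tol=1e-6):
    """Shift `start` toward the densest region of `points`.

    Each iteration replaces the current estimate with the mean of the
    points inside a window of radius `bandwidth`: the basic update a
    mean-shift tracker applies to locate an object between frames.
    """
    x = np.asarray(start, dtype=float)
    pts = np.asarray(points, dtype=float)
    for _ in range(n_iter):
        in_window = np.linalg.norm(pts - x, axis=1) <= bandwidth
        if not in_window.any():
            break
        new_x = pts[in_window].mean(axis=0)
        if np.linalg.norm(new_x - x) < tol:
            break
        x = new_x
    return x

# Two point clouds; starting near the first converges to its centroid.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([0, 0], 0.1, (200, 2)),
                 rng.normal([5, 5], 0.1, (50, 2))])
mode = mean_shift_mode(pts, start=[1.0, 1.0], bandwidth=2.0)
```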
35

Classification of MRI Scans Using DSPs

Τσάμπρας, Λάμπρος 05 February 2015 (has links)
Analyzing medical images is an interesting but difficult task, because the variations are very small and there is a large volume of data to process. It is quite difficult to develop an automated recognition system that can process a large amount of patient information and provide a correct estimate. In medicine, the conventional diagnostic method for identifying abnormalities in knee MR images is inspection by experienced physicians. Fuzzy logic techniques are more accurate, but they depend entirely on expert knowledge, which may not always be available. In this work, we segment the knee MR image with the Mean Shift technique, recognize the main regions with the help of HMRFs, and finally train an ANFIS classifier. The performance of the ANFIS classifier was evaluated in terms of training performance and classification accuracy. The results confirmed that the classifier detects abnormalities in the scans with high accuracy. This thesis describes the proposed strategy for classifying patients' knee MRI scans and diagnosing abnormalities in them.
36

Contributions to Mean Shift filtering and segmentation : Application to MRI ischemic data

Li, Ting 04 April 2012 (has links) (PDF)
Medical studies increasingly use multi-modality imaging, producing multidimensional data that bring additional information but are also challenging to process and interpret. For example, in predicting salvageable tissue, ischemic studies combining multiple MRI modalities (DWI, PWI) have produced more conclusive results than studies using a single modality. However, the multi-modality approach necessitates more advanced algorithms to perform otherwise routine image processing tasks such as filtering, segmentation and clustering. A robust method for addressing the problems associated with processing multi-modality imaging data is Mean Shift, which is based on feature space analysis and non-parametric kernel density estimation and can be used for multi-dimensional filtering, segmentation and clustering. In this thesis, we sought to optimize the mean shift process by analyzing the factors that influence it and optimizing its parameters. We examine the effect of noise on processing the feature space and how Mean Shift can be tuned for optimal de-noising and reduced blurring. The large success of Mean Shift is mainly due to the intuitive tuning of the bandwidth parameters, which describe the scale at which features are analyzed. Building on univariate Plug-In (PI) bandwidth selectors for kernel density estimation, we propose a bandwidth matrix estimation method based on multivariate PI for Mean Shift filtering. We study the benefit of using diagonal and full bandwidth matrices with experiments on synthesized and natural images. We propose a new, automatic, volume-based segmentation framework combining Mean Shift filtering, Region Growing segmentation and probability map optimization. The framework was developed using synthesized MRI images as test data and yielded a perfect segmentation, with DICE similarity values reaching the maximum of 1.
Testing was then extended to real MRI data obtained from animals and patients, with the aim of predicting the evolution of the ischemic penumbra several days after the onset of ischemia using only information from the very first scan. The results are an average DICE of 0.8 for the animal MRI scans and 0.53 for the patient MRI scans; the reference images in both cases were manually segmented by a team of medical experts. In addition, the most relevant combination of parameters for the MRI modalities is determined.
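The DICE similarity measurement used above to score segmentations against manual references has a short, standard definition; a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks.

    DICE = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap
    with the reference segmentation.
    """
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two of three predicted pixels overlap the reference: DICE = 4/6.
pred = np.array([[1, 1, 0], [0, 1, 0]])
ref = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(pred, ref)
```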
37

New Approaches in Particle Filtering: Application to Inertial Navigation Updating

Murangira, A. 25 March 2014 (has links) (PDF)
The work presented in this thesis concerns the development and implementation of a particle filtering algorithm for updating inertial navigation using altimetric measurements. The filter developed, the MRPF (Mixture Regularized Particle Filter), relies on modelling the posterior density as a finite mixture, on the regularized particle filter, and on the mean-shift clustering algorithm. We also propose an extension of the MRPF to the Rao-Blackwellized particle filter, called the MRBPF (Mixture Rao-Blackwellized Particle Filter). The objective is to design a filter suited to handling the multimodality caused by terrain ambiguities. The use of finite mixture models makes it possible to introduce an importance sampling algorithm that generates particles in the regions of interest. A second line of research concerns the development of integrity monitoring tools for the particle solution. Building on change detection theory, we propose a sequential algorithm for detecting filter divergence. The performance of the MRPF, the MRBPF, and the integrity test is evaluated on several altimetric updating scenarios.
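For readers unfamiliar with terrain-aided navigation updating, a plain bootstrap particle filter, the baseline that the MRPF refines with mixture modelling, regularization and mean-shift clustering, can be sketched as follows. The 1-D terrain profile, motion model and noise levels are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def terrain(x):
    # Hypothetical 1-D terrain elevation profile standing in for a map.
    return np.sin(0.5 * x) + 0.3 * np.sin(1.7 * x)

# Bootstrap particle filter estimating horizontal position from noisy
# ground-clearance (altimetric) measurements over a known terrain map.
n_particles, meas_std = 500, 0.05
true_x = 1.0
particles = rng.uniform(0.0, 10.0, n_particles)   # broad prior: position unknown
for _ in range(30):
    true_x += 0.2                                          # vehicle moves
    particles += 0.2 + rng.normal(0.0, 0.02, n_particles)  # propagate with noise
    z = terrain(true_x) + rng.normal(0.0, meas_std)        # altimeter measurement
    w = np.exp(-0.5 * ((z - terrain(particles)) / meas_std) ** 2) + 1e-12
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
estimate = particles.mean()
```

When the terrain is ambiguous, the posterior cloud of particles splits into several modes; detecting and managing those modes is exactly where the mixture modelling and mean-shift clustering of the MRPF come in.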
38

PHASE CLASSIFICATION IN CHARACTERISTIC X-RAY HYPERSPECTRAL IMAGES BY THE MEAN SHIFT CLUSTERING METHOD

Martins, Diego Schmaedech 23 January 2012 (has links)
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). / In the present work we introduce the Mean Shift Clustering (MSC) algorithm as a valuable alternative for performing phase classification of materials from hyperspectral images. As opposed to other multivariate statistical techniques, such as principal component analysis (PCA), clustering techniques directly assign a class (phase) label to each pixel, so their outputs are phase-segmented images, i.e., there is no need for an additional segmentation algorithm. On the other hand, compared to other clustering procedures and classification methods based on cluster analysis, MSC has the advantages of not requiring prior knowledge of the number of data clusters and of not assuming any shape for these clusters, i.e., neither the number nor the composition of the phases must be known in advance. This makes MSC a particularly useful tool for exploratory research, allowing automatic phase identification in unknown samples. Other advantages of this approach are the possibility of multimodal image analysis, composed of different types of signals, and of estimating the uncertainties of the analysis. Finally, the visualization and interpretation of results are also simplified, since the information content of the output image does not depend on any arbitrary choice of the contents of the color channels. In this work we apply the PCA and MSC algorithms to the analysis of characteristic X-ray maps acquired in a Scanning Electron Microscope (SEM) equipped with an Energy Dispersive Spectrometry (EDS) detection system. Our results indicate that MSC is capable of detecting minor phases not clearly identified when only the three most significant components obtained by PCA are used.
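The property emphasized above, that mean shift clustering needs neither the number nor the shape of the clusters in advance, can be demonstrated with scikit-learn's MeanShift. The 2-D points below are synthetic stand-ins for per-pixel X-ray spectra with three distinct "phase" compositions:

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(3)

# Three well-separated groups; their count (3) is NOT passed to the
# algorithm, only a bandwidth describing the analysis scale.
X = np.vstack([rng.normal([0, 0], 0.3, (100, 2)),
               rng.normal([4, 0], 0.3, (100, 2)),
               rng.normal([0, 4], 0.3, (100, 2))])

ms = MeanShift(bandwidth=1.5)
labels = ms.fit_predict(X)        # one phase label per "pixel"
n_phases = len(np.unique(labels))
```

Reshaping `labels` back to the image grid would directly give the phase-segmented image, with no separate segmentation step.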
39

Object Tracking in Video

Sojma, Zdeněk January 2011 (has links)
This master's thesis describes the principles of the most widely used object tracking systems for video and then focuses on the design and implementation of an interactive offline tracking system for generic color objects. The quality of the algorithm lies in the highly accurate estimation of the object trajectory. The system creates the output trajectory from user-specified input data, which may be interactively modified and extended to improve accuracy. The algorithm is based on a detector that uses color bin features and on the temporal coherence of object motion to generate multiple candidate object trajectories. The optimal output trajectory is then calculated by dynamic programming, whose parameters can also be interactively modified by the user. The system achieves 15-70 fps on 480x360 video. The thesis also describes the implementation of an application whose purpose is to evaluate the tracker's accuracy, and the final results are discussed.
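The dynamic-programming selection of an optimal trajectory from per-frame candidates can be sketched as a shortest path through a trellis. The positions and costs below are invented, and the absolute-difference motion penalty is a simple stand-in for the system's real detector and coherence terms:

```python
import numpy as np

def best_trajectory(candidates, motion_penalty=1.0):
    """Pick one candidate per frame minimizing detection plus motion cost.

    candidates[t] is a list of (position, cost) pairs for frame t.
    Standard trellis DP: forward pass accumulates best costs, backward
    pass recovers the chosen candidate index for each frame.
    """
    cost = [np.array([c for _, c in candidates[0]], dtype=float)]
    back = []
    for t in range(1, len(candidates)):
        pos_prev = np.array([p for p, _ in candidates[t - 1]], dtype=float)
        rows, ptrs = [], []
        for p, c in candidates[t]:
            trans = cost[-1] + motion_penalty * np.abs(pos_prev - p)
            j = int(np.argmin(trans))       # best predecessor candidate
            rows.append(trans[j] + c)
            ptrs.append(j)
        cost.append(np.array(rows))
        back.append(ptrs)
    i = int(np.argmin(cost[-1]))            # cheapest final candidate
    path = [i]
    for ptrs in reversed(back):
        i = ptrs[i]
        path.append(i)
    return path[::-1]

# Three frames, two candidates each as (position, detection cost); the
# smooth low-cost chain near position 0 wins over the jumpy one.
cands = [[(0.0, 0.1), (5.0, 0.1)],
         [(0.2, 0.1), (5.1, 0.1)],
         [(0.4, 0.1), (9.0, 0.1)]]
traj = best_trajectory(cands)
```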
40

MONITORING AUTOCORRELATED PROCESSES

Tang, Weiping 02 August 2011 (has links)
This thesis was submitted by Weiping Tang on August 2, 2011.

Several control schemes for monitoring process mean shifts, including cumulative sum (CUSUM), weighted cumulative sum (WCUSUM), adaptive cumulative sum (ACUSUM) and exponentially weighted moving average (EWMA) control schemes, display high performance in detecting constant process mean shifts. However, a variety of dynamic mean shifts frequently occur, and few control schemes work efficiently in these situations due to the limited window for catching shifts, particularly when the mean decreases rapidly. This is precisely the case when one uses the residuals from autocorrelated data to monitor the process mean, a feature often referred to as forecast recovery. This thesis focuses on detecting a shift in the mean of a time series when a forecast-recovery dynamic pattern in the mean of the residuals is observed. Specifically, we examine in detail several particular cases of Autoregressive Integrated Moving Average (ARIMA) time series models. We introduce a new upper-sided control chart based on the Exponentially Weighted Moving Average (EWMA) scheme combined with the Fast Initial Response (FIR) feature. To assess chart performance we use the well-established Average Run Length (ARL) criterion. A non-homogeneous Markov chain method is developed for ARL calculation for the proposed chart. We show numerically that the proposed procedure performs as well as or better than the Weighted Cumulative Sum (WCUSUM) chart introduced by Shu, Jiang and Tsui (2008), and better than the conventional CUSUM, ACUSUM and Generalized Likelihood Ratio Test (GLRT) charts. The methods are illustrated on molecular weight data from a polymer manufacturing process. / Master of Science (MSc)
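An upper-sided EWMA statistic with a Fast Initial Response head start, the combination this thesis builds on, can be sketched as follows. The 50% head-start convention and the parameters here are common textbook choices, not necessarily the thesis's exact chart design:

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, fir=True):
    """Upper-sided EWMA chart on standardized observations.

    z_t = (1 - lam) * z_{t-1} + lam * x_t signals when it exceeds the
    asymptotic control limit L * sqrt(lam / (2 - lam)). FIR is emulated
    with a 50% head start so early shifts are caught faster. Returns
    the index of the first signal, or None if no signal occurs.
    """
    limit = L * np.sqrt(lam / (2.0 - lam))
    z = 0.5 * limit if fir else 0.0          # FIR head start
    for t, xt in enumerate(np.asarray(x, dtype=float)):
        z = (1.0 - lam) * z + lam * xt
        if z > limit:
            return t
    return None

# A sustained one-sigma upward mean shift is signalled quickly.
rng = np.random.default_rng(4)
t_signal = ewma_chart(rng.normal(1.0, 1.0, 200))
```

A full chart design would also evaluate the ARL of such a scheme, which is what the thesis's non-homogeneous Markov chain method computes.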
