331

A New Segmentation Algorithm for Prostate Boundary Detection in 2D Ultrasound Images

Chiu, Bernard January 2003 (has links)
Prostate segmentation is a required step in determining the volume of a prostate, which is important in the diagnosis and treatment of prostate cancer. In the past, radiologists manually segmented the two-dimensional cross-sectional ultrasound images. Typically, they must outline at least a hundred cross-sectional images to obtain an accurate estimate of the prostate's volume, which is very time-consuming. To accomplish this task more efficiently, an automated procedure has to be developed. However, because of the quality of ultrasound images, it is difficult to develop a computerized method for defining the boundary of an object in an ultrasound image. The goal of this thesis is to find an automated segmentation algorithm for detecting the boundary of the prostate in ultrasound images. As the first step in this endeavour, a semi-automatic segmentation method is designed. The method is only semi-automatic because it requires the user to enter four initialization points, which define the initial contour. The discrete dynamic contour (DDC) algorithm is then used to update the contour automatically. The DDC model is made up of a set of connected vertices. When provided with an energy field that describes the features of the ultrasound image, the model automatically adjusts the vertices of the contour to attain a maximum energy. In the proposed algorithm, Mallat's dyadic wavelet transform is used to determine the energy field. Using the dyadic wavelet transform, approximation coefficients and detail coefficients at different scales can be generated. In particular, the two sets of detail coefficients represent the gradient of the smoothed ultrasound image. Since the gradient modulus is high where edge features appear, it is assigned as the energy field used to drive the DDC model. The ultimate goal of this work is to develop a fully automatic segmentation algorithm. Since only the initialization stage requires human supervision in the proposed semi-automatic algorithm, the task of developing a fully automatic segmentation algorithm reduces to designing a fully automatic initialization process. Such a process is introduced in this thesis. In this work, the contours defined by the semi-automatic and fully automatic segmentation algorithms are compared with the boundary outlined by an expert observer. Tested on 8 sample images, the mean absolute difference between the semi-automatically defined and manually outlined boundaries is less than 2.5 pixels, and that between the fully-automatically defined and manually outlined boundaries is less than 4 pixels. Automated segmentation tools that achieve this level of accuracy would be very useful in helping radiologists segment the prostate boundary much more efficiently.
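A minimal sketch of the energy-field idea, assuming NumPy/SciPy: a Gaussian filter stands in here for the dyadic-wavelet smoothing, and the vertex update is simplified (the full DDC also includes internal and damping forces).

```python
import numpy as np
from scipy import ndimage

def energy_field(image, sigma=4.0):
    # Gradient modulus of a smoothed image. In the thesis the smoothing
    # and gradient come from Mallat's dyadic wavelet transform; a
    # Gaussian filter is a simple stand-in for this sketch.
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return np.hypot(gx, gy)

def ddc_step(vertices, energy, step=0.5):
    # Simplified contour update: nudge each (x, y) vertex up the local
    # energy gradient so the contour settles on high-gradient (edge)
    # pixels; the real DDC dynamics are richer than this.
    ey, ex = np.gradient(energy)
    new = vertices.astype(float).copy()
    for i, (x, y) in enumerate(vertices):
        xi = int(np.clip(round(x), 0, energy.shape[1] - 1))
        yi = int(np.clip(round(y), 0, energy.shape[0] - 1))
        new[i] += step * np.array([ex[yi, xi], ey[yi, xi]])
    return new
```

In the full algorithm, the four initialization points (user-supplied or automatically generated) define the starting contour, and a step like `ddc_step` is iterated until the vertices settle on the prostate boundary.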
332

Novel Pattern Recognition Techniques for Improved Target Detection in Hyperspectral Imagery

Sakla, Wesam Adel December 2009 (has links)
A fundamental challenge in target detection in hyperspectral imagery is spectral variability. In target detection applications, we are provided with a pure target signature; we do not have a collection of samples that characterize the spectral variability of the target. Another problem is that the performance of stochastic detection algorithms such as the spectral matched filter can be detrimentally affected by the assumption of multivariate normality of the data, which is often violated in practical situations. We address the lack of training samples by creating two models to characterize the spectral variability of the target class: the first model makes no assumptions regarding inter-band correlation, while the second uses a first-order Markov-based scheme to exploit correlation between bands. Using these models, we present two techniques for meeting these challenges: the kernel-based support vector data description (SVDD) and the spectral fringe-adjusted joint transform correlation (SFJTC). We have developed an algorithm that uses the kernel-based SVDD in full-pixel target detection scenarios. We have addressed optimization of the SVDD kernel-width parameter using the golden-section search algorithm for unconstrained optimization. We investigated the proper number of signatures N to generate for the SVDD target class and found that only a small number of training samples is required relative to the dimensionality (number of bands). We have extended decision-level fusion techniques using the majority-vote rule to alleviate the problem of selecting a proper value of s² for either of our target variability models. We have shown that heavy spectral variability may degrade SFJTC-based detection and have addressed this by developing an algorithm that selects an optimal combination of the discrete wavelet transform (DWT) coefficients of the signatures for use as features for detection. For most scenarios, our results show that our SVDD-based detection scheme provides low false-positive rates while maintaining higher true-positive rates than popular stochastic detection algorithms. Our results also show that our SFJTC-based detection scheme using the DWT coefficients can yield significant detection improvement compared with SFJTC using the original signatures and with traditional stochastic and deterministic algorithms.
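A sketch of the kernel-width search under stated assumptions: scikit-learn's OneClassSVM stands in for the SVDD (the two coincide for stationary kernels such as the RBF), and the validation criterion below is a placeholder, since the thesis's actual objective is not given in the abstract.

```python
import numpy as np
from sklearn.svm import OneClassSVM

PHI = (np.sqrt(5) - 1) / 2  # golden-ratio conjugate

def fit_score(gamma, X_train, X_val):
    # Placeholder criterion: mean decision value of held-out target
    # samples (higher means the target class is better enclosed).
    model = OneClassSVM(kernel="rbf", gamma=gamma, nu=0.1).fit(X_train)
    return model.decision_function(X_val).mean()

def golden_section_gamma(X_train, X_val, lo=1e-3, hi=10.0, tol=1e-2):
    # Golden-section search: a derivative-free line search over the
    # RBF kernel-width parameter, as the abstract describes.
    a, b = lo, hi
    while b - a > tol:
        c, d = b - PHI * (b - a), a + PHI * (b - a)
        if fit_score(c, X_train, X_val) > fit_score(d, X_train, X_val):
            b = d
        else:
            a = c
    return (a + b) / 2

# Toy target class: 30 noisy copies of a 50-band signature (assumed data).
rng = np.random.default_rng(1)
sig = rng.random(50)
X = sig + 0.05 * rng.standard_normal((30, 50))
print(golden_section_gamma(X[:20], X[20:]))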
333

Quantization Index Modulation Based Watermarking Using Digital Holography

Okman, Osman Erman 01 September 2006 (has links) (PDF)
Multimedia watermarking techniques have evolved quickly in recent years with the increasing use of the internet. The growth of the internet has made copyright issues very important, and many different approaches have appeared to protect digital content. Holography, on the other hand, is a method for storing the 3-D information of an object, and it is well suited for use as a watermark because of the nature of holographic data: the 3-D object can be reconstructed from the hologram even if the hologram is cropped or occluded. However, watermarking an image with a hologram is a novel approach, and the few existing works in the literature are not very robust against attacks such as filtering or compression. In this thesis, we propose to embed the phase of the hologram into an image using quantization index modulation (QIM). QIM is utilized to make the watermarking scheme blind and to degrade the host image as little as possible. The robustness of the proposed technique is tested against several attacks such as filtering and compression. The evaluated performance of this system is compared with existing methods in the literature which use either holograms or logos as the secret mark. Furthermore, the characteristics of holograms are investigated, and the findings on hologram compression are reported in this work.
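For illustration, a minimal binary QIM embed/detect pair showing the principle the thesis applies to hologram phase values; the quantizer step `delta` is an assumed parameter that trades robustness against host-image degradation.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    # Quantize each coefficient with one of two dithered quantizers,
    # chosen by the message bit (standard binary QIM).
    d = np.where(np.asarray(bits) == 0, 0.0, delta / 2)
    return np.round((coeffs - d) / delta) * delta + d

def qim_detect(coeffs, delta=8.0):
    # Blind detection: pick the quantizer lattice nearest to each value.
    r0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
    r1 = np.abs(coeffs - (np.round((coeffs - delta / 2) / delta) * delta
                          + delta / 2))
    return (r1 < r0).astype(int)

rng = np.random.default_rng(0)
coeffs = rng.standard_normal(8) * 40
bits = rng.integers(0, 2, size=8)
marked = qim_embed(coeffs, bits, delta=8.0)
assert np.array_equal(qim_detect(marked, delta=8.0), bits)
```

Detection is blind, as the abstract requires: the detector needs only `delta`, not the original image.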
334

Speech Encryption Using Wavelet Packets

Bopardikar, Ajit S 02 1900 (has links)
The aim of speech scrambling algorithms is to transform clear speech into an unintelligible signal so that it is difficult to decrypt it in the absence of the key. Most of the existing speech scrambling algorithms tend to retain considerable residual intelligibility in the scrambled speech and are easy to break. Typically, a speech scrambling algorithm involves permutation of speech segments in time, frequency or time-frequency domain or permutation of transform coefficients of each speech block. The time-frequency algorithms have given very low residual intelligibility and have attracted much attention. We first study the uniform filter bank based time-frequency scrambling algorithm with respect to the block length and number of channels. We use objective distance measures to estimate the departure of the scrambled speech from the clear speech. Simulations indicate that the distance measures increase as we increase the block length and the number of channels. This algorithm derives its security only from the time-frequency segment permutation and it has been estimated that the effective number of permutations which give a low residual intelligibility is much less than the total number of possible permutations. In order to increase the effective number of permutations, we propose a time-frequency scrambling algorithm based on wavelet packets. By using different wavelet packet filter banks at the analysis and synthesis end, we add an extra level of security since the eavesdropper has to choose the correct analysis filter bank, correctly rearrange the time-frequency segments, and choose the correct synthesis bank to get back the original speech signal. Simulations performed with this algorithm give distance measures comparable to those obtained for the uniform filter bank based algorithm. Finally, we introduce the 2-channel perfect reconstruction circular convolution filter bank and give a simple method for its design. The filters designed using this method satisfy the paraunitary properties on a discrete equispaced set of points in the frequency domain.
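A toy version of the keyed wavelet-packet permutation idea, assuming PyWavelets; the RNG seed plays the role of the key. The thesis's extra layer of security, using different analysis and synthesis filter banks, is not reproduced in this sketch.

```python
import numpy as np
import pywt

def wp_scramble(signal, wavelet="db4", level=3, seed=7):
    # Decompose into wavelet-packet leaves (time-frequency tiles),
    # permute the leaves with a keyed RNG, and resynthesize with the
    # same filter bank.
    wp = pywt.WaveletPacket(signal, wavelet, maxlevel=level)
    paths = [node.path for node in wp.get_level(level, "freq")]
    perm = np.random.default_rng(seed).permutation(len(paths))
    out = pywt.WaveletPacket(None, wavelet, maxlevel=level)
    for dst, src in zip(paths, (paths[i] for i in perm)):
        out[dst] = wp[src].data
    return out.reconstruct(update=False)
```

Descrambling applies the inverse permutation with the same key before resynthesis.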
335

Μεθοδολογίες επεξεργασίας σημάτων ακουστικής εκπομπής και ακουστοϋπέρηχου για την παρακολούθηση και την ταυτοποίηση της εξέλιξης της βλάβης σε σύνθετα κεραμικά υλικά / Signal processing methodologies for acoustic emission and acousto-ultrasonic signals for monitoring and identifying damage evolution in ceramic composite materials

Λούτας, Θεόδωρος 01 August 2007 (has links)
The accumulation of damage in ceramic-matrix composite materials subjected to mechanical loading is a question that has not been answered satisfactorily to date. The most basic difficulty in this problem is how to approach and quantify damage in composite materials, since it is a multiparametric problem. The question of how to monitor the damage also arises. Non-destructive testing is a very good choice for monitoring and studying the development of damage. The basic aim of this work is to study the development of damage and the failure mechanisms in these materials using two different non-destructive testing techniques (Acoustic Emission, AE, and Acousto-Ultrasonics, AU) during mechanical testing, and to develop quantitative indicators capable of monitoring the various levels of damage in the material.
Particular emphasis is given to the signal processing methodologies for the signals produced by each technique. In this direction, three types of woven C/C composite material were tested. The types differ in the interfacial properties that the manufacturer chose to impart, with no further details available (industrial secrecy). In parallel, from the results of applying the different signal processing methodologies to the signals obtained from each non-destructive method, conclusions are drawn regarding the way the different interfacial qualities influence the damage accumulation mechanisms in the materials under examination. More specifically, the objectives pursued in this thesis are as follows: • Execution of specially selected mechanical tests on the three types of woven C/C composite materials, enabling the development of multiple levels of damage in the material structure • Use of non-destructive methods such as acoustic emission (AE) and acousto-ultrasonics (AU) to monitor the damage that develops and evolves during the mechanical tests • Identification of the material's failure mechanisms and damage evolution through processing of the AE signals • Development and application of innovative processing techniques for the AU signals based on the wavelet transform (see the sketch below) • Development of quantitative indicators for monitoring damage accumulation from the processing of the AU signals • Drawing of conclusions on how the differentiation in the final interfacial properties influences the development and accumulation of damage in the materials under examination
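One plausible form such a wavelet-based indicator could take, assuming PyWavelets; this is an illustration, not the thesis's actual metric.

```python
import numpy as np
import pywt

def band_energy_profile(au_signal, wavelet="db4", levels=6):
    # Fraction of signal energy in each wavelet decomposition band.
    # Tracking how this profile shifts between successive
    # acousto-ultrasonic measurements is one way to quantify
    # damage accumulation.
    coeffs = pywt.wavedec(au_signal, wavelet, level=levels)
    energy = np.array([np.sum(c ** 2) for c in coeffs])
    return energy / energy.sum()
```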
336

Applications of the Wavelet Transform to B Mixing Analysis

Cadien, Adam Samuel 06 1900 (has links)
The neutral B mesons B0 and B0s can undergo flavor-changing oscillations due to interactions mediated by the weak force. Experiments that measure the frequency of these state transitions produce extremely noisy results that are difficult to analyse. A method for extracting the frequency of B meson oscillations using the continuous wavelet transform is developed here. The physics of B meson mixing is first reviewed, leading to the derivation of a function describing the expected amount of mixing present in B0 and B0s meson decays. This result is then used to develop a new method for analysing the underlying frequency of oscillation in B mixing. An introduction to wavelet theory is provided, in addition to details on interpreting daughter wavelet coefficient diagrams. Finally, the effectiveness of the analysis technique produced, referred to as the Template Fitting Method, is investigated through an application to data generated using Monte Carlo methods.
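A toy illustration of the frequency-extraction idea, assuming PyWavelets: a damped cosine stands in for the oscillatory mixing term, and the frequency and damping values are arbitrary.

```python
import numpy as np
import pywt

# Synthetic stand-in for a mixed-decay time distribution: a damped
# cosine (the mixing oscillation) buried in noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1024)
signal = np.exp(-t / 4) * np.cos(2 * np.pi * 0.8 * t)
signal += 0.3 * rng.standard_normal(t.size)

# Continuous wavelet transform: total power per scale peaks near the
# underlying oscillation frequency.
scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=t[1] - t[0])
power = np.sum(np.abs(coefs) ** 2, axis=1)
print("dominant frequency ≈", freqs[np.argmax(power)])
```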
337

Image Classification For Content Based Indexing

Taner, Serdar 01 December 2003 (has links) (PDF)
As the size of image databases increases over time, the need for content-based image indexing and retrieval becomes important. Image classification is a key to content-based image indexing. In this thesis, supervised learning with feed-forward back-propagation artificial neural networks is used for image classification. Low-level features derived from the images are used to classify the images to interpret the high-level features that yield semantics. Features are derived using detail histogram correlations obtained by the wavelet transform, directional edge information obtained by the Fourier transform, and color histogram correlations. An image database consisting of 357 color images of various sizes is used for training and testing the structure. The database is indexed into seven classes that represent scenery contents which are not mutually exclusive. The ground truth data is formed in a supervised fashion, to be used in training the neural network and testing the performance. The performance of the structure is tested using the leave-one-out method and comparing the simulation outputs with the ground truth data. Success rate, mean square error and the class recall rates are used as the performance measures. The performance of the derived features is compared with the color and texture descriptors of MPEG-7 using the designed structure. The results show that the performance of the method is comparable to, and in some cases better than, these descriptors. This method of classification is a reliable and valid approach for content-based image indexing and retrieval, especially for scenery images.
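A sketch of the classification stage, assuming scikit-learn; the feature dimension (32) and the random data are placeholders for the wavelet, Fourier and color-histogram features described above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in data: 357 images reduced to low-level
# feature vectors.
rng = np.random.default_rng(0)
X = rng.random((357, 32))          # 32 features per image (assumed)
y = rng.integers(0, 7, size=357)   # seven scenery classes

# Feed-forward network trained by back-propagation, as in the thesis.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```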
338

Comparative evaluation of video watermarking techniques in the uncompressed domain

Van Huyssteen, Rudolph Hendrik 12 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Electronic watermarking is a method whereby information can be imperceptibly embedded into electronic media, while ideally being robust against common signal manipulations and intentional attacks to remove the embedded watermark. This study evaluates the characteristics of uncompressed video watermarking techniques in terms of visual characteristics, computational complexity and robustness against attacks and signal manipulations. The foundations of video watermarking are reviewed, followed by a survey of existing video watermarking techniques. Representative techniques from different watermarking categories are identified, implemented and evaluated. Existing image quality metrics are reviewed and extended to improve their performance when comparing these video watermarking techniques. A new metric for the evaluation of inter-frame flicker in video sequences is then developed. A technique for possibly improving the robustness of the implemented discrete Fourier transform technique against rotation is then proposed. It is also shown that it is possible to reduce the computational complexity of watermarking techniques, without affecting the quality of the original content, through a modified watermark embedding method. Possible future studies are then recommended with regard to further improving watermarking techniques against rotation.
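A crude proxy for the quantity such a flicker metric measures; this is only an illustration, not the thesis's metric.

```python
import numpy as np

def flicker_index(frames):
    # Mean absolute change in average luminance between consecutive
    # frames: watermark embedding that varies frame-to-frame raises
    # this value relative to the unmarked sequence.
    lum = np.array([np.mean(f) for f in frames])
    return float(np.mean(np.abs(np.diff(lum))))

# Usage with toy frames (real use would compare watermarked vs. original).
rng = np.random.default_rng(0)
frames = [rng.random((48, 64)) for _ in range(10)]
print(flicker_index(frames))
```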
339

Transformada wavelet e redes neurais artificiais na análise de sinais relacionados à qualidade da energia elétrica / Wavelet transform and artificial neural networks in power quality signal analysis

Pozzebon, Giovani Guarienti 10 February 2009 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work presents a method for power quality signal classification using principal component analysis (PCA) associated with the wavelet transform (WT). The standard deviations of the detail coefficients and the average of the approximation coefficients from the WT are combined to extract discriminating characteristics of the disturbances. PCA is used to condense the information in those characteristics, generating a smaller set of uncorrelated features. These are processed by a probabilistic neural network (PNN) to perform the classification. In the first application of the algorithm, seven classes of signals representing different types of disturbances were classified: voltage sags, interruptions, flicker, oscillatory transients, harmonic distortion, notching, and the normal sine waveform. In the second case, four more situations that usually occur in distributed generation systems connected to distribution grids through converters were added: connection of the distributed generation, connection of local load, normal operation, and islanding. In this case, the voltage at the point of common coupling between the distributed generation and the grid was obtained by simulation and analysed with the proposed algorithm. In both cases, the signals were decomposed into nine resolution levels by the wavelet transform, being represented by detail and approximation coefficients. The application of the WT generates large variations in the coefficients, and the standard deviation at different resolution levels quantifies the magnitude of these variations. To take into account the features originating from the low-frequency components contained in the signals, the average of the approximation coefficients is calculated. The standard deviations of the detail coefficients and the average of the approximation coefficients compose a feature vector containing 10 variables for each signal. Before classification, these vectors are processed by the principal component analysis algorithm to reduce the dimension of the feature vectors, which contain correlated variables, and consequently to reduce the processing time of the neural network. The principal components, which are uncorrelated, are ordered so that the first few components account for most of the variation originally present in the variables; the first three components were chosen, as they represent about 90% of the information in the signal under study. Thus a new set of variables is generated from the principal components, reducing the number of variables in the feature vector from 10 to 3. These 3 variables are fed to a neural network for the classification of the disturbances, whose output indicates the type of disturbance present in the analysed signal.
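A sketch of the feature-extraction and PCA stages under stated assumptions (PyWavelets and scikit-learn; db4 is an assumed mother wavelet, and the toy signals merely mimic clean and harmonic-distorted waveforms):

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def pq_features(signal, wavelet="db4", levels=9):
    # Nine-level DWT, as in the thesis: the mean of the final
    # approximation plus the standard deviation of each of the 9
    # detail bands gives the 10-variable feature vector.
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]
    return np.array([np.mean(approx)] + [np.std(d) for d in details])

# Toy signals: clean 60 Hz sines and harmonic-distorted ones.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.4, 4096)
base = np.sin(2 * np.pi * 60 * t)
signals = [base + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
signals += [base + 0.3 * np.sin(2 * np.pi * 300 * t)
            + 0.1 * rng.standard_normal(t.size) for _ in range(20)]

X = np.array([pq_features(s) for s in signals])
X3 = PCA(n_components=3).fit_transform(X)  # 10 features -> 3 components
```

A probabilistic neural network (not shown; any density-based classifier can stand in) then classifies the three-component vectors.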
340

Análise cepstral baseada em diferentes famílias de transformada wavelet / Cepstral analysis based on different families of the wavelet transform

Fabrício Lopes Sanchez 02 December 2008 (has links)
This work presents a comparative study of different wavelet transform families applied to the cepstral analysis of digital human speech signals, with the specific objective of determining their pitch period, and finally proposes a differential algorithm for this operation that takes into account important aspects from a computational point of view, such as performance, algorithm complexity and target platform, among others. The results obtained with the new technique (based on the wavelet transform) are also presented and compared with the traditional approach (based on the Fourier transform). The implementation was written in ANSI-standard C++ and tested under Windows XP Professional SP3, Windows Vista Business SP1, Mac OS X Leopard and Linux Mandriva 10.
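A minimal sketch of the traditional Fourier-based cepstral pitch estimator that the thesis compares against; the frame length, sampling rate and 60-400 Hz search range are assumed values.

```python
import numpy as np

def pitch_cepstrum(frame, fs):
    # Real cepstrum: inverse DFT of the log magnitude spectrum. Voiced
    # speech shows a peak at the quefrency equal to the pitch period.
    spectrum = np.fft.rfft(frame * np.hanning(frame.size))
    cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
    lo, hi = int(fs / 400), int(fs / 60)   # search 60-400 Hz
    period = lo + int(np.argmax(cepstrum[lo:hi]))
    return fs / period

# Toy voiced frame: a 120 Hz harmonic stack (illustrative).
fs = 16000
t = np.arange(1024) / fs
frame = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 8))
print(pitch_cepstrum(frame, fs))  # ≈ 120 Hz
```

The wavelet-based variant replaces the Fourier analysis with wavelet decompositions from the families under comparison.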
