121

Detection, Localization, and Recognition of Faults in Transmission Networks Using Transient Currents

Perera, Nuwan 18 September 2012 (has links)
The fast clearing of faults is essential for preventing equipment damage and preserving the stability of power transmission systems operating with smaller margins. This thesis examined the application of fault-generated transients for fast detection and isolation of faults in a transmission system. The basis of the transient-based protection scheme developed and implemented in this thesis is the fault current directions identified by a set of relays located at different nodes of the system. The direction of the fault currents relative to a relay location is determined by comparing the signs of the wavelet coefficients of the currents measured in all branches connected to the node. The faulted segment can be identified by combining the fault directions identified at different locations in the system. To facilitate this, each relay is linked with the relays located at the adjacent nodes through a telecommunication network. To prevent possible malfunctioning of relays due to transients originating from non-fault-related events, a transient recognition system to supervise the relays is proposed. The applicability of different classification methods to develop a reliable transient recognition system was examined. A Hidden Markov Model classifier that utilizes the energies associated with the wavelet coefficients of the measured currents as input features was selected as the most suitable solution. Performance of the protection scheme was evaluated using a high voltage transmission system simulated in PSCAD/EMTDC simulation software. The custom models required to simulate the complete protection scheme were implemented in PSCAD/EMTDC. The effects of various factors such as fault impedance, signal noise, fault inception angle and current transformer saturation were investigated. The performance of the protection scheme was also tested with field-recorded signals. Hardware prototypes of the fault direction identification scheme and the transient classification system were implemented and tested under different practical scenarios using input signals generated with a real-time waveform playback instrument. The test results presented in this thesis demonstrate the potential of using transient signals embedded in currents for fast and reliable detection, localization and recognition of faults in transmission networks.
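As an illustration of the sign-comparison idea described in the abstract above, the following Python sketch compares the polarity of the dominant level-1 wavelet detail coefficient of the currents in all branches meeting at a node. It is not the thesis's relay logic: the db4 wavelet, the threshold and the example branch currents are assumptions made purely for the illustration.

```python
import numpy as np
import pywt

def transient_polarity(current, wavelet="db4", threshold=1e-3):
    """Sign of the dominant level-1 detail coefficient of one branch current."""
    _, detail = pywt.dwt(current, wavelet)            # level-1 detail coefficients
    peak = detail[np.argmax(np.abs(detail))]          # dominant transient sample
    return 0 if abs(peak) < threshold else int(np.sign(peak))

def fault_direction_at_node(branch_currents):
    """Compare transient polarities of all branches connected to one node.

    Illustrative rule only: if one branch's initial transient has the opposite
    polarity to all the others, the fault is taken to lie on that branch.
    """
    signs = {name: transient_polarity(i) for name, i in branch_currents.items()}
    suspect = [name for name, s in signs.items()
               if s != 0 and all(v == -s for n, v in signs.items() if n != name)]
    return signs, suspect

# Hypothetical example: a decaying step transient seen at a three-branch node.
t = np.linspace(0.0, 1e-3, 512)
surge = (t > 5e-4) * np.exp(-(t - 5e-4) * 4e3)
currents = {"L1": -surge, "L2": 0.5 * surge, "L3": 0.5 * surge}
print(fault_direction_at_node(currents))              # expect "L1" to be flagged
```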
122

A Prototype Transformer Partial Discharge Detection System

Hardie, Stewart Ramon January 2006 (has links)
Increased pressure on high voltage power distribution components has been created in recent years by a demand to lower costs and extend equipment lifetimes. This has led to a need for condition based maintenance, which requires a continuous knowledge of equipment health. Power transformers are a vital component in a power distribution network. However, there are currently no established techniques to accurately monitor and diagnose faults in real-time while the transformer is on-line. A major factor in the degradation of power transformer insulation is partial discharging. Left unattended, partial discharges (PDs) will eventually cause complete insulation failure. PDs generate a variety of signals, including electrical pulses that travel through the windings of the transformer to the terminals. A difficulty with detecting these pulses in an on-line environment is that they can be masked by external electrical interference. This thesis develops a method for identifying PD pulses and determining the number of PD sources while the transformer is on-line and subject to external interference. The partial discharge detection system (PDDS) acquires electrical signals with current and voltage transducers that are placed on the transformer bushings, making it unnecessary to disconnect or open the transformer. These signals are filtered to prevent aliasing and to attenuate the power frequency, and then digitised and analysed in Matlab, a numerical processing software package. Arbitrary narrowband interference is removed with an automated Fourier domain threshold filter. Internal PD pulses are separated from stochastic wideband pulse interference using directional coupling, which is a technique that simultaneously analyses the current and voltage signals from a bushing. To improve performance of this stage, the continuous wavelet transform is used to discriminate time and frequency information. This provides the additional advantage of preserving the waveshapes of the PD pulses for later analysis. PD pulses originating within the transformer have their waveshapes distorted when travelling through the windings. The differentiation of waveshape distortion of pulses from multiple physical sources is used as an input to a neural network to group pulses from the same source. This allows phase resolved PD analysis to be presented for each PD source, for instance, as phase/magnitude/count plots. The neural network requires no prior knowledge of the transformer or pulse waveshapes. The thesis begins with a review of current techniques and trends for power transformer monitoring and diagnosis. The description of transducers and filters is followed by an explanation of each of the signal processing steps. Two transformers were used to conduct testing of the PDDS. The first transformer was opened and modified so that internal PDs could be simulated by injecting artificial pulses. Two test scenarios were created and the performance of the PDDS was recorded. The PDDS identified and extracted a high rate of simulated PDs and correctly allocated the pulses into PD source groups. A second identically constructed transformer was energised and analysed for any natural PDs while external interference was present. It was found to have a significant natural PD source.
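One building block mentioned above, the automated Fourier-domain threshold filter for narrowband interference, can be sketched generically as follows. This is not the PDDS implementation (which runs in Matlab behind transducer-specific front-end filtering); the threshold rule, sampling rate and test signal below are assumptions made for the illustration only.

```python
import numpy as np

def remove_narrowband(signal, k=5.0):
    """Suppress narrowband interference by thresholding the magnitude spectrum.

    Spectral bins whose magnitude exceeds k times the median bin magnitude are
    treated as discrete interference tones and scaled back down to that median
    level; their phase is preserved and all other bins are left untouched.
    """
    spectrum = np.fft.rfft(signal)
    magnitude = np.abs(spectrum)
    floor = np.median(magnitude)
    tones = magnitude > k * floor
    spectrum[tones] *= floor / magnitude[tones]
    return np.fft.irfft(spectrum, n=len(signal))

# Hypothetical example: a short PD-like burst buried under a strong ~100 kHz tone.
rng = np.random.default_rng(0)
fs = 1_000_000
t = np.arange(4096) / fs
pd_pulse = np.exp(-((t - 2e-3) * 2e5) ** 2) * np.sin(2 * np.pi * 150e3 * t)
f_tone = 410 * fs / len(t)                        # tone placed on an exact DFT bin
interference = 2.0 * np.sin(2 * np.pi * f_tone * t)
noise = 0.2 * rng.standard_normal(t.size)
cleaned = remove_narrowband(pd_pulse + interference + noise)
print(np.max(np.abs(cleaned)))                    # the strong tone is removed
```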
123

SINGLE ENDED TRAVELING WAVE BASED FAULT LOCATION USING DISCRETE WAVELET TRANSFORM

Chang, Jin 01 January 2014 (has links)
In power transmission systems, locating faults is an essential capability. When a fault occurs on a transmission line, it affects the whole power system, so the fault location must be found accurately and promptly to maintain the power supply. This thesis presents a study of traveling wave theory, fault location methods, the Karrenbauer transform, and the wavelet transform, and focuses on the single-ended fault location method. The signal processing technique and an evaluation study are presented. MATLAB SimPowerSystems is used to simulate fault scenarios for the evaluation studies.
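The single-ended principle the thesis builds on can be illustrated with the standard relation d = v·(t2 − t1)/2, where t1 and t2 are the arrival times of the first traveling-wave front and of its reflection from the fault, detected here as bursts in the level-1 DWT detail coefficients. The sketch below is a minimal illustration with made-up signals and thresholds, not the thesis's MATLAB SimPowerSystems study, and it omits the Karrenbauer modal transformation.

```python
import numpy as np
import pywt

def locate_fault(current, fs, wave_velocity, wavelet="db4"):
    """Single-ended estimate: d = v * (t2 - t1) / 2.

    t1 and t2 are taken as the first samples of the first two bursts of large
    level-1 DWT detail coefficients (the incident wavefront and its reflection
    from the fault).  A practical scheme needs a modal transformation and far
    more robust wavefront detection than this simple peak search.
    """
    _, detail = pywt.dwt(current, wavelet)
    magnitude = np.abs(detail)
    big = np.flatnonzero(magnitude > 0.5 * magnitude.max())
    # Keep only the first index of each separate burst of large coefficients.
    fronts = big[np.insert(np.diff(big) > 5, 0, True)]
    if len(fronts) < 2:
        raise ValueError("could not find two wavefront arrivals")
    dt = (fronts[1] - fronts[0]) * 2 / fs      # undo the decimation by two
    return wave_velocity * dt / 2.0

# Hypothetical record: two impulses 40 us apart at 1 MHz sampling,
# with a propagation velocity close to the speed of light.
fs, v = 1e6, 2.95e8
current = np.zeros(4096)
current[1000], current[1040] = 1.0, -0.6       # incident front and fault reflection
print(locate_fault(current, fs, v))            # about 2.95e8 * 40e-6 / 2 = 5900 m
```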
124

Using the Discrete Wavelet Transform to Haar'd Code a Blind Digital Watermark

Brannock, Evelyn R 20 April 2009 (has links)
Safeguarding creative content in a digital form has become increasingly difficult. It is progressively easier to copy, modify and redistribute digital media, which causes great declines in business profits. For example, the International Federation of the Phonographic Industry estimates that in 2001 the worldwide sales of pirated music CDs were 475 million US dollars. While a large amount of time and money is committed to creating intellectual property, legal means have not proven to be sufficient for the protection of this property. Digital watermarking is a steganographic technique that has been proposed as a possible solution to this problem. A digital watermark hides embedded information about the origin, status, owner and/or destination of the data, often without the knowledge of the viewer or user. This dissertation examines a technique for digital watermarking which utilizes properties of the Discrete Wavelet Transform (DWT). Research has been done in this field, but which wavelet family is superior is not adequately addressed. This dissertation studies the influence of the wavelet family when using a blind, nonvisible watermark in digital media. The digital watermarking algorithm uses a database of multiple images with diverse properties. Various watermarks are embedded. Eight different families of wavelets with dissimilar properties are compared. How effective is each wavelet? To objectively measure the success of the algorithm, the influence of the mother wavelet, the imperceptibility of the embedded watermark and the readability of the extracted watermark, the Peak Signal-to-Noise Ratio and the Image Quality Index for each wavelet family and image are obtained. Two common categories of digital watermarking attacks are removing the watermark and rendering the watermark undetectable. To simulate and examine the effect of attacks on the images, noise is added to the image data. Also, to test the effect of reducing an image in size, each image containing the embedded watermark is compressed. The dissertation asks the questions: “Is the wavelet family chosen to implement the algorithm for a blind, nonvisible watermark in digital images of consequence? If so, which family is superior?” This dissertation conclusively shows that the Haar transform is the best for blind, non-visible digital watermarking.
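For readers unfamiliar with blind wavelet-domain watermarking, the sketch below embeds a bit string in the Haar horizontal-detail subband using quantisation index modulation (QIM) and reports the PSNR of the marked image. QIM is used here only as a convenient blind embedding rule; the dissertation's actual embedding algorithm, strength settings and image database are not reproduced, and the test image and parameters are invented for the example.

```python
import numpy as np
import pywt

def qim_quantize(c, bit, delta):
    """Quantize c onto the lattice associated with the given bit (0 or 1)."""
    return 2 * delta * np.round((c - bit * delta) / (2 * delta)) + bit * delta

def embed_watermark(image, bits, delta=8.0, wavelet="haar"):
    """Embed bits into the horizontal-detail subband of a one-level Haar DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    flat = cH.ravel().copy()
    flat[: len(bits)] = qim_quantize(flat[: len(bits)], np.asarray(bits), delta)
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), wavelet)

def extract_watermark(marked, n_bits, delta=8.0, wavelet="haar"):
    """Blind extraction: only the marked image, the step size and the bit count are needed."""
    _, (cH, _, _) = pywt.dwt2(marked, wavelet)
    c = cH.ravel()[:n_bits]
    d0 = np.abs(c - qim_quantize(c, 0, delta))
    d1 = np.abs(c - qim_quantize(c, 1, delta))
    return (d1 < d0).astype(int)

def psnr(a, b, peak=255.0):
    return 10 * np.log10(peak ** 2 / np.mean((np.asarray(a, float) - b) ** 2))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(256, 256)).astype(float)   # stand-in test image
bits = rng.integers(0, 2, size=256)
marked = embed_watermark(img, bits)
print(psnr(img, marked), np.array_equal(extract_watermark(marked, 256), bits))
```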
126

Motion Estimation Using Complex Discrete Wavelet Transform

Sari, Huseyin 01 January 2003 (has links) (PDF)
The estimation of optical flow has become a vital research field in image sequence analysis, especially in the past two decades, with applications in many fields such as stereo optics, video compression, robotics and computer vision. In this thesis, the complex wavelet based algorithm for the estimation of optical flow developed by Magarey and Kingsbury is implemented and investigated. The algorithm is based on a complex version of the discrete wavelet transform (CDWT), which analyzes an image by filtering it with a set of Gabor-like kernels with different scales and orientations. The output is a hierarchy of scaled and subsampled orientation-tuned subimages. The motion estimation algorithm is based on the relationship between translations in the image domain and phase shifts in the CDWT domain, which is satisfied by the shiftability and interpolability properties of the CDWT. Optical flow is estimated by using this relationship at each scale, in a coarse-to-fine (hierarchical) manner, where information from finer scales is used to refine the estimates from coarser scales. The performance of the motion estimation algorithm is investigated with various image sequences as input, and the effects of options in the algorithm, such as curvature correction and the interpolation kernel between levels, and of parameter values, such as the confidence threshold, the maximum number of CDWT levels and the minimum finest level of detail, are also examined and discussed. The test results show that the method is superior to other well-known algorithms in estimation accuracy, especially under high illuminance variations and additive noise.
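The translation-phase relationship at the heart of the method can be shown in one dimension: delaying the input of a narrowband complex (Gabor-like) filter by d samples rotates the response phase by roughly omega*d, so d can be recovered from the phase difference. The sketch below is a simplified 1-D illustration with assumed filter parameters, not the Magarey-Kingsbury CDWT implementation, and it ignores the coarse-to-fine refinement across scales.

```python
import numpy as np

def gabor_response(signal, omega=0.8, sigma=4.0):
    """Convolve with a complex Gabor-like kernel tuned to omega (rad/sample)."""
    n = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
    kernel = np.exp(-0.5 * (n / sigma) ** 2) * np.exp(1j * omega * n)
    return np.convolve(signal, kernel, mode="same")

def estimate_shift(reference, moved, omega=0.8):
    """Displacement from the phase difference of the two filter responses.

    For a narrowband complex filter, delaying the input by d samples rotates
    the response phase by about omega * d, so d ~ delta_phase / omega.  This
    only resolves sub-wavelength shifts; the full algorithm handles larger
    motion by working coarse-to-fine across scales.
    """
    r_ref = gabor_response(reference, omega)
    r_mov = gabor_response(moved, omega)
    delta_phase = np.angle(np.sum(r_ref * np.conj(r_mov)))
    return delta_phase / omega

# Hypothetical 1-D example: a random test signal delayed by 1.5 samples.
rng = np.random.default_rng(2)
x = rng.standard_normal(512)
t = np.arange(512)
moved = np.interp(t - 1.5, t, x)                  # sub-sample delay by interpolation
print(estimate_shift(x, moved))                   # close to 1.5
```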
127

Motion Compensated Three Dimensional Wavelet Transform Based Video Compression And Coding

Bicer, Aydin 01 January 2005 (has links) (PDF)
In this thesis, a low bit rate video coding system based on three-dimensional (3-D) wavelet coding is studied. In addition to the initial motivation to make use of motion compensated wavelet based coding schemes, other techniques that do not utilize motion compensation in their coding procedures have also been considered on an equal footing. The 3-D wavelet transform (WT) algorithm is based on the "group of frames" (GOF) concept. A group of eight frames is decomposed both temporally and spatially into its coarse and detail parts. The decomposition process utilizes both orthogonal and bi-orthogonal wavelet analysis filters. The transform coefficients are coded using an embedded image coding algorithm called the "Two-Dimensional Set Partitioning in Hierarchical Trees" (2-D SPIHT). Due to its nature, the 2-D SPIHT is applied to the individual subband frames. In the reconstruction phase, the 2-D SPIHT decoding algorithm and the wavelet synthesis filters are employed, respectively. The Peak Signal to Noise Ratios (PSNRs) are used as a quality measure of the reconstructed frames. The investigations reveal that, among several factors, the multi-level (de)composition is the dominant one, affecting both the compression ratio and the PSNR level. The encoded videos compressed to a ratio of 1:9 are reconstructed with a PSNR of about 30 dB in the best cases.
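The "group of frames" decomposition can be sketched as a temporal Haar filter bank applied to eight frames, followed by a spatial 2-D wavelet decomposition of each temporal subband frame; the 2-D SPIHT coding stage is not reproduced here. The frame size, wavelet choice and synthetic content below are assumptions made for the illustration.

```python
import numpy as np
import pywt

def temporal_haar(gof):
    """Three levels of temporal Haar analysis on a group of 8 frames (T, H, W)."""
    frames = gof.astype(float)
    bands = []
    for _ in range(3):                               # 8 -> 4 -> 2 -> 1 low-pass frames
        low = (frames[0::2] + frames[1::2]) / np.sqrt(2.0)
        high = (frames[0::2] - frames[1::2]) / np.sqrt(2.0)
        bands.append(high)                           # temporal detail frames
        frames = low
    bands.append(frames)                             # final temporal approximation
    return bands

def spatial_decompose(bands, wavelet="bior4.4", level=3):
    """2-D wavelet decomposition of every temporal subband frame."""
    return [[pywt.wavedec2(frame, wavelet, level=level) for frame in band]
            for band in bands]

# Hypothetical GOF: eight 64x64 frames of synthetic video.
rng = np.random.default_rng(3)
gof = rng.integers(0, 256, size=(8, 64, 64))
coeffs = spatial_decompose(temporal_haar(gof))
print([len(band) for band in coeffs])                # [4, 2, 1, 1] temporal subbands
```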
128

Extraction Of Auditory Evoked Potentials From Ongoing Eeg

Aydin, Serap 01 September 2005 (has links) (PDF)
In estimating auditory Evoked Potentials (EPs) from ongoing EEG, the number of sweeps should be reduced to decrease the experimental time and to increase the reliability of diagnosis. The first goal of this study is to demonstrate the use of basic estimation techniques in extracting auditory EPs (AEPs) from a small number of sweeps relative to ensemble averaging (EA). For this purpose, three groups of basic estimation techniques are compared to the traditional EA with respect to the signal-to-noise ratio (SNR) improvements in extracting the template AEP. Group A includes the combinations of the Subspace Method (SM) with the Wiener Filtering (WF) approaches (the conventional WF and the coherence weighted WF (CWWF)). Group B consists of standard adaptive algorithms (Least Mean Square (LMS), Recursive Least Square (RLS), and one-step Kalman filtering (KF)). The regularization techniques (the Standard Tikhonov Regularization (STR) and the Subspace Regularization (SR) methods) form Group C. All methods are tested in simulations and pseudo-simulations, which are performed with white noise and EEG measurements, respectively. The same methods are also tested with experimental AEPs. Comparisons based on the output signal-to-noise ratio (SNR) show that: 1) the KF and STR methods are the best methods among the algorithms tested in this study; 2) the SM can remove a large amount of the background EEG noise from the raw data; 3) the LMS and WF algorithms show poor performance compared to EA, and the SM should be used as a pre-filter to increase their performance; 4) the CWWF works better than the WF when it is combined with the SM; 5) the STR method is better than the SR method. It is observed that most of the basic estimation techniques show definitely better performance compared to EA in extracting the EPs. The KF or the STR effectively reduce the experimental time (to one-fourth of that required by EA). The SM is a useful pre-filter to significantly reduce the noise on the raw data. The KF and STR are shown to be computationally inexpensive tools to extract the template AEPs and should be used instead of EA. They provide a clear template AEP for various analysis methods. To reduce the noise level on single sweeps, the SM can be used as a pre-filter before various single sweep analysis methods. The second goal of this study is to present a new approach to extract single sweep AEPs without using a template signal. The SM and a modified scale-space filter (MSSF) are applied consecutively. The SM is applied to the raw data to increase the SNR. The less-noisy sweeps are then individually filtered with the MSSF. This new approach is assessed in both pseudo-simulations and experimental studies. The MSSF is also applied to actual auditory brainstem response (ABR) data to obtain a clear ABR from a relatively small number of sweeps. The wavelet transform coefficients (WTCs) corresponding to the signal and noise become distinguishable after the SM. The MSSF is an effective filter in selecting the WTCs of the noise. The estimated single sweep EPs highly resemble the grand average EP although a smaller number of sweeps is evaluated. Small amplitude variations are observed among the estimations. The MSSF applied to the EA of 50 sweeps yields an ABR that best fits the grand average of 250 sweeps. We concluded that the combination of SM and MSSF is an efficient tool to obtain clear single sweep AEPs. The MSSF reduces the recording time to one-fifth of that required by EA in template ABR estimation. The proposed approach does not use a template signal (which is generally obtained using the average of a small number of sweeps). It provides unprecedented results that support the basic assumptions in the additive signal model.
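The role of the subspace method as a pre-filter can be illustrated with a rank-truncated SVD of the sweep matrix: the repeating evoked potential concentrates in a few dominant singular components, while much of the background noise does not. The sketch below is a generic low-rank illustration with a made-up EP template and noise level; it is not the thesis's SM, Kalman or Tikhonov implementation.

```python
import numpy as np

def subspace_filter(sweeps, rank=3):
    """Project the sweep matrix (n_sweeps x n_samples) onto its top singular components.

    The evoked potential repeats across sweeps, so it concentrates in a few
    dominant singular components, while most of the background noise does not.
    """
    u, s, vt = np.linalg.svd(sweeps, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

def snr_db(estimate, template):
    return 10 * np.log10(np.sum(template ** 2) / np.sum((estimate - template) ** 2))

# Simulated data: a made-up EP template buried in additive noise on every sweep.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 0.5, 256)
template = np.exp(-((t - 0.1) / 0.02) ** 2) - 0.6 * np.exp(-((t - 0.2) / 0.04) ** 2)
sweeps = template + rng.standard_normal((64, 256))

print(snr_db(sweeps[0], template),                    # one raw sweep
      snr_db(subspace_filter(sweeps)[0], template),   # same sweep after the pre-filter
      snr_db(sweeps.mean(axis=0), template))          # ensemble average of all sweeps
```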
129

Analysis and Coding of High Quality Audio Signals

Ning, Daryl January 2003 (has links)
Digital audio is increasingly becoming more and more a part of our daily lives. Unfortunately, the excessive bitrate associated with the raw digital signal makes it an extremely expensive representation. Applications such as digital audio broadcasting, high definition television, and internet audio require high quality audio at low bitrates. The field of audio coding addresses this important issue of reducing the bitrate of digital audio, while maintaining a high perceptual quality. Developing an efficient audio coder requires a detailed analysis of the audio signals themselves. It is important to find a representation that can concisely model any general audio signal. In this thesis, we propose two new high quality audio coders based on two different audio representations - the sinusoidal-wavelet representation, and the warped linear predictive coding (WLPC)-wavelet representation. In addition to high quality coding, it is also important for audio coders to be flexible in their application. With the increasing popularity of internet audio, it is advantageous for audio coders to address issues related to real-time audio delivery. The issue of bitstream scalability has been targeted in this thesis, and therefore, a third audio coder capable of bitstream scalability is also proposed. The performance of each of the proposed coders was evaluated by comparisons with the MPEG layer III coder. The first coder proposed is based on a hybrid sinusoidal-wavelet representation. This assumes that each frame of audio can be modelled as a sum of sinusoids plus a noisy residual. The discrete wavelet transform (DWT) is used to decompose the residual into subbands that approximate the critical bands of human hearing. A perceptually derived bit allocation algorithm is then used to minimise the audible distortions introduced from quantising the DWT coefficients. Listening tests showed that the coder delivers near-transparent quality for a range of critical audio signals at 64 kbps. It also outperforms the MPEG layer III coder operating at this same bitrate. This coder, however, is only useful for high quality coding, and is difficult to scale to operate at lower rates. The second coder proposed is based on a hybrid WLPC-wavelet representation. In this approach, the spectrum of the audio signal is estimated by an all pole filter using warped linear prediction (WLP). WLP operates in a warped frequency domain, where the resolution can be adjusted to approximate that of the human auditory system. This makes the inherent noise shaping of the synthesis filter even more suited to audio coding. The excitation to this filter is transformed using the DWT and perceptually encoded. Listening tests showed that near-transparent coding is achieved at 64 kbps. The coder was also found to be slightly superior to the MPEG layer III coder operating at this same bitrate. The third proposed coder is similar to the previous WLPC-wavelet coder, but modified to achieve bitstream scalability. A noise model for high frequency components is included to keep the overall bitrate low, and a two stage quantisation scheme for the DWT coefficients is implemented. The first stage uses fixed rate scalar and vector quantisation to provide a coarse approximation of the coefficients. This allows for low bitrate, low quality versions of the input signal to be embedded in the overall bitstream. The second stage of quantisation adds detail to the coefficients, and hence, enhances the quality of the output signal.
Listening tests showed that signal quality gracefully improves as the bitrate increases from 16 kbps to 80 kbps. This coder has a performance that is comparable to the MPEG layer III coder operating at a similar (but fixed) bitrate.
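The wavelet subband stage common to the proposed coders can be sketched as a DWT decomposition of an audio frame followed by per-band quantisation. In the sketch below the quantiser step of each band is simply a fraction of that band's RMS value, an assumption standing in for the perceptually derived bit allocation described above; the frame length, wavelet and test signal are likewise invented for the example.

```python
import numpy as np
import pywt

def encode_frame(frame, wavelet="db8", level=5, step_scale=0.05):
    """Decompose one audio frame into wavelet subbands and quantize each band."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    steps = [max(step_scale * np.sqrt(np.mean(c ** 2)), 1e-8) for c in coeffs]
    quantized = [np.round(c / q).astype(int) for c, q in zip(coeffs, steps)]
    return quantized, steps

def decode_frame(quantized, steps, wavelet="db8"):
    """Rescale the quantized subbands and run the inverse DWT."""
    coeffs = [q * step for q, step in zip(quantized, steps)]
    return pywt.waverec(coeffs, wavelet)

# Hypothetical frame: 2048 samples of a synthetic tone-plus-noise signal at 44.1 kHz.
fs = 44100
t = np.arange(2048) / fs
frame = 0.6 * np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.default_rng(5).standard_normal(2048)
quantized, steps = encode_frame(frame)
reconstructed = decode_frame(quantized, steps)
print(np.max(np.abs(frame - reconstructed)))          # small reconstruction error
```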
130

Wavelet Transform For Texture Analysis With Application To Document Analysis

Busch, Andrew W. January 2004 (has links)
Texture analysis is an important problem in machine vision, with applications in many fields including medical imaging, remote sensing (SAR), automated flaw detection in various products, and document analysis, to name but a few. Over the last four decades many techniques for the analysis of textured images have been proposed in the literature for the purposes of classification, segmentation, synthesis and compression. Such approaches include analysing the properties of individual texture elements, using statistical features obtained from the grey-level values of the image itself, random field models, and multichannel filtering. The wavelet transform, a unified framework for the multiresolution decomposition of signals, falls into this final category, and allows a texture to be examined at a number of resolutions whilst maintaining spatial resolution. This thesis explores the use of the wavelet transform for the specific task of texture classification and proposes a number of improvements to existing techniques, both in the area of feature extraction and classifier design. By applying a nonlinear transform to the wavelet coefficients, a better characterisation can be obtained for many natural textures, leading to increased classification performance when using first and second order statistics of these coefficients as features. In the area of classifier design, a combination of an optimal discriminant function and a non-parametric Gaussian mixture model classifier is shown to experimentally outperform other classifier configurations. By modelling the relationships between neighbouring bands of the wavelet transform, more information regarding a texture can be obtained. Using such a representation, an efficient algorithm for the searching and retrieval of textured images from a database is proposed, as well as a novel set of features for texture classification. These features are experimentally shown to outperform features proposed in the literature, as well as provide increased robustness to small changes in scale. Determining the script and language of a printed document is an important task in the field of document processing. In the final part of this thesis, the use of texture analysis techniques to accomplish these tasks is investigated. Using maximum a posteriori (MAP) adaptation, prior information regarding the nature of script images can be used to increase the accuracy of these methods. Novel techniques for estimating the skew of such documents, normalising text blocks prior to extraction of texture features and accurately classifying multiple fonts are also presented.
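A minimal version of wavelet-domain texture features — per-subband statistics of a multilevel 2-D DWT — is sketched below. It uses only simple statistics and therefore does not reproduce the thesis's nonlinear coefficient transform or its inter-band models; the wavelet, decomposition depth and synthetic textures are assumptions made for the example.

```python
import numpy as np
import pywt

def wavelet_texture_features(image, wavelet="db2", level=3):
    """Simple per-subband statistics of the wavelet detail coefficients.

    Returns the mean absolute value, standard deviation and energy of every
    detail subband as a flat feature vector.
    """
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    features = []
    for detail_level in coeffs[1:]:                  # skip the approximation subband
        for band in detail_level:                    # horizontal, vertical, diagonal
            features += [np.mean(np.abs(band)), np.std(band), np.mean(band ** 2)]
    return np.asarray(features)

# Hypothetical example: features for two synthetic textures of different orientation.
rng = np.random.default_rng(6)
ramp = np.add.outer(np.zeros(128), np.linspace(0, 1, 128))
tex_v = np.sin(32 * np.pi * ramp) + 0.1 * rng.standard_normal((128, 128))   # vertical stripes
tex_h = tex_v.T                                                             # horizontal stripes
print(wavelet_texture_features(tex_v)[:3], wavelet_texture_features(tex_h)[:3])
```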
