501

Nonlinear Approaches to Periodic Signal Modeling

Abd-Elrady, Emad January 2005 (has links)
Periodic signal modeling plays an important role in many fields. The unifying theme of this thesis is the use of nonlinear techniques to model periodic signals. The suggested techniques exploit the user's prior knowledge of the signal waveform, which gives them an advantage over techniques that do not consider such priors.

The technique of Part I relies on the fact that a sine wave passed through a static nonlinear function produces a harmonic spectrum of overtones. Consequently, the estimated signal model can be parameterized as a known periodic function (with unknown frequency) in cascade with an unknown static nonlinearity. The unknown frequency and the parameters of the static nonlinearity are estimated simultaneously using the recursive prediction error method (RPEM). A treatment of the local convergence properties of the RPEM is provided. In addition, an adaptive grid point algorithm is introduced to estimate the unknown frequency and the parameters of the static nonlinearity at a number of adaptively estimated grid points. This gives the RPEM more freedom to select the grid points and hence reduces modeling errors.

Limit cycle oscillations are encountered in many applications. Mathematical modeling of limit cycles is therefore an essential topic that helps to better understand and/or avoid limit cycle oscillations in different fields. In Part II, a second-order nonlinear ODE is used to model the periodic signal as a limit cycle oscillation. The right-hand side of the ODE model is parameterized using a polynomial function in the states, and then discretized to allow for the implementation of different identification algorithms. Hence, it is possible to obtain highly accurate models by estimating only a few parameters.

In Part III, different user aspects of the two nonlinear approaches of the thesis are discussed. Finally, topics for future research are presented.
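As an illustration of the Part I premise, a sine wave passed through a static polynomial nonlinearity produces overtones at integer multiples of the fundamental frequency. The sketch below demonstrates this with arbitrary polynomial coefficients and signal parameters that are not taken from the thesis.

```python
import numpy as np

# Illustrative parameters (not from the thesis)
fs = 8000.0          # sampling rate [Hz]
f0 = 100.0           # fundamental frequency [Hz]
t = np.arange(0, 1.0, 1.0 / fs)

# Known periodic function with unknown frequency: a unit sine wave
u = np.sin(2 * np.pi * f0 * t)

# Static nonlinearity, here an arbitrary cubic polynomial
y = 0.5 * u + 0.3 * u**2 + 0.2 * u**3

# Magnitude spectrum of the output shows overtones at 2*f0, 3*f0, ...
Y = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
for k in range(1, 4):
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"harmonic {k}: {freqs[idx]:.0f} Hz, magnitude {Y[idx]:.3f}")
```

Running this shows nonzero magnitude at 2·f0 and 3·f0, exactly the harmonic structure the cascade model is designed to capture.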
503

Stimulus Coding and Synchrony in Stochastic Neuron Models

Cieniak, Jakub 19 May 2011 (has links)
A stochastic leaky integrate-and-fire neuron model was implemented in this study to simulate the spiking activity of the electrosensory "P-unit" receptor neurons of the weakly electric fish Apteronotus leptorhynchus. In the context of sensory coding, these cells have been previously shown to respond in experiment to natural random narrowband signals with either a linear or nonlinear coding scheme, depending on the intrinsic firing rate of the cell in the absence of external stimulation. It was hypothesised in this study that this duality is due to the relation of the stimulus to the neuron's excitation threshold. This hypothesis was validated with the model by lowering the threshold of the neuron or increasing its intrinsic noise, or randomness, either of which made the relation between firing rate and input strength more linear. Furthermore, synchronous P-unit firing to a common input also plays a role in decoding the stimulus at deeper levels of the neural pathways. Synchronisation and desynchronisation between multiple model responses for different types of natural communication signals were shown to agree with experimental observations. A novel result of resonance-induced synchrony enhancement of P-units to certain communication frequencies was also found.
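The model class used here is a stochastic leaky integrate-and-fire neuron. The following is a generic Euler-Maruyama sketch of such a simulation, not the author's implementation, and all parameter values are illustrative placeholders rather than the P-unit values from the thesis. Lowering `v_thresh` or increasing `noise_sigma` illustrates the linearizing effect on the rate-versus-input relation discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stochastic LIF parameters (placeholders, not from the thesis)
dt, tau, v_reset, v_thresh = 1e-4, 0.01, 0.0, 1.0   # s, s, a.u., a.u.
noise_sigma = 0.5                                    # intrinsic noise intensity

def simulate_lif(stimulus, thresh=v_thresh, sigma=noise_sigma):
    """Euler-Maruyama simulation; returns spike times for a given stimulus array."""
    v, spikes = 0.0, []
    for i, s in enumerate(stimulus):
        v += (-v + s) * dt / tau + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= thresh:              # threshold crossing -> spike and reset
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

# A random narrowband-like stimulus (illustrative): low-pass filtered noise plus a bias
raw = rng.standard_normal(20000)
stim = 1.2 + np.convolve(raw, np.ones(50) / 50, mode="same")
print("firing rate:", len(simulate_lif(stim)) / (len(stim) * dt), "Hz")
```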
504

Applications and optimization of response surface methodologies in high-pressure, high-temperature gauges

Hässig Fonseca, Santiago 05 July 2012 (has links)
High-Pressure, High-Temperature (HPHT) pressure gauges are commonly used in oil wells for pressure transient analysis. Mathematical models relate input perturbations (e.g., flow rate transients) to output responses (e.g., pressure transients) and are then used to solve an inverse problem that infers reservoir parameters. The indispensable use of pressure data in well testing motivates continued improvement in the accuracy (quality), sampling rate (quantity), and autonomy (lifetime) of pressure gauges. This body of work presents improvements in three areas of high-pressure, high-temperature quartz memory gauge technology: calibration accuracy, multi-tool signal alignment, and tool autonomy estimation. The discussion introduces the response surface methodology used to calibrate gauges, develops accuracy and autonomy estimates based on controlled tests, and, where applicable, relies on field gauge drill stem test data to validate accuracy predictions. Specific contributions of this work include:

- Application of the unpaired sample t-test, a first in quartz sensor calibration, which reduced uncertainty in gauge metrology by a factor of 2.25 and improved absolute and relative tool accuracies by 33% and 56%, respectively. Greater accuracy yields more reliable data and a more sensitive characterization of well parameters.
- Post-processing of measurements from two or more tools using a dynamic time warp algorithm that mitigates gauge clock drifts (see the sketch after this list). Where manual alignment methods account only for linear shifts, the dynamic algorithm elastically corrects nonlinear misalignments accumulated throughout a job, with an accuracy limited only by the clock's time resolution.
- Empirical modeling of tool autonomy based on gauge selection, battery pack, sampling mode, and average well temperature. A first of its kind, the model distills autonomy into two independent parameters, each a function of the same two orthogonal factors: battery power capacity and gauge current consumption as functions of sampling mode and well temperature -- a premise that, for three or more gauge and battery models, reduces the design of future autonomy experiments by at least a factor of 1.5.
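The second contribution relies on dynamic time warping. The sketch below is the classical textbook DTW recurrence applied to two toy pressure series with a made-up clock distortion; it is not the implementation described in the thesis.

```python
import numpy as np

def dtw_path(a, b):
    """Classical dynamic time warping: returns the optimal alignment path
    between two 1-D series a and b (textbook O(len(a)*len(b)) version)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m) to (0, 0) to recover the alignment
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy example: the same pressure transient recorded by two gauges with drifting clocks
t = np.linspace(0, 1, 200)
gauge1 = np.exp(-5 * t)
gauge2 = np.exp(-5 * np.clip(t * 1.05 - 0.02, 0, None))   # nonlinear time distortion
print("first aligned index pairs:", dtw_path(gauge1, gauge2)[:5])
```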
506

Design and evaluation of a capacitively coupled sensor readout circuit, toward contact-less ECG and EEG / Design och utvärdering av en kapacitivt kopplad sensorutläsningskrets, mot kontaktlös EKG och EEG

Svärd, Daniel January 2010 (has links)
In modern medicine, the measurement of electrophysiological signals plays a key role in health monitoring and diagnostics. Electrical activity originating from our nerve and muscle cells conveys real-time information about our current health state. The two most common and actively used techniques for measuring such signals are electrocardiography (ECG) and electroencephalography (EEG). These signals are very weak, ranging from a few millivolts down to tens of microvolts in amplitude, and have the majority of their power located at very low frequencies, from below 1 Hz up to 40 Hz. These characteristics set very tough requirements on the electrical circuit designs used to measure them. Usually, measurement is performed by attaching electrodes in direct contact with the skin, using an adhesive, conductive gel to fixate them. This method requires a clinical environment, is time-consuming and tedious, and may cause the patient discomfort. This thesis investigates another method for such measurements: by using a non-contact, capacitively coupled sensor, many of these shortcomings can be overcome. While this method relieves some problems, it also introduces several design difficulties, such as circuit noise, the need for extremely high input impedance, and interference. A capacitively coupled sensor was created using the bottom layer of a printed circuit board (PCB) as a capacitor plate and placing it against the signal source, which acts as the opposite capacitor plate. The PCB solder mask layer and any air in between the two act as the insulator to create a full capacitor. The signal picked up by this sensor was then amplified by 60 dB with a high input impedance amplifier circuit and further conditioned through filtering. Two measurements were made of the same circuit, but with different input impedances: one with 10 MΩ and one with 10 GΩ input impedance. Additional filtering was designed to combat interference from the mains power lines at 50 Hz and 150 Hz that was discovered during initial measurements. The circuits were characterized by their transfer functions and their ability to amplify a very low-level, low-frequency input signal. The results of these measurements show that high input impedance is of critical importance for the functionality of the sensor and that an input impedance of 10 GΩ is sufficient to produce a signal-to-noise ratio (SNR) of 9.7 dB after digital filtering with an input signal of 25 μV at 10 Hz.
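The importance of high input impedance can be seen from the high-pass corner formed by the coupling capacitance and the amplifier input resistance, f_c = 1/(2πRC). The sketch below assumes an illustrative coupling capacitance of 50 pF, which is not a figure from the thesis.

```python
import math

def highpass_corner_hz(r_ohm, c_farad):
    """Corner frequency of the high-pass formed by coupling capacitance and input resistance."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

c_couple = 50e-12            # assumed coupling capacitance (illustrative, not from the thesis)
for r in (10e6, 10e9):       # the two input impedances compared in the thesis
    print(f"R = {r:.0e} ohm -> corner = {highpass_corner_hz(r, c_couple):.2f} Hz")
```

Under this assumption, 10 MΩ places the corner near 318 Hz, far above the sub-40 Hz band of interest, while 10 GΩ pushes it to roughly 0.3 Hz, consistent with the conclusion that very high input impedance is critical for the sensor.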
507

Mining Topic Signals from Text

Al-Halimi, Reem Khalil January 2003 (has links)
This work aims at studying the effect of word position in text on understanding and tracking the content of written text. In this thesis we present two uses of word position in text: topic word selectors and topic flow signals. The topic word selectors identify important words, called topic words, by their spread through a text. The underlying assumption here is that words that repeat across the text are likely to be more relevant to the main topic of the text than ones that are concentrated in small segments. Our experiments show that manually selected keywords correspond more closely to topic words extracted using these selectors than to words chosen using more traditional indexing techniques. This correspondence indicates that topic words identify the topical content of the documents more than words selected using the traditional indexing measures that do not utilize word position in text. The second approach to applying word position is through topic flow signals. In this representation, words are replaced by the topics to which they refer. The flow of any one topic can then be traced throughout the document and viewed as a signal that rises when a word relevant to the topic is used and falls when an irrelevant word occurs. To reflect the flow of the topic in larger segments of text we use a simple smoothing technique. The resulting smoothed signals are shown to be correlated to the ideal topic flow signals for the same document. Finally, we characterize documents using the importance of their topic words and the spread of these words in the document. When incorporated into a Support Vector Machine classifier, this representation is shown to drastically reduce the vocabulary size and improve the classifier's performance compared to the traditional word-based, vector space representation.
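A topic flow signal as described here can be sketched as a binary indicator over word positions, smoothed with a moving average. The topic lexicon and window size below are placeholder choices, not those of the thesis.

```python
import numpy as np

def topic_flow(words, topic_words, window=5):
    """Binary topic-occurrence signal over word positions, smoothed with a simple moving average."""
    raw = np.array([1.0 if w.lower() in topic_words else 0.0 for w in words])
    kernel = np.ones(window) / window
    return np.convolve(raw, kernel, mode="same")

# Toy example with a placeholder topic lexicon
text = "the fish swims in the river while the cat watches the fish near the river bank".split()
signal = topic_flow(text, topic_words={"fish", "river", "bank"}, window=3)
print(np.round(signal, 2))
```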
508

Smoothing And Differentiation Of Dynamic Data

Titrek, Fatih 01 May 2010 (has links) (PDF)
Smoothing is an important part of the pre-processing step in signal processing. A signal purified from noise as much as possible is necessary to achieve our aim. There are many smoothing algorithms which give good results on stationary data, but these algorithms do not give the expected results on non-stationary data. Studying acceleration data is an effective way to see whether the smoothing is successful or not: even the small part of the noise that remains in the displacement data will severely affect the acceleration data, which are obtained by taking the second derivative of the displacement data. In this thesis, some linear and non-linear smoothing algorithms will be analyzed on a non-stationary dataset.
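The noise-amplification problem described above can be reproduced in a few lines: numerically differentiating noisy displacement data twice amplifies high-frequency noise, and a moving-average smoother only partially helps. The signal and noise levels below are arbitrary illustrations, not data from the thesis.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
displacement = np.sin(t)                                  # clean displacement
noisy = displacement + 0.001 * np.random.default_rng(1).standard_normal(t.size)

def second_derivative(x, dt):
    """Central-difference second derivative."""
    return (x[2:] - 2 * x[1:-1] + x[:-2]) / dt**2

def moving_average(x, window=9):
    return np.convolve(x, np.ones(window) / window, mode="same")

acc_true = -np.sin(t)[1:-1]                               # analytic second derivative
acc_raw = second_derivative(noisy, dt)
acc_smooth = second_derivative(moving_average(noisy), dt)

print("RMS error, raw:     ", np.sqrt(np.mean((acc_raw - acc_true) ** 2)))
print("RMS error, smoothed:", np.sqrt(np.mean((acc_smooth - acc_true) ** 2)))
```

Even a 0.1% noise level in displacement swamps the true acceleration after double differentiation, which is exactly why the quality of the smoothing step matters.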
509

A Study On Bandpassed Speech From The Point Of Intelligibility

Ganesh, Murthy C N S 10 1900 (has links)
Speech has been the subject of interest for a very long time. Even with so much advancement in processing techniques and in the understanding of the source of speech, it is, even today, rather difficult to generate speech in the laboratory in all its aspects. A simple aspect such as how speech retains its intelligibility even when it is distorted or bandpassed is not really understood. This thesis deals with one small feature of speech, viz., that the intelligibility of speech is retained even when it is bandpassed with a minimum bandwidth of around 1 KHz located anywhere in the speech spectrum of 0-4 KHz. Several experiments have been conducted by earlier workers by passing speech through various distortors like differentiators, integrators and infinite peak clippers, and it is found that the intelligibility is retained to a very large extent in the distorted speech. The integrator and the differentiator essentially remove a certain portion of the spectrum. Therefore, it is thought that the intelligibility of speech is spread over the entire speech spectrum and that the intelligibility may not be impaired even when the speech is bandpassed with a minimum bandwidth, with the band located anywhere in the speech spectrum. To test this idea and establish this feature if it exists, preliminary experiments have been conducted by passing speech through different filters, and it is found that the conjecture seems to be on the right line. To carry out systematic experiments, an experimental setup has been designed and fabricated which consists of a microprocessor-controlled speech recording, storage and playback system. Also, a personal computer is coupled to the microprocessor system to enable the storage and processing of the data. Thirty persons drawn from different walks of life, such as teachers, mechanics and students, have been involved in providing the samples and in recognizing the content of the processed speech. Even though sentences like 'This is devices lab' are used to ascertain the effect of bandwidth on intelligibility, for the purpose of analysis vowels are used as the speech samples. The experiments essentially consist of recording words and sentences spoken by the 30 participants; these recorded speech samples are passed through different filters with different bandwidths and central frequencies. The filtered output is played back to the various listeners and observations regarding the intelligibility of the speech are noted. The listeners do not have any prior information about the content of the speech. It has been found that in almost all (95%) cases, the messages or words are intelligible to most of the listeners when the bandwidth of the filter is about 1 KHz, and this is independent of the location of the pass band in the spectrum of 0-4 KHz. To understand how this feature of speech arises, spectra of vowels spoken by the 30 people have been computed using FFT algorithms on the digitized samples of the speech. It is felt that there is a cyclic behaviour of the spectrum in all the samples. To make sure that the periodicity is present, and also to arrive at its value, a moving average procedure is employed to smooth the spectrum. The smoothed spectra of all the vowels indeed show a periodicity of about 1 KHz. When the periodicities are analysed, their average value is found to be 1038 Hz with a standard deviation of 19 Hz.

In view of this, it is thought that the acoustic source responsible for speech must have generated this periodic spectrum, which might have been modified periodically to imprint the intelligibility. If this is true, one can perhaps easily understand this feature of speech, viz., that intelligibility is retained in bandpassed speech of bandwidth 1 KHz with the pass band located anywhere in the speech spectrum of 0-4 KHz. This thesis, describing the experiments and the analysis of the speech, is presented in 5 chapters. Chapter 1 deals with the basics of speech and the processing tools used to analyse the speech signal. Chapter 2 presents the literature survey from which the present problem is tracked down. Chapter 3 describes the details of the structure and the fabrication of the experimental setup that has been used. Chapter 4 gives a detailed account of the way in which the experiments are conducted and the way in which the speech is analysed. In conclusion, Chapter 5 summarises the work and suggests the future work needed to establish the mechanism of speech responsible for the feature described in this thesis.
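The filtering experiments described above can be sketched as follows. A white-noise placeholder stands in for the recorded speech, and the band edges are arbitrary examples of 1 KHz-wide bands within the 0-4 KHz range; none of the specific values are from the thesis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 8000                      # sampling rate used for illustration (not from the thesis)
t = np.arange(0, 1.0, 1 / fs)
speech = np.random.default_rng(0).standard_normal(t.size)   # placeholder for a digitized utterance

def bandpass(x, low_hz, high_hz, fs, order=4):
    """Butterworth band-pass filter applied forwards and backwards (zero phase)."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

# 1 KHz-wide bands placed at different positions in the 0-4 KHz speech spectrum
for lo in (200, 1200, 2500):
    y = bandpass(speech, lo, lo + 1000, fs)
    print(f"band {lo}-{lo + 1000} Hz: output RMS = {np.sqrt(np.mean(y**2)):.3f}")
```

In the actual experiments, the filtered output was played back to listeners who had no prior knowledge of the content, and intelligibility was judged from their responses.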
510

Is LED use in traffic signals viable in the Texas Department of Transportation, Houston District?

Ughanze, Ugonna Uzodinma 05 November 2012 (has links)
Light Emitting Diode (LED) technology is used in traffic signals and highway illumination in the Texas Department of Transportation (TxDOT), Houston District. The thesis focuses on the cost of maintaining LEDs for signals on the highway system in the Houston District. This LED cost includes human and capital resources, which are compared against the costs associated with the incandescent bulbs used in traffic signals at similar locations in Houston. The analysis leads to actionable decisions on whether a total migration to LEDs is advisable, amidst budgetary constraints and the benefits thereof.
