
Signal Processing Approaches for Appearance Matching

Scoggins, Randy Keith 10 May 2003 (has links)
The motivation for this work is to study methods of estimating appropriate level-of-detail (LoD) object models by quantifying appearance errors prior to image synthesis. Visualization systems have been developed that employ LoD objects; however, their criteria are often based on heuristics that restrict the form of the object model and rendering method, and object illumination is not considered in the LoD selection. This dissertation proposes an image-based scene learning pre-process to determine an appropriate LoD for each object in a scene. Scene learning employs sample images of an object, from many views and with a range of geometric representations, to produce a profile of the LoD image error as a function of viewing distance. Signal processing techniques are employed to quantify how images change with respect to object model resolution, viewing distance, and lighting direction. A frequency-space analysis is presented which includes use of the vision system's contrast sensitivity to evaluate perceptible image differences with error metrics. The initial development of scene learning is directed to sampling the object's appearance as a function of viewing distance and object geometry in scene space. A second phase allows local lighting to be incorporated in the scene learning pre-process. Two methods for re-lighting are presented that differ in accuracy and overhead; both allow properties of an object's image to be computed without rendering. In summary, full-resolution objects produce the best image, since the 3D scene is then as real as possible. A less realistic 3D scene with simpler objects produces a different appearance in an image, but by how much? My thesis is that this difference can be quantified: object fidelity in the 3D scene can be loosened further than has previously been shown without introducing significant appearance change in an object, and the relationship between 3D object realism and appearance can be expressed quantitatively.
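The frequency-space error analysis this abstract describes can be illustrated with a small sketch. Everything below is an illustrative assumption, not the dissertation's actual implementation: the Mannos-Sakrison-style contrast sensitivity function, the function names, and the synthetic "renderings" are all hypothetical stand-ins for the idea of weighting spectral image differences by perceptibility.

```python
import numpy as np

def csf_weight(f):
    # Illustrative contrast sensitivity model (Mannos-Sakrison form):
    # sensitivity peaks at mid frequencies and falls off at the extremes.
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def perceptual_freq_error(img_a, img_b):
    # Compare two renderings in the Fourier domain, weighting each
    # frequency by the CSF before summing the spectral differences.
    Fa = np.fft.fftshift(np.fft.fft2(img_a))
    Fb = np.fft.fftshift(np.fft.fft2(img_b))
    h, w = img_a.shape
    fy, fx = np.meshgrid(np.arange(h) - h // 2,
                         np.arange(w) - w // 2, indexing="ij")
    radial = np.hypot(fx, fy)          # radial spatial frequency
    weights = csf_weight(radial)
    return np.sum(weights * np.abs(Fa - Fb)) / (h * w)

# A heavily simplified "LoD" image should score a larger perceptual
# error against the original than a nearly identical one does.
rng = np.random.default_rng(0)
original = rng.random((64, 64))
slightly_changed = 0.9 * original + 0.1 * original.mean()
heavily_changed = np.full_like(original, original.mean())
assert perceptual_freq_error(original, slightly_changed) < \
       perceptual_freq_error(original, heavily_changed)
```

The same comparison could be run across a sweep of viewing distances (rescaling the images) to build the kind of per-object error profile the pre-process produces.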

Infrared studies on solid hydrogen, deuterium and hydrogen-deuteride using Fourier-transform spectroscopy /

Lee, Sang Young January 1987 (has links)
No description available.

The fast Fourier transform and the spectral analysis of stationary time series /

Nobile, Marc January 1979 (has links)
No description available.

Fourier transform Raman spectroscopy: Evaluation as a non-destructive technique for studying the degradation of human hair from archaeological and forensic environments

Wilson, Andrew S., Edwards, Howell G.M., Farwell, Dennis W., Janaway, Robert C. January 1999 (has links)
Fourier transform (FT) Raman spectroscopy was evaluated as a non-destructive analytical tool for assessing the degradative state of archaeological and forensic hair samples. This work follows the successful application of FT-Raman spectroscopy to studies of both modern hair and ancient keratotic biopolymers, such as mummified skin. Fourteen samples of terminal scalp hair from 13 disparate depositional environments were analysed for evidence of structural alteration. Degradative change was evidenced by alteration to the amide I and III modes near 1651 and 1128 cm⁻¹, respectively, and by loss of definition in the (CC) skeletal backbone; the impact of environmental contaminants was also noted.

Novel Fractional Wavelet Transform with Closed-Form Expression

Anoh, Kelvin O.O., Abd-Alhameed, Raed, Jones, Steven M.R., Ochonogor, O., Dama, Yousef A.S. 08 1900 (has links)
A new wavelet transform (WT) is introduced based on the fractional properties of the traditional Fourier transform. The new wavelet follows from the fractional Fourier order, which uniquely identifies the representation of an input function in a fractional domain. It exploits the combined advantages of the WT and the fractional Fourier transform (FrFT). The transform permits the identification of a transformed function based on its fractional rotation in the time-frequency plane. The fractional rotation is then used to identify individual fractional daughter wavelets. For convenience, this study is limited to one dimension; an approach for extending it to two or more dimensions is outlined.

BLIND SOURCE SEPARATION USING FREQUENCY DOMAIN INDEPENDENT COMPONENT ANALYSIS

E., Okwelume Gozie, Kingsley, Ezeude Anayo January 2007 (has links)
Our thesis work focuses on frequency-domain Blind Source Separation (BSS), in which the received mixed signals are converted into the frequency domain and Independent Component Analysis (ICA) is applied to instantaneous mixtures at each frequency bin. This method also reduces the computational complexity. We also investigate the well-known problem associated with frequency-domain BSS using ICA, referred to as the permutation and scaling ambiguities, using methods proposed by other researchers. Our main goal in this project is to solve the permutation and scaling ambiguities in real-time applications.
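The key step this abstract relies on (a convolutive mixture becoming a set of instantaneous mixtures, one per frequency bin) follows from the convolution theorem: circular convolution in time is pointwise multiplication in frequency. A minimal sketch of that property, with illustrative signals rather than the authors' data:

```python
import numpy as np

def circular_convolve(s, h):
    # Direct time-domain circular convolution:
    # y[i] = sum_j s[j] * h[(i - j) mod n]
    n = len(s)
    return np.array([sum(s[j] * h[(i - j) % n] for j in range(n))
                     for i in range(n)])

rng = np.random.default_rng(1)
n = 64
source = rng.standard_normal(n)    # a source signal
channel = rng.standard_normal(n)   # a mixing impulse response

# Time domain: the observed signal is a convolution of source and channel.
mixed_time = circular_convolve(source, channel)

# Frequency domain: in every bin k the same observation is the scalar
# product S[k] * H[k] -- an *instantaneous* mixture, which is what lets
# ICA be applied independently at each frequency bin.
mixed_freq = np.fft.fft(source) * np.fft.fft(channel)

assert np.allclose(np.fft.fft(mixed_time), mixed_freq)
```

Per-bin unmixing is what introduces the permutation and scaling ambiguities the authors target: each bin's ICA solution is recovered only up to an arbitrary ordering and gain.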

Optimization of Rotations in FFTs

Qureshi, Fahad January 2012 (has links)
The aims of this thesis are to reduce the complexity and increase the accuracy of rotations carried out in the fast Fourier transform (FFT) at the algorithmic and arithmetic levels. In FFT algorithms, rotations appear after every hardware stage; these are also referred to as twiddle factor multiplications. At the algorithmic level, the focus is on the development and analysis of FFT algorithms. With this goal, a new approach based on binary tree decomposition is proposed. It uses the Cooley-Tukey algorithm to generate a large number of FFT algorithms. These FFT algorithms have identical butterfly operations and data flow but differ in the values of the rotations. Along with this, a technique for computing the indices of the twiddle factors based on the binary tree representation is proposed. We have analyzed the algorithms in terms of switching activity, coefficient memory size, number of non-trivial multiplications, and round-off noise. These parameters affect the power consumption, area, and accuracy of the architecture. Furthermore, we have analyzed some specific cases in more detail for subsets of the generated algorithms. At the arithmetic level, the focus is on the hardware implementation of the rotations. These can be implemented using a complex multiplier, the CORDIC algorithm, or constant multiplications. Architectures based on CORDIC and constant multiplication use shift and add operations, whereas complex multiplication generally uses four real multiplications and two additions. The sine and cosine coefficients of the rotation angles for a complex multiplier are normally stored in a memory; the implementation of this coefficient memory is analyzed and the best approaches are identified. Furthermore, a number of twiddle factor multiplication architectures based on constant multiplications are investigated and proposed. In the first approach, the number of twiddle factor coefficients is reduced by trigonometric identities. By applying the addition-aware quantization method, the accuracy and adder count of the coefficients are improved. A second architecture, based on scaling the rotations so that they no longer have unity gain, is proposed. This results in twiddle factor multipliers with even lower complexity and/or higher accuracy compared with the first proposed architecture.
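The CORDIC option mentioned in the abstract realizes a rotation using only shift-and-add style steps. The following is a floating-point sketch of the classic rotation-mode iteration, for illustration only; real twiddle-factor hardware uses fixed-point arithmetic, a fixed iteration count, and folds the accumulated gain into the datapath rather than dividing at the end.

```python
import math

def cordic_rotate(x, y, angle, iterations=32):
    # Rotate (x, y) by `angle` radians. Each step rotates by
    # +/- atan(2^-i), which in hardware is just a shift and an add;
    # the per-step gain sqrt(1 + 2^-2i) is compensated once at the end.
    gain = 1.0
    for i in range(iterations):
        sigma = 1.0 if angle >= 0 else -1.0
        factor = 2.0 ** -i
        x, y = x - sigma * y * factor, y + sigma * x * factor
        angle -= sigma * math.atan(factor)
        gain *= math.sqrt(1.0 + factor * factor)
    return x / gain, y / gain

# Rotating (1, 0) by pi/5 should give (cos(pi/5), sin(pi/5)) -- i.e.,
# CORDIC computes a twiddle factor without any stored sine/cosine table.
cx, sy = cordic_rotate(1.0, 0.0, math.pi / 5)
assert abs(cx - math.cos(math.pi / 5)) < 1e-6
assert abs(sy - math.sin(math.pi / 5)) < 1e-6
```

This is the trade-off the thesis weighs: CORDIC and constant-multiplier structures replace the four multiplications and two additions of a general complex multiplier (plus its coefficient memory) with cheaper shift-and-add networks.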

Identification of characteristic energy scales in nuclear isoscalar giant quadrupole resonances: Fourier transforms and wavelet analysis

Usman, Iyabo Tinuola 08 August 2008 (has links)
The identification of energy scales in the region of the Isoscalar Giant Quadrupole Resonance (ISGQR) is motivated by their potential use in understanding how ordered collective motion transforms into disordered motion of intrinsic single-particle degrees of freedom in many-body quantum systems. High energy-resolution measurements of the ISGQR were obtained by proton inelastic scattering at Ep = 200 MeV using the K600 magnetic spectrometer at iThemba LABS. The nuclei 58Ni, 90Zr, 120Sn and 208Pb, associated with closed shells, were investigated. Both Fourier transform and wavelet analysis were used to extract characteristic energy scales, which were then compared with results from the theoretical microscopic Quasi-particle Phonon Model (QPM), including contributions from collective and non-collective states. The scales found in the experimental data were in good agreement with the QPM. This provides a strong argument that the observed energy scales result from the decay of the collective modes into 2p-2h states. The different scale regions were tested directly by reconstructing the measured energy spectra using the inverse Fourier transform and the Continuous Wavelet Transform (CWT), together with a comparison to a previously available reconstruction using the Discrete Wavelet Transform (DWT).
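The scale-extraction idea in this abstract can be sketched with a toy continuous wavelet transform: convolve a signal with a real Morlet wavelet over a range of scales and pick the scale of maximum response. This is purely illustrative (a pure tone instead of a nuclear spectrum, and a simple direct-convolution CWT rather than the author's analysis chain); the normalization and scale grid are arbitrary choices.

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    # Real Morlet wavelet at a given scale (illustrative normalization).
    x = t / scale
    return np.exp(-0.5 * x ** 2) * np.cos(w0 * x) / np.sqrt(scale)

def cwt_peak_scale(signal, dt, scales):
    # CWT by direct convolution at each scale; return the scale whose
    # mean-square response (wavelet power) is largest.
    t = (np.arange(len(signal)) - len(signal) // 2) * dt
    power = [np.mean(np.convolve(signal, morlet(t, s), mode="same") ** 2)
             for s in scales]
    return scales[int(np.argmax(power))]

# For a pure tone of period T, the Morlet response peaks near
# scale = w0 * T / (2*pi), roughly T itself for w0 = 6.
dt = 0.01
time = np.arange(0, 10, dt)
tone = np.sin(2 * np.pi * time / 1.0)     # period T = 1.0
scales = np.linspace(0.2, 3.0, 57)
best = cwt_peak_scale(tone, dt, scales)
```

Applied to a measured spectrum instead of a tone, peaks in wavelet power as a function of scale are what identify the characteristic energy scales being compared against the QPM.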

Ultra-compact holographic spectrometers for diffuse source spectroscopy

Hsieh, Chaoray 15 January 2008 (has links)
Compact and sensitive spectrometers are of high utility in biological and environmental sensing applications. Over the past half century, enormous research resources have been dedicated to making spectrometers more compact and sensitive. However, because this work has been based on the same structure as conventional spectrometers, the achievable improvement in performance is limited; this long-established research field therefore still deserves further investigation, and a fundamentally new idea is required to take spectrometers to a whole new level. The research work presented in this thesis focuses on developing a new class of spectrometers that work based on the diffractive properties of volume holograms. The hologram in these spectrometers acts as a spectral diversity filter, which maps different input wavelengths into different locations in the output plane. The experimental results show that properly designed volume holograms have excellent capability for separating the different wavelength channels of a collimated incident beam. By adding a Fourier-transforming lens behind the hologram, a slitless Fourier-transform volume holographic spectrometer is demonstrated, and it works well under diffuse light without using any spatial filter (i.e., slit) at the input. By further design of the hologram, a very compact slitless and lensless spectrometer is implemented for diffuse source spectroscopy using only a volume hologram and a CCD camera. More sophisticated output patterns are also demonstrated using specially designed holograms to improve the performance of the holographic spectrometers. Finally, the performance of the holographic spectrometers is evaluated and the building of a holographic spectrometer prototype is discussed.

Acoustic technique in the diagnosis of voice disorders /

Kulinski, Christina. January 2004 (has links)
Thesis (M.S.)--University of Hawaii at Manoa, 2004. / Leaves 86-90 lacking. Includes bibliographical references (leaves 91-93).
