41

An Introduction to the Winograd Discrete Fourier Transform

Agnello, Janice S. 01 April 1979 (has links) (PDF)
This paper illustrates Winograd's approach to computing the Discrete Fourier Transform (DFT). The approach recasts the DFT as a cyclic convolution of two sequences and exploits shortcuts for computing that cyclic convolution. The method is known to reduce the number of multiplications required to about 20% less than the number used by the techniques of the Fast Fourier Transform. Three cases are discussed: prime lengths, products of primes, and powers of odd primes. For powers of 2, Winograd's algorithm is in general inefficient and is best avoided. A computer simulation of the 35-point transform is presented, and its execution time is compared with that of the Fast Fourier Transform algorithm for 32 points.
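The central step described above, recasting a DFT as a cyclic convolution, can be sketched for the prime-length case. The Python sketch below uses Rader's re-indexing by a primitive root, the reduction that Winograd's prime-length algorithms build on, and evaluates the resulting length-(p-1) cyclic convolution with NumPy FFTs purely for brevity, so it reproduces the structure of the reduction rather than Winograd's multiplication savings.

```python
import numpy as np

def primitive_root(p):
    """Brute-force search for a generator of the multiplicative group mod the prime p."""
    for g in range(2, p):
        if len({pow(g, k, p) for k in range(1, p)}) == p - 1:
            return g
    raise ValueError("no primitive root found")

def prime_dft_via_cyclic_convolution(x):
    """DFT of a prime-length sequence, re-indexed by a primitive root so that
    the nonzero-frequency outputs become a cyclic convolution of two sequences."""
    x = np.asarray(x, dtype=complex)
    p = len(x)
    g = primitive_root(p)
    w = np.exp(-2j * np.pi / p)
    a = np.array([x[pow(g, m, p)] for m in range(p - 1)])     # permuted inputs x[g^m]
    b = np.array([w ** pow(g, -r, p) for r in range(p - 1)])  # permuted twiddles w^(g^-r)
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))         # cyclic convolution of a and b
    X = np.empty(p, dtype=complex)
    X[0] = x.sum()
    for q in range(p - 1):
        X[pow(g, -q, p)] = x[0] + conv[q]                     # scatter back to output order
    return X

x = np.random.rand(7)
print(np.allclose(prime_dft_via_cyclic_convolution(x), np.fft.fft(x)))  # True
```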
42

Contrast Sensitivity to One- and Two-Dimensional Luminance Patterns

Persaud, Steven S. 14 May 2004 (has links)
Contrast sensitivities to one- and two-dimensional luminance patterns were compared in a two-alternative forced choice (2AFC) experiment. Space-averaged luminance was also manipulated. Statistical analyses revealed a main effect of stimulus dimension (p < .05) and no effect of space-averaged luminance. The main effect of stimulus dimension was explained in terms of an on-center, off-center receptive field model combined with watershed spatial vision behavior at spatial frequencies below 1 cycle-per-degree (cpd). The non-significant result for space-averaged luminance was explained by the limited range of manipulation of the variable. Two-dimensional luminance patterns were suggested as ideal patterns for reconciling grating-based spatial vision research with spatial vision behavior in an ecological context. Future research directions are suggested. / Master of Science
43

Integrated analysis of mass transport deposits : outcrop data, seismic interpretation & fast Fourier transform analysis

Garyfalou, Aikaterini January 2015 (has links)
No description available.
44

Electrocardiogram evaluation under a mathematical approach

Melco, Tito Coutinho 10 November 2006 (has links)
The electrocardiogram conveys information about the passage of the electrical impulse through the heart and, consequently, about how the heart is functioning. Ever since Willem Einthoven built the first machine capable of measuring this electrical impulse non-invasively, with enough sensitivity to produce a useful trace, the electrocardiogram has been widely used for the clinical evaluation of patients. The machines that record it, however, have not evolved far beyond what Einthoven devised at the beginning of the 20th century. They have become smaller (even portable for some applications), traces are now displayed on video screens instead of paper strips, and, as the most significant advances, ECG machines can now detect the occurrence of a cardiac cycle with high reliability and, more recently, measure the ST parameter with limited precision (requiring operator adjustment in some cases). Motivated by these facts, this dissertation studies mathematical algorithms, focusing on models of the electrical impulse over the cardiac cycle, and evaluates their ability to extract ECG parameters accurately and quickly so that the physician promptly has the data needed for the clinical evaluation of the patient. First, algorithms for detecting the electrocardiogram pulse (R-wave detection) were studied; the ECG trace was then windowed to separate the individual cardiac cycles. From that point, mathematical models based on polynomial equations, the Fourier transform, and the wavelet transform were analyzed, and a Kalman filter applied to a vector model of the electrocardiogram was implemented to filter noise and to generate leads that were not measured. The results were evaluated against performance requirements stated by the U.S. FDA and by the European standard IEC 60601-2-51, by running the algorithms on records from the PhysioNet database. The polynomial method was not considered attractive because it does not yield a single equation for a cardiac cycle but rather many equations, one for each point of the cycle. The other methods proved more effective: they produced parameters with physical meaning and allowed a better characterization of the important points of the electrocardiogram waveform.
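A minimal sketch of the first two steps described above, locating the R wave and windowing the trace into individual cardiac cycles, is given below; the detector, its relative threshold, and the refractory period are illustrative assumptions and not the algorithms evaluated in the dissertation.

```python
import numpy as np

def detect_r_peaks(ecg, fs, threshold_ratio=0.6, refractory_s=0.25):
    """Crude R-peak detector: square the derivative to emphasize the steep QRS
    upstroke, then keep local maxima above a relative threshold while enforcing
    a refractory period between consecutive beats."""
    energy = np.gradient(ecg) ** 2
    threshold = threshold_ratio * energy.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(energy) - 1):
        if (energy[i] > threshold
                and energy[i] >= energy[i - 1]
                and energy[i] >= energy[i + 1]
                and i - last >= refractory):
            peaks.append(i)
            last = i
    return np.array(peaks)

def window_cycles(ecg, peaks):
    """Split the trace into one segment per detected beat (R peak to R peak)."""
    return [ecg[a:b] for a, b in zip(peaks[:-1], peaks[1:])]
```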
45

Consensus Model of Families of Images using Tensor-based Fourier Analysis

Shelton, Joel A 01 May 2016 (has links)
A consensus model is a statistical approach that uses a family of signals, or in our case a family of images, to generate a predictive model. In this thesis, we consider a family of images represented as tensors; in particular, our images are (2,0)-tensors. The consensus model is produced by applying the quantum Fourier transform to the family of images, viewed as tensors, to transform images to images. We write a quantum Fourier transform in Theano, a numerical computation library for Python, to produce the consensus spectrum. From the consensus spectrum, we produce the consensus model via the inverse quantum Fourier transform. Our method seeks to improve upon the phase reconstruction problem that arises when transforming images to images under a 2-dimensional consensus model by treating images as (2,0)-tensors.
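As a rough illustration of this workflow, the sketch below substitutes NumPy's classical 2-D FFT for the thesis's Theano quantum Fourier transform and a plain average of the spectra for the consensus step; both substitutions are assumptions made only to show the transform-combine-invert structure.

```python
import numpy as np

def consensus_model(images):
    """images: list of equally sized 2-D arrays ((2,0)-tensors).
    Transform each image to the frequency domain, combine the spectra into a
    single consensus spectrum, and invert it back to image space."""
    spectra = np.stack([np.fft.fft2(img) for img in images])
    consensus_spectrum = spectra.mean(axis=0)        # stand-in for the consensus step
    return np.real(np.fft.ifft2(consensus_spectrum))

family = [np.random.rand(64, 64) for _ in range(10)]
model_image = consensus_model(family)
```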
46

Comparison of numerical result checking mechanisms for FFT computations under faults

Bharthipudi, Saraswati 01 January 2004 (has links)
This thesis studies and compares existing numerical result-checking algorithms for FFT computations under faults. To simulate faulty conditions, a fault injection tool is implemented; the tool is designed to be as non-intrusive to the application as possible. Faults are injected into memory in the form of bit flips in the application's data elements. The performance of the three result-checking algorithms under these conditions is studied and compared. Faults are injected at all stages of the FFT computation by flipping each of the 64 bits in the double-precision representation. Experiments also include introducing random bit flips in the data array, emulating a more realistic scenario. Finally, the performance of these algorithms under a set of worst-case conditions is also studied.
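The injected fault, a flip of one of the 64 bits in the IEEE-754 representation of a double-precision data element, can be mimicked with a short helper; the function below is an illustrative assumption rather than the fault injection tool built in the thesis.

```python
import struct

def flip_bit(value, bit):
    """Flip one of the 64 bits in the IEEE-754 representation of a Python float."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', value))
    bits ^= 1 << bit                                   # inject the single-bit fault
    (faulty,) = struct.unpack('<d', struct.pack('<Q', bits))
    return faulty

print(flip_bit(1.0, 51))   # flipping the top mantissa bit of 1.0 gives 1.5
print(flip_bit(1.0, 60))   # flipping an exponent bit gives a drastically different magnitude
```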
47

Fabric surface inspection by Fourier analysis and neural network /

Chan, Chi-ho. January 2001 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2001. / Includes bibliographical references (leaves 163-167).
48

Combinatorial and probabilistic techniques in harmonic analysis

Lewko, Mark J., 1983- 13 July 2012 (has links)
We prove several theorems in the intersection of harmonic analysis, combinatorics, probability and number theory. In the second section we use combinatorial methods to construct various sets with pathological combinatorial properties. In particular, we answer a question of P. Erdos and V. Sos regarding unions of Sidon sets. In the third section we use incidence bounds and bilinear methods to prove several new endpoint restriction estimates for the paraboloid over finite fields. In the fourth and fifth sections we study variational maximal operators associated to orthonormal systems. Here we use probabilistic techniques to construct well-behaved rearrangements and base changes. In the sixth section we apply our variational estimates to a problem in sieve theory. In the seventh section, motivated by applications to sieve theory, we disprove a maximal inequality related to multiplicative characters. / text
49

Functional magnetic resonance image registration using Fourier phase and residue error detection

麥可瑩, Mak, Ho-ying. January 2002 (has links)
published_or_final_version / abstract / toc / Electrical and Electronic Engineering / Master / Master of Philosophy
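No abstract accompanies this record, but Fourier-phase registration of the kind named in the title is often realized as phase correlation; the translation-only sketch below is assumed here for illustration and recovers the shift between two images from the peak of the inverse transform of the normalized cross-power spectrum.

```python
import numpy as np

def phase_correlation_shift(fixed, moving):
    """Estimate the circular integer translation of `moving` relative to `fixed`
    from the phase of the normalized cross-power spectrum."""
    Ff = np.fft.fft2(fixed)
    Fm = np.fft.fft2(moving)
    cross_power = np.conj(Ff) * Fm
    cross_power /= np.abs(cross_power) + 1e-12         # keep only the phase
    correlation = np.real(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    h, w = fixed.shape
    if dy > h // 2:                                    # map peak indices to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

img = np.random.rand(64, 64)
shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)
print(phase_correlation_shift(img, shifted))           # approximately (5, -3)
```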
50

Fabric surface inspection by Fourier analysis and neural network

陳志豪, Chan, Chi-ho. January 2001 (has links)
published_or_final_version / Electrical and Electronic Engineering / Master / Master of Philosophy
