1 |
A Decimation-in-Frequency Fast-Fourier Transform for the Symmetric Group / Koyama, Masanori / 01 May 2007 (has links)
A Discrete Fourier Transform (DFT) changes the basis of a group algebra from the standard basis to a Fourier basis. An efficient application of a DFT is called a Fast Fourier Transform (FFT). This research pertains to a particular type of FFT called Decimation in Frequency (DIF). An efficient DIF has been established for commutative algebras, but a successful analogue for non-commutative algebras has not been derived. We do, however, currently have a promising DIF algorithm for CSn called Orrison-DIF (ODIF). In this paper, I formally introduce the ODIF and establish a bound on the operation count of the algorithm.
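The commutative (cyclic-group) case that the abstract contrasts with can be sketched concretely. The following is a minimal radix-2 decimation-in-frequency FFT, not the ODIF itself, just an illustration of what "decimation in frequency" means:

```python
import cmath

def dif_fft(x):
    """Radix-2 decimation-in-frequency FFT for a cyclic group of order n,
    n a power of 2.

    DIF splits the OUTPUT (frequency) indices into even and odd halves at
    each stage -- the defining feature, as opposed to decimation in time,
    which splits the input indices.
    """
    n = len(x)
    if n == 1:
        return x
    half = n // 2
    # Butterflies first; twiddle factors are applied on the odd branch.
    even = [x[k] + x[k + half] for k in range(half)]
    odd = [(x[k] - x[k + half]) * cmath.exp(-2j * cmath.pi * k / n)
           for k in range(half)]
    e = dif_fft(even)   # yields frequency bins 0, 2, 4, ...
    o = dif_fft(odd)    # yields frequency bins 1, 3, 5, ...
    out = [0] * n
    out[0::2] = e
    out[1::2] = o
    return out
```

For the non-commutative group algebra CSn, the reader is referred to the paper itself; the sketch above only covers the commutative case the abstract mentions.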
|
2 |
Tropospheric Spectrum Estimations Comparing Maximum Likelihood with Expectation Maximization Solutions and Fast Fourier Transforms / Wellard, Stanley James / 01 May 2007 (has links)
The FIRST program (Far Infrared Spectroscopy in the Troposphere) was created as an Instrument Incubator Program (IIP) by NASA Langley to demonstrate improved technology readiness levels (TRLs) for two technologies needed in the design of new imaging Fourier transform spectrometers (IFTS). The IIP IFTS was developed at the Space Dynamics Laboratory and flown to an altitude of 103,000 feet on an instrumented NASA balloon payload. The sensor collected approximately 15,000 interferograms during its 6-hour flight. Fast Fourier transforms (FFTs) produced acceptable results, except for noise equivalent temperature differences (NETDs) five times higher than the goal and inconclusive spectra at seven strong absorption features.
An alternate transform technique, maximum likelihood estimation (MLE), was implemented to improve spectral estimations at the absorptions and to improve the NETD for the sensor. Iterative expectation-maximization (EM) algorithms provide numerical solutions for the MLE.
Four forms of the EM algorithm were developed, optimizing amplitude estimation as a function of the assumed noise distribution. 'Direct' and 'indirect' forms were developed to process the asymmetrical interferograms recorded by the FIRST sensor.
The direct method extends the standard even (cosine) EM algorithm to simultaneously transform both the sine and cosine components of the interferogram. The indirect method uses Fourier and inverse Fourier transforms as pre-processors to convert the measured asymmetrical interferograms to even (cosine) interferograms.
Using the indirect Gaussian EM form improved the measured NETD by approximately twenty percent between 100 and 700 wavenumbers. For wavenumbers less than 100 or greater than 700, the improvement increased to a factor of at least two out to 1500 wavenumbers.
The indirect Gaussian form produced inconclusive results in the areas of high absorption because of large bias errors introduced by the FFT/IFFT pre-processing, and it was found to be inadequate for estimating spectra at the deep absorptions. The direct EM method, on the other hand, has no inherent biases in its initial conditions and so has the potential to produce improved amplitude estimations at the absorptions, at a cost in computer resources and execution time roughly four times that of the indirect method.
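One simple reading of the indirect pre-processing step (an assumption on our part, not the exact FIRST pipeline) is phase correction: FFT the asymmetrical interferogram, discard the spectral phase, and inverse-transform to obtain an even (cosine) interferogram that a cosine-only EM estimator can then process:

```python
import numpy as np

def symmetrize_interferogram(igram):
    """Hedged sketch of the 'indirect' pre-processing idea: use an FFT and
    an inverse FFT to convert an asymmetrical interferogram into an even
    (cosine) one by discarding the spectral phase.
    """
    spectrum = np.fft.fft(igram)
    # For a real input, |spectrum| is real and even, so its inverse
    # transform is a real, even interferogram.
    even_igram = np.fft.ifft(np.abs(spectrum)).real
    return even_igram
```

Discarding the phase is exactly the kind of step that can inject the bias errors the abstract attributes to the FFT/IFFT pre-processing.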
|
3 |
FFT Bit Templating – A Technique for Making Amplitude and Frequency Measurements of a BPSK Modulated Signal / Shockey, Bruce / October 2003 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / In many spacecraft receiver applications, the Fast Fourier Transform (FFT) provides a powerful tool
for measuring the amplitude and frequency of an unmodulated RF signal. By increasing the FFT
acquisition time, tiny signals can be coaxed from the noise and their frequency measured by
determining which frequency bin the signal energy appears in. The greater the acquisition time, the
narrower the bin bandwidth and the more accurate the frequency measurement.
In modern satellite operations it is often desirable for the receiver to measure the frequency of a
carrier which is modulated with BPSK data. The presence of the BPSK data limits the FFT
acquisition time since the signal may switch polarities a number of times while the FFT samples are
being acquired. This polarity switching spreads the signal energy into multiple frequency bins
making frequency measurement difficult or impossible. The Bit Templating Technique, used for the
first time in the CMC Electronics Cincinnati TDRSS / BPSK Spacecraft Receiver, collects the
modulated waveform energy back into a single bin so that accurate amplitude and frequency
information can be calculated.
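The spreading-and-collapsing effect can be illustrated numerically. The sketch below uses hypothetical parameters (sample rate, carrier bin, bit pattern) and applies a known bit template; an actual receiver must of course estimate the template from the signal:

```python
import numpy as np

# Illustrative parameters, not from the paper: 8 samples per bit, 16 bits,
# carrier placed exactly on FFT bin 24.
fs = 128.0                  # sample rate (hypothetical)
f0 = 24.0                   # carrier offset frequency (hypothetical)
sps, nbits = 8, 16
n = sps * nbits
t = np.arange(n) / fs
bits = np.array([1, -1, 1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1], float)
bpsk = np.repeat(bits, sps) * np.exp(2j * np.pi * f0 * t)

# Raw FFT: the polarity flips spread the carrier energy over many bins.
raw = np.abs(np.fft.fft(bpsk))

# Bit templating, as we read the idea: multiply by the bit template
# (bits are +/-1, so this strips the modulation) before the FFT,
# collapsing the energy back into a single bin.
templated = np.abs(np.fft.fft(np.repeat(bits, sps) * bpsk))
```

With the template applied, all of the signal energy lands in the carrier bin; without it, the peak is lower and the energy is smeared across neighboring bins, which is what makes the raw measurement difficult.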
|
4 |
Fast Fourier transforms and fast Wigner and Weyl functions in large quantum systems / Lei, Ci; Vourdas, Apostolos / 05 July 2024 (has links)
Two methods for fast Fourier transforms are used in a quantum context. The first method is for systems where the dimension of the Hilbert space is d^n, with d an odd integer, and is inspired by the Cooley-Tukey formalism. The 'large Fourier transform' is expressed as a sequence of n 'small Fourier transforms' (together with some other transforms) in quantum systems with d-dimensional Hilbert space. Limitations of the method are discussed. In some special cases, the n Fourier transforms can be performed in parallel. The second method is for systems where the dimension of the Hilbert space is d_1 ... d_n, with d_1, ..., d_n odd integers coprime to each other. It is inspired by the Good formalism, which in turn is based on the Chinese remainder theorem. In this case also the 'large Fourier transform' is expressed as a sequence of n 'small Fourier transforms' (that involve some constants related to the number theory that describes the formalism). The 'small Fourier transforms' can be performed in a classical computer or in a quantum computer (in which case we have the additional well-known advantages of quantum Fourier transform circuits). In the case that the small Fourier transforms are performed with a classical computer, complexity arguments for both methods show the reduction in computational time from O(d^{2n}) to O(n d^{n+1}). The second method is also used for the fast calculation of Wigner and Weyl functions, in quantum systems with large finite dimension of the Hilbert space.
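The Good (prime-factor) decomposition can be sketched classically for two coprime factors. The index maps below follow the Chinese remainder theorem, and the modular inverses `inv_n1`, `inv_n2` are the number-theoretic constants the abstract alludes to:

```python
import cmath

def dft(x):
    """Direct O(n^2) DFT, used as the 'small Fourier transform' here."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n))
            for m in range(n)]

def good_fft(x, n1, n2):
    """Good's prime-factor FFT for length N = n1*n2 with gcd(n1, n2) = 1.

    The CRT maps the length-N transform onto an n1 x n2 grid of small
    DFTs with no twiddle factors between the two stages.
    """
    n = n1 * n2
    inv_n2 = pow(n2, -1, n1)   # n2^{-1} mod n1
    inv_n1 = pow(n1, -1, n2)   # n1^{-1} mod n2
    # Input map (Ruritanian): k = (n2*k1 + n1*k2) mod N is a bijection.
    grid = [[x[(n2 * k1 + n1 * k2) % n] for k2 in range(n2)]
            for k1 in range(n1)]
    rows = [dft(row) for row in grid]                # n1 DFTs of length n2
    cols = [dft([rows[k1][m2] for k1 in range(n1)])  # n2 DFTs of length n1
            for m2 in range(n2)]
    # Output map: m is the CRT solution of m = m1 (mod n1), m = m2 (mod n2).
    out = [0j] * n
    for m1 in range(n1):
        for m2 in range(n2):
            m = (m1 * n2 * inv_n2 + m2 * n1 * inv_n1) % n
            out[m] = cols[m2][m1]
    return out
```

Because the input and output remappings absorb all the phase bookkeeping, no twiddle factors appear between the two stages, which is the distinctive feature of the Good formalism compared with Cooley-Tukey.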
|
5 |
IFFT-based techniques for peak power reduction in OFDM communication systems / Ghassemi, Abolfazl / 12 April 2010 (has links)
Orthogonal frequency division multiplexing (OFDM) is a multicarrier transmission technique which provides efficient bandwidth utilization and robustness against time dispersive channels. A major problem in the RF portion of a multicarrier transmitter is Gaussian-like time-domain signals with relatively high peak-to-average power ratios (PAPRs). These peaks can lead to saturation in the power amplifier (PA), which in turn distorts the signal and reduces the PA efficiency. To address this problem, numerous techniques have appeared in the literature based on signal and/or data modification.
In the class of distortionless techniques, partial transmit sequences (PTS), selective mapping (SLM), and tone reservation (TR) have received a great deal of attention as they are proven techniques that achieve significant PAPR reduction. However, high computational complexity is a problem in practical systems. In PTS and SLM, this complexity arises from the computation of multiple inverse fast Fourier transforms (IFFTs), resulting in a complexity proportional to the number of PTS subblocks or SLM sequences. TR also has a high computational complexity related to the computation of the IFFT, as it must search for the optimal subsets of reserved subcarriers and generate the peak reduction signal. In addition, most research in the direction of analyzing and improving the above techniques has employed direct computation of the inverse discrete Fourier transform (IDFT), which is not practical for implementation.
This thesis focuses on the development and performance analysis of the major distortionless techniques in conjunction with the common IFFT algorithms to reduce the peak-to-average power ratio (PAPR) of the original OFDM signal at the transmitter side. The structure of the common IFFT algorithms is used to propose a class of IFFT-based PAPR reduction techniques that reduce the computational complexity and improve PAPR performance.
For IFFT-based PTS, two techniques are proposed. First, a low complexity scheme based on the decimation-in-frequency (DIF), high-radix IFFT algorithm is proposed. Then, a new PTS subblocking technique is proposed to improve PAPR performance. The periodic auto-correlation function (ACF) of the time-domain IFFT-based PTS subblocks is derived. To improve the PAPR, we use error-correcting codes (ECCs) in the subblocking. Our approach significantly decreases the computational complexity while providing comparable PAPR reduction to ordinary PTS (O-PTS).
With IFFT-based SLM, a technique for reducing computational complexity is proposed. This technique is based on multiplying the phase sequences with subsets of the inputs to identical inverse discrete Fourier transforms (IDFTs). These subsets generate the partial SLM sequences using repetition codes. It is also shown how the partial time-domain subsets can be combined to generate new SLM sequences that do not require any IFFT operations. The proposed scheme outperforms the existing techniques in computational complexity while providing comparable PAPR reduction to original SLM (O-SLM).
Finally, a gradient-based algorithm is proposed for IFFT-based TR. Unlike previous work, non-static channels are considered, where the peak reduction tone (PRT) locations, and consequently the peak reduction kernels, should be adjusted dynamically for best performance. Two low complexity algorithms with different trade-offs between computational complexity and PAPR performance are proposed. To generate the peak reduction kernels, the transform matrices of identical IFFTs are used. This provides low complexity solutions for determining the PRTs and computing the peak reduction kernels.
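As background to all three techniques, the PAPR that they reduce is computed from the oversampled IFFT of the frequency-domain symbols. A minimal sketch (the oversampling factor and the zero-padding layout are our assumptions, not the thesis's):

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR of one OFDM symbol in dB, hedged sketch.

    The time-domain signal is the IFFT of the frequency-domain QAM/PSK
    symbols; zero-padding the middle of the spectrum (i.e. oversampling)
    approximates the continuous-time peaks the power amplifier sees.
    """
    n = len(freq_symbols)
    padded = np.concatenate([freq_symbols[:n // 2],
                             np.zeros((oversample - 1) * n, dtype=complex),
                             freq_symbols[n // 2:]])
    x = np.fft.ifft(padded)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())
```

The worst case (all subcarriers adding in phase) gives a PAPR equal to the number of subcarriers; a single tone gives 0 dB. PTS, SLM, and TR all work by steering the transmitted symbol away from the high-PAPR cases.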
|
6 |
Estudo experimental de segregação de partículas em misturas binárias usando análise de flutuações de pressão em leito fluidizado gás-sólido / Experimental study of segregation in granular binary mixtures using pressure fluctuations analysis in a gas-solid fluidized bed / Rueda Ordoñez, Diego Andres / 24 August 2018 (has links)
Orientadores: Araí Augusta Bernárdez Pécora, Emerson dos Reis / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Previous issue date: 2014 / Abstract: A methodology for the analysis of pressure fluctuation signals was used in the present work to study fluidization and segregation phenomena in fluidized beds containing granular binary mixtures of particles with different sizes and densities. Pressure measurements were made to characterize the dynamic behavior of the fluidized bed and to find the velocities involved in the segregation phenomenon. Each material was first studied on its own, for later comparison with the behavior observed for the mixtures containing those materials. Three types of solid particles were used in this work: plastic microspheres (971 µm Sauter mean diameter) and glass microspheres (462 and 959 µm Sauter mean diameters). The experimental system consists of a column, 0.1 m in diameter and 2.5 m in height, equipped with a porous-plate gas distributor. The column was built of alternating glass, acrylic, and carbon-steel sections, which allowed visual observation of the process and the acquisition of images with a camera. Measurements of pressure fluctuations were made at different gas superficial velocities for each studied material or mixture. The pressure signals were measured at three points: one at the plenum and two in the column, at 0.035 and 0.115 m above the distributor plate. The bed height was fixed at 0.150 m in all tests.
Pressure fluctuations were analyzed in the time domain and in the frequency domain using the fast Fourier transform (FFT), which allowed the dynamic behavior of the mixtures to be differentiated at each superficial gas velocity studied. The results allowed the identification of regions with different fluid dynamic behaviors and of the velocities inherent to the segregation process, such as the initial fluidization, complete fluidization, segregation, and complete mixing velocities. This work seeks to contribute to the understanding of the fluidization of binary mixtures and of the segregation phenomenon normally present in such systems / Mestrado / Termica e Fluidos / Mestre em Engenharia Mecânica
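The frequency-domain part of such an analysis can be sketched in a few lines: remove the mean from the pressure record, FFT it, and read off the dominant fluctuation frequency. This is a simplified stand-in for the methodology described above, not the thesis's exact procedure:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest spectral peak in a pressure record.

    The mean is removed so that only the fluctuations (e.g. the bubbling
    frequency of the bed) contribute to the spectrum; fs is the sampling
    rate in Hz.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                    # fluctuations only, no DC term
    spec = np.abs(np.fft.rfft(x)) ** 2  # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec)]
```

Comparing how this dominant frequency and the spectral shape change with gas superficial velocity is one common way of distinguishing the fluidization regimes the abstract describes.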
|
7 |
A New Subgroup Chain for the Finite Affine Group / Lingenbrink, David Alan, Jr. / 01 January 2014 (has links)
The finite affine group is a matrix group whose entries come from a finite field. A natural subgroup consists of those matrices whose entries all come from a subfield instead. In this paper, I will introduce intermediate subgroups with entries from both the field and a subfield. I will also examine the representations of these intermediate subgroups as well as the branching diagram for the resulting subgroup chain. This will allow us to create a fast Fourier transform for the group that uses asymptotically fewer operations than the brute force algorithm.
|
8 |
DYNAMIC HARMONIC DOMAIN MODELING OF FLEXIBLE ALTERNATING CURRENT TRANSMISSION SYSTEM CONTROLLERS / Vyakaranam, Bharat GNVSR / January 2011 (has links)
No description available.
|
9 |
Risques extrêmes en finance : analyse et modélisation / Financial extreme risks: analysis and modeling / Salhi, Khaled / 05 December 2016 (has links)
This thesis studies risk management and hedging, based on the Value-at-Risk (VaR) and the Conditional Value-at-Risk (CVaR) as risk measures. The first part proposes a stock price model that we test against real data from the Paris stock exchange (NYSE Euronext Paris). Our model takes into account the probability of occurrence of extreme losses and the regime switching observed in the data. Our approach is to detect the different periods of each regime by constructing a hidden Markov chain, and to estimate the tail of each regime's distribution by power laws. We show empirically that power laws are more suitable than the Gaussian and stable laws. The estimated VaR is validated by several backtests and compared to the results of other conventional models on a basis of 56 stock market assets. In the second part, we assume that stock prices are modeled by exponentials of a Lévy process. First, we develop a numerical method to compute the cumulative VaR and CVaR. This problem is solved using the formalization of Rockafellar and Uryasev, which we evaluate numerically by Fourier inversion. Second, we study the minimization of the hedging risk of European options under a budget constraint on the initial capital. Measuring this risk by the CVaR, we establish an equivalence between this problem and a problem of Neyman-Pearson type, for which we propose a numerical approximation based on relaxing the constraint.
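The Rockafellar-Uryasev representation mentioned above can be illustrated on an empirical loss sample. This sketch minimizes the piecewise-linear objective directly over the sample points rather than by the Fourier-inversion route used in the thesis:

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR at level alpha, via Rockafellar-Uryasev:

        CVaR_alpha(L) = min_c  c + E[(L - c)+] / (1 - alpha),

    whose minimizer c* is the VaR. On a finite sample the objective is
    piecewise linear, so it suffices to evaluate it at the sample points.
    """
    losses = np.sort(np.asarray(losses, dtype=float))

    def objective(c):
        return c + np.maximum(losses - c, 0.0).mean() / (1.0 - alpha)

    vals = [objective(c) for c in losses]
    i = int(np.argmin(vals))
    return losses[i], vals[i]
```

On a sample, the resulting CVaR is just the average of the worst (1 - alpha) fraction of losses, which is a quick sanity check for the minimization.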
|