61

A fully automated cell segmentation and morphometric parameter system for quantifying corneal endothelial cell morphology

Al-Fahdawi, Shumoos, Qahwaji, Rami S.R., Al-Waisy, Alaa S., Ipson, Stanley S., Ferdousi, M., Malik, R.A., Brahma, A. 22 March 2018 (has links)
Background and Objective: Corneal endothelial cell abnormalities may be associated with a number of corneal and systemic diseases. Damage to the endothelial cells can significantly affect corneal transparency by altering hydration of the corneal stroma, which can lead to irreversible endothelial cell pathology requiring corneal transplantation. To date, quantitative analysis of endothelial cell abnormalities has been performed manually by ophthalmologists using time-consuming and highly subjective semi-automatic tools that require operator interaction. We developed and applied a fully automated, real-time system, termed the Corneal Endothelium Analysis System (CEAS), for the segmentation and computation of endothelial cells in images of the human cornea obtained by in vivo corneal confocal microscopy. Methods: First, a Fast Fourier Transform (FFT) band-pass filter is applied to reduce noise and enhance image quality, making the cells more visible. Secondly, endothelial cell boundaries are detected using watershed transformations and Voronoi tessellations to accurately quantify the morphological parameters of the human corneal endothelial cells. The performance of the automated segmentation system was tested against manually traced ground-truth images based on a database of 40 corneal confocal endothelial cell images, in terms of segmentation accuracy and the clinical features obtained. In addition, the robustness and efficiency of the proposed CEAS system were compared with manually obtained cell densities using a separate database of 40 images from controls (n = 11), obese subjects (n = 16) and patients with diabetes (n = 13). Results: The Pearson correlation coefficient between automated and manual endothelial cell densities is 0.9 (p < 0.0001), and a Bland–Altman plot shows that 95% of the data lie between the 2SD agreement lines.
Conclusions: We demonstrate the effectiveness and robustness of the CEAS system and the possibility of using it in a real-world clinical setting to enable rapid diagnosis and patient follow-up, with an execution time of only 6 seconds per image.
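The first step of the pipeline described above, an FFT band-pass filter that suppresses noise while retaining the spatial frequencies of the cell mosaic, can be sketched as follows. This is a minimal NumPy illustration, not the authors' CEAS code; the cut-off radii `low` and `high` and the toy image are placeholder assumptions.

```python
import numpy as np

def fft_bandpass(image, low, high):
    """Keep only spatial frequencies with radius in [low, high] (cycles/image)."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    r = np.hypot(y - cy, x - cx)          # radial frequency of each bin
    mask = (r >= low) & (r <= high)       # annular band-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Toy example: a periodic "cell mosaic" pattern buried in noise
rng = np.random.default_rng(0)
img = np.sin(2 * np.pi * 8 * np.linspace(0, 1, 128, endpoint=False))[None, :] * np.ones((128, 1))
noisy = img + 0.5 * rng.standard_normal((128, 128))
filtered = fft_bandpass(noisy, low=4, high=16)   # the 8-cycle pattern survives
```

The pattern's energy sits at radius 8 in the frequency plane, inside the assumed band, while most of the broadband noise falls outside it and is discarded.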
62

FFT and Neural Networks for Identifying and Classifying Heart Arrhythmias

Kegel, Johan, Zetterblad, Carolina January 2024 (has links)
The rise of machine learning has seen an increase in digital methods for use within health care. Arrhythmia detection is one of the areas where this increase is obvious. However, many machine learning methods for arrhythmia detection rely on models that are computationally expensive, such as convolutional neural networks (CNNs). This thesis examines whether it is viable to use the Fast Fourier Transform to transform an electrocardiogram (ECG) signal into its frequency components before training a neural network (NN) on the data. This could allow for a lower computational cost and wider availability of arrhythmia detection technology. The results from the model were compared to those of a CNN trained on time-domain data. The results show that the CNN model outperforms the NN trained on FFT-transformed data, but the performance of the model still indicates that valuable information about heart arrhythmias exists within the frequency space. This suggests potential for future work on the subject.
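The preprocessing idea examined here, feeding frequency components rather than raw samples to a small network, can be illustrated with a hedged sketch. The feature extractor below is a generic stand-in, not the thesis model; the bin count and the toy signals are assumptions.

```python
import numpy as np

def fft_features(ecg, n_bins=64):
    """Magnitude spectrum of an ECG window, truncated to the first n_bins
    frequency components: a compact input vector for a small dense NN."""
    spectrum = np.abs(np.fft.rfft(ecg - ecg.mean()))   # remove DC first
    return spectrum[:n_bins] / (spectrum[:n_bins].max() + 1e-12)

# Two toy "rhythms": a slow oscillation vs. a fast one over a 1 s window
t = np.linspace(0, 1, 256, endpoint=False)
slow = np.sin(2 * np.pi * 3 * t)
fast = np.sin(2 * np.pi * 12 * t)
f_slow, f_fast = fft_features(slow), fft_features(fast)
print(np.argmax(f_slow), np.argmax(f_fast))   # → 3 12
```

The dominant bins differ cleanly, which is why even a small dense network on such vectors can separate classes at a fraction of a CNN's cost.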
63

OpenMP parallelization in the NFFT software library

Volkmer, Toni 29 August 2012 (has links) (PDF)
We describe an implementation of a multi-threaded NFFT (nonequispaced fast Fourier transform) software library and present the parallelization approaches used. Besides the NFFT kernel, the NFFT on the two-sphere and the fast summation based on the NFFT are also parallelized. The parallelization is based on OpenMP and the multi-threaded FFTW library. Furthermore, benchmarks for various cases are performed. The results show that an efficiency higher than 0.50, and up to 0.79, can still be achieved at 12 threads.
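Parallel efficiency here means speedup divided by thread count. A tiny sketch of the metric, with hypothetical timings (the numbers are illustrative, not the paper's benchmarks):

```python
def parallel_efficiency(t_serial, t_parallel, threads):
    """Parallel efficiency: speedup over the serial run, divided by the
    number of threads; 1.0 would be ideal linear scaling."""
    return (t_serial / t_parallel) / threads

# Hypothetical timings: a 12.0 s serial transform finishing in 1.6 s on
# 12 threads gives a 7.5x speedup, i.e. an efficiency of 0.625, inside
# the 0.50 to 0.79 range reported in the paper.
eff = parallel_efficiency(12.0, 1.6, 12)
print(round(eff, 3))  # → 0.625
```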
64

ASIC Implementation of A High Throughput, Low Latency, Memory Optimized FFT Processor

Kala, S 12 1900 (has links) (PDF)
The rapid advancements in semiconductor technology have led to a constant shrinking of transistor sizes, as per Moore's Law. Wireless communications is one field which has seen explosive growth, thanks to the cramming of more transistors into a single chip. The design of these systems involves trade-offs between performance, area and power. The Fast Fourier Transform is an important component in most wireless communication systems. FFTs are widely used in applications like OFDM transceivers, spectrum sensing in cognitive radio, image processing, and radar signal processing. The FFT is the most compute-intensive and time-consuming operation in most of these applications. It is always a challenge to develop an architecture which gives high throughput while reducing latency without much area overhead. Next-generation wireless systems demand high transmission efficiency, and hence the FFT processor should be capable of performing computations much faster. Architectures based on smaller radices for computing longer FFTs are inefficient. In this thesis, a fully parallel unrolled FFT architecture based on a novel radix-4 engine is proposed, catering to a wide range of applications. The radix-4 butterfly unit takes all four inputs in parallel and can selectively produce one of the four outputs. The proposed architecture uses Radix-4^3 and Radix-4^4 algorithms for the computation of various FFTs. The Radix-4^4 block can take all 256 inputs in parallel and use select control signals to generate one of the 256 outputs. In existing Cooley-Tukey architectures, the output from each stage has to be reordered before the next stage can start computation, which requires intermediate storage after each stage. In our architecture, each stage can directly generate the reordered outputs, reducing these buffers. A solution to the output reordering problem in Radix-4^3 and Radix-4^4 FFT architectures is also discussed in this work.
Although the hardware complexity in terms of adders and multipliers is increased in our architecture, a significant reduction in the intermediate memory requirement is achieved. FFTs of varying sizes, from 64-point to 64K-point, have been implemented in ASIC using UMC 130nm CMOS technology. The data representation used in this work is fixed-point format, with a word length of 16 bits chosen to maximise the Signal to Quantization Noise Ratio (SQNR). The architecture has been found to be more suitable for computing FFTs of large sizes. For 4096-point and 64K-point FFTs, this design gives comparable throughput with a considerable reduction in area and latency when compared to state-of-the-art implementations. The 64K-point FFT architecture achieves a throughput of 1332 mega samples per second with an area of 171.78 mm^2 and a total power of 10.7 W at 333 MHz.
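A single radix-4 butterfly of the kind described, taking four inputs in parallel and selectively producing one of the four outputs, can be sketched as follows. This is a generic 4-point DFT butterfly in NumPy for illustration only; twiddle-factor handling between stages is omitted.

```python
import numpy as np

def radix4_butterfly(x, k):
    """Output k of a 4-point DFT over inputs x[0..3].

    The twiddles for a radix-4 engine are powers of W = exp(-2j*pi/4) = -1j;
    selecting a single k mirrors the 'one of four outputs' behaviour
    described in the abstract."""
    w = np.exp(-2j * np.pi * k * np.arange(4) / 4)
    return np.dot(w, x)

x = np.array([1.0, 2.0, 3.0, 4.0], dtype=complex)
full = np.fft.fft(x)   # reference 4-point DFT for comparison
assert all(np.isclose(radix4_butterfly(x, k), full[k]) for k in range(4))
```

In hardware, the four multiplications by powers of -1j reduce to sign swaps and real/imaginary exchanges, which is what makes the radix-4 engine cheap despite taking all inputs at once.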
65

Can Computers Hear Birds?

Movin, Andreas, Jilg, Jonathan January 2019 (has links)
Sound recognition is made possible through spectral analysis, computed by the fast Fourier transform (FFT), and has in recent years made major breakthroughs along with the rise of computational power and artificial intelligence. The technology is now used ubiquitously, in particular in the field of bioacoustics for the identification of animal species, an important task in wildlife monitoring. It is still a growing field of science, and the recognition of bird song in particular remains a hard challenge; even state-of-the-art algorithms are far from error-free. In this thesis, simple algorithms to match sounds against a sound database were implemented and assessed. A filtering method was developed to pick out characteristic frequencies at five time frames, which formed the basis for comparison and the matching procedure. The sounds used were pre-recorded bird songs (blackbird, nightingale, crow and seagull) as well as human voices (4 young Swedish males) that we recorded. Our findings show success rates typically at 50–70%, the lowest being the seagull at 30% for a small database and the highest being the blackbird at 90% for a large database. The voices were more difficult for the algorithms to distinguish, but they still had an overall success rate between 50% and 80%. Furthermore, increasing the database size did not in general improve success rates. In conclusion, this thesis shows a proof of concept and illustrates both the strengths and the shortcomings of the simple algorithms developed. The algorithms gave better success rates than pure chance (25%), but there is room for improvement, since the algorithms were easily misled by sounds of the same frequencies. Further research is needed to assess the devised algorithms' ability to identify even more birds and voices.
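The matching procedure described above, characteristic peak frequencies in five time frames compared against a database, can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis code; the tolerance, sampling rate, and pure-tone stand-ins for bird song are all assumptions.

```python
import numpy as np

def signature(signal, rate, frames=5):
    """Peak frequency (Hz) in each of `frames` equal time slices."""
    sig = []
    for chunk in np.array_split(signal, frames):
        mags = np.abs(np.fft.rfft(chunk))
        freqs = np.fft.rfftfreq(len(chunk), d=1 / rate)
        sig.append(freqs[np.argmax(mags)])
    return np.array(sig)

def match_score(a, b, tol=20.0):
    """Fraction of frames whose peak frequencies agree within tol Hz."""
    return float(np.mean(np.abs(a - b) < tol))

rate = 8000
t = np.linspace(0, 1, rate, endpoint=False)
blackbird = np.sin(2 * np.pi * 2000 * t)   # toy stand-ins, not real bird song
crow = np.sin(2 * np.pi * 600 * t)
assert match_score(signature(blackbird, rate), signature(blackbird, rate)) == 1.0
assert match_score(signature(blackbird, rate), signature(crow, rate)) == 0.0
```

The sketch also shows the weakness the thesis reports: two different sounds sharing the same dominant frequencies would produce identical signatures and mislead the matcher.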
66

Pricing Basket of Credit Default Swaps and Collateralised Debt Obligation by Lévy Linearly Correlated, Stochastically Correlated, and Randomly Loaded Factor Copula Models and Evaluated by the Fast and Very Fast Fourier Transform

Fadel, Sayed M. January 2010 (has links)
In the last decade, the credit risk derivatives market has grown considerably in volume. This growth was followed by the recent financial market turbulence; together, these two periods have shown how significant and important the credit derivatives market and its products are. On the modelling side, this growth has been paralleled by more complicated, composite credit derivative products such as mth-to-default Credit Default Swaps (CDS), m-out-of-n CDS, and collateralised debt obligations (CDO). In this thesis, the Lévy process is proposed to generalise the standard pricing model for credit risk derivatives, the Gaussian Factor Copula Model, and to overcome its limitations. One of its most important drawbacks is a lack of tail dependence or, in other words, the need for a more skewed correlation structure. With the Lévy Factor Copula Model, the microscopic approach to exploring factor copula models is developed and standardised to incorporate an endless number of distribution alternatives that admit the Lévy process. Since the Lévy process can accommodate a variety of structural assumptions, from pure jumps to continuous stochastic processes, the distributions that admit it can represent asymmetry and fat tails just as they can characterise symmetry and normal tails. As a consequence, they can capture the probabilities of both high and low events. Subsequently, other techniques that can enhance the skewness of the correlation and be incorporated within the Lévy Factor Copula Model are proposed, namely the Stochastic Correlated Lévy Factor Copula Model and the Lévy Random Factor Loading Copula Model. The Lévy process is then applied through a number of proposed models for pricing basket CDS and CDO by the Lévy Factor Copula and its skewed versions, evaluated by V-FFT for limiting and mixture cases of the Lévy skew alpha-stable distribution and the generalized hyperbolic distribution.
Numerically, the characteristic functions of the number of defaults for mth-to-default and (n/m)th-to-default CDS, the CDO's cumulative loss, and the loss given default are evaluated by semi-explicit techniques, i.e. via the fast form of the DFT (FFT) and the proposed very fast form (VFFT). This technique, through its fast and very fast forms, reduces the computational complexity from O(N^2) to O(N log2 N) and O(N), respectively.
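The core numerical device, inverting a characteristic function with the FFT to recover a default-count distribution in O(N log N), can be shown in its simplest form. The sketch below assumes independent names with a common default probability (a plain binomial portfolio), deliberately omitting the factor-copula correlation and the V-FFT variant proposed in the thesis.

```python
import numpy as np

def default_pmf_via_fft(n, p):
    """Recover P(K = k), where K is the number of defaults among n
    independent names each defaulting with probability p, by FFT
    inversion of the characteristic function phi(u) = (1-p + p*e^{iu})^n."""
    N = n + 1                                  # support is {0, ..., n}
    u = 2 * np.pi * np.arange(N) / N           # evaluation grid
    phi = (1 - p + p * np.exp(1j * u)) ** n    # characteristic function
    return np.real(np.fft.fft(phi)) / N        # exact inversion, O(N log N)

pmf = default_pmf_via_fft(10, 0.1)
assert np.isclose(pmf.sum(), 1.0)
assert np.isclose(pmf[0], 0.9 ** 10)           # P(no defaults)
```

In the copula setting, the same inversion is applied conditionally on the common factor and then integrated, but the FFT step is identical in spirit.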
67

Applications of Fourier Analysis to Audio Signal Processing: An Investigation of Chord Detection Algorithms

Lenssen, Nathan 01 January 2013 (has links)
The discrete Fourier transform has become an essential tool in the analysis of digital signals. Applications have become widespread since the discovery of the Fast Fourier Transform and the rise of personal computers. The field of digital signal processing is an exciting intersection of mathematics, statistics, and electrical engineering. In this study we aim to gain understanding of the mathematics behind algorithms that can extract chord information from recorded music. We investigate basic music theory, introduce and derive the discrete Fourier transform, and apply Fourier analysis to audio files to extract spectral data.
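A bare-bones version of the chord-detection idea, taking the FFT of the audio, finding the strongest spectral peaks, and mapping each peak to an equal-tempered pitch class, might look like this. It is an illustrative sketch, not the thesis's algorithm; the synthetic triad and the fixed peak count are assumptions.

```python
import numpy as np

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def freq_to_note(f):
    """Nearest equal-tempered pitch class, using A4 = 440 Hz."""
    midi = int(round(69 + 12 * np.log2(f / 440.0)))
    return NOTE_NAMES[midi % 12]

def detect_notes(signal, rate, n_peaks=3):
    """Pitch classes of the n strongest spectral peaks."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
    top = np.argsort(mags)[-n_peaks:]
    return sorted({freq_to_note(freqs[i]) for i in top})

rate = 44100
t = np.linspace(0, 1, rate, endpoint=False)
# Synthetic C major triad: C4, E4, G4
chord = sum(np.sin(2 * np.pi * f * t) for f in (261.63, 329.63, 392.00))
print(detect_notes(chord, rate))   # → ['C', 'E', 'G']
```

Real recordings add harmonics, vibrato, and noise, which is exactly where the spectral post-processing studied in the thesis becomes necessary.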
68

DIGITAL RECEIVER PROCESSING TECHNIQUES FOR SPACE VEHICLE DOWNLINK SIGNALS

Natali, Francis D., Socci, Gerard G. 10 1900 (has links)
International Telemetering Conference Proceedings / October 28-31, 1985 / Riviera Hotel, Las Vegas, Nevada / Digital processing techniques and related algorithms for receiving and processing space vehicle downlink signals are discussed. The combination of low minimum signal to noise density (C/No), large signal dynamic range, unknown time of arrival, and high space vehicle dynamics that is characteristic of some of these downlink signals results in a difficult acquisition problem. A method for rapid acquisition is described which employs a Fast Fourier Transform (FFT). Also discussed are digital techniques for precise measurement of space vehicle range and range rate using a digitally synthesized number controlled oscillator (NCO).
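The FFT-based rapid-acquisition idea, evaluating all frequency hypotheses at once instead of sweeping them with a bank of correlators, can be sketched for a noisy complex baseband tone. The sampling rate, Doppler offset, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def acquire_doppler(samples, rate):
    """Coarse carrier-frequency estimate for a complex baseband signal.

    One N-point FFT tests all N frequency bins in O(N log N), so an
    unknown Doppler offset is located in a single pass rather than by
    stepping a local oscillator through each hypothesis."""
    spectrum = np.abs(np.fft.fft(samples))
    freqs = np.fft.fftfreq(len(samples), d=1 / rate)
    return freqs[np.argmax(spectrum)]

rng = np.random.default_rng(1)
rate, n = 1_000_000, 4096                  # 1 MHz sampling, 4096-point FFT
t = np.arange(n) / rate
doppler = -37_000.0                        # unknown offset to recover (Hz)
rx = np.exp(2j * np.pi * doppler * t) + 0.5 * rng.standard_normal(n)
est = acquire_doppler(rx, rate)            # lands within one FFT bin (~244 Hz)
```

The residual error after this coarse search is at most one bin width (rate/N), which a tracking loop driven by an NCO can then pull in.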
69

Travel time reliability assessment techniques for large-scale stochastic transportation networks

Ng, Man Wo 07 October 2010 (has links)
Real-life transportation systems are subject to numerous uncertainties in their operation. Researchers have suggested various reliability measures to characterize their network-level performance. One of these measures is travel time reliability, defined as the probability that travel times remain below certain (acceptable) levels. Existing reliability assessment (and optimization) techniques tend to be computationally intensive. In this dissertation we develop computationally efficient alternatives. In particular, we make the following three contributions. In the first contribution, we present a novel reliability assessment methodology for the case where the source of uncertainty is given by road capacities. More specifically, we present a method based on the theory of Fourier transforms to numerically approximate the probability density function of the (system-wide) travel time. The proposed methodology takes advantage of the established computational efficiency of the fast Fourier transform. In the second contribution, we relax the common assumption that the probability distributions of the sources of uncertainty are known explicitly. In reality, these distributions may be unavailable (or inaccurate) when we have no (or insufficient) data to calibrate them. We present a new method to assess travel time reliability that is distribution-free in the sense that it only requires the first N moments (where N is any positive integer) of the travel time to be known, and that the travel times reside in a set of known and bounded intervals. Instead of deriving exact probabilities of travel times exceeding certain thresholds via computationally intensive methods, we develop analytical probability inequalities to quickly obtain upper bounds on the desired probability.
Because of the computationally intensive nature of (virtually all) existing reliability assessment techniques, the optimization of the reliability of transportation systems has generally been computationally prohibitive. The third and final contribution of this dissertation is the introduction of a new transportation network design model in which the objective is to minimize the unreliability of travel time. The computational requirements are shown to be much lower thanks to the assessment techniques developed in this dissertation. Moreover, numerical results suggest that it has the potential to form a computationally efficient proxy for current simulation-based network design models.
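The first contribution exploits the fact that characteristic functions of independent travel time components multiply, so the distribution of a total travel time can be recovered with FFTs instead of repeated direct convolutions. A minimal discrete sketch, with made-up link distributions:

```python
import numpy as np

def sum_pmf_fft(pmfs):
    """PMF of a sum of independent discrete travel times.

    Multiplying the DFTs of the link PMFs (their characteristic functions
    on a grid) and inverting replaces chained O(N^2) convolutions with a
    handful of O(N log N) transforms."""
    n = sum(len(p) for p in pmfs) - len(pmfs) + 1   # support size of the sum
    acc = np.ones(n, dtype=complex)
    for p in pmfs:
        acc *= np.fft.fft(p, n)                     # zero-padded transform
    return np.real(np.fft.ifft(acc))

# Two toy links, travel time in minutes, with simple discrete distributions
link_a = np.array([0.5, 0.3, 0.2])     # P(T = 0, 1, 2)
link_b = np.array([0.6, 0.4])          # P(T = 0, 1)
total = sum_pmf_fft([link_a, link_b])
reliability = total[:3].sum()          # P(total travel time <= 2 minutes)
```

With zero-padding to the support of the sum, the FFT result is the exact convolution, so `reliability` matches what direct enumeration would give.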
70

Numerical homogenization of periodic structures by Fourier transform: composite materials and porous media

Nguyen, Trung Kien 21 December 2010 (has links)
This study is devoted to developing numerical tools based on the Fast Fourier Transform (FFT) for determining the effective properties of periodic structures. The first part is devoted to composite materials. In the first chapter, we present and compare the different FFT-based solution methods in the linear context. In the second chapter, we propose a two-scale approach for determining the behavior of nonlinear composites. The method couples FFT-based iterative schemes at the local scale with a multidimensional interpolation of the strain potential at the macroscopic scale. This approach has many advantages over existing ones. First, it requires no approximation for the determination of the macroscopic response. Moreover, it is sequential in the sense that the two scales need not be processed simultaneously. The macroscopic constitutive law was then implemented in a finite element code, and illustrations are given for a beam bending problem. The second part of the work is dedicated to the formulation of a numerical tool for determining the permeability of saturated porous media. In chapter three, we present the approach in the context of quasi-static flows. To solve the problem, we propose an FFT-based stress-formulated iterative scheme, better suited to handling the case of infinite contrasts. Two extensions of this method are proposed in the fourth chapter. The first concerns the slip effects that occur at the interface between solid and fluid; the methodology uses the concepts of equivalent interphase and interface, initially introduced in the context of elastic composites and adapted here to the case of porous media. Finally, we present the extension of the method to the dynamic context, proposing a new iterative scheme to take inertial effects into account.
