321

Experimental and Computational Investigation of the Microstructure-Mechanical Deformation Relationship in Polycrystalline Materials, Applied to Additively Manufactured Titanium Alloys

Ozturk, Tugce 01 May 2017 (has links)
Parts made of titanium alloys exhibit anisotropic mechanical properties when manufactured by electron beam melting, an emerging additive manufacturing technique. Understanding the process-history-dependent heterogeneous microstructure and its effect on mechanical properties is crucial to determining the performance of additively manufactured titanium alloys, as the mechanical behavior relies heavily on the underlying microstructural features. This thesis work combines experimental and computational techniques for microstructure characterization, synthetic microstructure generation, mechanical property measurement, and mechanical behavior modeling of polycrystalline materials, with special focus on dual-phase titanium alloys. Macroscopic mechanical property measurements and multi-modal microstructure characterizations (high-energy X-ray diffraction, computed tomography, and optical microscopy) are performed on additively manufactured Ti-6Al-4V parts, revealing the heterogeneity of the microstructure and properties with respect to the build height. Because characterizing and testing every location within a build is not practical, a computational methodology is established to reduce the time and cost of creating a microstructure-property database. First, a statistical volume element size is determined for the Fast Fourier Transform (FFT) based micromechanical modeling technique through a sensitivity study performed on an experimental Ni-based superalloy and synthetic W, Cu, Ni, and Ti structures, showing that as the contrast of properties (e.g., texture, field localization, anisotropy, rate sensitivity) increases, so does the minimum simulation domain size requirement. In all deformation regimes, a minimum volume element is defined for both single- and dual-phase materials. The database is then expanded by generating statistically representative Ti structures whose features of interest (e.g., lath thickness, grain size, and orientation distribution) are systematically varied for use in spectral full-field micromechanical modeling. The relative effect of the chosen microstructural features is quantified through comparisons of average and local field distributions. The FFT-based technique, being a spectral full-field deformation modeling tool, is shown to capture the relative contributions of varying microstructural features such as phase fraction, grain morphology/size, and texture to the overall mechanical properties; the results indicate that the mean-field behavior is predominantly controlled by the alpha-phase fraction and the prior-beta-phase orientation.
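As an illustrative aside, the synthetic-microstructure step described above can be sketched in a few lines: a periodic Voronoi-style grain map is generated from random seed points and each grain receives a random crystallographic orientation. This is only a minimal stand-in for the statistically representative structure generation used in the thesis; the grid size, grain count, and orientation sampling below are arbitrary assumptions.

```python
import numpy as np

def synthetic_microstructure(grid=32, n_grains=40, seed=0):
    """Generate a periodic Voronoi-like 3D grain map and random grain orientations.

    Returns
    -------
    grain_id : (grid, grid, grid) int array, nearest-seed label per voxel
    euler    : (n_grains, 3) float array, random Bunge-Euler angles in radians
    """
    rng = np.random.default_rng(seed)
    seeds = rng.uniform(0.0, 1.0, size=(n_grains, 3))        # seed points in a unit cell
    x = (np.arange(grid) + 0.5) / grid                       # voxel centres
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    voxels = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

    # periodic (minimum-image) distance from every voxel to every seed
    d = voxels[:, None, :] - seeds[None, :, :]
    d -= np.round(d)                                         # wrap into [-0.5, 0.5)
    grain_id = np.argmin((d ** 2).sum(-1), axis=1).reshape(grid, grid, grid)

    # uniform random orientations as Bunge-Euler angles (phi1, Phi, phi2)
    euler = np.stack([rng.uniform(0, 2 * np.pi, n_grains),
                      np.arccos(rng.uniform(-1, 1, n_grains)),
                      rng.uniform(0, 2 * np.pi, n_grains)], axis=1)
    return grain_id, euler

grain_id, euler = synthetic_microstructure()
print(grain_id.shape, np.bincount(grain_id.ravel()).mean())  # roughly voxels per grain
```

A voxelized grain map with an orientation per grain is the kind of regular-grid input that FFT-based spectral solvers operate on directly.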
322

A comparison of frequency offset estimation methods in Orthogonal Frequency Division Multiplexing (OFDM) systems

Karaoglu, Bulent 12 1900 (has links)
Approved for public release; distribution is unlimited. / OFDM is a modulation technique that achieves high data rates, increased bandwidth efficiency, and robustness in multipath environments. However, OFDM has some disadvantages, such as sensitivity to channel fading, a large peak-to-average power ratio, and sensitivity to frequency offset. The latter causes intercarrier interference (ICI) and a reduction in the amplitude of the desired subcarrier, which results in a loss of orthogonality. In this thesis, the effects of frequency offset are studied in terms of loss of orthogonality. A number of techniques for frequency offset estimation are presented and tested in computer simulations. / Lieutenant Junior Grade, Turkish Navy
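As a brief illustration of the orthogonality loss described above, the sketch below applies a fractional carrier frequency offset to a single-subcarrier OFDM symbol and shows the attenuation of the desired bin and the leakage (ICI) into the other bins after the receiver FFT. The symbol size and offset are arbitrary illustration values, not parameters from the thesis.

```python
import numpy as np

N = 64          # subcarriers per OFDM symbol (illustrative choice)
eps = 0.15      # carrier frequency offset as a fraction of the subcarrier spacing
n = np.arange(N)

# transmit a single active subcarrier to expose the effect of the offset
X = np.zeros(N, dtype=complex)
X[10] = 1.0
x = np.fft.ifft(X)                          # OFDM modulation

# the frequency offset appears as a progressive phase rotation in time
y = x * np.exp(2j * np.pi * eps * n / N)

# receiver FFT: the desired bin is attenuated and energy leaks into other bins (ICI)
Y = np.fft.fft(y)
print(f"desired-bin gain |Y[10]| = {abs(Y[10]):.3f}   (1.0 with no offset)")
print(f"strongest ICI leakage    = {np.abs(np.delete(Y, 10)).max():.3f}")
print(f"total ICI power          = {(np.abs(np.delete(Y, 10))**2).sum():.3f}")
```

Repeating this with eps = 0 restores unit gain and zero leakage, which is the orthogonality the frequency-offset estimators in the thesis aim to preserve.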
323

Application of the number theoretic transform to the design of low-complexity algorithms for acoustic echo cancellation

Alaeddine, Hamzé 12 July 2007 (has links) (PDF)
The main objective of our study is to assess the feasibility of a real-time implementation of an acoustic echo cancellation system. To reduce the computational cost of this system, we explored the mathematical foundations of the Number Theoretic Transform (NTT), which is finding increasingly diverse applications in signal processing. In particular, we introduced the Fermat Number Transform (FNT), which, compared with the FFT (Fast Fourier Transform), reduces the number of multiplications required to realize certain functions such as convolution products. To exploit this transform, we proposed and studied new low-complexity echo cancellation algorithms, which we processed in blocks and made robust before implementing them by means of the FNT. The result of this implementation, compared with an FFT implementation, showed a large reduction in the number of multiplications, accompanied by an increase in the number of ordinary operations. To reduce this increase, we proposed a new form of the transform, called the Generalized Sliding FNT (GSFNT), which computes the FNT of a succession of sequences that differ from one another by a certain number of samples. Simulations of the performance of the echo cancellation algorithms processed with this technique showed that it compensates for the increase in ordinary operations observed with an FNT implementation. Finally, implementing the echo cancellation algorithms with the FNT and with a new version of the MDF (Multi-Delay Filter) algorithm, combined with a new method for computing the adaptation step size, yielded a significant reduction in computational complexity.
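To illustrate the central idea, the sketch below implements a direct (O(N²)) length-16 Fermat Number Transform modulo the Fermat prime F4 = 2^16 + 1 and uses it for an exact integer circular convolution, the operation at the heart of block echo cancellation. The transform length, modulus, and test data are illustrative choices only, not the block sizes or GSFNT variant developed in the thesis.

```python
import numpy as np

P = 2**16 + 1                 # Fermat prime F4 = 65537
N = 16                        # transform length; must divide P - 1
W = pow(3, (P - 1) // N, P)   # N-th root of unity mod P (3 is a primitive root of F4)

def fnt(a, root):
    """Direct O(N^2) number-theoretic transform of a length-N integer sequence mod P."""
    return [sum(a[n] * pow(root, k * n, P) for n in range(N)) % P for k in range(N)]

def ifnt(A):
    inv_n = pow(N, P - 2, P)                      # modular inverse of N
    return [(x * inv_n) % P for x in fnt(A, pow(W, P - 2, P))]

def circular_convolution(x, h):
    """Length-N circular convolution computed entirely in modular integer arithmetic."""
    X, H = fnt(x, W), fnt(h, W)
    return ifnt([(a * b) % P for a, b in zip(X, H)])

# short test: an 'echo path' h applied to a signal block x (values kept small so the
# true convolution never exceeds P and the modular result equals the integer result)
x = [3, 1, 4, 1, 5, 9, 2, 6] + [0] * 8
h = [2, 7, 1, 8, 0, 0, 0, 0] + [0] * 8
y_fnt = circular_convolution(x, h)
y_ref = np.convolve(x[:8], h[:4]).tolist()        # linear convolution fits within N samples
print(y_fnt[:len(y_ref)] == y_ref)                # True: exact, no rounding error
```

Unlike the FFT, every operation here is an integer addition or modular multiplication, which is the property the thesis exploits to trade multiplications for cheaper operations.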
324

Algorithms for Molecular Dynamics Simulations

Hedman, Fredrik January 2006 (has links)
Methods for performing large-scale parallel Molecular Dynamics (MD) simulations are investigated. A perspective on the field of parallel MD simulations is given. Hardware and software aspects are characterized, and the interplay between the two is briefly discussed. A method for performing ab initio MD is described; the method essentially recomputes the interaction potential at each time-step. It has been tested on a system of liquid water by comparing results with other simulation methods and experimental results. Different strategies for parallelization are explored. Furthermore, data-parallel methods for short-range and long-range interactions on massively parallel platforms are described and compared. Next, a method for treating electrostatic interactions in MD simulations is developed. It combines the traditional Ewald summation technique with the non-uniform fast Fourier transform (ENUF for short). The method scales as N log N, where N is the number of charges in the system. ENUF behaves very similarly to Ewald summation and can be easily and efficiently implemented in existing simulation programs. Finally, an outlook is given and some directions for further developments are suggested.
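For context on the kind of time-stepping such parallel methods accelerate, the sketch below shows a generic velocity-Verlet integrator for a small Lennard-Jones system in reduced units. It is a textbook MD building block, not the parallel or ENUF machinery developed in the thesis, and the particle count, lattice, and time step are arbitrary.

```python
import numpy as np

def lj_forces(pos, box):
    """Pairwise Lennard-Jones forces and potential energy (reduced units, minimum image)."""
    n = len(pos)
    f = np.zeros_like(pos)
    u = 0.0
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)                 # periodic minimum-image convention
        r2 = (d ** 2).sum(axis=1)
        inv6 = 1.0 / r2 ** 3
        u += np.sum(4.0 * (inv6 ** 2 - inv6))
        fij = (24.0 * (2.0 * inv6 ** 2 - inv6) / r2)[:, None] * d
        f[i] -= fij.sum(axis=0)
        f[i + 1:] += fij
    return f, u

def velocity_verlet(pos, vel, box, dt=0.002, steps=200):
    f, _ = lj_forces(pos, box)
    for _ in range(steps):
        vel += 0.5 * dt * f                          # half kick
        pos = (pos + dt * vel) % box                 # drift with periodic wrap
        f, _ = lj_forces(pos, box)
        vel += 0.5 * dt * f                          # second half kick
    return pos, vel

rng = np.random.default_rng(0)
box = 6.0
g = np.arange(4)                                     # 4x4x4 simple cubic lattice, 64 particles
pos = (np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3) + 0.5) * (box / 4)
vel = rng.normal(0.0, 0.5, size=pos.shape)

pos, vel = velocity_verlet(pos, vel, box)
_, U = lj_forces(pos, box)
print("potential energy per particle:", U / len(pos))
print("kinetic energy per particle  :", 0.5 * (vel ** 2).sum() / len(pos))
```

The O(N²) pair loop above is exactly the cost that short-range neighbor lists, data-parallel decompositions, and N log N electrostatics methods such as ENUF are designed to avoid.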
325

Searching for missing baryons through scintillation

Habibi, Farhang 15 June 2011 (has links) (PDF)
Cool molecular hydrogen (H2) may be the ultimate possible constituent of the Milky Way's missing baryons. We describe a new way to search for such transparent matter in the Galactic disc and halo, through its diffractive and refractive effects on the light of background stars. By simulating the phase delay induced by a turbulent medium, we computed the corresponding illumination pattern on the Earth for an extended source and a given passband. We show that in favorable cases the light of a background star can be subject to stochastic fluctuations of the order of a few percent on a characteristic time scale of a few minutes. We searched for scintillation induced by molecular gas in visible dark nebulae, as well as by hypothetical halo clumpuscules of cool molecular hydrogen (H2-He), during two nights of observation with the NTT telescope and the SOFI infrared detector. Among the few thousand monitored stars, we found one light curve that is compatible with a strong scintillation effect through a turbulent structure in the B68 nebula. Because no candidates were found toward the SMC, we are able to establish upper limits on the contribution of gas clumpuscules to the Galactic halo mass. We show that short time-scale monitoring of a few 10^6 star-hours in the visible band with a >4 m telescope and a fast-readout camera should make it possible to usefully quantify or constrain the contribution of turbulent molecular gas to the Galactic halo.
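A schematic toy model of the simulation step described above (phase screen plus propagation) is sketched below: a random phase screen with a Kolmogorov-like f^(-11/3) spectrum modulates a plane wave, which is then propagated with the angular-spectrum Fresnel transfer function to give an illumination pattern whose scintillation index can be measured. All parameters are arbitrary and the normalization is only indicative; this is not the thesis's simulation code.

```python
import numpy as np

def power_law_phase_screen(n, dx, r0, rng, exponent=-11.0 / 3.0):
    """Random phase screen with a Kolmogorov-like power-law spectrum (schematic normalization)."""
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f, indexing="ij")
    f2 = FX**2 + FY**2
    f2[0, 0] = np.inf                                     # drop the untreated zero frequency
    psd = 0.023 * r0**(-5.0 / 3.0) * f2**(exponent / 2.0)  # phase PSD ~ f^-11/3
    noise = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (np.fft.ifft2(noise * np.sqrt(psd) / dx) * n).real

def fresnel_propagate(u0, dx, wavelength, z):
    """Angular-spectrum Fresnel propagation of a complex field over distance z."""
    n = u0.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f, indexing="ij")
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

rng = np.random.default_rng(2)
n, dx = 256, 0.05                                 # grid and sampling (arbitrary units)
phi = power_law_phase_screen(n, dx, r0=0.5, rng=rng)
u = fresnel_propagate(np.exp(1j * phi), dx, wavelength=1e-6, z=2.5e5)
I = np.abs(u)**2
print("scintillation index var(I)/<I>^2 =", round(I.var() / I.mean()**2, 3))
```

The printed scintillation index is the same kind of intensity-fluctuation statistic that sets the few-percent, few-minute modulation quoted in the abstract.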
326

Low-power VLSI modem architectures for OFDM wireless networks: the role of alternative arithmetic

Μπροκαλάκης, Ανδρέας 16 March 2009 (has links)
Orthogonal Frequency Division Multiplexing (OFDM) has been established as one of the most prevalent methods for high-data-rate transmission over wireless channels. In an OFDM communication system, one of the fundamental and most computationally intensive parts is the computation of the Fast Fourier Transform (FFT). The subject of this thesis is to investigate the use of alternative arithmetic representation systems for the implementation of FFT circuits. Typically, these circuits are implemented using linear fixed-point representations. In the literature, implementations of the FFT using the Logarithmic Number System (LNS) have been proposed, and significant gains in quantization error, chip area, and power consumption have been reported. The effectiveness of these proposals is investigated for the case of the FFT in OFDM systems. Focusing on the case of the 64-point FFT for an OFDM modem for an 802.11a wireless network, one of the most efficient architectures proposed is based on the row-column FFT concept and performs the computation without using any digital (non-fixed-input) multiplier. The most important feature of the LNS representation is that multiplications turn into mere additions, yielding significant implementation gains; on the downside, addition in LNS is very expensive. Combining the above, it is shown that implementing the whole FFT computation in LNS is not a preferable solution. Although the row-column FFT architecture can offer high performance at low implementation cost, it presents a number of deficiencies, mainly because special-purpose circuits are used to perform the multiplications with the complex constants (twiddle factors) that appear in the computation. To alleviate these deficiencies, it is proposed to use the LNS representation in some parts of the FFT circuit, thus forming a hybrid-representation system. In hybrid-representation systems two major issues arise. The first is how to define equivalence between the arithmetic representation systems used, and the second is the cost of the circuits required to convert between numbers in the different arithmetic systems. Typically, the equivalence criterion is mathematically defined, and metrics such as the Signal-to-Noise Ratio (SNR) or the Average Relative Representation Error (ARRE) are commonly used. In this thesis a less restrictive metric is used: two arithmetic representations are defined to be equivalent if the Bit Error Rate (BER) performance of the overall OFDM system is the same. Using this approach, it is shown that short word-length representations may be used and that the conversions between the linear and logarithmic systems need not be very accurate. This greatly simplifies the conversion process, and the respective circuits can be implemented at low cost. For comparison, two 64-point FFT systems have been implemented, one using a linear fixed-point 2's complement representation and one using both linear and LNS representations. With no difference in BER performance or circuit delay, the hybrid-representation system requires less chip area and consumes significantly less power.
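The arithmetic trade-off discussed above (cheap multiplication, expensive addition) can be seen in a tiny LNS sketch: a value is stored as a sign and the base-2 logarithm of its magnitude, so multiplication becomes an addition of exponents, while addition requires the Gaussian-logarithm functions that hardware implementations approximate with tables or interpolation. The exact evaluation below is purely illustrative.

```python
import math

def to_lns(x):
    """Encode a nonzero real as (sign, log2|x|)."""
    return (1 if x >= 0 else -1, math.log2(abs(x)))

def from_lns(v):
    s, e = v
    return s * 2.0**e

def lns_mul(a, b):
    """Multiplication in LNS: add the exponents, multiply the signs."""
    return (a[0] * b[0], a[1] + b[1])

def lns_add(a, b):
    """Addition in LNS via the Gaussian logarithms sb(d) = log2(1 + 2^d) and db(d) = log2|1 - 2^d|.

    In hardware these functions are the expensive part (lookup tables / interpolation);
    here they are evaluated exactly for illustration.
    """
    (sa, ea), (s2, e2) = (a, b) if a[1] >= b[1] else (b, a)    # ensure ea >= e2
    d = e2 - ea                                                # d <= 0
    if sa == s2:
        return (sa, ea + math.log2(1.0 + 2.0**d))              # same sign: sb(d)
    if d == 0:
        raise ZeroDivisionError("exact cancellation: zero has no LNS representation")
    return (sa, ea + math.log2(1.0 - 2.0**d))                  # opposite sign: db(d)

x, y = to_lns(3.5), to_lns(-1.25)
print(from_lns(lns_mul(x, y)))   # -4.375  (a single exponent addition)
print(from_lns(lns_add(x, y)))   #  2.25   (needs the nonlinear correction term)
```

This is why a hybrid scheme is attractive for the FFT: butterflies are addition-heavy and suit fixed-point, while twiddle-factor multiplications suit LNS.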
327

Contribution to the study of the FFT common operator in the context of software radio: application to channel coding

Al Ghouwayel, Ali 27 May 2008 (has links) (PDF)
This thesis deals with software-defined radio, and more precisely with parametrization under the common-operator approach. In this context, cyclic codes, and in particular Reed-Solomon (RS) codes, are studied. With the aim of involving the FFT operator, which is used in various functions such as filtering and OFDM (de)modulation, in RS encoding and decoding, we identified a specific class of RS codes defined over the Galois field GF(Ft). We then proposed a reconfigurable FFT operator (DMFFT) capable of performing two functions: the FNT for RS decoding and the classical FFT. The DMFFT operator was implemented on FPGA devices and compared with the Velcro approach, which consists of implementing the FFT and FNT operators separately. This implementation showed that the reconfigurable approach saves about 25% in memory and reduces complexity by about 18%. In order to handle the classical RS codes used in current standards, we proposed two scenarios for optimally realizing a tri-mode operator (TMFFT) that can perform, in addition to the two functions of the DMFFT operator, the Fourier transform over the finite fields GF(2^m).
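A hedged sketch of the link exploited above: a Reed-Solomon code defined over GF(F3) = GF(257) can be encoded by evaluating the message polynomial at the powers of a root of unity, which is exactly a Fermat Number Transform. The (N, K) parameters below are illustrative, and the decoder (syndromes, error location) is omitted.

```python
P = 257                      # Fermat prime F3; codeword symbols live in GF(257)
N = 16                       # code length (divides P - 1)
K = 11                       # message length -> an RS(16, 11) code, distance N - K + 1 = 6
W = pow(3, (P - 1) // N, P)  # primitive N-th root of unity in GF(257)

def fnt(a, root):
    """Direct O(N^2) Fermat Number Transform over GF(257)."""
    return [sum(a[n] * pow(root, k * n, P) for n in range(N)) % P for k in range(N)]

def rs_encode(msg):
    """Evaluation-style RS encoding: the codeword is the FNT of the zero-padded message,
    i.e. c[i] = m(W^i) for the message polynomial m of degree < K."""
    assert len(msg) == K and all(0 <= s < P for s in msg)
    return fnt(list(msg) + [0] * (N - K), W)

msg = [10, 200, 7, 0, 99, 33, 3, 150, 42, 1, 256]
code = rs_encode(msg)

# sanity check: the inverse FNT returns the zero-padded message, confirming that the
# same transform hardware can serve both the FFT-style processing and RS coding paths
inv_w, inv_n = pow(W, P - 2, P), pow(N, P - 2, P)
recovered = [(x * inv_n) % P for x in fnt(code, inv_w)]
print(recovered[:K] == msg, all(v == 0 for v in recovered[K:]))   # True True
```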
328

Mitigating the effect of soft-limiting for OFDM peak reduction

Bibi, Nargis January 2014 (has links)
Digital communication systems which use Orthogonal Frequency Division Multiplexing (OFDM) are now widely used and have many advantages. The main disadvantage is the requirement for highly linear analogue electronics, including the high power amplifier (HPA). This requirement cannot be met in all circumstances because of the occurrence of symbols with a high peak-to-average power ratio (PAPR). Such symbols may be non-linearly distorted by limiting. Approaches to solving this problem have been either to reduce the PAPR at the transmitter or to try to mitigate the effect of the non-linearity at the receiver. Soft-limiting, i.e. applying limiting in software prior to the HPA, is a simple way to reduce the PAPR. It produces non-linear distortion which causes an increase in the bit-error-rate (BER) at the receiver. This thesis surveys existing alternative ways of reducing the effect of non-linearity and proposes some new ones. Two iterative receiver techniques, based on statistical analysis of the nature of the non-linearity, have been implemented and investigated: the ‘Bussgang Noise Cancellation’ (BNC) technique and the ‘Decision Aided Reconstruction’ (DAR) technique. As these techniques are valid for any memoryless non-linearity, an alternative form of limiting, named Inverted-Wraparound (IWRAP), has been included in the BNC investigation. A new method is proposed which is capable of correcting the received time-domain samples that are clipped, once they have been identified. This is named the ‘Equation-Method’, and it works by identifying constellation symbols that are likely to be correct at the receiver. If there are a sufficient number of these and they are correctly identified, the FFT may be partitioned to produce a set of equations that may be solved for the clipped time-domain samples. The thesis proposes four enhancements to this new method which improve its effectiveness. It is shown that the best form of this method outperforms conventional techniques, especially for severe clipping levels. The performance of these four enhancements is evaluated over channels with additive white Gaussian noise (AWGN) in addition to clipping distortion. A technique based on a ‘margin factor’ is designed to make these methods work more effectively in the presence of AWGN. A new combining algorithm, referred to as ‘HARQ for Clipping’, is presented in which soft bit decisions are combined from multiple transmissions. ‘HARQ for Clipping’ has been combined with the best version of the Equation-Method, and the performance of this approach is evaluated in terms of the BER at different levels of AWGN. It has been compared to other approaches from the literature and was found to outperform the BNC iterative receiver by 3 dB at signal-to-noise ratios around 10 dB. Without HARQ, the best version of the Equation-Method performs better than the BNC receiver at signal-to-noise ratios above about 17 dB.
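The soft-limiting operation discussed above is straightforward to reproduce: generate an OFDM symbol, measure its PAPR, clip the time-domain samples at a chosen amplitude, and observe the resulting in-band distortion on the demodulated subcarriers. The clipping ratio and symbol size below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256                                           # subcarriers (illustrative)
bits = rng.integers(0, 2, (N, 2))
X = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)   # QPSK
x = np.fft.ifft(X) * np.sqrt(N)                   # unit-average-power time-domain symbol

papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))

# soft limiting: clip the magnitude at A = clip_ratio * rms while keeping the phase
clip_ratio = 1.4
A = clip_ratio * np.sqrt(np.mean(np.abs(x)**2))
x_clipped = np.where(np.abs(x) > A, A * x / np.abs(x), x)

# demodulate and measure the in-band distortion (error vector magnitude)
Y = np.fft.fft(x_clipped) / np.sqrt(N)
evm = np.sqrt(np.mean(np.abs(Y - X)**2) / np.mean(np.abs(X)**2))

papr_clipped_db = 10 * np.log10(np.max(np.abs(x_clipped)**2) / np.mean(np.abs(x_clipped)**2))
print(f"PAPR before clipping : {papr_db:.1f} dB")
print(f"PAPR after clipping  : {papr_clipped_db:.1f} dB")
print(f"EVM from clipping    : {100 * evm:.1f} %")
```

The reduced PAPR and the nonzero EVM illustrate the transmitter-side benefit and the receiver-side penalty that the BNC, DAR, and Equation-Method techniques in the thesis are designed to trade off.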
329

Zařízení pro diagnostiku asynchronních motorů / Device for the diagnostics of induction machines

Bulín, Tomáš January 2014 (has links)
The aim of this thesis is to study the problems of induction motors, which are among the most widespread and widely used machines. Because of their prevalence, it is advisable to monitor them for faults in order to prevent further damage and losses. Different types of methods can be used for condition monitoring, varying in technical difficulty and cost of implementation. This thesis uses a method based on measuring the stator currents, because these currents are already monitored and it is easy to build devices for a more detailed analysis. The whole monitoring is carried out using products from National Instruments, and the monitoring program is created in the LabVIEW graphical environment. The analysis is performed by computing the Fast Fourier Transform of the time signal. The result is the frequency spectrum, which contains frequency peaks, some of which indicate a fault. The test data are collected with a DAQ device; the same data are then used to develop a methodology for evaluating the online analysis, which is subsequently implemented on CompactRIO.
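The spectral analysis described above can be sketched with synthetic data: a 50 Hz stator current with small sidebands at (1 ± 2s)·f, a classic broken-rotor-bar signature, is windowed and transformed with an FFT, and the sideband peaks are read off relative to the fundamental. The supply frequency, slip, sideband amplitude, and sampling settings below are made-up illustration values, not measurements from the thesis.

```python
import numpy as np

fs, T = 5000.0, 10.0                    # sampling rate [Hz] and record length [s] (illustrative)
f_supply, slip = 50.0, 0.03             # supply frequency and motor slip (assumed values)
t = np.arange(0, T, 1.0 / fs)

# healthy 50 Hz current plus small (1 +/- 2s)f sidebands that mimic a broken-rotor-bar fault
i_stator = (np.sin(2 * np.pi * f_supply * t)
            + 0.02 * np.sin(2 * np.pi * (1 - 2 * slip) * f_supply * t)
            + 0.02 * np.sin(2 * np.pi * (1 + 2 * slip) * f_supply * t)
            + 0.005 * np.random.default_rng(4).standard_normal(t.size))

# amplitude spectrum via FFT (a Hann window reduces leakage around the strong fundamental)
win = np.hanning(t.size)
spec = np.abs(np.fft.rfft(i_stator * win)) / np.sum(win) * 2.0
freq = np.fft.rfftfreq(t.size, d=1.0 / fs)

ref = spec[np.argmin(np.abs(freq - f_supply))]          # fundamental amplitude
for f0 in [(1 - 2 * slip) * f_supply, f_supply, (1 + 2 * slip) * f_supply]:
    k = np.argmin(np.abs(freq - f0))
    print(f"{f0:6.1f} Hz : {20 * np.log10(spec[k] / ref):6.1f} dB relative to fundamental")
```

In a real measurement the same spectrum would be computed from the DAQ samples, and the depth of the sideband peaks relative to the fundamental is what the evaluation methodology thresholds.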
330

Spectrum Sensing Techniques For Cognitive Radio Applications

Sanjeev, G 01 1900 (has links) (PDF)
Cognitive Radio (CR) has received tremendous research attention over the past decade, both in academia and in industry, as it is envisioned as a promising solution to the problem of spectrum scarcity. A CR is a device that senses the spectrum for occupancy by licensed users (also called primary users) and transmits its data only when the spectrum is sensed to be available. For efficient utilization of the spectrum, while also guaranteeing adequate protection to the licensed user from harmful interference, the CR should be able to sense the spectrum for primary occupancy quickly as well as accurately. This makes Spectrum Sensing (SS) one of the key components of a CR, where the goal is to test whether the primary user is inactive (the null or noise-only hypothesis) or not (the alternate or signal-present hypothesis). Computational simplicity, robustness to uncertainties in the knowledge of various noise, signal, and fading parameters, and the ability to handle interference or other sources of non-Gaussian noise are some of the desirable features of an SS unit in a CR. In many practical applications, CR devices can exploit known structure in the primary signal. In the IEEE 802.22 CR standard, the primary signal is a wideband signal, but with a strong narrowband pilot component. In other applications, such as military communications and Bluetooth, the primary signal uses a Frequency Hopping (FH) transmission. These applications can significantly benefit from detection schemes that are tailored for detecting the corresponding primary signals. This thesis develops novel detection schemes for these primary signals and a rigorous analysis of their performance in the presence of fading. For example, in the case of wideband primary signals with a strong narrowband pilot, this thesis answers the further question of whether to use the entire wideband for signal detection, or whether to filter out the pilot signal and use narrowband signal detection. The question is interesting because the fading characteristics of wideband and narrowband signals are fundamentally different; due to this, it is not obvious which detection scheme will perform better in practical fading environments. At the other end of the gamut of SS algorithms, when the CR has no knowledge of the structure or statistics of the primary signal, and when the noise variance is known, Energy Detection (ED) is known to be optimal for SS. However, the performance of the ED is not robust to uncertainties in the noise statistics or under different possible primary signal models. In this case, a natural way to pose the SS problem is as a Goodness-of-Fit Test (GoFT), where the idea is to either accept or reject the noise-only hypothesis. This thesis designs and studies the performance of GoFTs when the noise statistics can be non-Gaussian, and even heavy-tailed. The techniques are also extended to the cooperative SS scenario, where multiple CR nodes record observations using multiple antennas and perform decentralized detection. In this thesis, we study all the issues listed above by considering both single and multiple CR nodes, and evaluating their performance in terms of (a) the probability of detection error, (b) the sensing-throughput tradeoff, and (c) the probability of rejecting the null hypothesis. We propose various SS strategies, compare their performance against existing techniques, and discuss their relative advantages and performance tradeoffs.
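Energy detection, mentioned above as the structure-agnostic baseline when the noise variance is known, is simple to simulate: the received energy is compared against a threshold set for a target false-alarm probability, and the detection probability is estimated by Monte Carlo. The sample size, SNR, primary-signal model, and false-alarm target below are arbitrary illustration values.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
N = 200                      # real-valued samples per sensing interval (illustrative)
snr_db = -5.0                # received primary SNR (illustrative)
pfa_target = 0.05            # target false-alarm probability
trials = 20000

# noise-only statistic T = sum |w|^2 / sigma^2 is chi-square with N degrees of freedom,
# so the threshold for a given Pfa comes from the chi-square tail
tau = chi2.ppf(1.0 - pfa_target, df=N)

sigma2 = 1.0
snr = 10.0**(snr_db / 10.0)

w = rng.normal(0.0, np.sqrt(sigma2), (trials, N))          # H0: noise only
s = rng.normal(0.0, np.sqrt(snr * sigma2), (trials, N))    # H1: Gaussian primary signal
T0 = (w**2).sum(axis=1) / sigma2
T1 = ((w + s)**2).sum(axis=1) / sigma2

print(f"empirical Pfa = {(T0 > tau).mean():.3f}  (target {pfa_target})")
print(f"empirical Pd  = {(T1 > tau).mean():.3f}  at SNR {snr_db} dB with N = {N}")
```

If sigma2 used for the threshold differs from the true noise power, the empirical Pfa drifts away from the target, which is exactly the noise-uncertainty problem that motivates the goodness-of-fit tests developed in the thesis.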
The main contributions of this thesis are as follows. The question of whether to use pilot-based narrowband sensing or wideband sensing is answered using a novel, analytically tractable metric proposed in this thesis, called the error exponent with a confidence level. Under a Bayesian framework, obtaining closed-form expressions for the optimal detection threshold is difficult; near-optimal detection thresholds are obtained for most of the commonly encountered fading models. For an FH primary, using the Fast Fourier Transform (FFT) Averaging Ratio (FAR) algorithm, the sensing-throughput tradeoff is derived in closed form. A GoFT technique based on the statistics of the number of zero-crossings in the observations is proposed, which is robust to uncertainties in the noise statistics and outperforms existing GoFT-based SS techniques. A multi-dimensional GoFT based on stochastic distances is studied, which provides better performance than some of the existing techniques; a special case, a test based on the Kullback-Leibler distance, is shown to be robust to some uncertainties in the noise process. All of the theoretical results are validated using Monte Carlo simulations. In the case of FH-SS, an implementation of SS using the FAR algorithm on a commercial off-the-shelf platform is presented, and the performance recorded using the hardware is shown to corroborate well with the theoretical and simulation-based results. The results in this thesis thus provide a bouquet of SS algorithms that could be useful under different CR SS scenarios.
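One simple form of the zero-crossing goodness-of-fit idea mentioned above is sketched below (not necessarily the exact statistic used in the thesis): under the noise-only hypothesis with i.i.d. symmetric samples, the number of sign changes is Binomial(N-1, 1/2), so an atypical count rejects the null. A slowly varying primary signal correlates the samples and sharply reduces the crossing count; the SNR and signal frequency below are arbitrary illustration values.

```python
import numpy as np
from scipy.stats import norm

def zero_crossing_goft(x, alpha=0.05):
    """Reject 'x is i.i.d. symmetric noise' if the zero-crossing count is atypical.

    Under H0 the number of sign changes in N samples is Binomial(N-1, 1/2); a two-sided
    threshold from the normal approximation to the binomial is used here for simplicity.
    """
    n = len(x) - 1
    crossings = np.count_nonzero(np.sign(x[1:]) != np.sign(x[:-1]))
    z = abs(crossings - 0.5 * n) / np.sqrt(0.25 * n)
    return z > norm.ppf(1.0 - alpha / 2.0)

rng = np.random.default_rng(6)
N, trials = 500, 2000
snr = 1.0                      # 0 dB primary SNR (illustrative)
f = 0.05                       # normalized primary frequency (illustrative)

rej_h0 = rej_h1 = 0
for _ in range(trials):
    w = rng.standard_normal(N)                                      # H0: white Gaussian noise
    phase = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(2.0 * snr) * np.cos(2.0 * np.pi * f * np.arange(N) + phase)
    rej_h0 += zero_crossing_goft(w)          # fires roughly alpha of the time
    rej_h1 += zero_crossing_goft(w + s)      # correlated samples give far fewer crossings

print(f"false-alarm rate under H0 : {rej_h0 / trials:.3f}  (design value 0.05)")
print(f"detection rate under H1   : {rej_h1 / trials:.3f}")
```

Note that the test never uses the noise variance, which is the robustness property the abstract emphasizes.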
