111 |
Obtenção e caracterização do Ba2In2O5 puro e contendo Gd e Er como aditivos / Obtaining and characterization of pure Ba2In2O5 and of Ba2In2O5 containing Gd and Er as additives. REY, JOSE F.Q., 09 October 2014 (has links)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / Electroceramics of Ba2In2O5 were prepared by the conventional mixed-oxide method and by mixing and crystallization of the metal nitrates, in order to verify the effect of the initial particle size on the order-disorder phase transition and on the electrical conductivity. Substitutions with the cations Gd3+ and Er3+ were made to verify the effect of these cations on the electrical conductivity of barium indate. Indium oxide was also prepared by the cation complexation technique, and the resulting nanoparticles were characterized by several techniques. The main characterization techniques were: thermal analysis, Fourier-transform infrared absorption spectroscopy, scanning electron microscopy, transmission electron microscopy, X-ray diffraction with both conventional and synchrotron radiation, small-angle X-ray scattering, energy-dispersive spectroscopy, Raman spectroscopy, and electrical conductivity measurement by impedance spectroscopy. The main results showed that the calcination and sintering heat treatments strongly influence the formation of the Ba2In2O5 phase. Spurious phases also form easily in Ba2In2O5 through its interaction with moisture. A smaller initial particle size favors a reduction of the second-order phase-transition temperature. The introduction of Er at relatively low contents increased the electrical conductivity while simultaneously lowering the phase-transition temperature. High contents of Er and Gd give rise to multiple phases. An intermediate compound is formed during the thermal decomposition of indium citrate.
Calcination of the indium citrate produced a particulate material of nanometric size, even at temperatures of up to 900 ºC. / Thesis (Doctorate) / IPEN/T / Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP / FAPESP:01/14033-0
|
112 |
Propriedades globais de uma classe de complexos diferenciais / Global properties of a class of differential complexes. Hugo Cattarucci Botós, 23 March 2018 (has links)
Consider the manifold T^n × S^1 with coordinates (t, x), and let a(t) be a real, closed differential 1-form on T^n. In this work we consider the operator L_a^p = d_t + a(t) ∧ ∂_x from D'_p to D'_{p+1}, where D'_p is the space of all p-currents of the form u = Σ_{|I|=p} u_I(t, x) dt_I. This operator defines a cochain complex consisting of the vector spaces D'_p and the linear maps L_a^p : D'_p → D'_{p+1}. We define what global solvability means for this complex and characterize for which 1-forms a the complex is globally solvable. We do the same with respect to global hypoellipticity on the first level of the complex.
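In standard notation, the objects described above can be written out as follows (a reconstruction from the abstract, not quoted from the thesis):

```latex
% Sketch of the operator and complex described in the abstract above
% (standard notation, reconstructed; not quoted from the thesis).
\[
  L_a u = d_t u + a(t)\wedge\partial_x u,
  \qquad
  u = \sum_{|I|=p} u_I(t,x)\,dt_I \in \mathcal{D}'_p,
\]
\[
  \cdots \longrightarrow \mathcal{D}'_p
  \xrightarrow{\ L_a^{p}\ } \mathcal{D}'_{p+1}
  \longrightarrow \cdots,
  \qquad
  L_a^{p+1}\circ L_a^{p} = 0.
\]
```

The composition vanishes because a(t) is closed (da = 0) and d_t commutes with ∂_x, which is exactly what makes the family (D'_p, L_a^p) a cochain complex.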
|
113 |
Non-invasive Estimation of Blood Pressure using Harmonic Components of Oscillometric Pulses. Abolarin, David, January 2016 (has links)
This research presents a pulse-by-pulse analysis of the oscillometric blood pressure waveform at the systolic, diastolic, and mean arterial pressure points.
Using a mathematical optimization technique, pulses are decomposed into component harmonics by minimizing the least-squares error. The results at the important pressure points are analyzed and compared for different subjects using different waveform extraction techniques.
Blood pressure is estimated using the harmonic parameters. The approach studies changes in the parameters as the oscillometric blood pressure recording proceeds. Eight harmonic parameters are obtained from the pulse characterization and are used to estimate systolic, mean, and diastolic arterial blood pressure. The estimates are compared with our reference values to determine which has the best agreement. The proposed method is further compared with the Maximum Amplitude Algorithm and the Pulse Morphology Algorithm.
The effect of oscillometric waveform extraction methods on the proposed method is also examined. The experiments established that the extraction technique can alter the shape of oscillometric pulses; however, the extraction methods compared did not make any significant difference to the accuracy of this technique.
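The harmonic characterization described above amounts to an ordinary least-squares fit of a truncated Fourier series to each pulse. A minimal sketch in Python (the sampling rate, pulse rate, and test signal are made up for illustration; this is not the thesis code):

```python
import numpy as np

def fit_harmonics(pulse, fs, f0, n_harmonics=4):
    """Least-squares fit of a truncated Fourier series to one pulse.

    pulse : sampled waveform of a single oscillometric pulse
    fs    : sampling rate (Hz); f0 : pulse fundamental frequency (Hz)
    Returns (dc, amplitudes, phases) of the fitted harmonics.
    """
    t = np.arange(len(pulse)) / fs
    # Design matrix: [1, cos(2*pi*k*f0*t), sin(2*pi*k*f0*t), ...]
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * f0 * t))
        cols.append(np.sin(2 * np.pi * k * f0 * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, pulse, rcond=None)
    dc = coef[0]
    a, b = coef[1::2], coef[2::2]                 # cosine / sine coefficients
    return dc, np.hypot(a, b), np.arctan2(-b, a)  # amplitude / phase form

# Synthetic pulse: 1.2 Hz fundamental plus one overtone (illustrative values)
fs, f0 = 100.0, 1.2
t = np.arange(0, 1 / f0, 1 / fs)
pulse = 1.0 + 0.8 * np.cos(2 * np.pi * f0 * t) + 0.3 * np.cos(4 * np.pi * f0 * t)
dc, amp, _ = fit_harmonics(pulse, fs, f0)
```

The amplitude and phase of each fitted harmonic are the kind of per-pulse parameters that can then feed a blood pressure estimator.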
|
114 |
Analyse spatiale et spectrale des motifs d'échantillonnage pour l'intégration Monte Carlo / Spatial and spectral analysis of sampling patterns for Monte Carlo integration. Pilleboue, Adrien, 19 November 2015 (has links)
Sampling is a key step in the rendering pipeline. It integrates the light arriving at a point of the scene in order to compute its color. Monte Carlo integration is the method generally used to approximate that integral from a finite number of samples. Reducing the bias and the variance of Monte Carlo integration has become one of the most important issues in realistic rendering. The solutions found are based on positioning the sample points intelligently, so as to make the distribution as uniform as possible while avoiding regularities. From this point of view, the 1980s were a turning point in this domain, as new stochastic methods appeared. Thanks to a better understanding of the links between Monte Carlo integration and sampling, these methods reduced the noise and the variance of rendered images, and thus improved their quality. In parallel, the computational cost of sampling methods improved considerably, yielding methods that are both fast and of good quality. However, these advances have so far been made by trial and error, and have focused on two major points: improving the uniformity of the sampling pattern and suppressing regularities. Even though theories exist that bound the integration error, they are often limited, or even inapplicable, in computer graphics. This thesis proposes to gather the tools for analyzing sampling patterns and to connect them together. These tools can characterize spatial properties, such as the distribution of distances between points, as well as spectral properties via the Fourier transform.
We then use these tools to give a simple expression for the bias and the variance of Monte Carlo integration, under prerequisites compatible with image rendering. Finally, we present a theoretical toolbox for determining the convergence speed of a sampling method from its spectral profile. This toolbox is used to classify existing sampling methods, and also to give indications about the design principles needed for new sampling algorithms.
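The central premise, that the geometry of the sampling pattern governs Monte Carlo variance, can be illustrated with a small experiment comparing i.i.d. uniform samples to a jittered (stratified) pattern. A hedged sketch with an arbitrary smooth integrand, not the thesis toolbox:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(samples, f):
    """Plain Monte Carlo estimate of the integral of f over [0, 1)."""
    return f(samples).mean()

def variance_of_estimator(sampler, f, n, trials=2000):
    """Empirical variance of the estimator over many independent patterns."""
    return np.var([mc_estimate(sampler(n), f) for _ in range(trials)])

def uniform_sampler(n):
    return rng.random(n)                     # i.i.d. uniform pattern

def jittered_sampler(n):
    # One sample per stratum [k/n, (k+1)/n): a more uniform pattern
    return (np.arange(n) + rng.random(n)) / n

f = lambda x: np.sin(np.pi * x)              # smooth integrand, integral 2/pi

v_unif = variance_of_estimator(uniform_sampler, f, n=64)
v_jit = variance_of_estimator(jittered_sampler, f, n=64)
# For a smooth integrand, the jittered (stratified) pattern has a much
# lower variance than i.i.d. uniform sampling at the same sample count.
```

The jittered pattern is uniform without being regular, which is precisely the trade-off the abstract describes; its improved convergence rate for smooth integrands is what a spectral profile of the pattern predicts.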
|
115 |
Measuring the Characteristic Sizes of Convection Structures in AGB Stars with Fourier Decomposition Analyses: the Stellar Intensity Analyzer (SIA) Pipeline. Colom i Bernadich, Miquel, January 2020 (has links)
Context. Theoretical studies predict that the length scale of convection in stellar atmospheres is proportional to the pressure scale height, which implies that giant and supergiant stars should have convection granules of sizes comparable to their radii. Numerical simulations and the observation of anisotropies on stellar discs agree well with this prediction. Aims. To measure the characteristic sizes of convection structures in models simulated with the CO5BOLD code, to examine how they vary between models, and to study their limitations due to numerical resolution. Methods. Fourier analyses are performed on frames from the models to obtain spatial spectral power distributions, which are averaged over time. The position of the main peak and the average value of the wavevector are taken as indicators of these sizes. The general shape of the intensity map of the disc in each frame is fitted and subtracted so that it does not contaminate the Fourier analysis. Results. A general relationship is found in which the convection granule size is roughly ten times the pressure scale height. The expected wavevector value of the time-averaged spectral power distributions is higher than the position of the main peak. Loose increasing trends of the characteristic sizes over the pressure scale height are found against stellar mass, radius, luminosity, temperature, and gravity, while decreasing trends are found with the radius and the model resolution. Poor resolution suppresses signal on the slope at the large-wavevector side of the main peak, and in extreme cases it creates spurious signal towards the end of the spectrum due to artifacts appearing in the frames. Conclusions. The wavevector position of the absolute maximum in the time-averaged spectral power distribution is the best measure of the most prominent sizes on the stellar surfaces.
The proportionality constant between granule size and pressure scale height is of the same order of magnitude as the one in the literature; however, the models present sizes larger than expected, likely because the prominent features do not correspond to convection granules but to larger features hovering above them. Further studies on models with higher resolution will help in drawing more conclusive results. Appendix. The SIA pipeline takes a set of time-dependent pictures of stellar disks and uses a Fourier analysis to measure the characteristic sizes of their features and other useful quantities, such as standard deviations or the spatial power distributions of features. The core of the pipeline consists of identifying the stellar disc in the frames and subtracting its signal from the spatial power distributions through a general fit of the disc intensity. To analyze a time sequence, the SIA pipeline requires at least two commands from the user. The first command orders the SIA pipeline to read the .sav IDL data structure file where the frame sequence is stored and to produce another .sav file with information on the spectral power distributions; the second command orders the reading of that file to produce two more .sav files, one containing time-averaged size measurements and their deviations, the other breaking down time-dependent information and other arrays used for the calculations. The SIA pipeline is written entirely in Interactive Data Language (IDL). Most of the procedures used here are original to the SIA pipeline, but a small handful, such as ima3_distancetransform.pro, power2d1d.pro, extremum.pro and smooth2d.pro from Bernd Freytag, and peaks.pro and compile_opt.pro among others, are external. / The report consists of two parts: 1. The main project, where we apply our pipeline and obtain scientific results. 2. The appendix, where a technical description of the pipeline is given.
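The size measurement described in Methods, a time-averaged spatial power spectrum whose peak wavevector indicates the dominant scale, can be sketched outside IDL as follows. A minimal NumPy analogue with a synthetic pattern; this is not the SIA pipeline:

```python
import numpy as np

def radial_power_spectrum(img):
    """Azimuthally averaged spatial power spectrum of a 2D frame."""
    n = img.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    ky, kx = np.indices(power.shape) - n // 2
    k = np.hypot(kx, ky).astype(int)            # integer wavevector bins
    spectrum = np.bincount(k.ravel(), weights=power.ravel())
    counts = np.bincount(k.ravel())
    return spectrum / np.maximum(counts, 1)     # mean power per |k| bin

# Synthetic "granulation": one dominant spatial wavelength of 16 px
n, wavelength = 128, 16
y, x = np.indices((n, n))
img = np.sin(2 * np.pi * x / wavelength) * np.sin(2 * np.pi * y / wavelength)

spec = radial_power_spectrum(img)
k_peak = np.argmax(spec[1:]) + 1                # skip the k = 0 (mean) bin
# For this diagonal pattern the power sits at (±8, ±8), so the peak bin is
# |k| ≈ (n / wavelength) * sqrt(2) ≈ 11; the feature size follows as n / k.
```

Subtracting the mean plays the role of the pipeline's disc-intensity fit: it removes the large-scale shape of the frame so it does not contaminate the spectrum.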
|
116 |
A Sparse Learning Approach for Linux Kernel Data Race Prediction. Ryan, Gabriel, January 2023 (has links)
Operating system kernels rely on fine-grained concurrency to achieve optimal performance on modern multi-core processors. However, heavy usage of fine-grained concurrency mechanisms makes modern operating system kernels prone to data races, which can cause severe and often elusive bugs. In this thesis, I propose a new approach to identifying data races in OS kernels, based on learning a model that predicts which memory accesses can feasibly execute concurrently with one another.
To develop an efficient learning method for memory access feasibility, I develop a novel approach based on encoding feasibility as a Boolean indicator function of system calls and ordered memory accesses. A memory access feasibility function encoded this way has a naturally sparse latent representation, owing to the sparsity of inter-thread communications and synchronization interactions, and can therefore be accurately approximated from a small number of observed concurrent execution traces.
This thesis introduces two key contributions. First, Probabilistic Lockset Analysis (PLA) is a new analysis that exploits sparsity in input dependencies, in conjunction with a conservative lockset analysis, to efficiently predict data races in the Linux kernel. Second, approximate happens-before analysis in the Fourier domain (HBFourier) generalizes the approach used by PLA to reason about inter-thread memory communications and synchronization events through sparse Fourier learning. Besides being theoretically grounded, these techniques are highly practical: they find hundreds of races in a recent Linux development kernel, an order of magnitude more than prior work, and find races with severe security impacts that had been overlooked by existing kernel testing systems for years.
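The sparsity argument above can be illustrated on a toy Boolean function: when an indicator depends on only a few input interactions, its Boolean Fourier spectrum is sparse and can be recovered from far fewer observations than exhaustive enumeration would need. A hedged sketch, not the PLA/HBFourier implementation:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# A "feasibility"-style Boolean indicator over n binary inputs that secretly
# depends on only two of them: f(x) = x1 * x3, with inputs in {-1, +1}.
# Its Boolean Fourier spectrum is 1-sparse: one coefficient, on the set {1, 3}.
n, m = 16, 300                        # 300 traces << 2**16 possible inputs
X = rng.choice([-1, 1], size=(m, n))
y = X[:, 1] * X[:, 3]

# Estimate the Fourier coefficient of every subset S with |S| <= 2:
# coeff(S) = E[f(x) * prod_{i in S} x_i], taken as an empirical mean.
subsets = [()] + [(i,) for i in range(n)] + \
          list(itertools.combinations(range(n), 2))
coeffs = {S: np.mean(y * np.prod(X[:, S], axis=1)) if S else np.mean(y)
          for S in subsets}

best = max(coeffs, key=lambda S: abs(coeffs[S]))
# Sparsity makes the hidden dependency recoverable from few observations:
# the dominant coefficient is the true interaction set (1, 3).
```

The low-degree, sparse structure is what stands in for the sparse inter-thread interactions in the thesis: noise on the zero coefficients shrinks like 1/sqrt(m), so the single true coefficient dominates long before all 2^16 inputs could be observed.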
|
117 |
Analytical Raman spectroscopy in a forensic art context: the non-destructive discrimination of genuine and fake lapis lazuli. Ali, Esam M.A.; Edwards, Howell G.M., 04 November 2013 (has links)
The differentiation between genuine and fake lapis lazuli specimens using Raman spectroscopy is assessed using laboratory and portable instrumentation operating at two longer wavelengths of excitation in the near-infrared, namely 1064 and 785 nm. In spite of the differences between the spectra excited here in the near infrared and those reported in the literature using visible excitation, it is clear that Raman spectroscopy at longer wavelengths can provide a means of differentiating between the fakes studied here and genuine lapis lazuli. The Raman spectra obtained from portable instrumentation can also achieve this result, which will be relevant for the verification of specimens which cannot be removed from collections and for the identification of genuine lapis lazuli inlays in, for example, complex jewellery and furniture. The non-destructive and non-contact character of the technique offers a special role for portable Raman spectroscopy in forensic art analysis.
|
118 |
Κατασκευή μικροϋπολογιστικού συστήματος επεξεργασίας σημάτων ομιλίας για την εκτίμηση των μηχανισμών διαμόρφωσης του ήχου στη φωνητική κοιλότητα / Construction of a microcomputer-based speech signal processing system for estimating the sound-shaping mechanisms of the vocal tract. Αγγελόπουλος, Ιωάννης, 30 April 2014 (has links)
In the context of this thesis an application was developed that estimates the first three formant frequencies (resonances of the vocal tract) during the voicing of vowels. These three frequencies provide enough information to determine the vowel being voiced. The human voice is emulated by an input signal with peaks in the anticipated frequency regions. The formant frequencies are estimated using the short-time Fourier analysis method. The application was developed in the Keil μVision programming suite, in the C programming language, for the STM32F103RB microcontroller by ST Microelectronics.
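The short-time Fourier approach described above can be sketched in a few lines: window one frame, take the magnitude spectrum, and pick the strongest spectral peaks as resonance estimates. A desktop Python illustration with made-up peak frequencies, not the C firmware:

```python
import numpy as np

def formant_peaks(frame, fs, n_peaks=3):
    """Estimate dominant resonance frequencies from one analysis frame
    by peak-picking its short-time magnitude spectrum."""
    windowed = frame * np.hanning(len(frame))
    mag = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    # Local maxima of the magnitude spectrum, strongest first
    peaks = [i for i in range(1, len(mag) - 1)
             if mag[i] > mag[i - 1] and mag[i] > mag[i + 1]]
    peaks.sort(key=lambda i: mag[i], reverse=True)
    return sorted(freqs[i] for i in peaks[:n_peaks])

# Synthetic "vowel": three sinusoids standing in for the first three formants
fs = 8000
t = np.arange(0, 0.064, 1 / fs)               # one 512-sample analysis frame
frame = (np.sin(2 * np.pi * 700 * t)
         + 0.6 * np.sin(2 * np.pi * 1200 * t)
         + 0.4 * np.sin(2 * np.pi * 2600 * t))
estimates = formant_peaks(frame, fs)          # ≈ [700, 1200, 2600] Hz
```

With a 512-point frame at 8 kHz the bin spacing is about 15.6 Hz, so the estimates land within one bin of the injected frequencies; on the microcontroller the same idea runs on fixed-point FFT output.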
|
119 |
A perturbed two-level preconditioner for the solution of three-dimensional heterogeneous Helmholtz problems with applications to geophysics / Un préconditionnement perturbé à deux niveaux pour la résolution de problèmes d'Helmholtz hétérogènes dans le cadre d'une application en géophysique. Pinel, Xavier, 18 May 2010 (has links)
The topic of this PhD thesis is the development of iterative methods for the solution of large sparse linear systems of equations, possibly with multiple right-hand sides given at once. These methods are used for a specific application in geophysics, seismic migration, related to the simulation of wave propagation in the subsurface of the Earth. Here the three-dimensional Helmholtz equation written in the frequency domain is considered. The finite-difference discretization of the Helmholtz equation with the Perfectly Matched Layer formulation produces, when high frequencies are considered, a complex linear system which is large, non-symmetric, non-Hermitian, indefinite, and sparse.
Thus we propose to study preconditioned flexible Krylov subspace methods, especially minimum residual norm methods, to solve this class of problems. As a preconditioner we consider multi-level techniques, focusing in particular on a two-level method. This two-level preconditioner has proven efficient for two-dimensional applications, and the purpose of this thesis is to extend it to the challenging three-dimensional case. This leads us to propose and analyze a perturbed two-level preconditioner for a flexible Krylov subspace method, where Krylov methods are used both as a smoother and as an approximate coarse-grid solver.
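The indefiniteness that makes these systems hard for Krylov methods is visible already in one dimension: the discrete Laplacian is positive definite, but subtracting k² pushes its lowest eigenvalues below zero once the wavenumber is large enough. A minimal sketch with an arbitrary grid size and wavenumber:

```python
import numpy as np

def helmholtz_1d(n, k, length=1.0):
    """Finite-difference matrix for -u'' - k^2 u on (0, L), Dirichlet BCs."""
    h = length / (n + 1)
    main = np.full(n, 2.0 / h**2 - k**2)
    off = np.full(n - 1, -1.0 / h**2)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

n, k = 200, 40.0
A = helmholtz_1d(n, k)
eigs = np.linalg.eigvalsh(A)          # real: this 1D operator is symmetric

# Eigenvalues of the discrete Laplacian are (4/h^2) sin^2(j*pi*h/2) > 0;
# the -k^2 shift pushes the low-frequency ones negative, so A is indefinite.
n_neg = int((eigs < 0).sum())
```

An indefinite spectrum straddling zero is exactly the regime where unpreconditioned Krylov iterations stagnate, which motivates the multi-level preconditioning studied in the thesis (the 3D discretization there is also complex and non-Hermitian because of the PML).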
|
120 |
Transmitter-receiver system for time average Fourier telescopy. Unknown Date (has links)
Time Average Fourier Telescopy (TAFT) has been proposed as a means for obtaining high-resolution, diffraction-limited images over large distances through ground-level, horizontal-path atmospheric turbulence. Image data are collected in the spatial-frequency, or Fourier, domain by means of Fourier telescopy; an inverse two-dimensional Fourier transform yields the actual image. TAFT requires active illumination of the distant object by moving interference fringe patterns. Light reflected from the object is collected by a "light-bucket" detector, and the resulting electrical signal is digitized and subjected to a series of signal processing operations, including an all-critical averaging of the amplitude and phase of a number of narrow-band signals. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
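The reconstruction step described above, an inverse two-dimensional Fourier transform applied to spatial-frequency data, can be sketched with synthetic data. The measurement model here is purely illustrative, not the TAFT signal chain:

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy "object": a bright square on a dark background.
n = 64
obj = np.zeros((n, n))
obj[24:40, 24:40] = 1.0

# Stand-in for Fourier-telescopy data: the object's spatial-frequency
# components, perturbed by a little measurement noise.
fourier_data = np.fft.fft2(obj) + 0.01 * rng.standard_normal((n, n))

# The image is recovered with an inverse 2D Fourier transform.
image = np.fft.ifft2(fourier_data).real

err = np.abs(image - obj).max()       # small: reconstruction ≈ object
```

Because the inverse FFT spreads each frequency-domain error over all n² pixels, modest noise in the Fourier measurements produces only a faint error in the image, which is why the averaging of amplitude and phase described above is the critical step.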
|