  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Construction of the global potential energy surface of the [H,S,F] system

Yuri Alexandre Aoto 26 September 2013 (has links)
This project has two goals. First, we studied the applicability of tricubic splines for constructing global potential energy surfaces. One of the difficulties this approach has to overcome is the choice of an appropriate coordinate system that minimises the influence of non-physical points. To this end, we proposed the use of the Pekeris coordinate system, never before employed for this purpose. The procedure was carried out for three well-described chemical systems, [Cl,H2], [F,H,D] and [H,O,Cl], whose potential energy surfaces and reaction properties were taken as references. Based on these models, we applied the proposed method while varying the number and arrangement of the interpolation knots, in order to verify their influence on the quality of the interpolated surfaces. The results showed that surfaces constructed by this approach reproduce the chemical dynamics calculations very well, for both quantum and classical methods, provided that the interpolation knots cover the most important regions of the potential energy surface and the lower values of the Pekeris coordinates are prioritised. The second goal was the application of this procedure to the construction of the [H,S,F] potential energy surface. With this surface, several characteristics of the system were analysed, such as the geometries of the stationary points, relative energies and vibrational frequencies. The values obtained are in agreement with the data described in the literature. The constructed surface was also used for quantum dynamics calculations on the reaction F + HS → S + FH. We observed two kinds of mechanisms, one with the formation of a long-lived intermediate and the other with direct abstraction of the hydrogen atom.
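The interpolation idea can be sketched in one dimension: sample a model potential on a knot grid and reconstruct it with a cubic spline, then check the off-knot error. This is a minimal stand-in, not the thesis' method: the Morse curve replaces ab initio energies, the knot grid and parameters are illustrative assumptions, and the 3-D tricubic/Pekeris machinery is reduced to a 1-D cut.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def morse(r, D=1.0, a=1.0, re=1.0):
    """Model potential standing in for ab initio energies (illustrative parameters)."""
    return D * (1.0 - np.exp(-a * (r - re)))**2

# Knots covering the important region of the potential, echoing the thesis'
# observation that knot placement governs the quality of the surface.
knots = np.linspace(0.5, 5.0, 91)
spline = CubicSpline(knots, morse(knots))

# Evaluate off-knot and compare with the true potential.
r_test = np.linspace(0.6, 4.9, 500)
max_err = np.max(np.abs(spline(r_test) - morse(r_test)))
```

With knots spaced 0.05 apart, the cubic-spline error (which scales as h^4) stays far below chemical accuracy for this smooth model curve; in the thesis the same convergence question is studied in 3-D with tricubic splines.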
102

Functional-coefficient regression models for time series

Michel Helcias Montoril 28 February 2013 (has links)
In this thesis, we study the fitting of functional-coefficient regression models for time series by splines, classical wavelets and warped wavelets. We consider models with both independent and correlated errors. For the three estimation approaches, we obtain rates of convergence to zero for average distances between the functions of the model and the estimators proposed in this work. In the case of the wavelet approaches (classical and warped), we also obtain asymptotic results in more specific settings, in which the functions of the model belong to Sobolev and Besov spaces. Moreover, Monte Carlo simulation studies and applications to real data sets are presented. Through these numerical studies, we compare the three proposed estimation approaches with each other and with approaches already known in the literature, and find that the proposed approaches provide competitive results when compared with methodologies already in use.
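The spline route can be illustrated on the simplest functional-coefficient model, y_t = f(u_t)·x_t + ε_t, with independent errors: expanding f in a B-spline basis turns the problem into ordinary least squares on the basis-times-covariate design matrix. Everything below (knot layout, true f, sample size, noise level) is an illustrative assumption, not the thesis' setup.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)

# Simulated functional-coefficient model y_t = f(u_t) * x_t + noise.
n = 1000
u = rng.uniform(0.0, 1.0, n)          # index variable
x = rng.normal(0.0, 1.0, n)           # covariate
f = lambda v: np.sin(2 * np.pi * v)   # true coefficient function (made up)
y = f(u) * x + 0.01 * rng.normal(size=n)

# Cubic B-spline basis on [0, 1] with clamped (repeated) boundary knots.
k = 3
t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 8), [1.0] * k]
n_basis = len(t) - k - 1

def basis_matrix(v):
    cols = []
    for j in range(n_basis):
        c = np.zeros(n_basis)
        c[j] = 1.0
        cols.append(BSpline(t, c, k)(v))
    return np.column_stack(cols)

B = basis_matrix(u)
# Multiplying each basis column by x reduces estimating f to linear regression.
coef, *_ = np.linalg.lstsq(B * x[:, None], y, rcond=None)

grid = np.linspace(0.0, 1.0, 200)
f_hat = basis_matrix(grid) @ coef
max_err = np.max(np.abs(f_hat - f(grid)))
```

The wavelet and warped-wavelet estimators of the thesis replace the B-spline columns with a different basis; the least-squares step is analogous.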
103

Spline interpolation for modelling inhomogeneities in the analytic element method: an object-oriented implementation

Mariano da Franca Alencar Neto 29 August 2008 (has links)
The analytic element method simulates groundwater flow through the superposition of conceptual solutions. In the context of the method, an inhomogeneity is a well-defined region of constant hydraulic conductivity. The difference in hydraulic conductivity between the inhomogeneity and the surrounding medium causes a discontinuity (jump) in the discharge potential. Traditionally, this jump is simulated using first- or second-degree polynomials. The present work uses quadratic spline polynomials to interpolate the jumps in the discharge potential along inhomogeneity borders. In parallel, the traditional formulation for interpolating the jumps in the discharge potential is extended to any degree. The main elements that compose the method are described and implemented. The resulting computational program (AEM) was developed integrated with an open-source geographic information system (JUMP). The program allows integration with other JAVA-based geographic information systems, remaining independent of the resident GIS. The design of the AEM/JUMP program is based on object-oriented programming and showed great affinity with the analytic element method, with a natural identification between the concept of an element (used by the method) and that of an object (used in programming). Design-pattern concepts are used with the aim of improving the readability, understandability, optimisation and modifiability of the source code beyond what object-oriented programming already provides. Conceptual problems are addressed using the proposed formulations. Quadratic spline interpolation proved to be efficient and precise: relative to the exact solutions, the average error over the study area was below 0.12%. AEM/JUMP was applied to the Lagoa do Bonfim region (RN, Brazil) in order to determine the isolines of hydraulic head. The results were compatible with a previous study, validating the method and its implementation. Geometric features of the ocean boundary and of alluvial deposits around the lagoon were then incorporated into the Lagoa do Bonfim problem, demonstrating the usefulness of the program for generating different simulation scenarios.
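The core numerical ingredient, a quadratic spline through prescribed jump values along a boundary, can be sketched directly. The boundary parameterisation, control-point count and jump values below are hypothetical; the sketch only shows the interpolation step, not the analytic-element solution it feeds.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Control points along an inhomogeneity border, parameterised by angle,
# with hypothetical jump values of the discharge potential at each point.
theta = np.linspace(0.0, 2.0 * np.pi, 9)
jump = np.cos(theta) + 0.3 * np.sin(2.0 * theta)   # made-up boundary data

# Quadratic (k = 2) spline through the jump values.
s = make_interp_spline(theta, jump, k=2)

# The spline reproduces the prescribed jumps at the control points ...
residual = np.max(np.abs(s(theta) - jump))
# ... and gives a smooth jump distribution between them.
midpoints = 0.5 * (theta[:-1] + theta[1:])
jump_mid = s(midpoints)
```

In the thesis this interpolated jump enters the superposition of analytic elements; here we only verify the interpolation property itself.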
104

Function and data approximation in the L1 norm by polynomial splines

Gajny, Laurent 15 May 2015 (has links)
Function and discrete-data approximation is fundamental in application domains such as path planning and signal processing (sensor data). In these domains, it is important to obtain curves that preserve the shape of the data. Given the results obtained for the problem of discrete-data interpolation, L1 splines appear to be a good solution. Unlike classical L2 interpolation splines, they preserve linear alignments in the data and do not introduce extraneous oscillations on data sets with abrupt changes. In this dissertation we study the problem of best L1 approximation. This study includes theoretical developments on the best L1 approximation of functions with a jump discontinuity in general function spaces called Chebyshev and weak-Chebyshev spaces; polynomial splines fit into this framework. Algorithms for approximating discrete data in the L1 norm by a sliding-window process are developed, building on existing work on smoothing and fitting splines. The methods previously proposed in the literature for these types of splines can be relatively time-consuming on large data sets; the sliding-window algorithms achieve complexity that is linear in the number of data points and can moreover be parallelised. Finally, an original approximation approach, interpolation within a prescribed tolerance delta, is developed. We propose a purely algebraic algorithm with linear complexity that can be used in real-time applications.
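The shape-preserving behaviour of L1 fitting comes from minimising the sum of absolute residuals, which can be cast as a linear program. Below is a minimal sketch for a straight-line L1 fit (the simplest spline); this is a generic formulation, not the thesis' sliding-window algorithm, and the data set is made up to show robustness to a single outlier.

```python
import numpy as np
from scipy.optimize import linprog

# Data on the line y = 2x + 1, with one gross outlier.
x = np.arange(9, dtype=float)
y = 2.0 * x + 1.0
y[4] += 25.0   # outlier

n = len(x)
# Minimise sum_i t_i  subject to  -t_i <= y_i - (a x_i + b) <= t_i.
# Decision variables: [a, b, t_1 .. t_n].
c = np.r_[0.0, 0.0, np.ones(n)]
A_ub = np.block([
    [x[:, None], np.ones((n, 1)), -np.eye(n)],    #  a x + b - t <= y
    [-x[:, None], -np.ones((n, 1)), -np.eye(n)],  # -a x - b - t <= -y
])
b_ub = np.r_[y, -y]
bounds = [(None, None), (None, None)] + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a_l1, b_l1 = res.x[0], res.x[1]
```

Because eight of the nine points are exactly collinear, the L1 optimum is the line through them (a = 2, b = 1): the outlier is absorbed in a single residual instead of tilting the fit, which is exactly the behaviour a least-squares fit lacks.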
105

A Smooth Finite Element Method Via Triangular B-Splines

Khatri, Vikash 02 1900 (has links) (PDF)
A triangular B-spline (DMS-spline)-based finite element method (TBS-FEM) is proposed, along with possible enrichment through discontinuous Galerkin, continuous-discontinuous Galerkin finite element (CDGFE) and stabilization techniques. The developed schemes are also numerically explored, to a limited extent, for weak discretizations of a few second-order partial differential equations (PDEs) of interest in solid mechanics. The functional approximation employed here has both affine-invariance and convex-hull properties. In contrast to the Lagrangian basis functions used in the conventional finite element method, basis functions derived from n-th order triangular B-splines possess global continuity of order n − 1 (n ≥ 1), which is usually not possible with standard finite element formulations. Thus, though constructed within a mesh-based framework, the basis functions are globally smooth (even across element boundaries). Since these globally smooth basis functions are used in modeling the response, one can expect a reduction in the number of elements in the discretization, which in turn reduces the number of degrees of freedom and consequently the computational cost. In the present work, which aims at laying out the basic foundation of the method, we consider only linear triangular B-splines. The resulting formulation thus provides only continuous approximation functions for the targeted variables. This leads to a straightforward implementation without a digression into the issue of knot selection, whose resolution would be required for implementing the method with higher-order triangular B-splines. Since we consider only n = 1, the formulation also makes use of the discontinuous Galerkin method, which weakly enforces the continuity of first derivatives through stabilizing terms on the interior boundaries. Stabilization enhances numerical stability without sacrificing accuracy by suitably changing the weak formulation: weighted residual terms involving a mesh-dependent stabilization parameter are added to the variational equation. The advantage of the resulting scheme over a more traditional mixed approach or a least-squares finite element method is that the introduction of additional unknowns and the related difficulties are avoided. For assessing the numerical performance of the method, we consider Navier's equations of elasticity, especially the case of nearly incompressible elasticity (i.e. approaching the incompressible limit, where volumetric locking occurs). Limited comparisons with results from finite element techniques based on constant-strain triangles help bring out the advantages of the proposed scheme.
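For the n = 1 case used in the thesis, the triangular B-spline basis reduces to the familiar barycentric hat functions, and the affine-invariance and convex-hull claims correspond to two checkable properties: the basis forms a partition of unity with nonnegative weights inside the triangle, and it reproduces affine functions exactly. A minimal sketch (triangle and test point chosen arbitrarily):

```python
import numpy as np

def barycentric(tri, p):
    """Barycentric coordinates of point p in triangle tri (3x2 vertex array)."""
    a, b, c = tri
    T = np.column_stack([b - a, c - a])   # 2x2 edge matrix
    l1, l2 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l1 - l2, l1, l2])

tri = np.array([[0.0, 0.0], [2.0, 0.0], [0.5, 1.5]])
p = np.array([0.8, 0.4])                  # a point inside the triangle
lam = barycentric(tri, p)

# Partition of unity with nonnegative weights (convex-hull property) ...
pou = lam.sum()
# ... and linear reproduction: interpolating nodal values of f(x, y) = 3x - y + 2
# with the hat functions recovers f exactly (affine invariance of the basis).
f = lambda q: 3.0 * q[0] - q[1] + 2.0
interp = lam @ np.array([f(v) for v in tri])
```

Higher-order DMS-splines keep these properties while raising the global continuity; this sketch covers only the linear case the thesis implements.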
106

Reconstruction in dynamic tomography by an inverse approach without motion compensation

Momey, Fabien 20 June 2013 (has links)
Computerized tomography (CT) aims at retrieving 3-D information about an object from a set of projections acquired at different angles around it. One of its most common applications, which forms the framework of this thesis, is X-ray CT medical imaging. Reconstruction can be severely impaired by the patient's respiratory motion and cardiac beating, so the imaged subject must instead be reconstructed as a spatio-temporal sequence describing its anatomical evolution over time: this is dynamic CT. Designing a reconstruction method for this problem is a major challenge in radiotherapy, where precise localization of the tumor over time is a prerequisite for irradiating cancer cells while preserving the surrounding healthy tissues. Some state-of-the-art methods increase the number of acquired projections, allowing independent reconstructions of several phases of the time-sampled sequence. Other methods compensate for motion directly in the reconstruction, modeling it as a deformation field estimated beforehand from a previous acquisition data set. Our work takes a different path: based on inverse problems theory, it performs dynamic reconstruction without any additional data and without explicit estimation of the motion, which would itself consume extra information. The dynamic sequence is reconstructed from a single projection data set, assuming only the continuity and periodicity of the motion. The inverse problem is then treated rigorously as the minimization of a data-fidelity term combined with a regularization. One of the main contributions of this thesis, typical of the dynamic problem, is a reconstruction method that extracts information optimally from very sparse data, using Total Variation (TV) as a very efficient regularization term. We also develop a new, rigorously defined and computationally efficient tomographic projector based on separable B-spline functions, which pushes back the reconstruction limit imposed by data sparsity. These developments are then inserted into a coherent dynamic reconstruction scheme applying an efficient spatio-temporal TV regularization. Our method thus optimally exploits only the current data; moreover, its implementation is remarkably simple. We first demonstrate the strength of the approach on 2-D+t reconstructions from numerically simulated dynamic data. Its practical feasibility is then established on 2-D and 3-D+t reconstructions from real data acquired on a mechanical phantom and on a patient.
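The "data-fidelity plus TV regularization" structure can be shown on a toy 1-D denoising problem. This is a deliberately simplified stand-in: the forward operator is the identity (in the thesis it is the B-spline tomographic projector), the TV term is smoothed so plain gradient descent applies, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Piecewise-constant ground truth, observed with noise. (In tomography the
# data term would involve the projection operator instead of the identity.)
x_true = np.r_[np.zeros(40), np.ones(40), 0.4 * np.ones(40)]
b = x_true + 0.2 * rng.normal(size=x_true.size)

lam, eps = 0.5, 1e-2   # regularization weight and TV smoothing parameter

def objective(x):
    d = np.diff(x)
    return 0.5 * np.sum((x - b)**2) + lam * np.sum(np.sqrt(d**2 + eps))

def gradient(x):
    d = np.diff(x)
    g_tv = d / np.sqrt(d**2 + eps)
    g = np.zeros_like(x)       # adjoint of the finite-difference operator
    g[:-1] -= g_tv
    g[1:] += g_tv
    return (x - b) + lam * g

# Plain gradient descent with a step below the inverse Lipschitz constant.
x = b.copy()
for _ in range(500):
    x -= 0.02 * gradient(x)
```

TV regularization favours piecewise-constant solutions, which is why it copes well with the sparse-data regime the thesis targets; production solvers use proximal methods rather than this smoothed descent.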
107

An osteometric evaluation of age and sex differences in the long bones of South African children from the Western Cape

Stull, Kyra Elizabeth January 2013 (has links)
The main goal of a forensic anthropological analysis of unidentified human remains is to establish an accurate biological profile. The largest obstacle to the creation or validation of techniques specific to subadults is the lack of large, modern samples. Techniques created for subadults were mainly derived from antiquated North American or European samples and are thus inapplicable to a modern South African population, as they lack diversity and ignore the secular trends seen in modern children. This research provides accurate and reliable methods to estimate the age and sex of South African subadults aged birth to 12 years from long bone lengths and breadths, as no appropriate techniques exist. Standard postcraniometric variables (n = 18) were collected from six long bones on 1380 (males = 804, females = 506) Lodox Statscan-generated radiographic images housed at the Forensic Pathology Service, Salt River and the Red Cross War Memorial Children's Hospital in Cape Town, South Africa. Measurement definitions were derived from and/or follow studies in fetal and subadult osteology and longitudinal growth studies. The radiographic images were generated between 2007 and 2012; the majority of the children (70%) were therefore born after 2000 and reflect the modern population. Because basis splines and multivariate adaptive regression splines (MARS) are nonparametric, the 95% prediction intervals associated with each age-at-death model were calculated with cross-validation. Numerous classification methods were employed, namely linear, quadratic and flexible discriminant analysis (FDA), logistic regression, naïve Bayes, and random forests, to identify the method that consistently yielded the lowest error rates. Because some of the multivariate subsets had small sample sizes, the classification accuracies were bootstrapped to validate the results. Both univariate and multivariate models were employed in the age and sex estimation analyses. Standard errors for the age estimation models were smaller for most of the multivariate models, with the exception of the univariate humerus, femur and tibia diaphyseal lengths. Univariate models provide narrower age estimates at the younger ages, but the multivariate models provide narrower age estimates at the older ages. Diaphyseal lengths did not demonstrate significant sex differences at any age, but diaphyseal breadths demonstrated significant sex differences throughout the majority of the ages. Classification methods utilizing multivariate subsets achieved the highest accuracies (81% to 90%), which offers practical applicability in forensic anthropology. Whereas logistic regression yielded the highest classification accuracies for univariate models, FDA yielded the highest classification accuracies for multivariate models. This study is the first to successfully estimate subadult age and sex using an extensive number of measurements, univariate and multivariate models, and robust statistical analyses. The success of the current study is directly related to the large, modern sample, which ultimately captured a wider range of human variation than previously collected for subadult diaphyseal dimensions. / Thesis (PhD)--University of Pretoria, 2013.
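The linear discriminant step underlying several of the classifiers above can be sketched on synthetic data. This is not the thesis' data or model selection pipeline: the two Gaussian "sex groups", their separation and the three measurement dimensions are invented purely to show the mechanics of Fisher's two-group LDA.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for two-group osteometric measurements: two Gaussian
# classes in 3 dimensions, separated along the first axis.
n = 200
group1 = rng.normal([4.0, 0.0, 0.0], 1.0, size=(n, 3))
group0 = rng.normal([0.0, 0.0, 0.0], 1.0, size=(n, 3))
X = np.vstack([group1, group0])
labels = np.r_[np.ones(n), np.zeros(n)]

# Fisher LDA: pooled covariance, discriminant direction, midpoint threshold.
mu1, mu0 = group1.mean(axis=0), group0.mean(axis=0)
S = 0.5 * (np.cov(group1.T) + np.cov(group0.T))   # pooled covariance
w = np.linalg.solve(S, mu1 - mu0)                 # discriminant direction
threshold = w @ (mu1 + mu0) / 2.0

pred = (X @ w > threshold).astype(float)
accuracy = np.mean(pred == labels)
```

With well-separated groups the resubstitution accuracy is high; the thesis instead bootstraps accuracies and compares six classifiers, of which LDA is only one.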
108

Correspondence between Gaussian process regression and interpolation splines under linear inequality constraints: theory and applications

Maatouk, Hassan 01 October 2015 (has links)
This thesis addresses the problem of interpolating a real-valued function of one or more variables when the function is known to satisfy properties such as positivity, monotonicity or convexity. Two interpolation methods are studied. The first is deterministic and leads to an optimal interpolation problem under linear inequality constraints in a Reproducing Kernel Hilbert Space (RKHS), solved by convex optimization. The second is a Bayesian approach based on Gaussian Process Regression (GPR), or Kriging, which estimates the function under the inequality constraints and also yields prediction intervals around the estimate. Using a finite-dimensional linear functional decomposition, we propose to approximate the original Gaussian process by a finite-dimensional Gaussian process whose conditional simulations satisfy all the inequality constraints. As a consequence, GPR reduces to the simulation of a Gaussian vector truncated to a convex set. The mode, or Maximum A Posteriori, is taken as the Bayesian estimator, and prediction intervals are quantified by simulation. The asymptotic analysis establishes the convergence of the method and the correspondence between the deterministic and probabilistic approaches; this is the main theoretical result of the thesis. It can be seen as a generalization of the correspondence established by [Kimeldorf and Wahba, 1971] between Bayesian estimation on stochastic processes and interpolation splines. Finally, a real application in insurance and finance (actuarial science) is developed, estimating a term-structure (discount) curve and default probabilities.
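The finite-dimensional idea can be sketched in its simplest form: represent the process by its values at a few knots (piecewise-linear in between), so that "the sample path is increasing" reduces to "the knot values increase". The kernel, mean function and knot count below are illustrative assumptions, and simple rejection sampling stands in for the proper truncated-Gaussian simulation used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Knots of the finite-dimensional approximation; with piecewise-linear
# interpolation, monotonicity of the path = monotonicity of the knot values.
knots = np.linspace(0.0, 1.0, 5)
mean = 2.0 * knots                       # increasing prior mean (assumption)
K = 0.05 * np.exp(-0.5 * (knots[:, None] - knots[None, :])**2 / 0.3**2)

# Rejection sampling: draw finite-dimensional GP samples, keep a monotone one.
# (A production implementation would sample the truncated Gaussian directly.)
monotone = None
for _ in range(1000):
    sample = rng.multivariate_normal(mean, K)
    if np.all(np.diff(sample) >= 0.0):
        monotone = sample
        break
```

The accepted samples are draws from the Gaussian vector truncated to the monotone cone, which is exactly the convex set the thesis' GPR formulation simulates from.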
109

Periodic and Non-Periodic Filter Structures in Lasers

Enge, Leo January 2020 (has links)
Communication using fiber optics is an integral part of modern societies, and one of the most important components involved is the grating filter of a laser. In this report we introduce both the periodic and the non-periodic grating filter and discuss how resonance can arise in these structures. We then provide an exact method for calculating the spectrum of these grating filters and study three different methods for calculating it approximately. The first is the Fourier approximation, which is very simple; for the studied filters, the overall shape of its results is correct, even though the details are not. The second method consists of calculating the spectrum exactly at some points and then interpolating with splines; this method gives satisfactory results for the types of gratings analysed. Finally, a perturbation method is provided for the periodic grating filter, together with an outline of how it can be extended to the non-periodic grating filter. For the studied filters the results of this method are very promising, and it may also give a deeper understanding of how a filter works. We therefore conclude that it would be of interest to study the perturbation method further, while all of the studied methods can be useful for computing the spectrum, depending on the required precision.
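The "exact at a few points, then spline" strategy can be sketched on a related periodic structure: a quarter-wave Bragg mirror, whose reflectance spectrum is computed exactly with the standard thin-film characteristic-matrix method and then interpolated with a cubic spline. The stack parameters are invented, and this thin-film mirror is only a stand-in for the laser grating filter of the thesis.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def reflectance(wavelength, n_h=2.3, n_l=1.45, pairs=6, lam0=1000.0,
                n_in=1.0, n_sub=1.5):
    """Normal-incidence reflectance of a quarter-wave stack (characteristic matrices)."""
    M = np.eye(2, dtype=complex)
    for _ in range(pairs):
        for n in (n_h, n_l):
            d = lam0 / (4.0 * n)                 # quarter-wave thickness at lam0
            delta = 2.0 * np.pi * n * d / wavelength
            layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ layer
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return float(np.abs(r)**2)

# "Exact" spectrum at a coarse set of wavelengths, then spline interpolation.
coarse = np.linspace(800.0, 1200.0, 41)
R_coarse = np.array([reflectance(w) for w in coarse])
R_spline = CubicSpline(coarse, R_coarse)

R_peak = reflectance(1000.0)   # high reflectance at the design wavelength
```

As in the thesis, the trade-off is between the cost of the exact evaluations and the sampling density the spline needs to resolve spectral detail (the side-lobe ripple, in particular, sets the required density).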
110

Empirical Bayesian Smoothing Splines for Signals with Correlated Errors: Methods and Applications

Rosales Marticorena, Luis Francisco 22 June 2016 (has links)
No description available.
