71

Inverse source modeling of roll-induced magnetic signature

Thermaenius, Erik January 2022 (has links)
Vessels constructed from electrically conductive materials give rise to frequency-dependent induced magnetic fields when waves cause them to roll in the Earth's magnetic field. These fields, typically referred to as roll-induced magnetic vortex fields, are a component of the ship's overall signature, where signature refers to measurable quantities that can reveal or identify objects. It is crucial for military platforms to keep the signature low and thereby increase their operational possibilities. For magnetic signatures, this is done through strategic design and construction of the platform or by using magnetic silencing systems. The signature is then decreased to minimize the risk of detection by naval mines and marine detection systems. This report covers initial research on an inverse source model for roll-induced magnetic fields. By limiting the analysis to two basic objects and applying a time-variant magnetic field to them, we induce a magnetic field which we then model. The inverse modeling uses magnetic dipoles as sources, placed around the area of the object. The parameters of the model are then found by applying a least-squares algorithm coupled with Tikhonov regularization. The focus of this report is the configuration of this setup in terms of measurements and sources, as well as finding a proper regularization parameter. Since the applied magnetic field depends on the roll frequency, the inverse model likewise depends on a frequency parameter in addition to the geometry and material of the object. The objects studied here have two simple geometries: a rectangular block and a hollow cylinder. Both are constructed from an aluminum alloy with well-known material parameters. Measurement data is gathered using a numerical solver that applies the finite element method to the governing partial differential equations. The numerical measurement data is also compared to physical measurements, gathered by placing the objects in a Helmholtz cage that applies a homogeneous time-variant magnetic field to them. The project was carried out at the Swedish Defence Research Agency (FOI) at the department of underwater research.
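For a fixed dipole layout and roll frequency, the least-squares step described in this abstract reduces to a linear Tikhonov problem. Below is a minimal numpy sketch of that step under assumptions of this note, not the thesis's actual configuration: the lead-field matrix A (mapping dipole moments to sensor readings at one frequency), the measured phasors b, and the regularization parameter lam are all placeholders.

```python
import numpy as np

def tikhonov_dipole_fit(A, b, lam):
    """Solve min ||A m - b||^2 + lam^2 ||m||^2 for the dipole moments m.

    A   : (n_sensors, 3*n_dipoles) complex lead-field matrix mapping dipole
          moment components to field measurements at one roll frequency.
    b   : (n_sensors,) measured complex field phasors.
    lam : Tikhonov regularization parameter (placeholder value).
    """
    n = A.shape[1]
    AH = A.conj().T
    # Regularized normal equations; complex-valued because the roll-induced
    # fields are treated as phasors at the roll frequency.
    return np.linalg.solve(AH @ A + lam**2 * np.eye(n), AH @ b)
```

In practice lam would be swept over a range and selected by a criterion such as the L-curve, which is exactly the configuration question the report focuses on.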
72

Array-Based Characterization of Military Jet Aircraft Noise

Krueger, David William 20 July 2012 (has links) (PDF)
Since the 1950s the jet aeroacoustics community has been involved in predicting and measuring the noise distribution in jets. In this work, cylindrical and planar Fourier near-field acoustical holography (NAH) are used to investigate radiation from a full-scale, installed jet engine. Practical problems involving the measurement aperture and the highly directional nature of the source are addressed. Insights from numerical simulations reveal usable reconstruction regions. A comparison of cylindrical and planar NAH for the respective measurement apertures shows that cylindrical NAH outperforms planar NAH on reconstructions both toward and away from the source.
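As a rough illustration of the planar variant, here is a sketch of Fourier-based back-propagation with a k-space filter. The e^{+ikz} sign convention, the exponential filter shape, and the knee parameter alpha are assumptions of this sketch, not details taken from the thesis.

```python
import numpy as np

def planar_nah_backpropagate(p_meas, dx, dy, k, dz, alpha=0.1):
    """Back-propagate a measured pressure hologram p_meas(x, y) by a
    distance dz toward the source plane using planar Fourier NAH.

    p_meas : 2-D complex pressure array on the measurement plane.
    dx, dy : grid spacings [m];  k : acoustic wavenumber [rad/m].
    dz     : positive distance from hologram plane to reconstruction plane.
    alpha  : knee width of the k-space regularization filter (assumption).
    """
    ny, nx = p_meas.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.emath.sqrt(k**2 - KX**2 - KY**2)  # imaginary for evanescent waves
    P = np.fft.fft2(p_meas)
    G = np.exp(-1j * kz * dz)                 # inverse propagator: amplifies
                                              # the evanescent spectrum
    kr = np.sqrt(KX**2 + KY**2)
    # Low-pass k-space filter to stabilize the amplified evanescent part
    W = 1.0 / (1.0 + np.exp((kr - k) / (alpha * k)))
    return np.fft.ifft2(P * G * W)
```

The filter is what makes the inverse propagation well behaved; its cutoff, together with the finite aperture, determines the usable reconstruction region the abstract refers to.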
73

Contribution to the mathematical analysis and to the numerical solution of an inverse elasto-acoustic scattering problem

Estecahandy, Elodie 19 September 2013 (has links)
The determination of the shape of an elastic obstacle immersed in water from measurements of the scattered field is an important problem in many technologies such as sonar, geophysical exploration, and medical imaging. This inverse obstacle problem (IOP) is very difficult to solve, especially from a numerical viewpoint, because of its nonlinear and ill-posed character. Moreover, its investigation requires an understanding of the theory of the associated direct scattering problem (DP), and mastery of the corresponding numerical solution methods. The work accomplished here pertains to the mathematical and numerical analysis of the elasto-acoustic DP and of the IOP. More specifically, we have developed an efficient numerical simulation code for wave propagation in this type of media, based on a discontinuous Galerkin (DG)-type method using higher-order finite elements and curved edges at the interface to better represent the fluid-structure interaction, and we apply it to the reconstruction of objects with the implementation of a regularized Newton method.
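In outline, a regularized Newton method of the kind mentioned above is a Gauss-Newton iteration with a Tikhonov-damped step. The sketch below shows only that generic outline; the forward operator F, its Jacobian (the shape derivative), and the weight alpha are placeholders, since the actual elasto-acoustic operators come from the DG solver described in the abstract.

```python
import numpy as np

def regularized_newton(F, jacobian, q0, u_meas, alpha, n_iter=20):
    """Tikhonov-regularized (Gauss-)Newton iteration for a nonlinear
    inverse problem F(q) = u_meas, e.g. shape reconstruction.

    F        : callable, forward operator: shape parameters -> predicted data.
    jacobian : callable, q -> Jacobian matrix F'(q) (shape derivative).
    q0       : initial shape parameters;  alpha : regularization weight.
    """
    q = np.asarray(q0, dtype=float)
    for _ in range(n_iter):
        r = u_meas - F(q)                  # data residual
        J = jacobian(q)
        n = J.shape[1]
        # Tikhonov-damped normal equations for the Newton update
        dq = np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ r)
        q = q + dq
    return q
```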
74

Studies on two specific inverse problems from imaging and finance

Rückert, Nadja 20 July 2012 (has links) (PDF)
This thesis deals with regularization parameter selection methods in the context of Tikhonov-type regularization with Poisson-distributed data, in particular the reconstruction of images, as well as with the identification of the volatility surface from observed option prices. In Part I we examine the choice of the regularization parameter when reconstructing an image disturbed by Poisson noise with Tikhonov-type regularization. This type of regularization is a generalization of classical Tikhonov regularization to the Banach space setting and is often called variational regularization. After a general consideration of Tikhonov-type regularization for data corrupted by Poisson noise, we examine the methods for choosing the regularization parameter numerically on the basis of two test images and real PET data. In Part II we consider the estimation of the volatility function from observed call option prices with the explicit formula derived by Dupire from the Black-Scholes partial differential equation. The option prices are only available as discrete noisy observations, so the main difficulty is the ill-posedness of the numerical differentiation. Finite difference schemes, as regularization by discretization of the inverse and ill-posed problem, do not overcome these difficulties when used to evaluate the partial derivatives. We therefore construct an alternative algorithm based on the weak formulation of the dual Black-Scholes partial differential equation and evaluate the performance of the finite difference schemes and the new algorithm for synthetic and real option prices.
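For concreteness, under zero rates and dividends Dupire's explicit formula reads sigma^2(K, T) = 2 (dC/dT) / (K^2 d^2C/dK^2). The finite-difference evaluation that Part II takes as its point of comparison can be sketched as follows; the grid layout, the zero-rate assumption, and the use of np.gradient are assumptions of this sketch.

```python
import numpy as np

def dupire_local_vol(C, K, T):
    """Evaluate Dupire's formula sigma^2(K,T) = 2 C_T / (K^2 C_KK) by
    central finite differences on a grid of call prices.

    C : 2-D array of call prices, C[i, j] = C(K[i], T[j]).
    K, T : 1-D strike and maturity grids (zero rates/dividends assumed).
    Differentiating noisy prices is ill-posed: the second strike
    derivative amplifies noise, which motivates the regularized
    alternative developed in the thesis.
    """
    C_T = np.gradient(C, T, axis=1)        # dC/dT
    C_K = np.gradient(C, K, axis=0)
    C_KK = np.gradient(C_K, K, axis=0)     # d^2C/dK^2
    with np.errstate(divide="ignore", invalid="ignore"):
        local_var = 2.0 * C_T / (K[:, None] ** 2 * C_KK)
    # Negative or undefined values signal where the raw scheme breaks down
    return np.sqrt(np.clip(local_var, 0.0, None))
```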
76

Experimental identification of vibration sources by solving the inverse problem modeled with a local finite element operator

Renzi, Cédric 16 December 2011 (has links) (PDF)
The purpose of this thesis is to extend the Windowed and Filtered Inverse Resolution method (RIFF, from the French "Résolution Inverse Fenêtrée Filtrée") to complex structures. The main idea rests on a local, free finite element model of part of the structure under study. The method was first developed for beams: vibration measurements are injected into the finite element model of the analyzed portion of the beam, and rotations are estimated from additional displacement measurements and the shape functions on the element support. Since the method is sensitive to measurement uncertainties, a regularization had to be developed. It relies on a double inversion of the operator, with Tikhonov regularization applied in the second inversion; the regularization parameter is optimized using the L-curve principle. Because of the smoothing induced by the regularization, moments cannot be reconstructed directly and instead appear as force "doublets". This led us to solve the problem under the assumption that only forces act on the beam. Finally, the effects of truncating the domain were studied in order to eliminate the coupling forces appearing at the boundaries of the analyzed zone. Plates were considered next, progressively increasing the complexity of the models. The finite element approach made it possible to incorporate dynamic condensation and Craig-Bampton reduction techniques into the method. Because the number of degrees of freedom is too high to estimate rotations from additional displacement measurements, dynamic condensation is used to remove them from the theoretical model. Moreover, since the regularization degrades spatial resolution through its smoothing effect, a spatial deconvolution procedure based on the Richardson-Lucy algorithm was added as a post-processing step. Finally, the method was applied to defect detection and to the identification of the forces applied by an oil pump on an industrial test bench. The work thus relied on numerical developments, and the method was validated experimentally both in the laboratory and in an industrial context. The results of the thesis provide a predictive tool for the forces injected by vibration sources attached to a structure, based on a local finite element model and vibration measurements, all in the harmonic regime.
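The Richardson-Lucy post-processing step mentioned above is a standard multiplicative deconvolution iteration. A minimal sketch, assuming a nonnegative force map b and a normalized blurring matrix A standing in for the smoothing kernel induced by the regularized inversion (both placeholders):

```python
import numpy as np

def richardson_lucy(b, A, n_iter=50):
    """Richardson-Lucy iteration for nonnegative deconvolution b ~ A x,
    used here to sharpen a regularization-smoothed force distribution.

    b : observed (smoothed) force map, nonnegative, flattened.
    A : blurring matrix (e.g. the smoothing kernel of the Tikhonov
        inversion), assumed to have columns summing to ~1.
    """
    x = np.full(A.shape[1], b.mean())          # flat nonnegative start
    for _ in range(n_iter):
        ratio = b / np.clip(A @ x, 1e-12, None)
        x = x * (A.T @ ratio)                  # multiplicative RL update
    return x
```

The multiplicative update preserves nonnegativity, which is what lets the deconvolution recover localized forces that the Tikhonov smoothing had spread out.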
77

Processing of electrical impedance tomography signals for the study of brain activity

Fernández Corazza, Mariano January 2015 (has links)
Electrical impedance tomography (EIT) is a technique to estimate the electrical conductivity of an object. It consists in applying an electric current on its boundary and measuring the resulting electric potential with a sensor array. In clinical practice, it is considered a potential diagnostic tool characterized by its portability and relatively low cost. While still in a development stage, it is starting to be used in health centers to characterize the cardio-respiratory system, and there is increasing interest in EIT within neuroscience. For example, EIT can be used to estimate the electrical conductivity of the main tissues of the head as a set of a relatively small number of parameters, which is known as bounded or parametric EIT. This is useful for several medical imaging techniques that require realistic and accurate virtual models of the head. EIT can also be used to generate a map of the internal distribution of the electrical conductivity, known as the reconstruction problem. Tracking conductivity changes inside the head is of great interest as they may be related to neuronal activity, epileptic foci, acute stroke, or tumors. Both modalities of EIT require the solution of the EIT forward problem (FP), i.e., the computation of the electric potential distribution due to current injection on the scalp, assuming that the electrical conductivity is known. Transcranial direct current stimulation (tDCS) is another technique which is physically very similar to EIT. It consists in injecting a small electric current in a convenient way such that it stimulates specific neuronal populations, increasing or decreasing their firing rate. It is considered an alternative to psychoactive drugs in the treatment of brain disorders such as epilepsy or depression. This thesis describes the development and analysis of new methods for the EIT FP, parametric EIT, reconstruction in EIT, and tDCS, focusing primarily (although not exclusively) on applications to the human head. We first describe analytical and numerical approaches to the EIT FP, where the numerical approach is based on the finite element method. Then, we develop a new procedure to solve the EIT FP based on the electroencephalography (EEG) FP formulation, which results in computational advantages. We propose a new method to determine the waveform of the electric current source such that the neuronal activity of the brain can be neglected with the smallest possible number of time samples. In parametric EIT, we use the Cramér-Rao bound (CRB) to determine convenient electrode pairs for the current injection and theoretical limits in the estimation of the electrical conductivity of the main tissues of the head, which we model as isotropic and anisotropic. We propose the maximum likelihood estimator (MLE) to estimate these conductivities and test it with simulated and real EIT measurements, showing that the MLE performs close to the CRB. We adapt the sLORETA algorithm, widely used for the source localization problem in EEG, to the reconstruction problem in EIT, and slightly modify it to include the Laplace smoothing prior in the solution. Likewise, we introduce the use of adaptive spatial filters for localizing conductivity changes and estimating their time courses from EIT measurements. The results show improvements in bias and resolution over typical EIT reconstruction algorithms, which may benefit the early detection of acute stroke and the indirect localization of neuronal activity. In tDCS, we develop a new algorithm to determine convenient current injection patterns, based on the reciprocity principle and considering hardware and safety constraints. Our simulation results show that this method performs comparably to traditional optimization algorithms whose solutions would require more complex and costly equipment. The methods developed and studied in this thesis are compared with pre-existing methods and validated through numerical simulations, measurements on experimental phantoms and, in accordance with experimental possibilities and bioethical principles, measurements on humans.
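Under a linearized forward model with i.i.d. Gaussian noise, the CRB used above to rank injection electrode pairs takes a simple textbook form. The sketch below shows only that computation; the Jacobian J and the noise variance are placeholders for quantities the thesis obtains from its finite element head models.

```python
import numpy as np

def conductivity_crb(J, noise_var):
    """Cramer-Rao lower bound for a small set of tissue conductivities
    estimated from EIT measurements with i.i.d. Gaussian noise.

    J : (n_measurements, n_parameters) Jacobian of the EIT forward model
        with respect to the tissue conductivities (e.g. scalp, skull).
    noise_var : measurement noise variance.
    Returns the CRB covariance matrix; its diagonal lower-bounds the
    variance of any unbiased estimator, so electrode pairs can be ranked
    by the bound they yield.
    """
    fisher = (J.T @ J) / noise_var             # Fisher information matrix
    return np.linalg.inv(fisher)
```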
78

Magnetic field effects in chemical systems

Rodgers, Christopher T. January 2007 (has links)
Magnetic fields influence the rate and/or yield of chemical reactions that proceed via spin-correlated radical pair intermediates. The field of spin chemistry centres around the study of such magnetic field effects (MFEs). This thesis is particularly concerned with the effects of weak magnetic fields, B₀ ~ 1 mT, relevant to the ongoing debates on the mechanism by which animals sense the geomagnetic field and on the putative health effects of environmental electromagnetic fields. Relatively few previous studies have dealt with such weak magnetic fields. This thesis presents several new theoretical tools and applies them to interpret experimental measurements. Chapter 1 surveys the development and theory of spin chemistry. Chapter 2 introduces the use of Tikhonov and Maximum Entropy regularisation methods as a new means of analysing MARY field effect data; these are applied to recover details of the diffusive motion of reacting pyrene and N,N-dimethylaniline radicals. Chapter 3 gives a fresh derivation and appraisal of an approximate, semiclassical approach to MFEs, with Monte Carlo calculations allowing the elucidation of several "rules of thumb" for interpreting MFE data. Chapter 4 discusses recent optically detected zero-field EPR measurements, adapting the gamma-COMPUTE algorithm from solid-state NMR for their interpretation. Chapter 5 explores the role of RF polarisation in producing MFEs and analyses the breakdown in weak fields of the familiar rotating-frame approximation. Chapter 6 reviews current knowledge and landmark experiments in the area of animal magnetoreception, giving particular attention to the origins of the sensitivity of European robins, Erithacus rubecula, to the Earth's magnetic field. In Chapter 7, Schulten and Ritz's hypothesis that avian magnetoreception is founded on a radical pair mechanism (RPM) reaction is appraised through calculations in model systems. Chapter 8 introduces quantitative methods of analysing anisotropic magnetic field effects using spherical harmonics. Chapter 9 considers recent observations that European robins may sometimes be disoriented by minuscule RF fields; these are shown to be consistent with magnetoreception via a radical pair with no (effective) magnetic nuclei in one of the radicals.
79

Automated Selection of Hyper-Parameters in Diffuse Optical Tomographic Image Reconstruction

Jayaprakash, * January 2013 (has links) (PDF)
Diffuse optical tomography is a promising imaging modality that provides functional information about soft biological tissues, with prime applications including breast and brain tissue imaging in vivo. This modality uses near-infrared light (600-900 nm) as the probing medium, giving it the advantage of being non-ionizing. The image reconstruction problem in diffuse optical tomography is typically posed as a least-squares problem that minimizes the difference between experimental and modeled data with respect to the optical properties. This problem is non-linear and ill-posed, due to multiple scattering of the near-infrared light in the biological tissues, leading to infinitely many possible solutions. Traditional methods employ a regularization term to constrain the solution space and stabilize the solution, with Tikhonov-type regularization being the most popular. The choice of this regularization parameter, also known as the hyper-parameter, dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. In this thesis, a simple back-projection type image reconstruction algorithm is taken up first, as such algorithms are known to provide computationally efficient solutions compared to regularized ones. In these algorithms, the hyper-parameter becomes equivalent to a filter factor, whose choice typically depends on the sampling interval used for acquiring data in each projection and on the angle of projection. Determining these parameters for diffuse optical tomography is not straightforward and requires advanced computational models. In this thesis, a computationally efficient simplex-method-based optimization scheme for automatically finding this filter factor is proposed, and its performance is evaluated on numerical and experimental phantom data. As back-projection type algorithms are approximations to traditional methods, the absolute quantitative accuracy of the reconstructed optical properties is poor. In scenarios like dynamic imaging, where the emphasis is on recovering relative differences in the optical properties, these algorithms are effective in comparison to traditional methods, with the added advantage of being highly computationally efficient. In the second part of this thesis, the hyper-parameter choice for traditional Tikhonov-type regularization is addressed with the help of the Least-Squares QR decomposition (LSQR) method. Established techniques that enable the automated choice of the hyper-parameter include Generalized Cross-Validation (GCV) and the regularized Minimal Residual Method (MRM), both of which carry a high computational overhead, making them prohibitive for real-time use. The proposed LSQR algorithm uses bidiagonalization of the system matrix to reduce the computational cost. The proposed LSQR-based algorithm for automated hyper-parameter choice is compared with the MRM method and shown to be the computationally optimal technique on numerical and experimental phantom cases.
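The damped least-squares problem that LSQR solves is min ||J dx - delta_d||^2 + lam^2 ||dx||^2, and SciPy's implementation exposes the damping directly. A minimal sketch, with the Jacobian J, the misfit delta_d, and the candidate hyper-parameter lam as placeholders for the quantities in the thesis:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsqr_update(J, delta_d, lam, iters=50):
    """Solve the Tikhonov-damped least-squares step
    min ||J dx - delta_d||^2 + lam^2 ||dx||^2 with LSQR, whose internal
    Lanczos bidiagonalization of J keeps the per-iteration cost low.

    J       : Jacobian (sensitivity) matrix of the optical forward model.
    delta_d : data-model misfit vector.
    lam     : candidate hyper-parameter (damping factor).
    """
    sol = lsqr(J, delta_d, damp=lam, iter_lim=iters)
    return sol[0]                              # optical-property update
```

Because the bidiagonalization is shared across damping values in LSQR-type schemes, sweeping lam to automate the hyper-parameter choice is far cheaper than re-solving the full normal equations for each candidate, which is the computational advantage claimed above.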
80

Essays in functional econometrics and financial markets

Tsafack-Teufack, Idriss 07 1900 (has links)
In this thesis, I exploit the functional data analysis framework and develop inference, prediction and forecasting analysis, with an application to topics in financial markets. The thesis is organized into three chapters. The first chapter is a paper co-authored with Marine Carrasco. In this chapter, we consider a functional linear regression model with a functional predictor variable and a scalar response. We develop a theoretical comparison of the Functional Principal Component Analysis (FPCA) and Functional Partial Least Squares (FPLS) techniques. We derive the convergence rate of the mean squared error (MSE) for these methods and show that this rate is sharp. We also find that the regularization bias of the FPLS method is smaller than that of FPCA, while its estimation error tends to be larger. Additionally, we show that FPLS outperforms FPCA in terms of prediction accuracy with fewer components. The second chapter considers a fully functional autoregressive model (FAR) to forecast the next day's return curve of the S&P 500. In contrast to the standard AR(1) model, where each observation is a scalar, in this research each daily return curve is a collection of 390 points and is considered as one observation. I conduct a comparative analysis of four big-data techniques: the functional Tikhonov method (FT), the functional Landweber-Fridman technique (FLF), functional spectral cut-off (FSC), and functional partial least squares (FPLS). The convergence rate, asymptotic distribution, and a test-based strategy to select the lag number are provided. Simulations and real data show that the FPLS method tends to outperform the others in terms of estimation accuracy, while all the considered methods display almost the same predictive performance. The third chapter proposes to estimate the risk-neutral density (RND) for options pricing with a functional linear model. The benefit of this approach is that it directly exploits the fundamental arbitrage-free equation and avoids any additional density parametrization. The estimation leads to an inverse problem, and the functional Landweber-Fridman (FLF) technique is used to overcome it.
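On discretized curves, the FPCA/FPLS comparison of the first chapter can be mimicked with off-the-shelf components. The sketch below is a rough illustration, not the thesis's estimators: the predictor matrix X (one sampled curve per row), the scalar response y, and the component count are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

def compare_fpca_fpls(X, y, n_components=5):
    """Compare principal-component and partial-least-squares regression of
    a scalar response y on discretized functional predictors X (each row
    a curve sampled on a common grid)."""
    # FPCA-style: regress y on the leading principal component scores of X,
    # chosen purely from the variance of the predictor curves
    scores = PCA(n_components=n_components).fit_transform(X)
    pcr = LinearRegression().fit(scores, y)
    # FPLS-style: components chosen for covariance with y, so fewer are
    # typically needed for the same predictive accuracy
    pls = PLSRegression(n_components=n_components).fit(X, y)
    return pcr, pls
```

The contrast this sketch exposes is the one the chapter quantifies theoretically: PCA components ignore the response, while PLS components are tilted toward it, trading a smaller regularization bias for a larger estimation error.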
