141

Spectral Element Method for Pricing European Options and Their Greeks

Yue, Tianyao January 2012
Numerical methods such as the Monte Carlo method (MCM), the finite difference method (FDM) and the finite element method (FEM) have been successfully implemented to solve financial partial differential equations (PDEs). Sophisticated computational algorithms are strongly desired to further improve accuracy and efficiency.

The relatively new spectral element method (SEM) combines the exponential convergence of spectral methods with the geometric flexibility of the FEM. This dissertation carefully investigates the SEM for the pricing of European options and their Greeks (Delta, Gamma and Theta). The essential techniques, Gauss quadrature rules, are thoroughly discussed and developed. The spectral element method and its error analysis are briefly introduced first and expanded in detail afterwards.

A multi-element spectral element method (ME-SEM) for the Black-Scholes PDE is derived for European put options with and without dividends and for a condor option with a more complicated payoff. Under the same Crank-Nicolson approach for the time integration, the SEM shows a significant accuracy increase and time cost reduction over the FDM. A novel discontinuous payoff spectral element method (DP-SEM) is introduced and numerically validated on a European binary put option. The SEM is also applied to the constant elasticity of variance (CEV) model and verified against the MCM and the valuation formula. The Stochastic Alpha Beta Rho (SABR) model is solved with a multi-dimensional spectral element method (MD-SEM) for a European put option. Error convergence of the option prices and Greeks with respect to the number of grid points and the time step is analyzed and illustrated. / Dissertation
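For readers unfamiliar with the baseline the dissertation compares against, here is a minimal Crank-Nicolson finite-difference sketch for a European put under Black-Scholes. It is illustrative only: the grid sizes, strike and market parameters are assumptions, and the dissertation's own contribution is the spectral element discretization, not this FDM.

```python
# A minimal Crank-Nicolson finite-difference pricer for a European put under
# Black-Scholes -- the FDM baseline the dissertation compares the SEM against.
# Grid sizes, strike and market parameters below are illustrative assumptions.
import numpy as np

def cn_european_put(S_max=200.0, K=100.0, r=0.05, sigma=0.2, T=1.0, M=200, N=200):
    """Crank-Nicolson scheme for the Black-Scholes PDE on an S-grid [0, S_max]."""
    dt = T / N
    S = np.linspace(0.0, S_max, M + 1)
    V = np.maximum(K - S, 0.0)                       # payoff at maturity

    i = np.arange(1, M)                              # interior node indices
    a = 0.25 * dt * (sigma**2 * i**2 - r * i)
    b = -0.50 * dt * (sigma**2 * i**2 + r)
    c = 0.25 * dt * (sigma**2 * i**2 + r * i)

    # (I - A/2) V_new = (I + A/2) V_old, marching backwards in time
    L = np.diag(1.0 - b) - np.diag(a[1:], -1) - np.diag(c[:-1], 1)
    R = np.diag(1.0 + b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)

    for n in range(N, 0, -1):
        t_new = (n - 1) * dt
        lo = K * np.exp(-r * (T - t_new))            # V(0, t) = K e^{-r(T-t)}
        rhs = R @ V[1:M]
        rhs[0] += a[0] * (lo + V[0])                 # lower Dirichlet boundary
        V = np.concatenate(([lo], np.linalg.solve(L, rhs), [0.0]))
    return S, V

S, V = cn_european_put()
print("put value near S = K = 100:", np.interp(100.0, S, V))   # ~5.57 for these inputs
```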
142

Construction of FPGA-based Test Bench for QAM Modulators

Hederström, Josef January 2010
In today's fast-evolving mobile communications, the requirements on data rates are continuously increasing, pushing operators to upgrade the backhaul to support these speeds. A cost-effective way of doing this is to use microwave links between base stations, but as the data-rate requirements increase, the capacity of the microwave links must be increased as well. This thesis was part of a funded research project with the objective of developing the next generation of high-speed microwave links for the E-band. The research project needed a testing system able to generate a series of test signals with selectable QAM modulations and adjustable properties, in order to measure and evaluate hardware within the project. The developed system was designed in the digital domain on an FPGA platform from Altera, and provides the ability to select several types of modulation and to change the properties of the output signals as requested. Through simulation at several stages and measurements of the complete system, the functionality was verified and the system was successfully delivered to the research project. The developed system can also be used to test several different modulators in other projects and is easily extended with further capabilities.
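As an illustration of the kind of baseband test signal such a test bench must produce, the sketch below maps a bit stream to Gray-coded 16-QAM symbols. It is not drawn from the thesis's FPGA design; the bit ordering and unit-power scaling are assumptions.

```python
# An illustrative sketch (not the thesis's FPGA design): mapping a bit stream to
# Gray-coded 16-QAM symbols, the kind of baseband test signal such a test bench
# needs to generate. The bit ordering and unit-power scaling are assumptions.
import numpy as np

GRAY_4PAM = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}      # Gray-coded 4-PAM levels

def qam16_map(bits):
    """Map groups of 4 bits to complex 16-QAM symbols with unit average power."""
    bits = np.asarray(bits).reshape(-1, 4)
    i_lvl = np.array([GRAY_4PAM[(b[0] << 1) | b[1]] for b in bits], dtype=float)
    q_lvl = np.array([GRAY_4PAM[(b[2] << 1) | b[3]] for b in bits], dtype=float)
    return (i_lvl + 1j * q_lvl) / np.sqrt(10.0)           # E[|s|^2] = 1 for 16-QAM

rng = np.random.default_rng(0)
symbols = qam16_map(rng.integers(0, 2, size=4 * 1000))
print("average symbol power:", np.mean(np.abs(symbols) ** 2))   # close to 1.0
```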
143

Numerical methods for systems of highly oscillatory ordinary differential equations

Khanamiryan, Marianna January 2010
This thesis presents methods for the efficient numerical approximation of linear and non-linear systems of highly oscillatory ordinary differential equations. The phenomenon of high oscillation is considered a major computational problem occurring in Fourier analysis, computational harmonic analysis, quantum mechanics, electrodynamics and fluid dynamics. Classical methods based on Gaussian quadrature fail to approximate oscillatory integrals. In this work we introduce numerical methods which share the remarkable feature that the accuracy of approximation improves as the frequency of oscillation increases. Asymptotically, our methods depend on inverse powers of the frequency of oscillation, turning the major computational problem into an advantage. Evolving ideas from the stationary phase method, we first apply the asymptotic method to solve highly oscillatory linear systems of differential equations. The asymptotic method provides a background for our next method, the Filon-type method, which is highly accurate but requires the computation of moments. We also introduce two novel methods. The first, which we call the FM method, combines the Magnus approach with the Filon-type method to approximate the matrix exponential. The second, which we call the WRF method, combines the Filon-type method with waveform relaxation methods to solve highly oscillatory non-linear systems. Finally, completing the theory, we show that the Filon-type method can be replaced by a less accurate but moment-free Levin-type method.
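The sketch below illustrates the basic Filon idea on a single interval: replace f by its linear interpolant and integrate the moments of e^{iωx} exactly, so the error shrinks as the frequency grows, whereas Gaussian quadrature degrades. It is a toy version under stated assumptions, not the thesis's Filon-type method for differential systems.

```python
# A toy Filon-type rule on [0, 1] (illustrative, not the thesis's method):
# interpolate f linearly at the endpoints and integrate the moments of
# exp(i*omega*x) exactly; the error decays as omega grows.
import numpy as np
from scipy.integrate import quad

def filon_linear(f, omega):
    """Approximate int_0^1 f(x) exp(i*omega*x) dx using a linear interpolant of f."""
    iw = 1j * omega
    mu0 = (np.exp(iw) - 1.0) / iw                        # int_0^1 e^{i w x} dx
    mu1 = np.exp(iw) / iw - (np.exp(iw) - 1.0) / iw**2   # int_0^1 x e^{i w x} dx
    f0, f1 = f(0.0), f(1.0)
    return f0 * mu0 + (f1 - f0) * mu1

f = lambda x: 1.0 / (1.0 + x)
for omega in (10.0, 100.0, 1000.0):
    exact = (quad(f, 0.0, 1.0, weight='cos', wvar=omega)[0]
             + 1j * quad(f, 0.0, 1.0, weight='sin', wvar=omega)[0])
    print(omega, abs(filon_linear(f, omega) - exact))    # error shrinks as omega grows
```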
144

Vstupní část kvadraturního přijímače pro pásmo UHF / UHF band front-end of quadrature receiver

Tiller, Jakub January 2012
The subject of this master's thesis is the study and description of the RF circuits used in receivers. The work also covers the design of these circuits and their simulation in Ansoft Designer software. Focus is placed on the standard parameters of receiver technology. The design of an amplifier is presented, with its parameters optimized for a low noise figure. Frequency multiplier designs are also included in this project.
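As a small aside on why the front-end amplifier's noise figure matters in such a receiver chain, the sketch below evaluates the standard Friis cascade formula; the stage noise figures and gains are hypothetical values, not figures taken from the thesis.

```python
# A side note on why the LNA dominates receiver sensitivity: the Friis cascade
# formula for noise figure. The stage noise figures and gains below are
# hypothetical values, not figures taken from the thesis.
import math

def friis_noise_figure(stages):
    """stages: list of (noise_figure_dB, gain_dB) tuples; returns cascade NF in dB."""
    f_total, gain_product = 1.0, 1.0
    for nf_db, gain_db in stages:
        f = 10.0 ** (nf_db / 10.0)
        f_total += (f - 1.0) / gain_product
        gain_product *= 10.0 ** (gain_db / 10.0)
    return 10.0 * math.log10(f_total)

# LNA -> mixer -> IF amplifier (hypothetical chain)
print(friis_noise_figure([(1.2, 18.0), (8.0, -6.0), (4.0, 20.0)]))   # ~1.8 dB
```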
145

Performance of alternative option pricing models during spikes in the FTSE 100 volatility index: Empirical evidence from FTSE 100 index options

Rehnby, Nicklas January 2017
Derivatives play a large and significant role in today's financial markets, and the popularity of options has increased. This has also increased the demand for a suitable option pricing model, since the ground-breaking model developed by Black & Scholes (1973) has poor pricing performance. Practitioners and academics have over the years developed different models under the assumption of non-constant volatility, without reaching any conclusion on which model is more suitable to use. This thesis examines four different models. The first is the Practitioners Black & Scholes model proposed by Christoffersen & Jacobs (2004b). The second is Heston's (1993) continuous-time stochastic volatility model; a modification of this model, the Strike Vector Computation suggested by Kilin (2011), is also included. The last model is the Heston & Nandi (2000) Generalized Autoregressive Conditional Heteroscedasticity (GARCH) type discrete model. The models are evaluated from a practical point of view, with the goal of finding the model with the best pricing performance and the most practical usage. The models' robustness is also tested to see how they perform out-of-sample in high and low implied-volatility markets, respectively. All models are affected in the robustness test: out-of-sample performance is negatively affected by a high implied-volatility market. The results show that both stochastic volatility models show superior performance in the in-sample and out-of-sample analyses. The GARCH-type discrete model shows surprisingly poor results in both the in-sample and out-of-sample analyses. The results indicate that option data should be used instead of historical return data to estimate the models' parameters. This thesis also provides an insight into why overnight-index-swap (OIS) rates should be used instead of LIBOR rates as a proxy for the risk-free rate.
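For reference, the Black & Scholes (1973) benchmark that the four alternatives are measured against can be sketched in a few lines; the inputs are illustrative, and constant volatility is exactly the assumption the alternative models relax.

```python
# The Black & Scholes (1973) benchmark the alternative models are compared
# against; a minimal sketch for a European call under constant volatility.
# Inputs are illustrative.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(bs_call(S=100.0, K=100.0, T=0.5, r=0.01, sigma=0.2))   # ~5.9
```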
146

Wideband Sigma-Delta Modulators

Yuan, Xiaolong January 2010
Sigma-delta modulators (SDM) have emerged as an attractive candidate for analog-to-digital conversion in single-chip front ends, thanks to their continuously improving performance. The major disadvantage is the limited bandwidth due to the need for oversampling. Extending these converters to broadband applications therefore requires lowering the oversampling ratio (OSR). The aim of this thesis is the investigation of topologies and structures of sigma-delta modulators suitable for wideband applications, e.g. wireline or wireless communication systems having a digital baseband of about one to ten MHz.

It has recently become very popular to feed forward the input signal in wideband sigma-delta modulators, so that the integrators only process quantization errors. The advantage is that the actual signal is not distorted by opamp and integrator nonlinearities. An improved feedforward 2-2 cascaded structure is presented based on a unity-gain signal transfer function (STF). The improved signal-to-noise ratio (SNR) is obtained by optimizing the zero placement of the noise transfer function (NTF) and adopting a multi-bit quantizer. The proposed structure has low distortion across the entire input range.

In high-order single-loop continuous-time (CT) sigma-delta modulators, excess loop delay may cause instability. Previous techniques for compensating internal quantizer and feedback DAC delay are studied, especially for the feedforward structure. Two alternative low-power feedforward continuous-time sigma-delta modulators with excess loop delay compensation are proposed. Simulation-based CT modulator synthesis from discrete-time topologies is adopted to obtain the loop filter coefficients. Design examples are given to illustrate the proposed structure and synthesis methodology.

Continuous-time quadrature bandpass sigma-delta modulators (QBSDM) efficiently realize asymmetric noise shaping due to the complex filtering embedded in their loops. The effect of different feedback waveforms inside the modulator on the NTF of quadrature sigma-delta modulators is presented. An observation is made that a complex NTF can be realized by implementing the loop as a cascade of complex integrators with an SCR feedback digital-to-analog converter (DAC), which is desirable for its lower sensitivity to loop mismatch. The QBSDM design for different bandpass center frequencies relative to the sampling frequency is illustrated.

The last part of the thesis is devoted to the design of a wideband reconfigurable sigma-delta pipelined modulator, which consists of a 2-1-1 cascaded modulator and a pipelined analog-to-digital converter (ADC) as a multi-bit quantizer in the last stage. It is scalable for different bandwidth/resolution applications. The detailed design is presented from system to circuit level. The prototype chip is fabricated in a TSMC 0.25 um process and measured on the test bench. The measurement results show that an SNR over 60 dB is obtained with a sampling frequency of 70 MHz and an OSR of ten.
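As a much-simplified illustration of the noise-shaping loop underlying all of these architectures, the sketch below simulates a first-order discrete-time sigma-delta modulator with a 1-bit quantizer; the thesis's feedforward, cascaded and continuous-time designs are far more elaborate.

```python
# A drastically simplified first-order discrete-time sigma-delta modulator
# (illustrative only; the thesis studies higher-order feedforward and cascaded
# topologies). The loop integrates the error between the input and the fed-back
# 1-bit output, so the bitstream average tracks the input.
import numpy as np

def first_order_sdm(x):
    """Return the +/-1 bitstream for input samples x (|x| < 1)."""
    v = 0.0                                   # integrator state
    y = np.empty_like(x)
    for n, xn in enumerate(x):
        v += xn - (y[n - 1] if n else 0.0)    # integrate input minus fed-back bit
        y[n] = 1.0 if v >= 0.0 else -1.0      # 1-bit quantizer
    return y

x = np.full(4096, 0.3)                        # DC input well inside the stable range
bits = first_order_sdm(x)
print("bitstream average ~= input:", bits.mean())
```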
147

Hierarchical Approximation Methods for Option Pricing and Stochastic Reaction Networks

Ben Hammouda, Chiheb 22 July 2020
In biochemically reactive systems with small copy numbers of one or more reactant molecules, stochastic effects dominate the dynamics. In the first part of this thesis, we design novel, efficient simulation techniques for reliable and fast estimation of various statistical quantities for stochastic biological and chemical systems in the framework of stochastic reaction networks. In the first work, we propose a novel hybrid multilevel Monte Carlo (MLMC) estimator for systems characterized by simultaneously fast and slow timescales. Our hybrid multilevel estimator uses a novel split-step implicit tau-leap scheme at the coarse levels, where the explicit tau-leap method is not applicable due to numerical instability issues. In a second work, we address another challenge in this context, the high-kurtosis phenomenon observed at the deep levels of the MLMC estimator. We propose a novel approach that combines the MLMC method with a pathwise-dependent importance sampling technique for simulating the coupled paths. Our theoretical estimates and numerical analysis show that this method improves the robustness and complexity of the multilevel estimator at a negligible additional cost. In the second part of this thesis, we design novel methods for pricing financial derivatives. Option pricing is usually challenging due to (1) the high dimensionality of the input space and (2) the low regularity of the integrand with respect to the input parameters. We address these challenges by developing different techniques for smoothing the integrand to uncover the available regularity. We then approximate the resulting integrals using hierarchical quadrature methods combined with Brownian bridge construction and Richardson extrapolation. In the first work, we apply this approach to efficiently price options under the rough Bergomi model. This model exhibits several numerical and theoretical challenges that render classical numerical pricing methods either inapplicable or computationally expensive. In a second work, we design a numerical smoothing technique for cases where analytic smoothing is impossible. Our analysis shows that adaptive sparse-grid quadrature combined with numerical smoothing outperforms the Monte Carlo approach. Furthermore, our numerical smoothing improves the robustness and the complexity of the MLMC estimator, particularly when estimating density functions.
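The sketch below shows only the generic MLMC telescoping structure that the thesis builds on, applied to a toy geometric Brownian motion payoff with Euler time stepping. The level and path counts and the model parameters are assumptions; the thesis's actual ingredients (tau-leap schemes, importance sampling, numerical smoothing) are not reproduced here.

```python
# A generic multilevel Monte Carlo (MLMC) sketch for E[g(X_T)] under geometric
# Brownian motion with Euler time stepping, illustrating only the telescoping
# structure: one plain estimate on the coarsest level plus coupled correction
# terms on finer levels that share the same Brownian increments.
import numpy as np

rng = np.random.default_rng(1)

def euler_gbm_pair(n_paths, n_fine, x0=1.0, mu=0.05, sig=0.2, T=1.0):
    """Coupled fine (n_fine steps) and coarse (n_fine/2 steps) Euler paths."""
    dt = T / n_fine
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for _ in range(n_fine // 2):
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        xf += mu * xf * dt + sig * xf * dw1               # two fine steps ...
        xf += mu * xf * dt + sig * xf * dw2
        xc += mu * xc * 2 * dt + sig * xc * (dw1 + dw2)   # ... one coarse step, same noise
    return xf, xc

def mlmc_estimate(g, levels=5, n_paths=20000):
    est = 0.0
    for ell in range(levels):
        xf, xc = euler_gbm_pair(n_paths, n_fine=2 ** (ell + 1))
        if ell == 0:
            est += np.mean(g(xf))              # coarsest level: plain Monte Carlo
        else:
            est += np.mean(g(xf) - g(xc))      # level-ell correction term
    return est

payoff = lambda x: np.maximum(x - 1.0, 0.0)    # undiscounted European call, strike 1
print("MLMC estimate:", mlmc_estimate(payoff))
```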
148

Contribution to the analysis of optical transmission systems using QPSK modulation / Contribution à l'étude des systèmes de transmission optique utilisant le format de modulation QPSK

Ramantanis, Petros 30 September 2011
The constant demand for capacity increase, together with the foreseen saturation of the single-mode optical fiber, paved the way to technological breakthroughs that have completely changed the landscape of fiber-optic telecommunications. The most important advance was, undeniably, the practical implementation of coherent detection with the help of high-speed electronics. This has enabled the use of advanced modulation formats that allow a more efficient use of the fiber bandwidth compared to classical On-Off Keying, while adapted algorithms can now be used to mitigate the propagation-induced optical signal degradation. This thesis began a little after the advent of coherent detection, and its main objective was to revisit the propagation effects in optical transmission systems using Quadrature Phase Shift Keying (QPSK) modulation in the context of terrestrial systems, i.e. for transmission distances of up to about 2000 km. The manuscript is divided into two parts. The first part is dedicated to a study of the data sequences that need to be used in numerical simulations when advanced modulation formats are involved. Fiber propagation, and in particular the interplay between chromatic dispersion and nonlinearities, usually introduces a nonlinear inter-symbol interference (ISI) on the transmitted signal. Since this ISI depends on the actual transmitted data pattern, the choice of the sequence used in numerical simulations has a direct influence on the estimated channel quality. Since an infinite-length random sequence is impractical, we very commonly use "pseudorandom" (PR) sequences, i.e. finite-length, deterministic sequences with balanced pattern statistics that appear to be random. In the first part we describe the method of generating M-level (with M > 2) pseudorandom sequences and we detail their properties. In addition, we propose numerical tools to characterize the non-pseudorandom sequences that we use in numerical simulations, or are sometimes forced to use in laboratory experiments. Finally, we present results of numerical simulations that quantify the necessity of using PR sequences as a function of the system parameters. After having established the "fairest possible" finite sequences, in the second part of the manuscript we focus on the study of nonlinear propagation in the context of a transmission system using QPSK modulation, assuming a variable dispersion management and fiber type. Specifically, we numerically study the signal statistics due to the interplay of chromatic dispersion and nonlinear effects, neglecting all polarization and multi-wavelength effects as well as the amplifier noise.
In this context, we were first interested in determining whether some empirical laws developed for OOK systems can also be used in the case of QPSK modulation, such as the criterion of cumulative nonlinear phase (ΦNL) or laws that allow a quick optimization of the dispersion management. Next, we reveal the importance of a global phase rotation added to the initial signal constellation as a parameter that can provide useful information for the post-optimization of the system. We also discuss the fact that the constellation shape critically depends on the applied dispersion management; concerning the complex signal statistics, there are generally three types of constellations: (1) the phase variance is higher than the amplitude variance, (2) the amplitude variance is higher than the phase variance, and (3) the received signal constellation resembles that of a signal under the influence of additive white Gaussian noise alone. Finally, we provide a phenomenological explanation of the constellation shapes, revealing that different data sub-sequences suffer from different kinds of signal degradation, and we use this information to define a parameter that quantifies the potential benefit of a MAP (Maximum A Posteriori probability) correction algorithm.
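As a toy version of the sequence-statistics checks discussed above, the sketch below builds QPSK symbols by naively pairing bits from a binary PRBS7 register and counts symbol digrams. Balanced counts are what a genuine quaternary pseudorandom sequence would guarantee; the polynomial and the bit-to-symbol mapping used here are illustrative assumptions, not the thesis's construction.

```python
# Illustrative sketch of a pattern-statistics check: pair bits from a binary
# PRBS7 LFSR into QPSK symbols and count symbol digrams over full periods.
# A true quaternary pseudorandom sequence would have balanced digram counts;
# this naive mapping is exactly the kind of sequence such a check scrutinises.
from collections import Counter

def prbs7(n_bits, seed=0x7F):
    """Binary pseudorandom sequence from the x^7 + x^6 + 1 LFSR (period 127)."""
    state, out = seed & 0x7F, []
    for _ in range(n_bits):
        out.append(state & 1)
        fb = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | fb) & 0x7F
    return out

bits = prbs7(4 * 127)                                     # four full periods, even length
symbols = [2 * bits[i] + bits[i + 1] for i in range(0, len(bits), 2)]   # QPSK in {0..3}
print("symbol counts :", Counter(symbols))
print("digram counts :", Counter(zip(symbols, symbols[1:])))
```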
149

Concept de corrélation dans l'espace fréquentiel de Fourier pour la télédétection passive de la Terre : application à la mission SMOS-Next / Fourier correlation imaging concept for passive Earth observation: a proposal to the SMOS-Next mission

Monjid, Younès 12 October 2016
Aperture synthesis is an interferometric technique, similar to the Earth-rotation synthesis employed in radio astronomy, in which the signals received by a pair of small antennas are processed in such a way as to synthesize a single large antenna. The aperture synthesis concept used in radio astronomy was readapted to the Earth remote sensing of extended thermal sources. Thanks to this technique, limitations on antenna size in passive microwave remote sensing have been overcome. The correlation, or visibility, function obtained by cross-correlating the signals received by the antennas of an interferometric system using aperture synthesis is linked to the brightness temperature map of the observed scene by means of a Fourier-transform law. This is known as the standard form of the Van Cittert-Zernike theorem for observers that are fixed with respect to the sources of temperature. This standard formulation is derived by cross-correlating the instantaneous temporal components of the electric fields measured by different antennas. A new concept based on passive spatio-temporal interferometry was proposed as the next generation to follow the well-known SMOS (Soil Moisture and Ocean Salinity) mission, successfully operating since November 2, 2009. The aim of the proposed concept is a jump in the currently achieved geometric resolution to orders capable of meeting the stringent user needs of hydrological applications at the local scale, where sub-kilometric resolutions are required. This interferometric concept is based on the idea of integrating the displacement of the observer (the satellite's antenna), and hence the time variable, into the calculation of the correlation function, which yields the creation of virtual baselines between the positions of the antennas at different instants, in addition to the physical baselines formed between the instantaneous antenna positions. Sadly, the additional information due to the virtual baselines was shown to be exactly canceled by the Doppler shift induced by the observer's motion. We show furthermore that when the aforementioned spatio-temporal interferometric system is combined with a revolutionary Fourier Correlation Imaging (FouCoIm) procedure, consisting in cross-correlating, at slightly different frequencies, the Fourier components of the fluctuations of the electric fields received by a pair of antennas separated by a distance Δr on board a satellite flying at height h, the 2D position-dependent brightness temperature can be reconstructed. Besides, the analytical derivation of the correlation function gives rise to a relationship linking the measured correlations to the position-dependent brightness temperatures by means of a Highly Oscillatory Integral (HOI) kernel. Interestingly, the analytical study of the HOI kernel showed the remarkable property that a correlation between the two antenna signals remains within a small frequency interval (different frequencies), beyond the simple auto-correlation (same frequency). As a matter of fact, while existing systems had, until now, only considered the simple 1D information contained in the auto-correlation, the correlation function resulting from this concept bears 2D information for the measurement of the position-dependent brightness temperature. Based on this, one is capable of reconstructing 2D brightness temperatures starting from a simple 1D geometry (two antennas arranged perpendicularly to the flight direction).
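A toy numerical illustration of the classical (fixed-observer) Van Cittert-Zernike relation quoted above: the visibilities are the Fourier transform of the brightness temperature map, so the map is recovered by an inverse transform. The grid and scene are arbitrary, and the sketch does not touch the spatio-temporal Fourier-correlation concept developed in the thesis.

```python
# Toy illustration of the classical Van Cittert-Zernike relation: visibilities
# are the Fourier transform of the brightness temperature map, so the map is
# recovered by an inverse transform. Grid and scene are arbitrary assumptions.
import numpy as np

n = 64
T_B = np.zeros((n, n))
T_B[20:28, 30:40] = 300.0                     # a warm rectangular patch (kelvin)

visibilities = np.fft.fft2(T_B)               # sampled visibility function V(u, v)
T_rec = np.fft.ifft2(visibilities).real       # brightness map recovered by inversion

print("max reconstruction error:", np.max(np.abs(T_rec - T_B)))   # ~1e-13
```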
150

Exploring the Importance of Accounting for Nonlinearity in Correlated Count Regression Systems from the Perspective of Causal Estimation and Inference

Zhang, Yilei 07 1900
Indiana University-Purdue University Indianapolis (IUPUI) / The main motivation for nearly all empirical economic research is to provide scientific evidence that can be used to assess causal relationships of interest. Essential to such assessments is the rigorous specification and accurate estimation of parameters that characterize the causal relationship between a presumed causal variable of interest, whose value is to be set and altered in the context of a relevant counterfactual, and a designated outcome of interest. Relationships of this type are typically characterized by an effect parameter (EP), and estimation of the EP is the objective of the empirical analysis. The present research focuses on cases in which the regression outcome of interest is a vector with count-valued elements (i.e., the model under consideration comprises a multi-equation system of equations). This research examines the importance of accounting for nonlinearity and cross-equation correlations in correlated count regression systems from the perspective of causal estimation and inference. We evaluate the efficiency and accuracy gains of estimating bivariate count-valued systems-of-equations models by comparing three pairs of models: (1) Zellner's Seemingly Unrelated Regression (SUR) versus a count-outcome SUR based on the Conway-Maxwell Poisson (CMP) distribution; (2) CMP SUR versus the single-equation CMP approach; (3) CMP SUR versus Poisson SUR. We show via simulation studies that it is more efficient to estimate jointly than equation-by-equation and more efficient to account for nonlinearity. We also apply our model and estimation method to real-world health care utilization data, where the dependent variables are correlated counts: the count of physician office visits and the count of non-physician health professional office visits. The presumed causal variable is private health insurance status. Our model yields a reduction of at least 30% in the standard errors of key policy EPs (e.g., the Average Incremental Effect). These results are enabled by our development of a Stata program for approximating two-dimensional integrals via Gauss-Legendre quadrature.
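The abstract mentions approximating two-dimensional integrals with Gauss-Legendre quadrature (there, inside a Stata program). Below is a minimal Python sketch of the same idea, using a tensor-product rule checked against an integral with a known closed form; it is not the thesis's Stata implementation.

```python
# A tensor-product Gauss-Legendre rule on [-1, 1]^2, the quadrature idea the
# abstract refers to (implemented there in Stata). Checked against an integral
# with a known closed form; the integrand and order n are illustrative.
import numpy as np

def gauss_legendre_2d(f, n=20):
    """Approximate the integral of f(x, y) over [-1, 1] x [-1, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)        # 1D nodes and weights
    X, Y = np.meshgrid(x, x)
    W = np.outer(w, w)                               # tensor-product weights
    return np.sum(W * f(X, Y))

# Example: the integral of exp(x + y) over the square equals (e - 1/e)^2
approx = gauss_legendre_2d(lambda x, y: np.exp(x + y))
exact = (np.e - 1.0 / np.e) ** 2
print(approx, exact)
```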
