441

Monte Carlo Simulation of Heston Model in MATLAB GUI

Kheirollah, Amir January 2006 (has links)
In the Black-Scholes model, the volatility is assumed to be deterministic, which causes inefficiencies and systematic trends in option pricing. Many authors have proposed that volatility should instead be modelled as a stochastic process; the Heston model is one such solution. To simulate the Heston model, one must handle the correlation between the asset price and the stochastic volatility, and this paper presents a solution to that issue. A review of the Heston model is given, and after the modelling stage some investigations are carried out on the applet. The application of the model to several types of options has also been programmed in a MATLAB Graphical User Interface (GUI).
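The key simulation step the abstract refers to, correlating the asset-price and variance Brownian increments, can be sketched as follows. This is a minimal Python illustration (not the thesis's MATLAB GUI), using a full-truncation Euler scheme; all parameter values are assumptions chosen for illustration.

```python
import numpy as np

def heston_call_mc(S0, K, T, r, v0, kappa, theta, xi, rho,
                   n_paths=100_000, n_steps=250, seed=0):
    """Price a European call under the Heston model with a full-truncation
    Euler scheme; rho couples the asset and variance Brownian motions."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)
        # Correlate the two Brownian increments via a Cholesky-style mix.
        dw_s = np.sqrt(dt) * z1
        dw_v = np.sqrt(dt) * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
        v_pos = np.maximum(v, 0.0)   # full truncation keeps drift/diffusion well defined
        S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos) * dw_s)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos) * dw_v
    payoff = np.maximum(S - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Illustrative parameters only (not taken from the thesis).
print(heston_call_mc(S0=100, K=100, T=1.0, r=0.03,
                     v0=0.04, kappa=2.0, theta=0.04, xi=0.5, rho=-0.7))
```

A negative rho reproduces the leverage effect typically used when calibrating the Heston model to equity options.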
442

Stochastic Volatility Models in Option Pricing

Kalavrezos, Michail, Wennermo, Michael January 2008 (has links)
In this thesis we have created a computer program in the Java language which calculates European call and put options with four different models, based on the article "The Pricing of Options on Assets with Stochastic Volatilities" by John Hull and Alan White. Two of the models use stochastic volatility as an input. The paper describes the foundations of stochastic volatility option pricing and compares the output of the models. Which model better estimates the real option price depends on further research into the model parameters involved.
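For the zero-correlation case, the central result of the Hull and White article is that the option price equals the Black-Scholes price averaged over the path-wise mean variance. The Python sketch below illustrates that mixing approach; it is not the thesis's Java program, and the variance dynamics and parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, T, r, sigma):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def hull_white_call(S0, K, T, r, v0, mu_v, xi,
                    n_paths=50_000, n_steps=100, seed=1):
    """Mixing approach for zero correlation: simulate the variance as a
    geometric Brownian motion, average it over each path, and average the
    resulting Black-Scholes prices."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    v = np.full(n_paths, float(v0))
    v_sum = np.zeros(n_paths)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        v *= np.exp((mu_v - 0.5 * xi**2) * dt + xi * np.sqrt(dt) * z)
        v_sum += v * dt
    mean_var = v_sum / T                      # path-wise average variance
    return np.mean(bs_call(S0, K, T, r, np.sqrt(mean_var)))

print(hull_white_call(S0=100, K=100, T=0.5, r=0.05, v0=0.04, mu_v=0.0, xi=1.0))
```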
443

Stochastic finite elements for elastodynamics: random field and shape uncertainty modelling using direct and modal perturbation-based approaches

Van den Nieuwenhof, Benoit 07 May 2003 (has links)
The handling of variability effects in structural models is a natural and necessary extension of deterministic analysis techniques. In the context of finite element and uncertainty modelling, the stochastic finite element method (SFEM), comprising the perturbation SFEM, the spectral SFEM and Monte-Carlo simulation, has by far received the most attention.

The present work focuses on second-moment approaches, in which the first two statistical moments of the structural response are estimated. Owing to its efficiency for problems involving low variability levels, the perturbation method is selected for characterising the propagation of parameter variability from an uncertain dynamic model to its structural response. A dynamic model excited by time-harmonic loading is postulated and the extension of the perturbation SFEM to the frequency domain is provided. This method complements the deterministic analysis with a sensitivity analysis of the system response with respect to a finite set of random parameters, and a response surface in the form of a Taylor series expansion truncated at first or second order is built. Taking into account the second-moment statistical data of the random design properties, the response sensitivities are condensed to obtain an estimate of the response mean value and covariance structure.

In order to handle a wide definition of variability, a computational tool is made available that can deal with material variability sources (material random variables and fields) as well as shape uncertainty sources. The second case requires an appropriate shape parameterisation and a shape design sensitivity analysis. The computational requirements of the tool are studied and optimised by reducing the random dimension of the problem and by improving the performance of the underlying deterministic analyses. In this context, modal approaches, which are known to provide efficient alternatives to direct approaches in frequency-domain analyses, are developed. An efficient hybrid procedure coupling the perturbation SFEM and Monte-Carlo simulation is proposed and analysed.

Finally, the developed methods are validated, mainly against Monte-Carlo simulation, on several numerical applications: a cantilever beam structure, a plate bending problem (involving a three-dimensional model), an articulated truss structure and a plate with a random flatness defect. The propagation of the model uncertainty into the response FRFs and the effects introduced by random field modelling are examined, and some remarks are made on the influence of the parameter PDF in simulation-based methods.
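A compact way to see the perturbation-based second-moment idea is to propagate the parameter covariance through response sensitivities: the mean response is approximated by the response at the mean parameters and the variance by g' Σ g, with g the sensitivity vector of the response. The Python sketch below applies a first-order version to the FRF amplitude of a hypothetical two-degree-of-freedom system with uncertain stiffnesses; the model, damping and variability levels are assumptions for illustration, not taken from the thesis.

```python
import numpy as np

def frf_amplitude(k, omega=50.0, m=(1.0, 1.0), c=0.05):
    """|u1(omega)| for a 2-DOF chain under a unit harmonic force on DOF 1.
    k = (k1, k2) are the two spring stiffnesses (the uncertain parameters)."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.diag(m)
    C = c * K                                   # stiffness-proportional damping
    Z = K - omega**2 * M + 1j * omega * C       # dynamic stiffness matrix
    u = np.linalg.solve(Z, np.array([1.0, 0.0]))
    return np.abs(u[0])

# First-order perturbation: mean response at the mean parameters,
# variance propagated through finite-difference sensitivities.
k_mean = np.array([4.0e3, 2.0e3])
cov_k = np.diag((0.05 * k_mean) ** 2)           # 5% standard deviation on each stiffness

grad = np.zeros(2)
h = 1e-4 * k_mean
for i in range(2):
    kp, km = k_mean.copy(), k_mean.copy()
    kp[i] += h[i]; km[i] -= h[i]
    grad[i] = (frf_amplitude(kp) - frf_amplitude(km)) / (2 * h[i])

mean_u = frf_amplitude(k_mean)
var_u = grad @ cov_k @ grad                     # first-order variance estimate
print(mean_u, np.sqrt(var_u))
```

In the thesis the sensitivities are obtained analytically within the finite element code rather than by finite differences; the propagation formula is the same.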
444

The market impact of short-sale constraints

Nilsson, Roland January 2005 (has links)
The thesis addresses two areas of research within financial economics: empirical asset pricing and the borderline area between finance and economics, with emphasis on econometric methods. The empirical asset pricing section considers the effects of short-sale constraints on both the stock market and the derivatives market. Many arbitrage relations in the economy are intimately tied to the possibility of going short. One such relation is put-call parity (PCP), which dictates a pricing relation between several derivative instruments and their underlying assets. During the latter part of the 1980s stock options could be traded in Sweden while shorting was not permitted. The main contribution of the first paper is to show that this shorting prohibition indeed implied larger deviations from PCP. Furthermore, the effect is only relevant for firms whose stocks were not shortable abroad; firms with stocks shortable abroad did not show any deviations from PCP.

The second paper investigates the asymmetries found in the momentum effect. Previous studies have found that the momentum effect is mostly due to the fact that a portfolio of loser firms tends to continue to perform poorly, rather than because a portfolio of winner firms continues to do well. The explanation investigated in the paper is based on the theoretical work of Diamond and Verrecchia (1985), which demonstrates that restrictions on the ability to go short cause negative news to be incorporated into prices more slowly than positive news. The main contribution of the paper is to explore this hypothesis and provide a link to the momentum effect, using Sweden in the 1980s, where the rare situation of a complete shorting prohibition was in force.

The second section of the thesis foremost addresses the CCAPM. The third paper considers the joint effect on the CCAPM of market frictions, different utility specifications and more stringent econometric analysis; these remedies tend to co-exist and should not be considered on a stand-alone basis, as has been the case in the previous literature. The paper also shows how several measures of misspecification available in the literature are implemented when market frictions are present; in particular, it presents the Hansen and Jagannathan measure with market frictions.

The final paper considers L1-norm-based alternatives to the L2-norm-based Hansen and Jagannathan (1997) measure. It is well known that L1-norm methods may show good properties in the presence of non-normal distributions, for instance with respect to heavy-tailed and/or asymmetric distributions. These methods provide more robust estimators, since they are less easily influenced by outliers or other extreme observations. The basic intuition is that L2-norm methods involve squaring errors, which magnifies large deviations, while L1-norm methods are based on absolute deviations. Since financial data are known to frequently display non-normal properties, L1-norm methods have found considerable use in financial economics. / Diss. Stockholm : Handelshögskolan, 2005
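The put-call parity deviation studied in the first paper can be written, for a stock paying no dividends, as C - P - S + K*exp(-rT); when short selling is prohibited, negative deviations can persist because the reverse conversion requires shorting the stock. A minimal Python helper follows, with illustrative numbers only (not data from the thesis).

```python
import numpy as np

def pcp_deviation(call, put, spot, strike, r, T, dividend_pv=0.0):
    """Signed deviation from put-call parity, C - P - (S - PV(div)) + K*exp(-rT).
    Under a shorting prohibition, negative deviations can persist because the
    reverse conversion (short stock, long call, short put, lend) is blocked."""
    return call - put - (spot - dividend_pv) + strike * np.exp(-r * T)

# Illustrative numbers only.
print(pcp_deviation(call=7.1, put=4.2, spot=102.0, strike=100.0, r=0.08, T=0.5))
```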
445

A Segmented Silicon Strip Detector for Photon-Counting Spectral Computed Tomography

Xu, Cheng January 2012 (has links)
Spectral computed tomography with energy-resolving detectors has the potential to improve the detectability of images and correspondingly reduce the radiation dose to patients by extracting and properly using the energy information in the broad x-ray spectrum. A silicon photon-counting detector has been developed for spectral CT; it solves the problem of high photon flux in clinical CT applications by adopting a segmented detector structure and operating the detector in edge-on geometry. The detector was evaluated by both simulation and measurements. The effects of energy loss and charge sharing on the energy response of this segmented silicon strip detector with different pixel sizes were investigated by Monte Carlo simulation, and a comparison to pixelated CdTe detectors is presented. The validity of spherical approximations of the initial charge cloud shape in silicon detectors was evaluated and a more accurate statistical model has been proposed. A photon-counting, energy-resolving application-specific integrated circuit (ASIC) developed for spectral CT was characterized extensively with electrical pulses, a pulsed laser and real x-ray photons from both a synchrotron and an x-ray tube. It has been demonstrated that the ASIC performs as designed: a noise level of 1.09 keV RMS has been measured and a threshold dispersion of 0.89 keV RMS has been determined. The count-rate performance of the ASIC in terms of count loss and energy resolution was evaluated with real x-rays and promising results have been obtained. The segmented silicon strip detector was evaluated using synchrotron radiation. An energy resolution of 16.1% has been determined with 22 keV photons in the low-flux limit, which deteriorates to 21.5% at an input count rate of 100 Mcps mm−2. The fraction of charge-shared events has been estimated and found to be 11.1% at 22 keV and 15.3% at 30 keV. A lower fraction of charge-shared events and an improved energy resolution can be expected by applying a higher bias voltage to the detector. / QC 20121123
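The energy-resolution figures quoted above (FWHM relative to the peak energy) are typically obtained by fitting a Gaussian to the recorded photopeak. Below is a small Python sketch of that step, run on synthetic data with assumed peak position and noise level, not on the thesis's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, A, mu, sigma):
    return A * np.exp(-0.5 * ((E - mu) / sigma) ** 2)

def energy_resolution(energies, counts):
    """Fit a Gaussian to a photopeak and return FWHM / centroid in percent."""
    p0 = (counts.max(), energies[np.argmax(counts)], 1.0)
    (A, mu, sigma), _ = curve_fit(gaussian, energies, counts, p0=p0)
    fwhm = 2.3548 * abs(sigma)          # 2*sqrt(2*ln 2) * sigma
    return 100.0 * fwhm / mu

# Synthetic 22 keV peak broadened by an assumed 1.5 keV sigma, plus flat noise.
rng = np.random.default_rng(0)
E = np.arange(10.0, 35.0, 0.2)
spectrum = gaussian(E, 1000.0, 22.0, 1.5) + rng.poisson(5, E.size)
print(f"{energy_resolution(E, spectrum):.1f} % FWHM at 22 keV")
```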
447

Small Molecule Diffusion in Spherulitic Polyethylene : Experimental Results and Simulations

Mattozzi, Alessandro January 2006 (has links)
The diffusion of small-molecule penetrants in polyethylene is hindered by impenetrable crystals and by the segmental constraints imposed by the crystals on the penetrable phase. Liquid and vapour n-hexane sorption/desorption measurements were performed on metallocene-catalyzed homogeneous poly(ethylene-co-octene)s. It was shown that the fractional free volume of the penetrable polymer component increased with increasing amount of penetrable polymer, and also with the relative proportion of liquid-like component in the penetrable polymer fraction. The detour effect was found to increase with decreasing crystallinity. The experimental study of the morphology of the polymers showed that the geometrical impedance factor followed the same trend with increasing crystallinity as the data obtained from n-hexane desorption. The changes in phase composition and character upon n-hexane sorption were monitored with Raman spectroscopy, WAXS and NMR spectroscopy. Partial dissolution of the orthorhombic and interfacial components was observed upon n-hexane sorption. Changes in the character of the components were also analyzed: an increase in the density of the crystalline component and a decrease in the density of the amorphous component were observed in the n-hexane-sorbed samples. Molecular dynamics simulations were used to study the diffusion of n-hexane in fully amorphous poly(ethylene-co-octene)s. The branches in poly(ethylene-co-octene) decreased the density by affecting the packing of the chains in the rubbery state, in accordance with experimental data. Diffusion of n-hexane at low penetrant concentration showed, unexpectedly, that the penetrant diffusivity decreased with increasing degree of branching. Spherulitic growth was mimicked with an algorithm able to generate structures comparable to those observed in polyethylene. The diffusion in the simulated structures was assessed with Monte Carlo simulations of random walks, and the geometrical impedance factor of the spherulitic structures was calculated and compared with analytical values according to Fricke's theory. The linear relationship between geometrical impedance factor and crystallinity in Fricke's theory was confirmed; Fricke's theory, however, underestimated the crystal blocking effect. By modelling systems with a distribution of crystal width-to-thickness ratios, it was shown that wide crystals have a more pronounced effect on the geometrical impedance factor than indicated by their number fraction. / QC 20100909
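The Monte Carlo random-walk estimate of the geometrical impedance factor can be illustrated with a much simpler stand-in for the spherulite-mimicking algorithm: random walkers on a 2D lattice containing impenetrable rectangular "crystals", with the impedance quantified here as the ratio of free to obstructed mean-square displacement. The geometry, crystal dimensions and crystallinity below are assumptions for illustration only.

```python
import numpy as np

def tortuosity(blocked, n_walkers=2000, n_steps=5000, seed=0):
    """Ratio of free-lattice to obstructed-lattice mean-square displacement
    for 2D random walkers that cannot enter blocked (crystalline) sites."""
    rng = np.random.default_rng(seed)
    n = blocked.shape[0]
    free = np.argwhere(~blocked)
    pos = free[rng.integers(0, len(free), n_walkers)]
    disp = np.zeros((n_walkers, 2))
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for _ in range(n_steps):
        step = moves[rng.integers(0, 4, n_walkers)]
        trial = (pos + step) % n                 # periodic boundaries
        ok = ~blocked[trial[:, 0], trial[:, 1]]  # reject moves into crystals
        pos[ok] = trial[ok]
        disp[ok] += step[ok]
    msd = np.mean(np.sum(disp**2, axis=1))
    msd_free = float(n_steps)                    # <r^2> = n_steps * a^2 on a free lattice
    return msd_free / msd

# Randomly placed rectangular "crystals" on a 200 x 200 lattice, ~40% crystallinity.
rng = np.random.default_rng(1)
grid = np.zeros((200, 200), dtype=bool)
while grid.mean() < 0.40:
    i, j = rng.integers(0, 200, 2)
    w, t = 30, 4                                 # assumed crystal width and thickness
    if rng.random() < 0.5:
        grid[i:i + w, j:j + t] = True
    else:
        grid[i:i + t, j:j + w] = True
print(tortuosity(grid))
```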
448

Establishing low-energy x-ray fields and determining operational dose equivalent conversion coefficients

Larsson, Ylva January 2008 (has links)
Reference radiation fields for x-ray qualities are described by the International Organization for Standardization (ISO). This study describes the procedure for establishing nine different low-energy x-ray qualities at the national metrology laboratory of the Swedish Radiation Protection Authority, following ISO 4037. Measurements of tube voltage, half-value layer, mean energy and spectral resolution have been performed for the qualities N-15, N-20, N-25, N-30, N-40, L-20, L-30, L-35 and L-55. Furthermore, dose equivalent conversion coefficients for the operational quantities ambient dose equivalent, personal dose equivalent and directional dose equivalent have been calculated by folding the mono-energetic conversion coefficients with the measured spectral distributions of the x-ray qualities. The spectral distributions were unfolded from pulse-height distributions to photon distributions using response data for the semiconductor detector used in the measurements, simulated with the Monte Carlo code PENELOPE.
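The folding step described above amounts to a weighted average of the mono-energetic conversion coefficients over the measured spectrum, with air-kerma weights proportional to fluence times energy times the mass energy-absorption coefficient of air. A minimal Python sketch follows; the grid and all numerical values are rough illustrative assumptions, not the thesis's unfolded spectra or the ISO 4037 table values.

```python
import numpy as np

def spectrum_averaged_coefficient(energy, fluence, h_mono, mu_en_rho_air):
    """Spectrum-averaged conversion coefficient (e.g. H*(10) per air kerma),
    obtained by folding mono-energetic coefficients with the photon fluence
    spectrum using air-kerma weights fluence * E * (mu_en/rho)_air."""
    weights = fluence * energy * mu_en_rho_air
    return np.sum(weights * h_mono) / np.sum(weights)

# Rough illustrative values only (coarse grid, approximate coefficients).
energy  = np.array([10.0, 15.0, 20.0, 25.0, 30.0])    # keV
fluence = np.array([0.10, 0.80, 1.00, 0.60, 0.20])    # relative photon fluence
h_mono  = np.array([0.01, 0.26, 0.61, 0.90, 1.10])    # approx. H*(10)/Ka, Sv/Gy
mu_en   = np.array([4.7, 1.3, 0.54, 0.27, 0.15])      # approx. (mu_en/rho)_air, cm^2/g
print(spectrum_averaged_coefficient(energy, fluence, h_mono, mu_en))
```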
449

Numerical study of electro-thermal effects in silicon devices

Nghiem Thi, Thu Trang 25 January 2013 (has links) (PDF)
Ultra-short-gate (LG < 20 nm) CMOS (Complementary Metal-Oxide-Semiconductor) components face thermal limitations due to significant local heating induced by phonon emission from hot carriers in active regions of reduced size. This phenomenon, called the self-heating effect, is identified as one of the most critical obstacles to the continued increase in the integration density of circuits. It is especially crucial in SOI (silicon-on-insulator) technology, where the presence of the buried insulator hinders the dissipation of heat.

At the nanoscale, the theoretical study of these heating phenomena cannot be carried out with macroscopic models (heat diffusion) and requires a detailed microscopic description of heat transfer, which is locally out of equilibrium. It is therefore necessary to model not only the electron transport and the phonon generation, but also the phonon transport and the phonon-phonon and electron-phonon interactions. The formalism of the Boltzmann transport equation (BTE) is well suited to this problem: it has been widely used for years to study the transport of charged particles in semiconductor components, but is much less standard for the transport of phonons. One of the challenges of this work concerns the coupling of the phonon BTE with the electron transport.

In this context, we have developed an algorithm to compute the transport of phonons by direct solution of the phonon BTE. This phonon transport algorithm was coupled with the electron transport simulated by the "MONACO" simulator, which is based on a statistical (Monte Carlo) solution of the BTE. Finally, this new electro-thermal simulator was used to study self-heating effects in nano-transistors. The main interest of this work is to provide an analysis of electro-thermal transport beyond the macroscopic approach (the Fourier formalism for thermal transport and the drift-diffusion approach for electric current, respectively). Indeed, it provides access to the phonon distributions in the device for each phonon mode. In particular, the simulator provides a better understanding of the hot-electron effects at the hot spots and of the electron relaxation in the access regions.
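As a very reduced illustration of the phonon-BTE ingredient (not the MONACO coupling itself), the sketch below solves a steady one-dimensional gray phonon BTE in the relaxation-time approximation by discrete ordinates with upwind sweeps and source iteration. The boundary "temperatures", the Knudsen number and the gray-medium assumption are all simplifications introduced here for illustration.

```python
import numpy as np

def gray_phonon_bte_1d(kn=1.0, n_x=100, n_mu=16, t_hot=301.0, t_cold=300.0,
                       tol=1e-9, max_iter=50_000):
    """Steady 1D gray phonon BTE in the relaxation-time approximation:
    mu * Kn * dI/dx = I0 - I on x in [0, 1], solved by discrete ordinates
    (Gauss-Legendre in mu) with upwind sweeps and source iteration.
    The boundaries emit at the contact 'temperatures' t_hot (left), t_cold (right)."""
    mu, w = np.polynomial.legendre.leggauss(n_mu)
    dx = 1.0 / n_x
    I = np.zeros((n_mu, n_x))
    i0 = np.full(n_x, 0.5 * (t_hot + t_cold))        # isotropic (equilibrium) part
    for _ in range(max_iter):
        i0_old = i0.copy()
        for j in range(n_mu):
            a = abs(mu[j]) * kn / dx
            if mu[j] > 0:                            # sweep left to right
                upstream = t_hot
                for k in range(n_x):
                    I[j, k] = (a * upstream + i0[k]) / (a + 1.0)
                    upstream = I[j, k]
            else:                                    # sweep right to left
                upstream = t_cold
                for k in range(n_x - 1, -1, -1):
                    I[j, k] = (a * upstream + i0[k]) / (a + 1.0)
                    upstream = I[j, k]
        i0 = 0.5 * (w[:, None] * I).sum(axis=0)      # I0 = (1/2) integral of I over mu
        if np.max(np.abs(i0 - i0_old)) < tol:
            break
    flux = ((w * mu)[:, None] * I).sum(axis=0)       # heat flux, arbitrary units
    return i0, flux.mean()

# Large kn gives quasi-ballistic transport; small kn recovers the diffusive (Fourier) limit.
profile, q = gray_phonon_bte_1d(kn=1.0)
print(q)
```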
450

Modeling Study of Proposed Field Calibration Source Using K-40 Source and High-Z Targets for Sodium Iodide Detector

Rogers, Jeremy 1987- 14 March 2013 (has links)
The Department of Energy (DOE) has ruled that all sealed radioactive sources, even those considered exempt under Nuclear Regulatory Commission regulations, are subject to radioactive material controls. However, sources based on the primordial isotope potassium-40 (K-40) are not subject to these restrictions. Potassium-40's beta spectrum and 1460.8 keV gamma ray can be used to induce K-shell fluorescence x rays in high-Z metals between 60 and 80 keV. A gamma-ray calibration source is thus proposed that uses potassium chloride salt and a high-Z metal to create a two-point calibration for a sodium iodide field gamma spectroscopy instrument. The calibration source was designed in collaboration with Sandia National Laboratory using the Monte Carlo N-Particle eXtended (MCNPX) transport code. The x-ray production was maximized while attempting to preserve the detector system's sensitivity to external sources by minimizing the count rate and shielding effect of the calibration source. Since the source is intended to be semi-permanently fixed to the detector, the weight of the calibration source was also a design factor. Two methods of x-ray production were explored. First, a thin high-Z layer (HZL) was interposed between the detector and the potassium chloride-urethane source matrix. Second, bismuth metal powder was homogeneously mixed with a urethane binding agent to form a potassium chloride-bismuth matrix (KBM). The two methods were compared directly using a series of simulations, including their x-ray peak strengths, pulse-height spectral characteristics, and response to a simulated background environment. The bismuth-based source was selected as the development model because it is cheap, nontoxic, and outperforms the high-Z layer method in simulation. The overall performance of the bismuth-based source was significantly improved by splitting the calibration source longitudinally into two halves and placing them on either side of the detector. The performance was improved further by removing the binding agent and simulating a homogeneous mixture of potassium chloride and bismuth powder in a 0.1 cm plastic casing. The split, plastic-encased potassium chloride-bismuth matrix would serve as a light, cheap field calibration source that is not subject to DOE restrictions.
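The two-point calibration the source is meant to provide is simply a straight-line fit of energy versus channel through the fluorescence peak and the 1460.8 keV line. A short Python sketch follows; the channel centroids are hypothetical, and the fluorescence energy assumes a bismuth target (K-alpha around 77 keV).

```python
def two_point_calibration(ch_low, e_low, ch_high, e_high):
    """Linear energy calibration E = gain * channel + offset from the two
    reference peaks supplied by the proposed source: the high-Z fluorescence
    peak (60-80 keV region) and the 1460.8 keV K-40 gamma line."""
    gain = (e_high - e_low) / (ch_high - ch_low)
    offset = e_low - gain * ch_low
    return gain, offset

# Hypothetical peak centroids (channels) for a NaI spectrum; energies in keV.
gain, offset = two_point_calibration(ch_low=52.0, e_low=77.1,
                                     ch_high=975.0, e_high=1460.8)
print(f"E(keV) = {gain:.3f} * channel + {offset:.2f}")
```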
