291. Numerical Algorithms For Stock Option Valuation. Nickleach, Scott Brian, 30 October 2008.
Since the formulation by Black, Scholes, and Merton in 1973 of the first rational option pricing formula that depended only on observable values, the volume of options traded daily on the Chicago Board Options Exchange has grown rapidly. In fact, over the past three decades, options have undergone a transformation from specialized and obscure securities to ubiquitous components of the portfolios of not only large fund managers, but also ordinary individual investors. Essential ingredients of any successful modern investment strategy include the ability to generate income streams and reduce risk, as well as some level of speculation, all of which can be accomplished by effective use of options.
Naturally, practitioners require an accurate method of pricing options. Furthermore, because today's market conditions evolve very rapidly, they also need to obtain price estimates quickly. This dissertation is devoted primarily to improving the efficiency of popular valuation procedures for stock options. In particular, we develop a method of simulating values of European stock options under the Heston stochastic volatility model in a fraction of the time required by the existing method. We also develop an efficient method of simulating the values of American stock options under the same dynamics in conjunction with the Least-Squares Monte Carlo (LSM) algorithm. We attempt to improve the efficiency of the LSM algorithm by utilizing quasi-Monte Carlo techniques and spline methodology. Finally, we consider optimal investor behavior and the notion of option trading, as opposed to the much more commonly studied valuation problem.
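The abstract does not give implementation details, but the kind of simulation it refers to can be illustrated with a minimal full-truncation Euler scheme for the Heston model. All parameter values below are hypothetical, and this generic sketch is not the dissertation's accelerated method.

```python
import numpy as np

def heston_euler(S0, v0, r, kappa, theta, xi, rho, T, n_steps, n_paths, seed=0):
    """Full-truncation Euler scheme for the Heston model: the variance is
    floored at zero inside the drift and diffusion so that discretization
    noise cannot drive it (or the square roots) negative."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)
        S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return S

# Plain Monte Carlo price of a one-year at-the-money European call
# (all parameter values illustrative).
S_T = heston_euler(S0=100.0, v0=0.04, r=0.03, kappa=1.5, theta=0.04,
                   xi=0.5, rho=-0.7, T=1.0, n_steps=252, n_paths=100_000)
print(np.exp(-0.03 * 1.0) * np.mean(np.maximum(S_T - 100.0, 0.0)))
```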
292. MODELING AND ANALYZING MULTIVARIATE LONGITUDINAL LEFT-CENSORED BIOMARKER DATA. Ghebregiorgis, Ghideon Solomon, 30 October 2008.
Many medical studies collect biomarker data to gain insight into the biological mechanisms underlying both acute and chronic diseases. These markers may be obtained at a single point in time to aid in the diagnosis of an illness, or may be collected longitudinally to provide information on the relationship between changes in a given biomarker and the course of the illness. While many different biomarkers are presented in the medical literature, very few studies examine the relationship between multiple biomarkers, measured longitudinally, and predictors of interest.
The first part of this dissertation addresses the analysis of multiple biomarkers subject to left-censoring over time. Imputation methods and methods that account for censoring are extended to handle multiple outcomes, and are compared and evaluated for both accuracy and efficiency through a simulation study. Estimation is based on a parametric multivariate linear mixed model for longitudinally measured biomarkers; for left-censored biomarkers, an extension of this method based on maximum likelihood estimation is used.
The linear mixed effects model based on a full likelihood is one of the few methods available to model longitudinal data subject to left-censoring. However, a full likelihood approach is algebraically complicated due to the large dimension of the numerical computations, and maximum likelihood estimation can be computationally prohibitive when the data are heavily censored. Moreover, the complexity of the computation increases as the dimension of the random effects in the model increases. The second part of the dissertation focuses on developing a method that addresses these problems. We propose a method based on a pseudo-likelihood function that simplifies the computational complexities, allows all possible multivariate models, and can be used for any data structure, including settings where the level of censoring is high. A robust variance-covariance estimator is used to adjust and correct the variance-covariance estimate. A simulation study is conducted to evaluate and compare the performance of the proposed method with existing methods in terms of efficiency, simplicity, and convergence. The proposed methodology is illustrated in the analysis of the Genetic and Inflammatory Markers of Sepsis study conducted at the University of Pittsburgh.
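The dissertation's multivariate mixed-model likelihood is considerably more involved, but the core idea behind likelihood methods for left-censored data can be sketched in the univariate case: observations above the detection limit contribute a density term, while censored observations contribute a CDF term (a Tobit-type likelihood). The function and simulated data below are hypothetical illustrations, not the dissertation's model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def left_censored_loglik(params, y, limit):
    """Gaussian log-likelihood with left-censoring at a detection limit:
    observed values contribute the density, censored values contribute
    P(Y <= limit), i.e., a CDF term."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)          # keeps sigma positive
    censored = y <= limit              # censored values are recorded at the limit
    ll = norm.logpdf(y[~censored], mu, sigma).sum()
    ll += censored.sum() * norm.logcdf(limit, mu, sigma)
    return ll

# Simulated example: true mean 2, sd 1, detection limit 0.5.
rng = np.random.default_rng(1)
y = np.maximum(rng.normal(2.0, 1.0, 500), 0.5)
fit = minimize(lambda p: -left_censored_loglik(p, y, 0.5), x0=[0.0, 0.0])
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```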
293. Comparing Spectral Densities in Replicated Time Series by Smoothing Spline ANOVA. Han, Sangdae, 30 October 2008.
Comparing several groups of populations based on replicated data is one of the main concerns in statistical analysis. A specific type of data, time series data such as earthquake waves, presents difficulties because of the correlations within the data. Spectral analysis alleviates this problem because, under general conditions, the discrete Fourier transform renders the data nearly independent.
The goal of our research is to develop general, user-friendly statistical methods to compare group spectral density functions. To accomplish this, we consider two main problems: how can we construct an estimated function from replicated time series for each group, and what method can be used to compare the estimated functions? For the first part, we present smooth estimates of spectral densities from time series data obtained by replication across subjects (units) (Wahba 1990; Guo et al. 2003). We assume that each spectral density lies in some reproducing kernel Hilbert space and apply penalized least squares methods to estimate the spectral density in a smoothing spline ANOVA framework. For the second part, we consider confidence intervals to determine the frequencies at which one spectral density may differ from another: independent simultaneous confidence intervals and bootstrap confidence intervals (Babu et al. 1983; Olshen et al. 1989). Finally, as an application, we consider replicated time series data consisting of shear (S) waves of 8 earthquakes and 8 explosions (Shumway & Stoffer 2006).
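A smoothing spline ANOVA fit in a reproducing kernel Hilbert space is beyond a short sketch, but the raw material such a fit smooths can be illustrated: per-replicate log-periodograms averaged within a group. This is a crude stand-in for the penalized estimate, offered only to show the first step; the function name is hypothetical.

```python
import numpy as np

def group_log_periodogram(x):
    """x: (n_replicates, n_time) array of replicated series from one group.
    Returns the Fourier frequencies and the replicate-averaged
    log-periodogram, a rough estimate of the group's log spectral density
    (a smoothing spline ANOVA estimate would smooth these values)."""
    n = x.shape[1]
    x = x - x.mean(axis=1, keepdims=True)          # remove per-replicate means
    pgram = (np.abs(np.fft.rfft(x, axis=1)) ** 2) / n
    freqs = np.fft.rfftfreq(n)
    return freqs[1:], np.log(pgram[:, 1:]).mean(axis=0)   # drop frequency 0

# Two groups could then be compared pointwise across freqs, e.g. with
# simultaneous or bootstrap confidence intervals for the difference.
```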
294. Bathtub failure rates of mixtures in reliability and the Simes inequality under dependence in multiple testing. Wang, Jie, 05 November 2008.
Two topics are presented in this dissertation: (1) obtaining bathtub-shaped failure rates from mixture models; and (2) the Simes inequality under dependence.
The first topic is in the area of reliability theory. Bathtub-shaped failure rates are well known in reliability due to their extensive applications to many electronic components, systems, products, and even biological organisms. Here we derive conditions for obtaining distributions with bathtub-shaped failure rates from mixtures, which have been utilized to model heterogeneous populations. In particular, we show that mixtures of a family of exponential distributions and an IFR gamma distribution can yield distributions with bathtub-shaped failure rates.
The second topic is concerned with the area of multiple testing, but uses dependence concepts important in reliability. Simes (1986) considered an improved Bonferroni test procedure based on the so-called Simes inequality. It has been proved that this inequality holds for independent multivariate distributions and for a wide class of positively dependent distributions. However, as we show in this dissertation, the inequality reverses for a broad class of negatively dependent distributions. We also make some comments regarding the Simes inequality and positive dependence.
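For reference, the Simes procedure rejects the global (intersection) null at level alpha if any ordered p-value satisfies p_(i) <= i*alpha/n; the Simes inequality asserts that this keeps the type I error at or below alpha, which holds under independence and certain positive dependence but, as the abstract notes, can fail under negative dependence. A minimal sketch:

```python
import numpy as np

def simes_test(p_values, alpha=0.05):
    """Simes (1986) global test: reject the intersection null if
    p_(i) <= i * alpha / n for at least one ordered p-value p_(i)."""
    p = np.sort(np.asarray(p_values, dtype=float))
    n = len(p)
    return bool(np.any(p <= np.arange(1, n + 1) * alpha / n))

print(simes_test([0.012, 0.04, 0.30, 0.50]))  # True: 0.012 <= 1 * 0.05 / 4
```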
295. Applications of The Reflected Ornstein-Uhlenbeck Process. Ha, Won Ho, 15 June 2009.
An Ornstein-Uhlenbeck process is the most basic mean-reversion model and has been used in various fields such as finance and biology. In some instances, reflecting boundary conditions are needed to restrict the state space of this process. We study an Ornstein-Uhlenbeck diffusion process with a reflecting boundary and its applications in finance and neuroscience.
In the financial application, the Vasicek model, which is an Ornstein-Uhlenbeck process, has been used to capture the stochastic movement of the short-term interest rate in the market. The shortcoming of this model is that it theoretically allows negative interest rates. Thus, we use a reflected Ornstein-Uhlenbeck process as an interest rate model to get around this problem, and we price zero-coupon bonds and European options under our model.
In the application to neuroscience, we study integrate-and-fire (I-F) neuron models. We assume that the membrane voltage follows a reflected Ornstein-Uhlenbeck process and fires when it reaches a threshold. In this case, the interspike intervals (ISIs) correspond to the first hitting times of the process at a certain barrier. We find the first-passage-time density given the ISIs using numerical inversion of the Laplace transform of the first-passage-time density, and then estimate the identifiable unknown parameters in our model.
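The dissertation's analytical machinery (Laplace transforms, parameter estimation) is not reproduced here, but the process itself is easy to simulate. Below is a minimal Euler scheme with the simple "folding" reflection at a lower barrier; more accurate reflection schemes exist, and all parameter values are hypothetical.

```python
import numpy as np

def reflected_ou(x0, mu, kappa, sigma, barrier, T, n_steps, seed=0):
    """Euler scheme for an Ornstein-Uhlenbeck process
    dX = kappa*(mu - X)*dt + sigma*dW, reflected at a lower barrier:
    any excursion below the barrier is folded back above it."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = x[i] + kappa * (mu - x[i]) * dt \
                   + sigma * np.sqrt(dt) * rng.standard_normal()
        if x[i + 1] < barrier:          # reflect at the boundary
            x[i + 1] = 2 * barrier - x[i + 1]
    return x

# e.g. a short-rate path reflected at zero (illustrative parameters)
path = reflected_ou(x0=0.02, mu=0.03, kappa=0.5, sigma=0.02,
                    barrier=0.0, T=5.0, n_steps=5000)
```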
296. A Study of Treatment-by-Site Interaction in Multisite Clinical Trials. Abebe, Kaleab Zenebe, 30 September 2009.
Currently, there is little discussion of methods to explain treatment-by-site interaction in multisite clinical trials, so investigators are left to explain these differences post hoc, with no formal statistical tests in the literature. Using mediated moderation techniques, three significance tests used to detect mediation are extended to the multisite setting. Explicit power functions are derived and compared.

In the two-site case, the mediated moderation framework is utilized to develop two difference-in-coefficients tests and one product-of-coefficients test. The latter test is based on the product of two independent standard normal variables, whose density involves a modified Bessel function of the second kind. Because the alternative distribution does not have a closed-form expression, power is approximated using Gauss-Hermite quadrature. This test suffers from an inflated type I error, so two modifications are proposed: a combination of intersection-union and union-intersection tests, and a test based on a variance-stabilizing transformation. In addition, a modification of one of the difference-in-coefficients tests is proposed.

The tests are also extended to deal with multiple sites in the ANOVA and logistic regression models, and the groundwork has been laid to account for multiple mediators as well.

The contribution of this work is a group of formal significance tests for explaining treatment-by-site interaction in the multisite clinical trial setting. This will serve to inform the design of future clinical trials by accounting for site-level variability. The proposed methodology is illustrated in the analysis of the Treatment of SSRI-Resistant Depression in Adolescents study, conducted across six sites and coordinated at the University of Pittsburgh.
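The dissertation's exact tests and critical values are not reproduced here, but the Gauss-Hermite power approximation it mentions can be sketched generically: the power of a test that rejects when |Z1*Z2| exceeds a cutoff is an expectation over Z1, which the quadrature approximates. The function name, means mu1 and mu2, and the cutoff below are all hypothetical.

```python
import numpy as np
from scipy.stats import norm

def power_product_normal(mu1, mu2, crit, n_nodes=64):
    """Approximate P(|Z1 * Z2| > crit) for independent Z1 ~ N(mu1, 1) and
    Z2 ~ N(mu2, 1) via Gauss-Hermite quadrature over Z1, using
    P(|Z1*Z2| > c) = E_{Z1}[ P(|Z2| > c / |Z1|) ]."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    z1 = np.sqrt(2.0) * nodes + mu1           # change of variables for N(mu1, 1)
    t = crit / np.abs(z1)                     # per-node threshold for |Z2|
    tail = norm.sf(t - mu2) + norm.cdf(-t - mu2)   # P(|Z2| > t)
    return float(np.sum(weights * tail) / np.sqrt(np.pi))

# Size at the null (mu1 = mu2 = 0) and power at an arbitrary alternative;
# crit is an illustrative cutoff, not a calibrated critical value.
print(power_product_normal(0.0, 0.0, crit=2.18))
print(power_product_normal(1.5, 1.5, crit=2.18))
```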
297. Optimal Design and Adaptive Design in Stereology. Zhang, Wei, 29 January 2010.
Stereology is the science that uses geometric probability to extract internal quantitative properties of a three-dimensional object from lower-dimensional information. It is a valuable research tool in biological science and relies heavily on statistical principles. In this dissertation, we focus on studies that use stereological techniques to examine the number of neurons in a brain region of interest in order to compare subjects in different diagnostic groups, e.g., subjects with schizophrenia and control subjects. A large number of counting frames is usually used to obtain a prespecified precision for each individual in these kinds of studies. Typically, researchers determine the number of counting frames for each individual by controlling the coefficient of error for that individual. However, the researchers at the Conte Center for the Neuroscience of Mental Disorders (CCNMD) at the University of Pittsburgh primarily focus on comparing biomarkers among different diagnostic groups rather than evaluating individuals. A design goal for such stereological studies is to keep study cost within budget and time constraints while maintaining sufficient statistical power to address the research aims. Statistical power can be increased by adding either more subjects or more counting frames, and the cost of a study can be approximated by a linear combination of the number of subjects and the number of counting frames. To address this need, we have developed new methods that enable researchers to design a cost-efficient study balancing the number of subjects with the number of counting frames for each subject.
We also develop adaptive designs for conducting stereological studies. Adaptive designs allow the opportunity to look at the data at an interim stage and to modify the design based on the information obtained from the first-stage data. In our adaptive design, we estimate the stereological variance without breaking the blind of the first-stage data, and re-design the second stage based on the stereological variance estimate obtained from the first stage. We show that with this procedure researchers can cost-effectively modify the design while maintaining the desired study power.
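The cost model named in the abstract (linear in subjects and counting frames) matches the classic two-stage sampling setup, under which the cost-optimal number of frames per subject has a closed form. The sketch below is that textbook result, not necessarily the dissertation's procedure, and all numbers are hypothetical: with between-subject variance sigma_b^2, within-subject (counting) variance sigma_w^2, m subjects and n frames each, Var(group mean) = sigma_b^2/m + sigma_w^2/(m*n) and cost = m*(c_subject + c_frame*n).

```python
import numpy as np

def optimal_frames_per_subject(sigma_b2, sigma_w2, c_subject, c_frame):
    """Cost-optimal frames per subject under the classic two-stage model:
    minimizing Var = sigma_b2/m + sigma_w2/(m*n) at fixed total cost
    m * (c_subject + c_frame * n) gives
    n* = sqrt(c_subject * sigma_w2 / (c_frame * sigma_b2))."""
    return np.sqrt(c_subject * sigma_w2 / (c_frame * sigma_b2))

# e.g. counting noise twice the biological variance, a subject costing
# 50x a single frame (all numbers hypothetical):
n_star = optimal_frames_per_subject(sigma_b2=1.0, sigma_w2=2.0,
                                    c_subject=50.0, c_frame=1.0)
print(f"Optimal frames per subject: {n_star:.1f}")   # 10.0
```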
298. Mapping Underlying Dynamic Effective Connectivity In Neural Systems Using The Deconvolved Neuronal Activity. Baik, Seo Hyon, 28 September 2010.
Event-related functional magnetic resonance imaging (fMRI) has emerged as a tool for studying the functioning of the human brain. fMRI studies supply information on the underlying mechanisms of the human brain, such as how a healthy brain functions, how a brain affected by different diseases works, how a brain struggles to recover after damage, and how different stimuli can modulate this recovery process.
The variable of interest is the neuronal activity evoked by a stimulus; however, the signal quantified by the MRI scanner is the blood oxygenation level-dependent (BOLD) response, a downstream consequence of the underlying neuronal activity reflecting local changes in blood flow, volume, and oxygenation level that take place within a few seconds of changes in neuronal activity. From this point of view, neuronal-activity-based and BOLD-based studies may yield dissimilar information on the underlying mechanisms of the human brain. This dissertation is devoted primarily to estimating the underlying neuronal activity given a stimulus. In particular, we develop a method of estimating intrinsic neuronal signals and haemodynamic responses using the fact that a BOLD response is expressed as a convolution of the underlying neuronal signal and the haemodynamic response function. We also present differences between the use of estimated neuronal signals and of observed BOLD responses in investigating causal relationships among heterogeneous brain regions using an ordinary vector autoregressive model.
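The dissertation's estimation method is not detailed in the abstract; as an illustration of the forward model it names (BOLD = neuronal signal convolved with the HRF) and one generic way to invert it, here is a ridge-regularized deconvolution sketch. The HRF shape, noise level, and regularization weight are all assumed for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.stats import gamma

def hrf(t):
    """Double-gamma haemodynamic response function; these shape parameters
    give a canonical-looking peak and undershoot but are illustrative."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

# Forward model: BOLD = (neuronal signal) convolved with (HRF) + noise.
n, dt = 200, 1.0                       # 200 scans, 1 s sampling (assumed)
h = hrf(np.arange(0, 30, dt))          # 30 s of HRF support
s_true = np.zeros(n)
s_true[[20, 80, 140]] = 1.0            # brief bursts of neuronal activity
rng = np.random.default_rng(0)
bold = np.convolve(s_true, h)[:n] + 0.05 * rng.standard_normal(n)

# Deconvolution as ridge-regularized least squares on the lower-triangular
# Toeplitz convolution matrix; unregularized inversion is ill-conditioned.
H = toeplitz(np.r_[h, np.zeros(n - len(h))], np.zeros(n))
lam = 1.0
s_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ bold)
```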
299. Statistical Treatment of Gravitational Clustering Algorithm. Zhang, Yao, 01 October 2010.
In neuroscience, simultaneously recorded spike trains from multiple neurons are increasingly common; however, the computational neuroscience problem of how to quantitatively analyze such data remains a challenge. Gerstein et al. proposed a gravitational clustering algorithm (GCA) for multiple spike trains to qualitatively study interactions, in particular excitation, among multiple neurons. This thesis is mainly focused on a probabilistic treatment of GCA and a statistical treatment of Gerstein's interaction mode.

For a formal probabilistic treatment, we adopt homogeneous Poisson processes to generate the spike trains, define an interaction mode based on Gerstein's formulation, and analyze the asymptotic properties of its cluster index, the GCA distance (GCAD). Under this framework, we show how the expectation of the GCAD is related to a particular interaction mode; i.e., we prove that a time-adjusted GCAD is a reasonable cluster index for large samples. We also indicate possible stronger results, such as central limit theorems and convergence to a Gaussian process.

In our statistical work, we construct a generalized mixture model to estimate Gerstein's interaction mode. We note two key features of Gerstein's proposal: (1) each spike from each spike train was assumed to be triggered either by one previous spike from one other spike train or by the environment; (2) each spike train was transformed into a continuous longitudinal curve. Inspired by this work, we develop a Bayesian model to quantitatively estimate excitation effects in the network structure. Our approach generalizes the mixture model to accommodate the network structure through a matrix Dirichlet distribution. The network structure in our model could either approximate the directed acyclic graph of a Bayesian network or be the directed graph in a dynamic Bayesian network. This model can be applied generally to high-dimensional longitudinal data to model its dynamics. Finally, we assess the sampling properties of this model and its application to multiple spike trains by simulation.
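The GCA particle dynamics are beyond a short sketch, but the generative model assumed in the probabilistic treatment, independent homogeneous Poisson spike trains, is easy to illustrate (rates and durations below are hypothetical).

```python
import numpy as np

def poisson_spike_trains(rate, duration, n_neurons, seed=0):
    """Generate independent homogeneous Poisson spike trains by drawing
    i.i.d. exponential inter-spike intervals and taking cumulative sums."""
    rng = np.random.default_rng(seed)
    trains = []
    for _ in range(n_neurons):
        # Draw more intervals than expected (rate * duration), then truncate.
        n_max = int(rate * duration * 2 + 50)
        times = np.cumsum(rng.exponential(1.0 / rate, n_max))
        trains.append(times[times < duration])
    return trains

trains = poisson_spike_trains(rate=10.0, duration=5.0, n_neurons=8)
```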
300. The Effect of Student-Driven Projects on the Development of Statistical Reasoning. Sovak, Melissa M., 30 September 2010.
Research has shown that even if students pass a standard introductory statistics course, they often still lack the ability to reason statistically. Many instructional techniques for enhancing the development of statistical reasoning have been discussed by several authors, although there is often little to no quantitative analysis giving evidence that they produce effective results in the classroom.

The purpose of this study was to produce quantitative data to investigate the effectiveness of a particular teaching technique in enhancing students' statistical reasoning abilities. The study compared students in a traditional lecture-based introductory statistics course with students in a similar introductory course that adds a semester-long project. The project was designed to target three main focus areas of an introductory statistics course: distributions, probability, and inference. Seven sections of introductory statistics courses were used. One section at each level served as an experimental section and used a five-part project in the course curriculum. All other sections followed a typical introductory curriculum for the specific course level.

All sections completed both a pre-test and a post-test. Both assessments were designed to measure the reasoning abilities targeted by the project in order to determine whether using the project aids the development of statistical reasoning.

Additional purposes of this research were to develop assessment questions that target students' reasoning abilities and to provide a template for a semester-long data analysis project for introductory courses.

Analysis of the data was completed using methods including ANCOVA and contingency tables to investigate the effect of the project on the development of students' statistical reasoning. A qualitative analysis is also discussed to provide information on aspects of the project not covered by the quantitative analysis.

Analysis of the data indicated that project participants had higher learning gains overall when compared with the gains made by students not participating in the project. Results of the qualitative analysis also suggest that, in addition to providing larger learning gains, the projects were enjoyed by students. These results indicate that the use of projects is a valuable teaching technique for introductory statistics courses.