31

Regression approach to software reliability models

Mostafa, Abdelelah M 01 June 2006 (has links)
Many software reliability growth models have been analyzed for measuring the growth of software reliability. In this dissertation, regression methods are explored to study software reliability models. First, two parametric linear models are proposed and analyzed: the simple linear regression and the transformed linear regression corresponding to a power law process. Some software failure data sets do not follow the linear pattern. Analysis of popular real-life data showed that these contain outliers and leverage values. Linear regression methods based on least squares are sensitive to outliers and leverage values. Even though the parametric regression methods give good results in terms of error measurement criteria, these results may not be accurate due to violation of the parametric assumptions. To overcome these difficulties, nonparametric regression methods based on ranks are proposed as alternative techniques to build software reliability models. In particular, monotone regression and rank regression methods are used to evaluate the predictive capability of the models. These models are applied to real-life data sets from various projects as well as to diverse simulated data sets. Both the monotone and the rank regression methods are robust procedures that are less sensitive to outliers and leverage values. In particular, the regression approach explains predictive properties of the mean time to failure for modeling the patterns of software failure times. In order to decide on model preference and to assess predictive accuracy of the mean time between failures estimates for the defined data sets, the following error measurement criteria are used: the mean square error, mean absolute value difference, mean magnitude of relative error, mean magnitude of error relative to the estimate, median of the absolute residuals, and a measure of dispersion. The methods proposed in this dissertation, when applied to real software failure data, give less error in terms of all the measurement criteria compared to other popular methods from the literature. Experimental results show that the regression approach offers a very promising technique in software reliability growth modeling and prediction.
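The transformed linear regression for a power law process mentioned above amounts to fitting a straight line in log-log coordinates, since the cumulative failure count behaves as N(t) ≈ αt^β. The sketch below illustrates such a fit and two of the listed error criteria; it is a minimal illustration with made-up failure times, not the dissertation's code.

```python
# Sketch: transformed linear regression for a power-law process,
# log N(t) = log(alpha) + beta * log(t), fitted by least squares.
# Illustrative only: the failure times below are made up, and the
# dissertation's models and criteria may differ in detail.
import numpy as np

failure_times = np.array([12.0, 30.0, 55.0, 90.0, 140.0, 210.0, 300.0, 420.0])
n = np.arange(1, len(failure_times) + 1)        # cumulative failure count N(t)

# Least-squares fit in log-log coordinates.
X = np.column_stack([np.ones_like(failure_times), np.log(failure_times)])
coef, *_ = np.linalg.lstsq(X, np.log(n), rcond=None)
log_alpha, beta = coef
n_hat = np.exp(log_alpha) * failure_times ** beta

# Two of the evaluative criteria named in the abstract.
mse = np.mean((n - n_hat) ** 2)                 # mean square error
mmre = np.mean(np.abs(n - n_hat) / n)           # mean magnitude of relative error
print(f"alpha={np.exp(log_alpha):.3f}, beta={beta:.3f}, MSE={mse:.3f}, MMRE={mmre:.3f}")
```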
32

Multivariate Multiscale Analysis of Neural Spike Trains

Ramezan, Reza 10 December 2013 (has links)
This dissertation introduces new methodologies for the analysis of neural spike trains. Biological properties of the nervous system, and how they are reflected in neural data, can motivate specific analytic tools. Some of these biological aspects motivate multiscale frameworks, which allow for simultaneous modelling of the local and global behaviour of neurons. Chapter 1 provides the preliminary background on the biology of the nervous system and details the concept of information and randomness in the analysis of neural spike trains. It also provides the reader with a thorough literature review of the current statistical models in the analysis of neural spike trains. The material presented in the next six chapters (2-7) has been the focus of three papers, which have either already been published or are being prepared for publication. It is demonstrated in Chapters 2 and 3 that the multiscale complexity penalized likelihood method, introduced in Kolaczyk and Nowak (2004), is a powerful model for the simultaneous modelling of spike trains with biological properties from different time scales. To detect the periodic spiking activities of neurons, two periodic models from the literature, Bickel et al. (2007, 2008) and Shao and Li (2011), were combined and modified in a multiscale penalized likelihood model. The contributions of these chapters are (1) employing a powerful visualization tool, the inter-spike interval (ISI) plot, (2) combining the multiscale method of Kolaczyk and Nowak (2004) with the periodic models of Bickel et al. (2007, 2008) and Shao and Li (2011) to introduce the so-called additive and multiplicative models for the intensity function of neural spike trains, and introducing a cross-validation scheme to estimate their tuning parameters, (3) providing numerical bootstrap confidence bands for the multiscale estimate of the intensity function, and (4) studying the effect of time scale on the statistical properties of spike counts. Motivated by neural integration phenomena, as well as adjustments for the neural refractory period, Chapters 4 and 5 study the Skellam process and introduce the Skellam Process with Resetting (SPR). Introducing SPR and its application in the analysis of neural spike trains is one of the major contributions of this dissertation. This stochastic process is biologically plausible, and unlike the Poisson process, it does not suffer from a limited dependency structure. It also has multivariate generalizations for the simultaneous analysis of multiple spike trains. A computationally efficient recursive algorithm for the estimation of the parameters of SPR is introduced in Chapter 5. Except for the literature review at the beginning of Chapter 4, the rest of the material within these two chapters is original. The specific contributions of Chapters 4 and 5 are (1) introducing the Skellam Process with Resetting as a statistical tool to analyze neural spike trains and studying its properties, including all theorems and lemmas provided in Chapter 4, (2) giving two fairly standard definitions of the Skellam process (homogeneous and inhomogeneous) and proving their equivalence, (3) deriving the likelihood function based on the observable data (spike trains) and developing a computationally efficient recursive algorithm for parameter estimation, and (4) studying the effect of time scales on the SPR model. The challenging problem of multivariate analysis of neural spike trains is addressed in Chapter 6.
As far as we know, the multivariate models available in the literature suffer from limited dependency structures. In particular, modelling negative correlation among spike trains is a challenging problem. To address this issue, the multivariate Skellam distribution, as well as the multivariate Skellam process, both of which have flexible dependency structures, are developed. This chapter also introduces a multivariate version of the Skellam Process with Resetting (MSPR) and a so-called profile-moment likelihood estimation of its parameters. It generalizes the results of Chapters 4 and 5, and therefore, except for the brief literature review provided at the beginning of the chapter, the remainder of the material is original work. In particular, the contributions of this chapter are (1) introducing the multivariate Skellam distribution, (2) introducing two definitions of the multivariate Skellam process, in both homogeneous and inhomogeneous cases, and proving their equivalence, (3) introducing the Multivariate Skellam Process with Resetting (MSPR) to simultaneously model spike trains from an ensemble of neurons, and (4) utilizing the so-called profile-moment likelihood method to compute estimates of the parameters of MSPR. A discussion of the developed methodologies as well as the "next steps" is given in Chapter 7.
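As a rough illustration of the Skellam construction, the difference of two independent Poisson streams (excitatory and inhibitory inputs) can be simulated directly. The resetting mechanism sketched below, firing and resetting to zero at a threshold, is an assumed simplification in the spirit of SPR, not the thesis's exact definition.

```python
# Sketch: a Skellam process as the difference of two independent Poisson
# processes, plus a simplified "resetting" variant: the membrane-potential
# analogue fires and resets to zero when it reaches a threshold.  The
# threshold/reset mechanics here are an illustrative assumption, not the
# thesis's exact SPR construction.
import numpy as np

rng = np.random.default_rng(1)

def skellam_with_resetting(lam_exc, lam_inh, threshold, t_max, dt=1e-3):
    """Return spike times from a discretized Skellam process with resetting."""
    t, v, spikes = 0.0, 0, []
    while t < t_max:
        v += rng.poisson(lam_exc * dt) - rng.poisson(lam_inh * dt)
        if v >= threshold:           # fire and reset
            spikes.append(t)
            v = 0
        t += dt
    return np.array(spikes)

spikes = skellam_with_resetting(lam_exc=120.0, lam_inh=40.0, threshold=15, t_max=10.0)
print(f"{len(spikes)} spikes, mean ISI = {np.diff(spikes).mean():.4f} s")
```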
33

Droplet Growth in Moist Turbulent Natural Convection in a Tube

Madival, Deepak Govind January 2017 (has links) (PDF)
Droplet growth processes in a cumulus cloud, beginning from inception at sub-micron scale up to drizzle-drop size of a few hundred microns in an average duration of about half an hour, have been a topic of intense research. In particular, the role of turbulence in aiding droplet growth in clouds has been of immense interest. Motivated by this question, we have performed experiments in which turbulent natural convection coupled with phase change is set up inside a tall vertical insulated tube, by heating water located at the tube bottom and circulating cold air at the tube top. The resulting moist turbulent natural convection flow in the tube is expected to be axially homogeneous. Mixing of air masses of differing temperature and moisture content leads to condensation of water vapor into droplets, on aerosols available inside the tube. We therefore have droplets in a turbulent flow, in which phase change is coupled to turbulence dynamics, just as in clouds. We obtain a linear mean temperature profile in the tube away from its ends. Because there is a net flux of water vapor through the tube, there is a weak mean axial flow, but it is small compared to turbulent velocity fluctuations. We have experimented with two setups, the major difference between them being that in one setup, called the AC setup, the tube is open to the atmosphere at its top and hence has a higher aerosol concentration inside the tube, while the other setup, called the RINAC setup, is closed to the atmosphere and, due to the presence of aerosol filters, has a lower aerosol concentration inside the tube. Also, in the latter setup, the cold air temperature at the tube top can be reduced to sub-zero levels. In both setups, turbulence attains a stationary state and is characterized by a Rayleigh number, based on the temperature gradient inside the tube away from its ends, of about 10^7. A significant result from our experiments is that in the RINAC setup, we obtain a broadened droplet size distribution at mid-height of the tube which includes a few droplets of size 36 µm, which in real clouds marks the beginning of rapid growth of droplets due to collisions among them by virtue of their interaction with turbulence. This shows that for broadening of the droplet size distribution, the high turbulence levels prevalent in clouds are not strictly necessary. The second part of our study comprises two pieces of theoretical work. First, we deal with the problem of a large collector drop settling amidst a population of smaller droplets whose spatial distribution is homogeneous in the direction of fall. This problem is relevant to the last stage of droplet growth in clouds, when the droplets have grown large enough that they interact weakly with turbulence and begin to settle under gravity. We propose a new method to solve this problem in which the collision process is treated as a discrete stochastic process, and we reproduce Telford's solution in which collision is treated as a homogeneous Poisson process. We then show how our method may be easily generalized to non-Poisson collision processes. Second, we propose a new method to detect droplet clusters in images. This method is based on the nearest-neighbor relationship between droplets and does not employ arbitrary numerical criteria. It also has desirable invariance properties, in particular under the operation of uniform scaling of all distances and the addition/deletion of empty space in an image, which renders the proposed method robust.
The method is advantageous in dealing with highly clustered distributions, where cluster properties vary over the image and therefore averages of properties computed over the entire image could be misleading.
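To make the Poisson treatment of collisions concrete, the sketch below simulates a collector drop whose collisions with smaller droplets arrive as a Poisson process with a rate set by the swept volume. The collection efficiency, droplet number density, and fall-speed coefficient are illustrative guesses, and the rate is held fixed between collisions, unlike in a full treatment.

```python
# Sketch: collisions of a falling collector drop treated as a Poisson
# process, in the spirit of the Telford-type model in the abstract.
# Between collisions the collision rate is frozen at its current value
# (a simplification); all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

E = 0.8            # collection efficiency (assumed)
n_drop = 1e8       # background droplet number density per m^3 (assumed)
r_small = 10e-6    # radius of background droplets, m (assumed)
R = 40e-6          # initial collector radius, m (assumed)
k_fall = 1.2e8     # Stokes-like fall-speed coefficient v = k r^2, 1/(m s) (assumed)

collisions, t = 0, 0.0
while collisions < 200:
    dv = k_fall * (R**2 - r_small**2)        # relative fall speed, m/s
    lam = n_drop * E * np.pi * R**2 * dv     # Poisson collision rate, 1/s
    t += rng.exponential(1.0 / lam)          # waiting time to next collision
    R = (R**3 + r_small**3) ** (1 / 3)       # merge volumes on collision
    collisions += 1

print(f"after {collisions} collisions: R = {R * 1e6:.1f} um at t = {t:.1f} s")
```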
34

Caminhadas com memória em meios regulares e desordenados: aspectos estáticos e dinâmicos / Memory Walks in Regular and Disordered Media: Static and Dynamic Features

Cristiano Roberto Fabri Granzotti 05 March 2015 (has links)
Propomos o estudo do meio desordenado onde a caminhada determinista parcialmente autorrepulsiva (CDPA) é desenvolvida e o estudo da caminhada aleatória autorrepulsiva (SAW) em rede regular. O meio desordenado na CDPA, gerado por um processo Poissônico espacial, é caracterizado pela estatística de vizinhança e de distâncias. A estatística de vizinhança mede a probabilidade de um ponto ser $m$-ésimo vizinho mais próximo de seu $n$-ésimo vizinho mais próximo. A estatística de distâncias mede a distribuição de distância de um ponto ao seu $k$-ésimo vizinho mais próximo. No problema da estatística de distâncias, calculamos a função densidade de probabilidade (pdf) e estudamos os casos limites de alta ordem de vizinhança e alta dimensionalidade. Um caso particular dessa pdf pode verificar se um conjunto de pontos foi gerado por um processo Poissônico. Na SAW em rede regular, um caminhante escolhe aleatoriamente um sítio adjacente para ser visitado no próximo passo, mas é proibido visitar um sítio duas ou mais vezes. Desenvolvemos uma nova abordagem para estudar grandezas conformacionais por meio do produto escalar entre o vetor posição e o vetor deslocamento no $j$-ésimo passo: $\langle\vec{R}_{j}\cdot\vec{u}_{j}\rangle_{N}$. Mostramos que para $j=N$ o produto escalar é igual ao comprimento de persistência (projeção do vetor posição na direção do primeiro passo) e que converge para uma constante. Calculamos a distância quadrática média ponta-a-ponta, $\langle\vec{R}_{N}^{2}\rangle_{N}\sim N^{2\nu_{0}}$, como o somatório de $1\leq j\leq N$ do produto escalar. Os dados gerados pelo algoritmo de simulação Monte Carlo, codificado em linguagem C e paralelizado em MPI, fornecem o expoente $\nu_{0}$ da regra de escala $\langle\vec{R}_{j}\cdot\vec{u}_{j}\rangle_{N}\sim j^{2\nu_{0}-1}$, para $1\leq j\leq\Theta(N)$, próximo ao valor esperado. A partir de $\Theta(N)\approx N/2$ para rede quadrada e $\Theta(N)\approx N/3$ para rede cúbica, a caminhada torna-se mais flexível devido ao maior número de graus de liberdade disponível nos últimos passos. / We propose the study of the disordered media where the deterministic partially self-avoiding walk (DPSW) is developed, and the study of the self-avoiding random walk (SAW) in regular lattices. The disordered media in the DPSW, generated by a spatial Poissonian process, is characterized by neighborhood and distance statistics. Neighborhood statistics quantifies the probability of a point being the $m$th nearest neighbor of its $n$th nearest neighbor. Distance statistics quantifies the distance distribution of a given point to its $k$th nearest neighbor. For the distance statistics problem, we obtain the probability density function (pdf) and study the high-dimensionality and high-neighborhood-order limits. A particular case of this pdf can verify whether a point set was generated by a Poissonian process. In a SAW on a regular lattice, the walker randomly chooses an adjacent site to be visited in the next step, but is forbidden to visit a site two or more times. We developed a new approach to study conformational quantities of the SAW by means of the scalar product between the position vector and the displacement vector at the $j$th step: $\langle\vec{R}_{j}\cdot\vec{u}_{j}\rangle_{N}$. We show that for $j=N$ the scalar product is equal to the persistence length (projection of the position vector in the direction of the first step) and that it converges to a constant. We compute the mean square end-to-end distance, $\langle\vec{R}_{N}^{2}\rangle_{N}\sim N^{2\nu_{0}}$, as the summation over $1\leq j\leq N$ of the scalar product. The data generated by the Monte Carlo simulation algorithm, coded in C and parallelized with MPI, provide the exponent $\nu_{0}$ of the scaling law $\langle\vec{R}_{j}\cdot\vec{u}_{j}\rangle_{N}\sim j^{2\nu_{0}-1}$, for $1\leq j\leq\Theta(N)$, close to the expected value. Beyond $\Theta(N)\approx N/2$ for the square lattice and $\Theta(N)\approx N/3$ for the cubic lattice, the walk becomes more flexible due to the larger number of degrees of freedom available in the last steps.
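A toy version of the conformational measurement is sketched below: Rosenbluth-weighted sampling of short SAWs on the square lattice, with the weighted average of R_j · u_j. This stands in for, and is far less efficient than, the C/MPI simulations described in the abstract.

```python
# Sketch: Rosenbluth-weighted Monte Carlo estimate of <R_j . u_j>_N for
# self-avoiding walks on the square lattice.  An illustrative stand-in
# for the thesis's C/MPI simulations; Rosenbluth sampling is simple but
# becomes inefficient for long walks.
import numpy as np

rng = np.random.default_rng(0)
STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def rosenbluth_saw(n):
    """Grow one walk; return (positions, Rosenbluth weight), or (None, 0) if trapped."""
    pos = [(0, 0)]
    visited = {(0, 0)}
    weight = 1.0
    for _ in range(n):
        x, y = pos[-1]
        cand = [(x + dx, y + dy) for dx, dy in STEPS if (x + dx, y + dy) not in visited]
        if not cand:
            return None, 0.0
        weight *= len(cand)                  # Rosenbluth weight factor
        nxt = cand[rng.integers(len(cand))]
        pos.append(nxt)
        visited.add(nxt)
    return np.array(pos), weight

N, n_walks = 20, 20000
num, den = np.zeros(N), 0.0
for _ in range(n_walks):
    w, wt = rosenbluth_saw(N)
    if w is None:
        continue
    u = np.diff(w, axis=0)                         # step vectors u_j
    num += wt * np.einsum("ij,ij->i", w[1:], u)    # R_j . u_j for j = 1..N
    den += wt

print("<R_j.u_j> for j = 1..5:", np.round(num[:5] / den, 3))
```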
35

Modelagem de dados de eventos recorrentes via processo de Poisson com termo de fragilidade. / Modelling Recurrent Event Data Via Poisson Process With a Frailty Term.

Vera Lucia Damasceno Tomazella 28 July 2003 (has links)
Nesta tese são analisadas situações onde eventos de interesse podem ocorrer mais que uma vez para o mesmo indivíduo. Embora os estudos nessa área tenham recebido considerável atenção nos últimos anos, as técnicas que podem ser aplicadas a esses casos especiais ainda são pouco exploradas. Além disso, em problemas desse tipo, é razoável supor que existe dependência entre as observações. Uma das formas de incorporá-la é introduzir um efeito aleatório na modelagem da função de risco, dando origem aos modelos de fragilidade. Esses modelos, em análise de sobrevivência, visam descrever a heterogeneidade não observada entre as unidades em estudo. Os modelos estatísticos apresentados neste texto são fundamentalmente modelos de sobrevivência baseados em processos de contagem, onde é representado o problema como um processo de Poisson homogêneo e não-homogêneo com um termo de fragilidade, para o qual um indivíduo com um dado vetor de covariável x é acometido pela ocorrência de eventos repetidos. Esses modelos estão divididos em duas classes: modelos de fragilidade multiplicativos e aditivos; ambos visam responder às diferentes formas de avaliar a influência da heterogeneidade entre as unidades na função de intensidade dos processos de contagem. Até agora, a maioria dos estudos tem usado a distribuição gama para o termo de fragilidade, a qual é matematicamente conveniente. Este trabalho mostra que a distribuição gaussiana inversa tem propriedade igualmente simples à distribuição gama. Consequências das diferentes distribuições são examinadas, visando mostrar que a escolha da distribuição de fragilidade é importante. O objetivo deste trabalho é propor alguns métodos estatísticos para a análise de eventos recorrentes e verificar o efeito da introdução do termo aleatório no modelo por meio do estudo do custo, da estimação dos outros parâmetros de interesse. Também um estudo de simulação bootstrap é apresentado para fazer inferências dos parâmetros de interesse. Além disso, uma abordagem Bayesiana é proposta para os modelos de fragilidade multiplicativos e aditivos. Métodos de simulações são utilizados para avaliar as quantidades de interesse a posteriori. Por fim para ilustrar a metodologia, considera-se um conjunto de dados reais sobre um estudo dos resultados experimentais de animais cancerígenos. / In this thesis we analyse situations where events of interest may occur more than once for the same individual and where it is reasonable to assume that there is dependency among the observations. A way of incorporating this dependency is to introduce a random effect into the model, namely a frailty term in the intensity function. The statistical methods presented here are survival models based on counting processes, in which we represent the problem as a homogeneous or nonhomogeneous Poisson process with a frailty term, for which an individual with a given fixed covariate vector x experiences recurrent events. These models are divided into two classes, multiplicative and additive models, corresponding to different ways of assessing the influence of heterogeneity among individuals in the intensity function of the counting processes. Until now most studies have used a gamma frailty distribution, due to its mathematical convenience. In this work, however, we show that the inverse Gaussian frailty distribution has equally simple properties when compared to the gamma frailty distribution.
Methods for regression analysis are presented in which we verify the effect of the frailty term in the model through a study of the cost of estimating the other parameters of interest. We also use a bootstrap simulation method to make inferences on the parameters of interest. In addition, we develop a Bayesian approach for the homogeneous and nonhomogeneous Poisson process with multiplicative and additive frailty. Simulation methods are used to assess the posterior quantities of interest. To illustrate our methodology, we consider a real data set from an experimental animal carcinogenesis study.
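A minimal sketch of the multiplicative frailty mechanism follows: each subject's event count comes from a Poisson process whose rate is scaled by a mean-one frailty, drawn from either a gamma or an inverse Gaussian distribution as in the thesis. Parameter values are illustrative.

```python
# Sketch: recurrent events from a homogeneous Poisson process with a
# multiplicative frailty term, rate Z * lambda0 with E[Z] = 1.  The gamma
# and inverse-Gaussian choices mirror the two frailty distributions
# compared in the thesis; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def event_counts(frailty_sampler, lam0=2.0, t_max=5.0, n_subjects=5000):
    z = frailty_sampler(n_subjects)          # frailty with mean 1
    return rng.poisson(z * lam0 * t_max)     # events per subject on [0, t_max]

k = 2.0  # gamma shape; Var(Z) = 1/k
gamma_counts = event_counts(lambda n: rng.gamma(k, 1.0 / k, n))
ig_counts = event_counts(lambda n: rng.wald(1.0, 2.0, n))  # inverse Gaussian, mean 1

for name, c in [("gamma", gamma_counts), ("inv-Gaussian", ig_counts)]:
    print(f"{name:>12}: mean={c.mean():.2f}, var={c.var():.2f} "
          "(a plain Poisson model would have var = mean)")
```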
36

Functional clustering methods and marital fertility modelling

Arnqvist, Per January 2017 (has links)
This thesis consists of two parts. The first part considers further development of a model used for marital fertility, the Coale-Trussell fertility model, which is based on age-specific fertility rates. A new model is suggested using individual fertility data and a waiting time after pregnancies. The model is named the waiting model and can be understood as an alternating renewal process with age-specific intensities. Due to the complicated form of the waiting model and the way the data are presented, as given in the United Nations Demographic Yearbook 1965, a normal approximation is suggested, together with a normal approximation of the mean and variance of the number of births per summarized interval. A further refinement of the model was then introduced to allow for left-truncated and censored individual data, summarized as table data. The waiting model gives a better understanding of marital fertility, and a simulation study shows that it outperforms the Coale-Trussell model when it comes to estimating the fertility intensity and predicting the mean and variance of the number of births for a population. The second part of the thesis focuses on developing functional clustering methods. The methods are motivated by and applied to varved (annually laminated) sediment data from Lake Kassjön in northern Sweden. The rich but complex information (with respect to climate) in the varves, including the shapes of the seasonal patterns, the varying varve thickness, and the non-linear sediment accumulation rates, makes it non-trivial to cluster the varves. Functional representations, smoothing, and alignment are the functional data tools used to make the seasonal patterns comparable. Functional clustering is used to group the seasonal patterns into different types, which can be associated with different weather conditions. A new non-parametric functional clustering method is suggested, the Bagging Voronoi K-medoid Alignment (BVKMA) algorithm, which simultaneously clusters and aligns spatially dependent curves. BVKMA is applied to the varved lake sediment to draw inferences about climate, defined as frequencies of different weather types, over longer time periods. Furthermore, a functional model-based clustering method is proposed that clusters subjects for which both functional data and covariates are observed, allowing different covariance structures in the different clusters. The model extends the model-based functional clustering method proposed by James and Sugar (2003). An EM algorithm is derived to estimate the parameters of the model.
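As a toy stand-in for the functional clustering discussed above, the sketch below groups synthetic seasonal profiles with a naive k-medoids on L2 distances between curves evaluated on a common grid. The bagging, Voronoi partitioning, and alignment steps of BVKMA are not reproduced.

```python
# Sketch: toy functional clustering.  Each (already aligned) seasonal
# profile is sampled on a common grid; curves are compared by L2 distance
# and grouped by a naive k-medoids.  Illustrative only; BVKMA additionally
# bags Voronoi partitions and aligns curves, which is not shown here.
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(0.0, 1.0, 50)

# Synthetic "seasonal profiles": two pattern types plus noise.
curves = np.vstack(
    [np.sin(2 * np.pi * grid) + 0.2 * rng.standard_normal(50) for _ in range(30)]
    + [np.sin(4 * np.pi * grid) + 0.2 * rng.standard_normal(50) for _ in range(30)]
)

def kmedoids(X, k, n_iter=20):
    """Naive k-medoids on pairwise L2 distances between sampled curves."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(d[:, medoids], axis=1)
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:  # keep old medoid if a cluster goes empty
                medoids[j] = members[np.argmin(d[np.ix_(members, members)].sum(axis=1))]
    return np.argmin(d[:, medoids], axis=1), medoids

labels, _ = kmedoids(curves, k=2)
print("cluster sizes:", np.bincount(labels))
```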
37

Hedging no modelo com processo de Poisson composto / Hedging in compound Poisson process model

Sung, Victor Sae Hon 07 December 2015 (has links)
Interessado em fazer com que o seu capital gere lucros, o investidor ao optar por negociar ativos, fica sujeito aos riscos econômicos de qualquer negociação, pois não existe uma certeza quanto a valorização ou desvalorização de um ativo. Eis que surge o mercado futuro, em que é possível negociar contratos a fim de se proteger (hedge) dos riscos de perdas ou ganhos excessivos, fazendo com que a compra ou venda de ativos, seja justa para ambas as partes. O objetivo deste trabalho consiste em estudar os processos de Lévy de puro salto de atividade finita, também conhecido como modelo de Poisson composto, e suas aplicações. Proposto pelo matemático francês Paul Pierre Lévy, os processos de Lévy tem como principal característica admitir saltos em sua trajetória, o que é frequentemente observado no mercado financeiro. Determinaremos uma estratégia de hedging no modelo de mercado com o processo de Poisson composto via o conceito de mean-variance hedging e princípio da programação dinâmica. / Seeking to make capital generate profit, an investor who chooses to trade assets is subject to the economic risks of any negotiation, since there is no certainty about the appreciation or depreciation of an asset. Hence the futures market, where contracts can be negotiated in order to hedge against the risk of excessive losses or gains, making the purchase or sale of assets fair for both parties. The goal of this work is to study Lévy pure-jump processes of finite activity, also known as compound Poisson processes, and their applications. Proposed by the French mathematician Paul Pierre Lévy, Lévy processes have as their main feature the admission of jumps in their paths, which is often observed in financial markets. We determine a hedging strategy for a market model with a compound Poisson process via the concept of mean-variance hedging and the dynamic programming principle.
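A compound Poisson path is straightforward to simulate, which makes the model convenient for numerical experiments: draw the number of jumps, their times, and their sizes. In the sketch below, normal jump sizes are an illustrative choice, and the mean-variance hedging strategy itself is not reproduced.

```python
# Sketch: simulating a compound Poisson process X_t = sum_{i <= N_t} Y_i,
# the finite-activity pure-jump Levy model of the abstract.  The normal
# jump-size law is an illustrative assumption; the hedging strategy
# (mean-variance / dynamic programming) is not shown.
import numpy as np

rng = np.random.default_rng(5)

def compound_poisson_path(lam, t_max, jump_sampler):
    """Jump times of a Poisson(lam) process and the running sum of jump sizes."""
    n_jumps = rng.poisson(lam * t_max)
    times = np.sort(rng.uniform(0.0, t_max, n_jumps))
    jumps = jump_sampler(n_jumps)
    return times, np.cumsum(jumps)

times, path = compound_poisson_path(
    lam=3.0, t_max=1.0, jump_sampler=lambda n: rng.normal(0.0, 0.02, n)
)
print(f"{len(times)} jumps; X_T = {path[-1] if len(path) else 0.0:.4f}")
```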
38

Statistical inference for non-homogeneous Poisson process with competing risks: a repairable systems approach under power-law process / Inferência estatística para processo de Poisson não-homogêneo com riscos competitivos: uma abordagem de sistemas reparáveis sob processo de lei de potência

Almeida, Marco Pollo 30 August 2019 (has links)
In this thesis, the main objective is to study certain aspects of modeling failure-time data of repairable systems under a competing risks framework. We consider two different models and propose more efficient Bayesian methods for estimating the parameters. In the first model, we discuss inferential procedures based on an objective Bayesian approach for analyzing failures from a single repairable system under independent competing risks. We examine the scenario where a minimal repair is performed at each failure, so that each failure mode follows a power-law process (PLP) intensity. The power-law intensity is reparametrized in terms of orthogonal parameters, and we derive two objective priors, the Jeffreys prior and the reference prior. Posterior distributions based on these priors are obtained, and we prove that, in some cases, these posteriors are proper and the priors are also matching priors. In addition, in some cases, unbiased Bayesian estimators with simple closed-form expressions are derived. In the second model, we analyze data from multiple repairable systems under the presence of dependent competing risks. In order to model this dependence structure, we adopt the well-known shared frailty model, which provides a suitable theoretical basis for generating dependence between the component failure times in the dependent competing risks model. It is known that the dependence effect in this scenario influences the estimates of the model parameters. Hence, under the assumption that the cause-specific intensities follow a PLP, we propose a frailty-induced dependence approach to incorporate the dependence among the cause-specific recurrent processes. Moreover, misspecification of the frailty distribution may lead to errors when estimating the parameters of interest. Because of this, we consider a Bayesian nonparametric approach to model the frailty density, in order to offer more flexibility and to provide consistent estimates for the PLP model, as well as insights about heterogeneity among the systems. Both simulation studies and real case studies are provided to illustrate the proposed approaches and demonstrate their validity. / Nesta tese, o objetivo principal é estudar certos aspectos da modelagem de dados de tempo de falha de sistemas reparáveis sob uma estrutura de riscos competitivos. Consideramos dois modelos diferentes e propomos métodos Bayesianos mais eficientes para estimar os parâmetros. No primeiro modelo, discutimos procedimentos inferenciais baseados em uma abordagem Bayesiana objetiva para analisar falhas de um único sistema reparável sob riscos competitivos independentes. Examinamos o cenário em que um reparo mínimo é realizado em cada falha, resultando em que cada modo de falha segue adequadamente uma intensidade de lei de potência. Além disso, propõe-se que a intensidade da lei de potência seja reparametrizada em termos de parâmetros ortogonais. Então, derivamos duas prioris objetivas conhecidas como priori de Jeffreys e priori de referência. Além disso, distribuições posteriores baseadas nessas prioris serão obtidas a fim de encontrar propriedades que podem ser ótimas no sentido de que, em alguns casos, provamos que essas distribuições posteriores são próprias e que também são matching priors. Além disso, em alguns casos, estimadores Bayesianos não-viesados de forma fechada são derivados.
No segundo modelo, analisamos dados de múltiplos sistemas reparáveis sob a presença de riscos competitivos dependentes. Para modelar essa estrutura de dependência, adotamos o conhecido modelo de fragilidade compartilhada. Esse modelo fornece uma base teórica adequada para gerar dependência entre os tempos de falha dos componentes no modelo de riscos competitivos dependentes. Sabe-se que o efeito de dependência neste cenário influencia as estimativas dos parâmetros do modelo. Assim, sob o pressuposto de que as intensidades específicas de causa seguem um PLP, propomos uma abordagem de dependência induzida pela fragilidade para incorporar a dependência entre os processos recorrentes específicos da causa. Além disso, a especificação incorreta da distribuição de fragilidade pode levar a erros na estimativa dos parâmetros de interesse. Por isso, consideramos uma abordagem Bayesiana não paramétrica para modelar a densidade da fragilidade, a fim de oferecer mais flexibilidade e fornecer estimativas consistentes para o modelo PLP, bem como insights sobre a heterogeneidade entre os sistemas. São fornecidos estudos de simulação e estudos de casos reais para ilustrar as abordagens propostas e demonstrar sua validade.
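Under minimal repair, failures of a single mode form a nonhomogeneous Poisson process with power-law intensity, which can be simulated by inverting the mean function. The sketch below does this and also computes the classical maximum likelihood estimate of the growth parameter beta for time-truncated data; parameter values are illustrative, and the competing-risks and frailty structure of the thesis is omitted.

```python
# Sketch: failure times of a repairable system under minimal repair with
# power-law process intensity lambda(t) = (beta/theta) * (t/theta)**(beta-1),
# simulated by inverting the mean function m(t) = (t/theta)**beta at the
# arrival times of a unit-rate Poisson process.  Single failure mode only;
# the thesis's competing risks and frailty are not modeled here.
import numpy as np

rng = np.random.default_rng(11)

def plp_failures(beta, theta, t_max):
    """Failure times on [0, t_max]: solve m(t) = e for each unit-Poisson arrival e."""
    times, e = [], 0.0
    while True:
        e += rng.exponential(1.0)        # next arrival of a unit-rate Poisson process
        t = theta * e ** (1.0 / beta)    # invert m(t) = (t/theta)**beta
        if t > t_max:
            return np.array(times)
        times.append(t)

t_max = 50.0
fails = plp_failures(beta=1.8, theta=10.0, t_max=t_max)
beta_hat = len(fails) / np.sum(np.log(t_max / fails))  # classical MLE, time truncation
print(f"n = {len(fails)} failures, beta_hat = {beta_hat:.2f} (beta > 1: deterioration)")
```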
39

Mathematical Modeling and Analysis of Options with Jump-Diffusion Volatility

Andreevska, Irena 09 April 2008 (has links)
Several existing pricing models of financial derivatives, as well as the effects of volatility risk, are analyzed. A new option pricing model is proposed which assumes that the stock price follows a diffusion process with square-root stochastic volatility. The volatility itself is mean-reverting and driven by both a diffusion and a compound Poisson process. These assumptions better reflect the randomness and the jumps that are readily apparent when the historical volatility data of any risky asset is graphed. The European option price is modeled by a homogeneous linear second-order partial differential equation with variable coefficients. The case of underlying assets that pay continuous dividends is considered and implemented in the model, which gives the capability of extending the results to American options. An American option price model is derived and given by a non-homogeneous linear second-order partial integro-differential equation. Using Fourier and Laplace transforms, an exact closed-form solution for the price of European call/put options is obtained.
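A discretized version of such a model is easy to simulate, which helps build intuition for the jump-diffusion volatility assumption. The Euler-type sketch below uses a square-root (CIR-style) mean-reverting variance with exponentially distributed upward jumps; all parameter values and the jump-size law are illustrative assumptions, not the dissertation's specification.

```python
# Sketch: Euler discretization of a square-root mean-reverting variance
# process driven by both diffusion and compound Poisson jumps, with the
# stock following dS = mu*S*dt + sqrt(v)*S*dW.  Parameters and the
# exponential jump-size law are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(8)

T, n = 1.0, 2000
dt = T / n
mu, kappa, v_bar, sigma_v = 0.05, 3.0, 0.04, 0.3
lam_jump = 2.0                    # intensity of variance jumps (assumed)

s, v = 100.0, 0.04
for _ in range(n):
    dw_s, dw_v = rng.standard_normal(2) * np.sqrt(dt)
    jump = rng.exponential(0.02) if rng.random() < lam_jump * dt else 0.0
    v = max(v + kappa * (v_bar - v) * dt + sigma_v * np.sqrt(v) * dw_v + jump, 0.0)
    s *= np.exp((mu - 0.5 * v) * dt + np.sqrt(v) * dw_s)

print(f"S_T = {s:.2f}, v_T = {v:.4f}")
```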
40

以用字分析紅樓夢之作者問題 / A Word-Usage Analysis of the Authorship Problem of "The Dream of the Red Chamber"

王吉松 Unknown Date (has links)
摘要 《紅樓夢》是一部具有高度思想性和高度藝術性的文學鉅著，其前進思想和表現的寫作技巧，無可置疑的領先同時代的作家和作品。因為其具有獨特的藝術魅力，所以不但廣泛的流傳民間，也成功地站上世界文學之林。 《紅樓夢》雖然膾炙人口且流傳已逾兩百餘年，然而本書真正的作者是誰，卻一直是學者專家們爭論的話題。在大家的印象中，紅樓夢前八十回由清朝曹雪芹所寫，而後四十回則由高鶚所續編完成，但是研究紅樓夢的學者對於此一說法，仍抱著懷疑的態度，不斷的尋求證據以解答此問題的真相。 近年來，學者憑靠著殘存的證據，試圖以各種研究方法予以合理的推論，然時空變遷，只能恢復部分的歷史真相，無法給予完整的復原，而《紅樓夢》的作者究竟是誰，至今尚未有一個大家認同的答案。 本論文嘗試以品種比較、樣本重複性及品種涵蓋率等統計方法，配合電腦的檢索，藉由分析寫作風格及其用字習慣，以統計分析的角度來推論《紅樓夢》的作者。 關鍵詞：紅樓夢、品種問題、樣本重複性、卜瓦松過程。 / Abstract: "The Dream of the Red Chamber" is a literary masterpiece of great intellectual and artistic merit, whose progressive ideas and writing technique undoubtedly led the authors and works of its time. Because of its distinctive artistic charm, it not only spread widely among the people but also earned a place in world literature. Although the novel has circulated for more than two hundred years, the identity of its real author has remained a topic of debate among scholars and experts. It is commonly believed that Sher-Chin Tsao of the Qing dynasty wrote the first 80 chapters and that Gao-E completed the last 40, but scholars of the novel remain skeptical of this claim and continue to seek evidence to settle the question. In recent years, scholars have relied on the surviving evidence and various research methods to make reasonable inferences, but with the passage of time only part of the historical truth can be recovered, and there is still no generally accepted answer as to who the author is. This thesis attempts to infer the authorship of the novel from a statistical point of view, using methods such as species comparison, sample overlap, and species (sample) coverage, combined with computer-aided word searches, to analyze the writing style and word-usage habits. Keywords: The Dream of the Red Chamber, species problem, sample overlap, Poisson process.
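One of the coverage statistics alluded to in the abstract is the Good-Turing sample-coverage estimate, C = 1 - f1/n, where f1 is the number of words occurring exactly once among n tokens. The sketch below computes it for two tiny made-up word lists standing in for word counts from different blocks of chapters; an actual analysis of the novel would use Chinese characters or words.

```python
# Sketch: the Good-Turing sample-coverage estimate, one of the statistics
# behind the "species coverage" authorship comparison in the abstract.
# The token lists are tiny made-up stand-ins for word counts from
# different blocks of chapters.
from collections import Counter

def sample_coverage(tokens):
    """Estimated coverage C = 1 - f1/n, where f1 = number of words seen once."""
    counts = Counter(tokens)
    f1 = sum(1 for c in counts.values() if c == 1)
    return 1.0 - f1 / len(tokens)

first_80 = "red chamber dream garden jade stone dream garden tears".split()
last_40 = "red chamber exam exam court court rites rites memorial".split()

print(f"coverage, first block: {sample_coverage(first_80):.2f}")
print(f"coverage, last block:  {sample_coverage(last_40):.2f}")
```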
