1 |
Improved asymptotics for econometric estimators and tests. Marsh, Patrick W. N. January 1996.
No description available.
|
2 |
Saddlepoint Approximation for Calculating Performance of Spectrum-Sliced WDM Systems. Teotia, Seemant. 06 August 1999.
Spectrum slicing is a novel technique for the implementation of wavelength-division multiplexing (WDM). While conventional WDM systems employ laser diodes operating at discrete wavelengths as carriers for the different data channels that are to be multiplexed, spectrum-sliced systems use spectral slices of a broadband noise source for the different data channels, which makes them economically attractive.
In spectrum-sliced WDM systems with an optical preamplifier receiver there is an optimum m = BoT (Bo = optical channel bandwidth, T = bit duration) that minimizes the average number of photons per bit (Np) required at the receiver for a given error probability (Pe). Both the optimum m and the minimum Np increase as interchannel interference increases. This has been analyzed previously by using the Gaussian approximation, or by assuming that the signals at the decision point are chi-square distributed. Although the chi-square distribution is valid in the case where there is no interference, it is not valid in the presence of interference, since the interference from the neighboring channel has a smaller bandwidth than the signal. In this thesis, a different method, the saddlepoint approximation, is used to analyze this problem. While the exact analysis requires determining the probability density function (pdf) of the received signal, the saddlepoint method makes use of moment generating functions (MGFs), which have a much simpler form and do not require the convolution operations that the pdfs do.
The saddlepoint method is validated by comparing its results with those of the chi-square analysis for the case of no interchannel interference and a rectangular filter. The effect of non-rectangular spectra on receiver sensitivity is also investigated with the saddlepoint approximation. After verifying its validity, the method is applied to the interchannel interference caused by filter overlap. It is shown that for small filter overlap, use of an equivalent chi-square distribution is valid, but when the overlap becomes larger, the performance approaches that calculated using the Gaussian distribution. It is shown that there is an optimum filter overlap that maximizes the total system throughput when the total bandwidth is constrained. Operating at this optimum, the total system throughput is 135 Gbit/s when the total system bandwidth is 4.4 THz (35 nm) for a bit error rate (BER) of 10^-9. / Master of Science
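As a rough, hypothetical sketch of the kind of calculation the abstract describes, the code below applies the standard Lugannani-Rice saddlepoint tail formula to a chi-square cumulant generating function, used here as a stand-in for the integrated signal at the decision point; the threshold t and degrees of freedom k are made up, and nothing here is taken from the thesis itself.

```python
# Lugannani-Rice saddlepoint tail approximation from a CGF; illustrative only.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, norm

def K(s, k):   # chi-square(k) cumulant generating function, valid for s < 1/2
    return -0.5 * k * np.log(1.0 - 2.0 * s)

def K1(s, k):  # K'(s)
    return k / (1.0 - 2.0 * s)

def K2(s, k):  # K''(s)
    return 2.0 * k / (1.0 - 2.0 * s) ** 2

def saddlepoint_tail(t, k):
    """Approximate P(X > t) for X ~ chi-square(k); requires t != k (the mean)."""
    s_hat = brentq(lambda s: K1(s, k) - t, -50.0, 0.5 - 1e-9)  # solve K'(s) = t
    w = np.sign(s_hat) * np.sqrt(2.0 * (s_hat * t - K(s_hat, k)))
    u = s_hat * np.sqrt(K2(s_hat, k))
    return norm.sf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

t, k = 30.0, 16  # illustrative threshold and degrees of freedom
print(saddlepoint_tail(t, k))  # ~0.018
print(chi2.sf(t, k))           # exact chi-square tail for comparison
```

Because only the CGF is needed, the same routine would extend to other spectra by swapping in the appropriate MGF, which is the convenience the abstract emphasizes.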
|
3 |
Approaches to the multivariate random variables associated with stochastic processes. Yu, Jihnhee. 15 November 2004.
Stochastic compartment models are widely used in modeling processes for biological populations. The residence time has been especially useful in describing the system dynamics in these models. Direct calculation of the residence time distribution for stochastic multi-compartment models is very complicated even for relatively simple models, and is often impossible. This dissertation presents an analytical method to obtain the moment generating function for stochastic multi-compartment models and to describe the distribution of the residence times, especially for systems with nonexponential lifetime distributions.
A common method for obtaining moments of the residence time uses the coefficient matrix; however, that approach is limited in obtaining high-order moments and moments for combined compartments in a system.
In this dissertation, we first derive the bivariate moment generating function of the residence time distribution for stochastic two-compartment models with general lifetimes. It provides any order of moments and also enables us to approximate the density of the residence time using the saddlepoint approximation. The approximation method is applied to various situations including the approximation of the bivariate distribution of residence times in two-compartment models or approximations based on the truncated moment generating function.
Special attention is given to the distribution of the residence time for multi-compartment semi-Markov models. The cofactor rule and the analytic approach to the two-compartment model facilitate the derivation of the moment generating function. The properties from the embedded Markov chain are also used to extend the application of the approach.
This approach provides a complete specification of the residence time distribution based on the moment generating function and thus provides an easier calculation of high-order moments than the approach using the coefficient matrix.
Applications to drug kinetics demonstrate the simplicity and usefulness of this approach.
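For illustration, here is a minimal sketch of the saddlepoint density approximation applied to a residence time, modeled (purely hypothetically, not following the dissertation) as the sum of two independent exponential stage lifetimes; the rates are made up, and the exact hypoexponential density is included only as a check.

```python
# Saddlepoint density f(t) ~ exp(K(s) - s*t) / sqrt(2*pi*K''(s)), K'(s) = t.
import numpy as np
from scipy.optimize import brentq

lam1, lam2 = 1.0, 0.5  # illustrative stage rates

def K(s):   # CGF of Exp(lam1) + Exp(lam2), valid for s < min(lam1, lam2)
    return -np.log(1.0 - s / lam1) - np.log(1.0 - s / lam2)

def K1(s):
    return 1.0 / (lam1 - s) + 1.0 / (lam2 - s)

def K2(s):
    return 1.0 / (lam1 - s) ** 2 + 1.0 / (lam2 - s) ** 2

def saddlepoint_density(t):
    s_hat = brentq(lambda s: K1(s) - t, -200.0, min(lam1, lam2) - 1e-9)
    return np.exp(K(s_hat) - s_hat * t) / np.sqrt(2.0 * np.pi * K2(s_hat))

def exact_density(t):  # hypoexponential density of the sum, for comparison
    return lam1 * lam2 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

for t in (1.0, 3.0, 6.0):
    print(t, saddlepoint_density(t), exact_density(t))
```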
|
4 |
Four essays in finite-sample econometrics. Chen, Qian. 09 August 2007.
In this dissertation, we explore the use of three different analytical techniques for approximating the finite-sample properties of estimators and test statistics. These techniques are the saddlepoint approximation, the large-n approximation and the small-disturbance approximation. The first of these enables us to approximate the complete density or distribution function for a statistic of interest, while the other two approximations provide analytical results for the first few moments of the finite-sample distribution. We consider a range of interesting estimation and testing problems that arise in econometrics and empirical economics. Saddlepoint approximations are used to determine the distribution of the half-life estimator that arises in the empirical purchasing power parity literature, and to show that its moments are undefined. They are also applied to the problem of obtaining accurate critical points for the Anderson-Darling goodness-of-fit test. The large-n approximation is used to study the first two moments of the MLE in the binary Logit model. Finally, we use small-disturbance approximations to examine the bias and mean squared error of some commonly used price index numbers, when the latter are viewed as point estimators.
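As a rough Monte Carlo sketch of the phenomenon the large-n approximation quantifies analytically, the code below simulates the small-sample bias of the binary Logit MLE; the design, sample size, and true coefficients are hypothetical and not taken from the dissertation.

```python
# Monte Carlo illustration of finite-sample bias in the binary Logit MLE.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 40                                  # small n makes the bias visible
beta_true = np.array([0.5, 1.0])

def neg_loglik(beta, X, y):
    eta = X @ beta
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

estimates = []
for _ in range(2000):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    p = 1.0 / (1.0 + np.exp(-X @ beta_true))
    y = (rng.random(n) < p).astype(float)
    res = minimize(neg_loglik, np.zeros(2), args=(X, y), method="BFGS")
    estimates.append(res.x)

# in small samples the logit MLE tends to overshoot the true coefficients
print(np.mean(estimates, axis=0) - beta_true)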
|
5 |
Statistical inference based on saddlepoint approximations. Sabolová, Radka. January 2014.
The saddlepoint techniques for M-estimators have proved to be very accurate and robust even for small sample sizes. Based on these results, saddlepoint approximations of the density of a regression quantile and saddlepoint tests on the value of a regression quantile were derived, in both parametric and nonparametric setups. Among these, a test on the value of a regression quantile based on the asymptotic distribution of averaged regression quantiles was also proposed, and all of these tests were compared with the classical tests in a numerical study. Finally, the special case of the Kullback-Leibler divergence in the exponential family was studied, and saddlepoint approximations of the densities of the maximum likelihood estimator and the sufficient statistic were derived using this divergence.
|
6 |
Revisiting Empirical Bayes Methods and Applications to Special Types of Data. Duan, Xiuwen. 29 June 2021.
Empirical Bayes methods have been around for a long time and have a wide range of applications. These methods provide a way in which historical data can be aggregated to provide estimates of the posterior mean. This thesis revisits some of the empirical Bayes methods and develops new applications. We first look at a linear empirical Bayes estimator and apply it to ranking and symbolic data. Next, we consider Tweedie's formula and show how it can be applied to analyze a microarray dataset. The application of the formula is simplified with the Pearson system of distributions. Saddlepoint approximations enable us to generalize several results in this direction. The results show that the proposed methods perform well in applications to real data sets.
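Below is a minimal sketch of Tweedie's formula, E[theta | z] = z + sigma^2 (log f)'(z) for z | theta ~ N(theta, sigma^2), with the marginal log-density estimated by a plain Gaussian KDE rather than the Pearson-system fit the thesis uses; the simulated data are hypothetical.

```python
# Tweedie's formula with a KDE-estimated marginal density; illustrative only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
sigma = 1.0
theta = rng.normal(0.0, 2.0, size=5000)        # latent effects
z = theta + rng.normal(0.0, sigma, size=5000)  # observed values

kde = gaussian_kde(z)

def tweedie(z0, h=1e-3):
    # numerical derivative of the estimated log marginal density
    dlogf = (np.log(kde(z0 + h)[0]) - np.log(kde(z0 - h)[0])) / (2.0 * h)
    return z0 + sigma ** 2 * dlogf

for z0 in (-3.0, 0.0, 3.0):
    # with theta ~ N(0, 4) the exact posterior mean is 4*z/5
    print(z0, tweedie(z0), 0.8 * z0)
```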
|
7 |
A Multidimensional Convolutional Bootstrapping Method for the Analysis of Degradation Data. Clark, Jared M. 18 April 2022.
While Monte Carlo methods for bootstrapping are typically easy to implement, they can be quite time intensive. This work extends an established convolutional method of bootstrapping to settings where convolutions in two or more dimensions are required. The convolutional method relies on efficient computational tools rather than Monte Carlo simulation, which can greatly reduce the computation time. The proposed method is particularly well suited to the analysis of degradation data when the data are not collected on time intervals of equal length. The convolutional bootstrapping method is typically much faster than the Monte Carlo bootstrap and can produce exact results in some simple cases. Even in more complicated applications, where exact results are not feasible, mathematical bounds can be placed on the resulting distribution. With these benefits, the convolutional bootstrapping approach has been shown to be a useful alternative to the traditional Monte Carlo bootstrap.
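Here is a minimal one-dimensional sketch of the convolutional idea: the density of a total degradation accumulated over two unequal time intervals is obtained by FFT convolution of discretized increment densities instead of Monte Carlo resampling. The gamma increments and the grid are hypothetical, not the thesis's degradation model.

```python
# FFT-convolution alternative to Monte Carlo for the density of a sum.
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import gamma

dx = 0.01
x = np.arange(0.0, 20.0, dx)
f1 = gamma.pdf(x, a=2.0, scale=0.5)  # increment over a short interval
f2 = gamma.pdf(x, a=5.0, scale=0.5)  # increment over a longer interval

f_sum = fftconvolve(f1, f2)[:len(x)] * dx  # density of the summed degradation

print(f_sum.sum() * dx)  # ~1: total probability is preserved
# same-scale gammas convolve to gamma(a=7), so the result can be checked exactly
print(np.max(np.abs(f_sum - gamma.pdf(x, a=7.0, scale=0.5))))  # small
```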
|
8 |
Novel computational methods for stochastic design optimization of high-dimensional complex systems. Ren, Xuchun. 01 January 2015.
The primary objective of this study is to develop new computational methods for robust design optimization (RDO) and reliability-based design optimization (RBDO) of high-dimensional, complex engineering systems. Four major research directions, all anchored in polynomial dimensional decomposition (PDD), have been defined to meet the objective. They involve: (1) development of new sensitivity analysis methods for RDO and RBDO; (2) development of novel optimization methods for solving RDO problems; (3) development of novel optimization methods for solving RBDO problems; and (4) development of a novel scheme and formulation to solve stochastic design optimization problems with both distributional and structural design parameters.
The major achievements are as follows. Firstly, three new computational methods were developed for calculating design sensitivities of statistical moments and reliability of high-dimensional complex systems subject to random inputs. The first method represents a novel integration of PDD of a multivariate stochastic response function and score functions, leading to analytical expressions of design sensitivities of the first two moments. The second and third methods, relevant to probability distribution or reliability analysis, exploit two distinct combinations built on PDD: the PDD-SPA method, entailing the saddlepoint approximation (SPA) and score functions; and the PDD-MCS method, utilizing the embedded Monte Carlo simulation (MCS) of the PDD approximation and score functions. For all three methods developed, the statistical moments or failure probabilities and their design sensitivities are determined concurrently from a single stochastic analysis or simulation. Secondly, four new methods were developed for RDO of complex engineering systems. The methods involve PDD of a high-dimensional stochastic response for statistical moment analysis, a novel integration of PDD and score functions for calculating the second-moment sensitivities with respect to the design variables, and standard gradient-based optimization algorithms. The methods, depending on how statistical moment and sensitivity analyses are dovetailed with an optimization algorithm, encompass direct, single-step, sequential, and multi-point single-step design processes. Thirdly, two new methods were developed for RBDO of complex engineering systems. The methods involve an adaptive-sparse polynomial dimensional decomposition (AS-PDD) of a high-dimensional stochastic response for reliability analysis, a novel integration of AS-PDD and score functions for calculating the sensitivities of the failure probability with respect to design variables, and standard gradient-based optimization algorithms, resulting in a multi-point, single-step design process. The two methods, depending on how the failure probability and its design sensitivities are evaluated, exploit two distinct combinations built on AS-PDD: the AS-PDD-SPA method, entailing SPA and score functions; and the AS-PDD-MCS method, utilizing the embedded MCS of the AS-PDD approximation and score functions. In addition, a new method, named the augmented PDD method, was developed for RDO and RBDO subject to mixed design variables, comprising both distributional and structural design variables. The method comprises a new augmented PDD of a high-dimensional stochastic response for statistical moment and reliability analyses; an integration of the augmented PDD, score functions, and finite-difference approximation for calculating the sensitivities of the first two moments and the failure probability with respect to distributional and structural design variables; and standard gradient-based optimization algorithms, leading to a multi-point, single-step design process.
The innovative formulations of statistical moment and reliability analysis, design sensitivity analysis, and optimization algorithms have achieved not only highly accurate but also computationally efficient design solutions. Therefore, these new methods are capable of performing industrial-scale design optimization with numerous design variables.
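The score-function idea underlying the sensitivity results can be shown in a few lines: for X ~ N(mu, sigma^2), d/dmu E[h(X)] = E[h(X)(X - mu)/sigma^2], so the moment and its design sensitivity come from one and the same sample. The sketch below illustrates only this ingredient under hypothetical choices of the response h and the design-variable distribution; the PDD machinery of the thesis is not reproduced here.

```python
# Score-function estimate of a moment and its design sensitivity from one sample.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 1.0, 0.5
x = rng.normal(mu, sigma, size=200_000)

h = x ** 2 + np.sin(x)            # stand-in stochastic response
score = (x - mu) / sigma ** 2     # score of the normal design variable

print(h.mean())                   # estimate of E[h(X)]
print((h * score).mean())         # estimate of d/dmu E[h(X)], same sample
# analytic check: d/dmu (mu^2 + sigma^2) + d/dmu sin(mu) * exp(-sigma^2 / 2)
print(2.0 * mu + np.cos(mu) * np.exp(-sigma ** 2 / 2.0))
```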
|
9 |
Essays on Spatial Econometrics. Grahl, Paulo Gustavo de Sampaio. 22 December 2012.
This dissertation focuses on spatial stochastic processes on a lattice (Cliff & Ord-type models). My contribution consists of using Edgeworth and saddlepoint series to investigate the small-sample size and power properties of tests for detecting spatial dependence in spatial autoregressive (SAR) stochastic processes, and of proposing a new class of spatial econometric models in which the spatial dependence parameters that enter the mean structure are distinct from those in the covariance structure. This allows a clearer interpretation of the models' parameters and generalizes the set of local and global models suggested by Anselin (2003) as an alternative to the traditional Cliff & Ord models. I propose an estimation procedure for the model's parameters and derive the asymptotic distribution of the parameter estimators. The suggested model provides some insights into the structure of the commonly used mixed regressive, spatial autoregressive model with spatial autoregressive disturbances (SARAR).
The study of the small-sample properties of tests to detect spatial dependence expands on the existing literature by allowing the neighborhood structure to be a nonlinear function of the spatial dependence parameter. The use of series approximations instead of the often used Monte Carlo simulation allows a simple way to compare test properties across different neighborhood structures and to correct for size when comparing power. I obtain the power envelope for testing the presence of spatial dependence in the SAR process using the optimal invariant test statistic, which is also locally uniformly most powerful invariant (LUMPI). I have found that the LUMPI test is virtually UMP, since its power is very close to the power envelope. I suggest a practical procedure to build a test that, while not UMP, retains good power properties over a wider range of the spatial parameter than the LUMPI test. I find that power increases with sample size and with the spatial dependence parameter, which agrees with the literature. However, I call into question the consensus view that power decreases as the spatial weight matrix becomes more densely connected. This finding in the literature reflects an error of measurement, because the hypotheses being compared are at very different statistical distances from the null. After adjusting for this, the power is larger for alternative hypotheses further from the null, as one would expect.
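For orientation, here is a minimal sketch of a SAR realization y = (I - rho*W)^{-1} eps on a regular lattice, with Moran's I as a simple illustrative statistic for spatial dependence; the thesis works with optimal invariant tests and weight matrices that are nonlinear in rho, neither of which is reproduced here, and the lattice size and rho are hypothetical.

```python
# Simulate a SAR process on a lattice and compute Moran's I.
import numpy as np

rng = np.random.default_rng(3)
n_side = 10
n = n_side * n_side

# rook-contiguity weights on the lattice, row-standardized
W = np.zeros((n, n))
for i in range(n_side):
    for j in range(n_side):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n_side and 0 <= jj < n_side:
                W[i * n_side + j, ii * n_side + jj] = 1.0
W /= W.sum(axis=1, keepdims=True)

rho = 0.6
eps = rng.normal(size=n)
y = np.linalg.solve(np.eye(n) - rho * W, eps)  # SAR realization

z = y - y.mean()
moran_I = (n / W.sum()) * (z @ W @ z) / (z @ z)
print(moran_I)  # well above E[I] = -1/(n-1) under no spatial dependence
```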
|