1

Probabilistic Forecast of Wind Power Generation by Stochastic Differential Equation Models

Elkantassi, Soumaya 04 1900 (has links)
Reliable forecasting of wind power generation is crucial to the cost-optimal control of electricity generation relative to demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations into numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters, taking into account the time correlation of the data. Furthermore, we study the validity and sensitivity of the parameters for each model. We apply our models to Uruguayan wind power production, as determined by historical data and corresponding numerical forecasts for the period March 1 to May 31, 2016.
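The abstract does not give the model's exact form; as a hedged illustration, the sketch below simulates a simple mean-reverting SDE, dX_t = theta (p(t) - X_t) dt + sigma dW_t, that fluctuates around a numerical forecast p(t) using the Euler-Maruyama scheme. The forecast series and the values of theta and sigma are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

def simulate_forecast_sde(p, theta, sigma, x0, dt, rng=None):
    """Euler-Maruyama simulation of dX = theta*(p(t) - X) dt + sigma dW,
    a mean-reverting SDE fluctuating around a numerical forecast p."""
    rng = np.random.default_rng(rng)
    x = np.empty(len(p))
    x[0] = x0
    for k in range(1, len(p)):
        drift = theta * (p[k - 1] - x[k - 1])
        x[k] = x[k - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# Hypothetical numerical forecast: constant 0.5 power level over 24 hourly steps
forecast = np.full(24, 0.5)
path = simulate_forecast_sde(forecast, theta=2.0, sigma=0.1, x0=0.5, dt=1.0)
```

In a forecasting application one would simulate many such paths around the numerical forecast to obtain a probabilistic band rather than a point prediction.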
2

The Black-Scholes and Heston Models for Option Pricing

Ye, Ziqun 14 May 2013 (has links)
Stochastic volatility models for option pricing have received much study since the discovery of the non-flat implied volatility surface after the 1987 stock market crash. The most widely used stochastic volatility model was introduced by Heston (1993) because of its ability to generate volatility satisfying the market observations, being non-negative and mean-reverting, while also providing a closed-form solution for European options. However, little research has been done on using the Heston model to price early-exercise options, presumably because of the absence of a closed-form solution and the increased computational requirements, which complicate the calibration exercise. This thesis examines the performance of the Heston model versus the Black-Scholes model for American-style equity options on Microsoft and index options on the S&P 100 index. We employ a finite difference method combined with a Projected Successive Over-relaxation method for pricing an American put option under the Black-Scholes model, while an Alternating Direction Implicit method is utilized to decompose a multi-dimensional partial differential equation into several one-dimensional steps under the Heston model. For the calibration of the Heston model, we apply a two-step procedure: in the first step we apply an indirect inference method to historical stock prices to estimate the diffusion parameters under a probability measure, and we then use a least squares method to estimate the instantaneous volatility and the market risk premium, which are used to switch from working under the probability measure to working under the risk-neutral measure. We find that the option price is positively related to the mean-reversion speed and the long-term variance, is not sensitive to the market price of risk, and is negatively related to the risk-free rate and the volatility of volatility.
By comparing the European put option and the American put option under the Heston model, we observe that their implied volatilities generally follow similar patterns. However, some interesting observations can still be made from the comparison of the two put options. For the out-of-the-money category, the American and European options have rather comparable implied volatilities, with the American options' implied volatility being slightly higher. For the in-the-money category, the implied volatility of the European options is notably higher than that of the American options. We also assess the performance of the Heston model by comparing its results with those from the Black-Scholes model. We observe that overall the Heston model performs better than the Black-Scholes model. In particular, the Heston model has a tendency to underprice the in-the-money option and overprice the out-of-the-money option, whereas the Black-Scholes model is inclined to underprice both the in-the-money and the out-of-the-money options.
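As a point of reference for the Black-Scholes side of the comparison, the closed-form European put price is easy to verify in code. The snippet below is a minimal sketch with illustrative parameter values; the thesis prices American puts numerically, which this closed-form formula does not do.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bs_put(S, K, r, sigma, T):
    """Black-Scholes price of a European put with spot S, strike K,
    risk-free rate r, volatility sigma, and maturity T (in years)."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * N(-d2) - S * N(-d1)

# Illustrative at-the-money put: S = K = 100, r = 5%, sigma = 20%, one year
price = bs_put(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
```

The American put under Black-Scholes, as in the thesis, requires a numerical scheme such as finite differences with Projected SOR, since early exercise removes the closed form.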
3

ESSAYS ON SPATIAL ECONOMETRICS: THEORIES AND APPLICATIONS

Xiaotian Liu (11090646) 22 July 2021 (has links)
<div> <div> <div> <p>First Chapter: The ordinary least squares (OLS) estimator for spatial autoregressions may be consistent, as pointed out by Lee (2002), provided that each spatial unit is influenced aggregately by a significant portion of the total units. This paper presents a unified asymptotic distribution result for the properly recentered OLS estimator and proposes a new estimator based on the indirect inference (II) procedure. The resulting estimator can be used regardless of the degree of aggregate influence on each spatial unit from the other units, and it is consistent and asymptotically normal. The new estimator does not rely on distributional assumptions and is robust to unknown heteroscedasticity. Its good finite-sample performance, in comparison with existing estimators that are also robust to heteroscedasticity, is demonstrated in a Monte Carlo study.<br></p><p><br></p><p>Second Chapter: This paper proposes a new estimation procedure for the first-order spatial autoregressive (SAR) model in which the disturbance term also follows a first-order autoregression and its innovations may be heteroscedastic. The procedure is based on the principle of indirect inference, matching the ordinary least squares estimator of the two SAR coefficients (one in the outcome equation and one in the disturbance equation) with its approximate analytical expectation. The resulting estimator is shown to be consistent, asymptotically normal, and robust to unknown heteroscedasticity. Monte Carlo experiments illustrate its finite-sample performance in comparison with existing estimators based on the generalized method of moments. The new estimation procedure is applied to empirical studies of teenage pregnancy rates and Airbnb accommodation prices.<br></p><p><br></p><p>Third Chapter: This paper presents a sample selection model with spatial autoregressive interactions and studies the maximum likelihood (ML) approach to estimating it. Consistency and asymptotic normality of the ML estimator are established through the spatial near-epoch dependent (NED) properties of the selection and outcome variables. Monte Carlo simulations, based on the characteristics of a female labor supply example, show that the proposed estimator has good finite-sample performance. The new model is applied to an empirical study examining the impact of climate change on agriculture in Southeast Asia.<br></p></div></div></div>
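The SAR machinery itself is involved, but the core idea of indirect inference (choose the structural parameter whose simulated auxiliary OLS statistic matches the one computed from the data) can be sketched on a much simpler model. Below it is applied to an AR(1) coefficient with the OLS slope as the auxiliary statistic; the AR(1) stand-in, the grid search, and all numbers are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def ols_ar1(y):
    """OLS slope of y_t on y_{t-1}: the auxiliary statistic."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def simulate_ar1(theta, T, rng):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = theta * y[t - 1] + rng.standard_normal()
    return y

def indirect_inference(y_obs, grid, n_sim=200, seed=1):
    """Pick the theta whose average simulated OLS statistic is closest
    to the OLS statistic computed from the observed data."""
    target = ols_ar1(y_obs)
    best, best_gap = None, np.inf
    for theta in grid:
        rng = np.random.default_rng(seed)  # common random numbers across thetas
        sims = [ols_ar1(simulate_ar1(theta, len(y_obs), rng)) for _ in range(n_sim)]
        gap = abs(np.mean(sims) - target)
        if gap < best_gap:
            best, best_gap = theta, gap
    return best

rng = np.random.default_rng(0)
y = simulate_ar1(0.5, T=60, rng=rng)
theta_hat = indirect_inference(y, grid=np.linspace(0.1, 0.9, 33))
```

Matching the simulated expectation of the OLS statistic, rather than using OLS directly, is what corrects the small-sample bias; the paper's procedure applies this principle to the two SAR coefficients.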
4

ROBUST ESTIMATION OF THE PARAMETERS OF g-AND-h DISTRIBUTIONS, WITH APPLICATIONS TO OUTLIER DETECTION

Xu, Yihuan January 2014 (has links)
The g-and-h distributional family is generated from a relatively simple transformation of the standard normal. By varying the skewness and elongation parameters g and h, this family can approximate a broad spectrum of commonly used distributional shapes, such as the normal, lognormal, Weibull, and exponential. Consequently, it is easy to use in simulation studies and has been applied in multiple areas, including risk management, stock return analysis, and missing data imputation. The currently available methods to estimate the g-and-h parameters include the letter-value-based method (LV), numerical maximum likelihood (NMLE), and moment methods. Although these methods work well when no outliers or contamination exist, they are not resistant to even a moderate amount of contaminated observations or outliers. Moreover, NMLE is computationally expensive when the sample size is large. In this dissertation, a quantile-based least squares (QLS) estimation method is proposed to fit the g-and-h parameters, and its basic properties are derived. The QLS method is then extended to a robust version (rQLS). Simulation studies compare the performance of the QLS and rQLS methods with the LV and NMLE methods on random samples with and without outliers. In samples without outliers, QLS and rQLS estimates are comparable to LV and NMLE in terms of bias and standard error; when a moderate amount of contaminated observations or outliers is present, rQLS outperforms the non-robust methods. The flexibility of the g-and-h distribution and the robustness of the rQLS method make it a useful tool in various fields.
The boxplot (BP) method has been used for multiple-outlier detection by controlling the some-outside rate, the probability that one or more observations in an outlier-free sample fall into the outlier region. The BP method is distribution dependent: usually the random sample is assumed to be normally distributed, but this assumption may not be valid in many applications. The robustly estimated g-and-h distribution provides an alternative approach free of distributional assumptions. Simulation studies indicate that, when the normality assumption fails, the BP method based on the robustly estimated g-and-h distribution identifies a reasonable number of true outliers while controlling the number of false outliers and the some-outside rate. Another application of the robust g-and-h distribution is as an empirical null distribution in the false discovery rate method (hereafter the BH method). The performance of the BH method depends on the accuracy of the null distribution. Theoretical null distributions are often invalid when thousands, even millions, of hypothesis tests are performed simultaneously, so an empirical null distribution estimated from the data has been introduced as a substitute for the currently used empirical null methods of fitting a normal distribution or another member of the exponential family. As in the BP outlier detection method, the robustly estimated g-and-h distribution can serve as an empirical null distribution without any distributional assumptions. Several real microarray data examples are used as illustrations. The QLS and rQLS methods are useful tools for estimating the g-and-h parameters, especially rQLS, which noticeably reduces the effect of outliers on the estimates.
The robustly estimated g-and-h distributions have multiple applications where distributional assumptions would otherwise be required, such as boxplot outlier detection and the BH method. / Statistics
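The g-and-h transform itself is short enough to state in code. The sketch below draws from a g-and-h distribution by transforming standard normal draws with Y = A + B * ((exp(gZ) - 1)/g) * exp(h Z^2 / 2); the specific parameter values are illustrative.

```python
import numpy as np

def g_and_h(z, g, h, loc=0.0, scale=1.0):
    """Tukey g-and-h transform of standard normal draws z.
    g controls skewness, h controls tail elongation; g = h = 0 recovers the normal."""
    core = z if g == 0 else np.expm1(g * z) / g
    return loc + scale * core * np.exp(h * z**2 / 2.0)

# Illustrative right-skewed, heavy-tailed sample
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
x = g_and_h(z, g=0.5, h=0.1)
```

Quantile-based estimation, as in the QLS method, exploits the fact that the g-and-h quantile function is this same transform applied to normal quantiles, so sample quantiles can be matched to it by least squares.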
5

Indirect Inference of R-GARCH Models

Sampaio, Jhames Matos 01 March 2012 (has links)
Linear processes do not capture the structure of financial data, and a large variety of nonlinear models is available in the literature. The class of ARCH (Autoregressive Conditional Heteroskedastic) models was introduced by Engle (1982) in order to estimate the variance of inflation. The idea in this class is that returns are serially uncorrelated, but the volatility (conditional variance) depends on past returns. The class of GARCH (Generalized Autoregressive Conditional Heteroskedastic) models, suggested by Bollerslev (1986, 1987, 1988), can describe volatility with fewer parameters than an ARCH model. GARCH-type models are nonlinear stochastic processes; their distributions are heavy-tailed with time-dependent conditional variance, and they model volatility clustering.
Despite this reasonably good description, the way these models are built imposes limits on the heaviness of the tails of their unconditional distributions, while many studies of financial data point to tails of considerable weight. R-GARCH (Randomized Generalized Autoregressive Conditional Heteroskedastic) models were proposed by Nowicka (1998); they include the ARCH and GARCH models and allow stable innovations in place of the usual normal distribution, which better capture the heavy-tail property. Since the autocovariance function does not exist for such processes, new measures of dependence are introduced. Estimation methods and empirical analyses for the R-GARCH class, as well as for its measures of dependence, are not available in the literature and are the focus of this work.
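For orientation, a plain GARCH(1,1) with normal innovations, the base case that R-GARCH generalizes, can be simulated in a few lines. Replacing the normal draws with stable innovations would give an R-GARCH-style model; the parameter values here are illustrative.

```python
import numpy as np

def simulate_garch(omega, alpha, beta, T, rng=None):
    """Simulate a GARCH(1,1): r_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
    An R-GARCH variant would draw z_t from a stable law instead of the normal."""
    rng = np.random.default_rng(rng)
    r = np.zeros(T)
    var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(T):
        r[t] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * r[t]**2 + beta * var
    return r

# Illustrative parameters with unconditional variance omega/(1-alpha-beta) = 1
returns = simulate_garch(omega=0.1, alpha=0.1, beta=0.8, T=5000)
```

The simulated series shows volatility clustering: large squared returns tend to be followed by large squared returns, even though the returns themselves are serially uncorrelated.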
6

An econometric analysis of the production decisions of non-industrial private forest owners in France

Kere, Eric Nazindigouba 21 March 2013 (has links)
Timber production involves economic, climate, and energy issues. In France, according to data from the National Institute of Geographic and Forest Information, the biological growth of the forest is much greater than the timber harvest. The French government has therefore set a target of harvesting an additional 21 million m³ of timber by 2020 (Grenelle de l'environnement, 2007). However, French forests are majority-owned by private forest owners who have preferences both for income from timber sales and for non-timber amenities, and policies to increase timber production must take these aspects into account. The objective of this thesis is to understand the determinants of the joint production of timber and non-timber amenities in France. We first analyze the individual and regional determinants of timber supply and show that an owner's timber supply behavior can vary with the production behavior observed among peers (social effects). We then highlight mimicry in the joint timber-and-amenity production decisions of private forest owners. Finally, we analyze the inter-temporal trade-offs owners make between non-timber amenities and income from timber sales, explicitly taking price and growth expectations into account.
We estimate at €23 per year the value that the owners in our sample attach to an additional 1 m³/ha of standing timber, relative to the stock level of industrial owners, in exchange for greater amenities. One challenge of this work is to suggest ways to mobilize the forest resource that is currently not offered for sale because of a lack of involvement of private owners, whether through lack of knowledge of or interest in their forest, or because other aspects, such as non-timber amenity services, are privileged. We show that mimicry and social spillover (social) effects can be used to induce forest owners to produce more timber, and that an increase in the timber price or the introduction of a tax can encourage the decision to harvest and increase harvest intensity.
8

Comparison of Indirect Inference and the Two Stage Approach

Hernadi, Victor, Carocca Jeria, Leandro January 2022 (has links)
Parametric models are used to understand dynamical systems and predict their future behavior. Estimating a model's parameter values is difficult because there are usually many parameters and they are highly correlated. The aim of this project is to apply the method of indirect inference and the two-stage approach to estimate the drift and volatility parameters of a Geometric Brownian Motion. This was first done by estimating the parameters of a known Geometric Brownian process; then the Coca-Cola Company's stock was used for a five-year forecast to study the estimators' predictive power. The two-stage approach struggles when the data do not truly follow a Geometric Brownian Motion, but when they do, it produces highly efficient and accurate estimates. The method of indirect inference produces better estimates than the two-stage approach for data that deviate from a Geometric Brownian Motion. Therefore, indirect inference is preferable to the two-stage approach for stock price forecasting.
/ Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
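The two-stage-style closed-form estimates for a GBM follow directly from the fact that its log returns are i.i.d. normal. The sketch below recovers drift and volatility from a simulated path; the parameter values and the recovery check are illustrative assumptions, not the thesis's Coca-Cola experiment.

```python
import numpy as np

def estimate_gbm(prices, dt):
    """Closed-form ML estimates of GBM drift mu and volatility sigma,
    using log(S_{t+1}/S_t) ~ N((mu - sigma^2/2) dt, sigma^2 dt)."""
    lr = np.diff(np.log(prices))
    sigma2 = lr.var() / dt
    mu = lr.mean() / dt + sigma2 / 2.0
    return mu, np.sqrt(sigma2)

# Simulate 20 years of daily data from a GBM with known parameters, then recover them
rng = np.random.default_rng(0)
mu_true, sigma_true, dt, n = 0.08, 0.2, 1 / 252, 252 * 20
z = rng.standard_normal(n)
logret = (mu_true - sigma_true**2 / 2) * dt + sigma_true * np.sqrt(dt) * z
prices = 100 * np.exp(np.cumsum(np.insert(logret, 0, 0.0)))

mu_hat, sigma_hat = estimate_gbm(prices, dt)
```

Volatility is recovered far more precisely than drift, which is the usual situation: drift estimation error shrinks with the total time span, not the number of observations, which is one reason simulation-based methods such as indirect inference are of interest here.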
9

System Identification of continuous-time systems with quantized output data using indirect inference

Persson, Frida January 2021 (has links)
Continuous-time system identification is an important subject with applications in many fields. Many physical processes are continuous in time, so when identifying a continuous-time model we can use our insight into the system to decide its structure and interpret the parameters directly. Furthermore, in systems such as networked control systems and sensor networks, the output data is commonly quantized, meaning the data can be represented only by a limited number of distinct values. When performing continuous-time system identification with quantized output data, we therefore have errors from process and measurement noise as well as a quantization error, which makes the system parameters harder to estimate. This thesis evaluates whether accurate estimates of continuous-time systems with quantized output data can be obtained using the indirect inference method. Indirect inference is a simulation-based method that first fits a misspecified auxiliary model to the observed data; in a second step, the parameters of the true system are estimated by simulation. Experiments are performed on one linear system and two nonlinear Hammerstein systems with quantized output data. The indirect inference estimator is shown to yield accurate estimates for both the linear system and the nonlinear Hammerstein systems. On the linear system, the method performs better than the simplified refined instrumental variable method for continuous-time systems (SRIVC), which is commonly used for continuous-time system identification. It also performs significantly better than the Hammerstein simplified refined instrumental variable method for continuous-time systems (HSRIVC) on one of the nonlinear systems, and slightly better on the other.
The downside is that indirect inference is computationally expensive and time-consuming, and hence not a good choice when computation time is a critical factor.
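The quantization setting is easy to reproduce: the identification method never sees the raw output, only its rounded version. The sketch below simulates a simple first-order discrete-time system and passes its output through a uniform quantizer; the system, noise level, and step size are illustrative assumptions, not the systems studied in the thesis.

```python
import numpy as np

def quantize(y, step):
    """Uniform quantizer: round each sample to the nearest multiple of step."""
    return step * np.round(y / step)

# First-order system y_t = a*y_{t-1} + b*u_t + noise, observed only through a quantizer
rng = np.random.default_rng(0)
a, b, T = 0.9, 1.0, 2000
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a * y[t - 1] + b * u[t] + 0.05 * rng.standard_normal()

yq = quantize(y, step=0.5)  # what the identification method actually sees
```

An auxiliary model fitted directly to yq absorbs the quantization error; the indirect inference step then searches for system parameters whose simulated, quantized output reproduces that auxiliary fit.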
10

GENERAL-PURPOSE STATISTICAL INFERENCE WITH DIFFERENTIAL PRIVACY GUARANTEES

Zhanyu Wang (13893375) 06 December 2023 (has links)
<p dir="ltr">Differential privacy (DP) uses a probabilistic framework to measure the level of privacy protection of a mechanism that releases data analysis results to the public. Although DP is widely used by both government and industry, there is still a lack of research on statistical inference under DP guarantees. On the one hand, existing DP mechanisms mainly aim to extract dataset-level information instead of population-level information. On the other hand, DP mechanisms introduce calibrated noises into the released statistics, which often results in sampling distributions more complex and intractable than the non-private ones. This dissertation aims to provide general-purpose methods for statistical inference, such as confidence intervals (CIs) and hypothesis tests (HTs), that satisfy the DP guarantees. </p><p dir="ltr">In the first part of the dissertation, we examine a DP bootstrap procedure that releases multiple private bootstrap estimates to construct DP CIs. We present new DP guarantees for this procedure and propose to use deconvolution with DP bootstrap estimates to derive CIs for inference tasks such as population mean, logistic regression, and quantile regression. Our method achieves the nominal coverage level in both simulations and real-world experiments and offers the first approach to private inference for quantile regression.</p><p dir="ltr">In the second part of the dissertation, we propose to use the simulation-based "repro sample" approach to produce CIs and HTs based on DP statistics. Our methodology has finite-sample guarantees and can be applied to a wide variety of private inference problems. It appropriately accounts for biases introduced by DP mechanisms (such as by clamping) and improves over other state-of-the-art inference methods in terms of the coverage and type I error of the private inference. 
</p><p dir="ltr">In the third part of the dissertation, we design a debiased parametric bootstrap framework for DP statistical inference. We propose the adaptive indirect estimator, a novel simulation-based estimator that is consistent and corrects the clamping bias in the DP mechanisms. We also prove that our estimator has the optimal asymptotic variance among all well-behaved consistent estimators, and the parametric bootstrap results based on our estimator are consistent. Simulation studies show that our framework produces valid DP CIs and HTs in finite sample settings, and it is more efficient than other state-of-the-art methods.</p>
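The basic DP building block behind such inference, clamping the data and then adding noise calibrated to the clamped statistic's sensitivity, can be sketched for a private mean. The clamping step is exactly the source of the bias that the third part of the dissertation corrects; the parameter choices below are illustrative.

```python
import numpy as np

def dp_mean(x, lo, hi, epsilon, rng=None):
    """Release an epsilon-DP mean: clamp each value to [lo, hi], then add
    Laplace noise scaled to the clamped mean's sensitivity (hi - lo)/n."""
    rng = np.random.default_rng(rng)
    clamped = np.clip(x, lo, hi)
    sensitivity = (hi - lo) / len(x)
    return clamped.mean() + rng.laplace(scale=sensitivity / epsilon)

# Illustrative release on synthetic data
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=10_000)
release = dp_mean(data, lo=-3.0, hi=3.0, epsilon=1.0, rng=1)
```

When the clamping bounds cut into the data's support, the clamped mean is a biased estimate of the population mean even before noise is added, which is why naive parametric bootstrap inference on DP releases can undercover.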
