231 |
Envelopes of broad band processes. Van Dyke, Jozef Frans Maria, January 1981 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1981. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Bibliography: leaf 93. / by Jozef Frans Maria Van Dyke. / M.S.
|
232 |
Frequency analysis of low flows: comparison of a physically based approach and hypothetical distribution methods. Mattejat, Peter Paul, January 1985 (has links)
Several different approaches are applied in low flow frequency analysis, and each method's theory and application are explained. The methods are (1) a physically based recession model dealing with time series, (2) log-Pearson type III and mixed log-Pearson type III using annual minimum series, (3) a Double Bounded pdf using annual minimum series, and (4) Partial Duration Series applying truncated and censored flows.
Each method has a computer program for application. One-day low flow analysis was applied to 15 stations: 10 perennial streams and 5 intermittent streams. The physically based method uses the exponential baseflow recession, with duration, initial recession flow, and recharge due to incoming storms as random variables; it shows promise as an alternative to black-box methods and is appealing because it captures the effect of drought length. Log-Pearson is modified to handle zero flows by adding a point-mass probability for zero flows. Another approach to zero flows is the Double Bounded probability density function, which also includes a point-mass probability for zero flows. Maximum likelihood estimation is used to estimate distribution parameters. Partial Duration Series is applied because of the drawbacks of using only one low flow per year in an annual minimum series. Two approaches were used in the Partial Duration Series, (i) truncation and (ii) censorship, which represent different low flow populations. The parameters are estimated by maximum likelihood estimation. / M.S.
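The zero-flow adjustment described in this abstract (a point mass at zero mixed with a continuous log-Pearson III distribution for nonzero flows) can be sketched as follows. This is an illustrative reconstruction, not the thesis's own programs, and the use of `scipy.stats.pearson3` fitted to log-transformed flows is an assumption.

```python
import numpy as np
from scipy import stats

def fit_zero_adjusted_lp3(annual_minima):
    """Mixed low-flow model: P(X = 0) = p0 and X | X > 0 ~ log-Pearson type III,
    so the non-exceedance probability is F(x) = p0 + (1 - p0) * G(x)."""
    flows = np.asarray(annual_minima, dtype=float)
    p0 = float(np.mean(flows == 0.0))      # point-mass probability of a zero-flow year
    nonzero = flows[flows > 0.0]
    # fit Pearson III to the log-transformed nonzero flows (maximum likelihood)
    skew, loc, scale = stats.pearson3.fit(np.log10(nonzero))

    def cdf(x):
        g = stats.pearson3.cdf(np.log10(x), skew, loc=loc, scale=scale)
        return p0 + (1.0 - p0) * g

    return p0, cdf
```

A T-year low flow is then the quantile where F(x) = 1/T; for an intermittent stream with p0 >= 1/T, the T-year low flow is zero.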
|
233 |
Growth Mixture Modeling with Non-Normal Distributions - Implications for Class Imbalance. Han, Lu, January 2024 (has links)
Previous simulation studies on the non-normal GMM are very limited with respect to examining effects of a high degree of class imbalance. To extend previous studies, the present study aims to examine through Monte Carlo simulation the impact of a higher degree of imbalanced class proportion (i.e., 0.90/0.10) on the performance of different distribution methods (i.e., normal, t, skew-normal, and skew-t) in estimating non-normal GMMs.
To fulfill this purpose, a Monte Carlo simulation was conducted based on a two-class skew-t growth mixture model under different conditions of sample sizes (1000, 3000), class proportions (0.90/0.10, 0.50/0.50), skewness for intercept (1, 4), kurtosis (2, 6), and class separations (high, low), using the four different distributions (i.e., normal, t, skew-normal, and skew-t). Furthermore, another aim of the present study was to assess the ability of various model fit indices and LRT-based tests (i.e., AIC, BIC, sample size-adjusted BIC, LMR-LRT, LMR-adjusted LRT, and entropy) for detecting non-normal GMMs under a higher degree of class imbalance (0.90/0.10).
The results indicate that (1) the skew-t distribution is highly recommended for estimating non-normal GMMs under high-class separation with highly imbalanced class proportions of 0.90/0.10, irrespective of sample size, skewness for intercept, and kurtosis; (2) for low-class separation with high class imbalance (0.90/0.10), the normal distribution is highly recommended based on the AIC, BIC, and sample size-adjusted BIC, while the skew-t distribution is most recommended based on the entropy; (3) poor class separation significantly reduces the performance of every distribution for estimating non-normal GMMs with high class imbalance, especially for the skew-t and t GMMs; (4) insufficient sample size significantly reduces the performance of the skew-t and t distributions for estimating non-normal GMMs with high class imbalance; (5) high class imbalance (0.90/0.10) and poor class separation significantly reduce the ability of the LRT-based tests for all distributions across different conditions; (6) excessive levels of skewness for the intercept significantly decrease the ability of most fit indices for the skew-t (BIC and LRT-based tests), t (AIC, BIC, sBIC, and LRT-based tests), skew-normal (AIC and BIC), and normal (LRT-based tests) distributions when estimating non-normal GMMs with high class imbalance; (7) excessive levels of kurtosis have a partial negative effect on the performance of the skew-t (AIC, BIC, and LRT-based tests) and t (AIC, BIC, sBIC, and LRT-based tests) distributions when the level of skewness for intercept is excessive; and (8) for the highly imbalanced class proportions of 0.90/0.10, the sBIC and entropy for the skew-t distribution outperform the other fit indices under high-class separation, while the AIC, BIC, and sample size-adjusted BIC for the normal distribution and the entropy for the skew-t distribution are the most reliable fit indices under low-class separation.
|
234 |
Essays on Adaptive Experimentation: Bringing Real-World Challenges to Multi-Armed Bandits. Qin, Chao, January 2024 (has links)
Classical randomized controlled trials have long been the gold standard for estimating treatment effects. However, adaptive experimentation, especially through multi-armed bandit algorithms, aims to improve efficiency beyond traditional randomized controlled trials. While there is a vast literature on multi-armed bandits, a simple yet powerful framework in reinforcement learning, real-world challenges can hinder the successful implementation of adaptive algorithms. This thesis seeks to bridge this gap by integrating real-world challenges into multi-armed bandits.
The first chapter examines two competing priorities that practitioners often encounter in adaptive experiments: maximizing total welfare through effective treatment assignments and swiftly conducting experiments to implement population-wide treatments. We propose a unified model that simultaneously accounts for within-experiment performance and post-experiment outcomes. We provide a sharp theory of optimal performance that not only unifies canonical results from the literature on regret minimization and best-arm identification but also uncovers novel insights. Our theory reveals that familiar algorithms, such as the recently proposed top-two Thompson sampling algorithm, can optimize a broad class of objectives if a single scalar parameter is appropriately adjusted. Furthermore, we demonstrate that substantial reductions in experiment duration can often be achieved with minimal impact on total regret.
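As a concrete illustration of the algorithm family mentioned above, here is a minimal sketch of top-two Thompson sampling for Bernoulli arms, where the single scalar parameter β is the probability of playing the Thompson leader rather than a resampled challenger. Bernoulli rewards, Beta(1, 1) priors, and the simple recommendation rule are assumptions for illustration, not the chapter's actual formulation.

```python
import numpy as np

def top_two_thompson_sampling(pull, n_arms, horizon, beta=0.5, seed=0):
    """Top-two Thompson sampling with Beta(1, 1) priors on Bernoulli arms.

    pull(arm, rng) -> 0/1 reward. beta tunes how often the posterior
    leader is played instead of a challenger.
    """
    rng = np.random.default_rng(seed)
    a = np.ones(n_arms)  # posterior alpha (successes + 1)
    b = np.ones(n_arms)  # posterior beta (failures + 1)
    for _ in range(horizon):
        leader = int(np.argmax(rng.beta(a, b)))
        arm = leader
        if rng.random() >= beta:
            # challenger: resample the posterior until a different arm leads
            for _ in range(100):  # cap attempts; fall back to the leader
                cand = int(np.argmax(rng.beta(a, b)))
                if cand != leader:
                    arm = cand
                    break
        r = pull(arm, rng)
        a[arm] += r
        b[arm] += 1 - r
    return int(np.argmax(a / (a + b)))  # recommended arm
```

Setting β close to 1 recovers ordinary Thompson sampling (regret minimization), while β around 0.5 forces more exploration of the runner-up, the behavior best-arm identification needs; this is the sense in which one scalar interpolates between the two objectives.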
The second chapter studies the fundamental tension between the distinct priorities of non-adaptive and adaptive experiments: robustness to exogenous variation and efficient information gathering. We introduce a novel multi-armed bandit model that incorporates nonstationary exogenous factors, and propose deconfounded Thompson sampling, a more robust variant of the prominent Thompson sampling algorithm. We provide bounds on both within-experiment and post-experiment regret of deconfounded Thompson sampling, illustrating its resilience to exogenous variation and the delicate balance it strikes between exploration and exploitation. Our proofs leverage inverse propensity weights to analyze the evolution of the posterior distribution, a departure from established methods in the literature. Hinting that new understanding is indeed necessary, we demonstrate that a deconfounded variant of the popular upper confidence bound algorithm can fail completely.
|
235 |
Data-dependent Regret Bounds for Adversarial Multi-Armed Bandits and Online Portfolio Selection. Putta, Sudeep Raja, January 2024 (has links)
This dissertation studies data-dependent regret bounds for two online learning problems. As opposed to worst-case regret bounds, data-dependent bounds adapt to the particular sequence of losses seen by the player; thus, they offer a more fine-grained performance guarantee than worst-case bounds.
We start off with the adversarial 𝑛-armed bandit problem. In prior literature it was standard practice to assume that the loss vectors belonged to a known domain, typically [0,1]ⁿ or [-1,1]ⁿ. We make no such assumption on the loss vectors; they may be completely arbitrary. We term this problem the Scale-Free Adversarial Multi-Armed Bandit. At the beginning of the game, the player knows only the number of arms 𝑛; it knows neither the scale and magnitude of the losses chosen by the adversary nor the number of rounds 𝑇. In each round, it sees bandit feedback about the loss vectors 𝑙₁, . . . , 𝑙_𝑇 ∈ ℝⁿ. Our goal is to bound its regret as a function of 𝑛 and the norms of 𝑙₁, . . . , 𝑙_𝑇. We design a bandit Follow The Regularized Leader (FTRL) algorithm that uses a log-barrier regularizer along with an adaptive learning rate tuned via the AdaFTRL technique. We give two different regret bounds, based on the exploration parameter used. With non-adaptive exploration, our algorithm has a regret of 𝑂̃(√(𝑛𝐿₂) + 𝐿_∞√(𝑛𝑇)), and with adaptive exploration it has a regret of 𝑂(√(𝑛𝐿₂) + 𝐿_∞√(𝑛𝐿₁)). Here 𝐿_∞ = sup_𝑡 ∥𝑙_𝑡∥_∞, 𝐿₂ = Σ_{𝑡=1}^𝑇 ∥𝑙_𝑡∥₂², 𝐿₁ = Σ_{𝑡=1}^𝑇 ∥𝑙_𝑡∥₁, and the 𝑂̃ notation suppresses logarithmic factors. These are the first MAB bounds that adapt to the ∥・∥₂ and ∥・∥₁ norms of the losses. The second bound is the first data-dependent scale-free MAB bound, as 𝑇 does not directly appear in the regret. We also develop a new technique for obtaining a rich class of local-norm lower bounds for Bregman divergences. This technique plays a crucial role in our analysis for controlling the regret when using importance-weighted estimators of unbounded losses.
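The importance-weighted estimator mentioned at the end of this abstract is standard in adversarial bandits: the player observes only the played arm's loss, yet an unbiased estimate of the full loss vector is formed by dividing by the play probability. A minimal sketch:

```python
import numpy as np

def iw_estimate(observed_loss, probs, arm, n_arms):
    """Importance-weighted loss estimate: zero everywhere except the played
    arm, whose loss is inflated by 1/probs[arm]. Unbiased for the full
    loss vector under the play distribution probs."""
    est = np.zeros(n_arms)
    est[arm] = observed_loss / probs[arm]
    return est
```

The variance of this estimate blows up as play probabilities approach zero, which is one motivation for the log-barrier regularizer in the FTRL algorithm above: it keeps the play probabilities from collapsing, controlling the estimator even when losses are unbounded.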
Next, we consider the Online Portfolio Selection (OPS) problem over 𝑛 assets and 𝑇 time periods. This problem was first studied by Cover [1], who proposed the Universal Portfolio (UP) algorithm. UP is a computationally expensive algorithm with minimax optimal regret of 𝑂(𝑛 log 𝑇). There has been renewed interest in OPS due to a recently posed open problem of Van Erven et al. [2], which asks for a computationally efficient algorithm that also has minimax optimal regret. We study data-dependent regret bounds for the OPS problem that adapt to the sequence of returns seen by the investor. Our proposed algorithm, called AdaCurv ONS, modifies the Online Newton Step (ONS) algorithm of [3] using a new adaptive curvature surrogate function for the log losses −log(𝑟_𝑡ᵀ𝑤). We show that the AdaCurv ONS algorithm has 𝑂(𝑅𝑛 log 𝑇) regret, where 𝑅 is a data-dependent quantity. For sequences where 𝑅 = 𝑂(1), the regret of AdaCurv ONS matches the optimal regret. However, for some sequences 𝑅 could be unbounded, making the regret bound vacuous. To overcome this issue, we propose the LB-AdaCurv ONS algorithm, which adds a log-barrier regularizer along with an adaptive learning rate tuned via the AdaFTRL technique. LB-AdaCurv ONS has an adaptive regret of the form 𝑂(min(𝑅 log 𝑇, √(𝑛𝑇 log 𝑇))). Thus, LB-AdaCurv ONS has a worst-case regret of 𝑂(√(𝑛𝑇 log 𝑇)) while also having a data-dependent regret of 𝑂(𝑛𝑅 log 𝑇) when 𝑅 = 𝑂(1). Additionally, we show logarithmic first-order and second-order regret bounds for AdaCurv ONS and LB-AdaCurv ONS.
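Cover's Universal Portfolio referenced above can be sketched by discretization for two assets. The grid approximation below is illustrative only: the exact algorithm integrates over the full simplex, which is what makes it computationally expensive in higher dimensions.

```python
import numpy as np

def universal_portfolio_2assets(returns, grid_size=101):
    """Cover's UP via a grid of constant-rebalanced portfolios (CRPs).

    returns: iterable of per-period gross-return pairs (r0, r1).
    The trader holds the wealth-weighted average CRP each period, so the
    final wealth telescopes to the average of all CRP wealths under a
    uniform prior on the grid.
    """
    lam = np.linspace(0.0, 1.0, grid_size)
    crp = np.stack([lam, 1.0 - lam], axis=1)   # candidate weight vectors
    wealth = np.ones(grid_size)                # running wealth of each CRP
    total = 1.0
    for r in returns:
        r = np.asarray(r, dtype=float)
        w = wealth @ crp / wealth.sum()        # wealth-weighted average portfolio
        total *= float(w @ r)
        wealth *= crp @ r
    return total
```

Because final wealth is an average over all CRPs, UP's log-wealth trails the best CRP in hindsight by only 𝑂(log 𝑇) for two assets, at the cost of tracking the whole grid, which is the efficiency gap the open problem of Van Erven et al. addresses.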
Finally, we consider the problem of Online Portfolio Selection (OPS) with predicted returns. We are the first to extend the paradigm of online learning with predictions to the portfolio selection problem. In this setting, the investor has access to noisy predictions of returns for the 𝑛 assets that can be incorporated into the portfolio selection process. We propose the Optimistic Expected Utility LB-FTRL (OUE-LB-FTRL) algorithm, which incorporates the predictions into the LB-FTRL algorithm through a utility function. We explore the consistency-robustness properties of our algorithm. If the predictions are accurate, OUE-LB-FTRL's regret is 𝑂(𝑛 log 𝑇), providing a consistency guarantee. Even if the predictions are arbitrary, OUE-LB-FTRL's regret is always bounded by 𝑂(√(𝑛𝑇 log 𝑇)), providing a robustness guarantee. Our algorithm also recovers a gradual-variation regret bound for OPS. In the presence of predictions, we argue that the benchmark of static regret becomes less meaningful, so we consider the regret with respect to an investor who only uses predictions to select their portfolio (i.e., an expected utility investor). We provide a meta-algorithm called Best-of-Both-Worlds for OPS (BoB-OPS) that combines the portfolios of an expected utility investor and a purely regret-minimizing investor using a higher-level portfolio selection algorithm. By instantiating the meta-algorithm and the purely regret-minimizing investor with Cover's Universal Portfolio, we show that the regret of BoB-OPS with respect to the expected utility investor is 𝑂(log 𝑇). Simultaneously, BoB-OPS's static regret is 𝑂(𝑛 log 𝑇). This achieves a stronger form of consistency-robustness guarantee for OPS with predicted returns.
|
236 |
Stochastic optimal impulse control of jump diffusions with application to exchange rate. Unknown Date (has links)
We generalize the theory of stochastic impulse control of jump diffusions introduced by Oksendal and Sulem (2004) with milder assumptions. In particular, we assume that the original process is affected by the interventions. We also generalize the optimal central bank intervention problem including market reaction introduced by Moreno (2007), allowing the exchange rate dynamic to follow a jump diffusion process. We furthermore generalize the approximation theory of stochastic impulse control problems by a sequence of iterated optimal stopping problems which is also introduced in Oksendal and Sulem (2004). We develop new results which allow us to reduce a given impulse control problem to a sequence of iterated optimal stopping problems even though the original process is affected by interventions. / by Sandun C. Perera. / Thesis (Ph.D.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
|
237 |
Simplicial matter in discrete and quantum spacetimes. Unknown Date (has links)
A discrete formalism for General Relativity was introduced in 1961 by Tullio Regge in the form of a piecewise-linear manifold as an approximation to (pseudo-)Riemannian manifolds. This formalism, known as Regge Calculus, has primarily been used to study vacuum spacetimes, both as an approximation for classical General Relativity and as a framework for quantum gravity. However, there has been no consistent effort to include arbitrary non-gravitational sources in Regge Calculus or to examine the structural details of how this is done. This manuscript explores the underlying framework of Regge Calculus in an effort to elucidate the structural properties of the lattice geometry most useful for incorporating particles and fields. Correspondingly, we first derive the contracted Bianchi identity as a guide towards understanding how particles and fields can be coupled to the lattice so as to automatically ensure conservation of source. In doing so, we derive a Kirchhoff-like conservation principle that identifies the flow of energy and momentum as a flux through the circumcentric dual boundaries. This circuit construction arises naturally from the topological structure suggested by the contracted Bianchi identity. Using the results of the contracted Bianchi identity, we explore the generic properties of the local topology in Regge Calculus for arbitrary triangulations and suggest a first-principles definition that is consistent with the inclusion of source. This prescription for extending vacuum Regge Calculus is sufficiently general to be applicable to other approaches to discrete quantum gravity. We discuss how these findings bear on a quantized theory of gravity in which the coupling to source provides a physical interpretation for the approximate invariance principles of the discrete theory. / by Jonathan Ryan McDonald. / Vita. / Thesis (Ph.D.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
|
238 |
Modelo estocástico para estimação de produtividade potencial de milho em Piracicaba - SP. / Stochastic model for estimating potential maize productivity in Piracicaba-SP, Brazil. Assis, Janilson Pinheiro de, 26 April 2004 (has links)
Com o objetivo de propor um modelo estocástico para estimação da produtividade potencial da cultura de milho em Piracicaba (SP), em função de temperatura e radiação solar média diária, foi desenvolvido um programa computacional em linguagem Visual Basic para ambiente Windows, o qual foi utilizado em diferentes períodos agroclimáticos (datas de semeadura). Em função dos resultados obtidos, pode-se concluir que: (i) em escala diária, as variáveis temperatura média do ar e radiação solar em Piracicaba (SP) (períodos de 1917 a 2002 e 1978 a 2002, respectivamente) apresentaram distribuição normal; (ii) as distribuições normal truncada, triangular simétrica, e triangular assimétrica podem ser utilizadas no modelo estocástico para previsão da produtividade de milho; (iii) o programa computacional é uma ferramenta que viabiliza a operacionalização da estimação da produtividade potencial de milho utilizando a opinião de especialistas. / With the purpose of proposing a stochastic model for estimating potential maize productivity in Piracicaba (SP) as a function of mean daily air temperature and solar radiation, a software program was developed in Visual Basic for Windows and applied to different agroclimatic periods (sowing dates). The results allowed the following conclusions: (i) at the daily scale, the variables air temperature and solar radiation (periods from 1917 to 2002 and 1978 to 2002, respectively) presented normal distributions; (ii) the truncated normal and triangular (symmetric and asymmetric) distributions can be used in the stochastic model to forecast potential maize productivity; (iii) the software is a tool that makes estimation of potential maize productivity operational using specialist opinion.
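The stochastic approach described above, sampling weather inputs from truncated normal and triangular distributions and propagating them through a yield model, can be sketched in a few lines. The thesis's own Visual Basic program and crop model are not reproduced here; the response function and every parameter value below are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

def simulate_potential_yield(n=10_000, seed=42):
    """Monte Carlo sketch: sample daily weather from the distribution
    families named in the abstract, then apply a purely illustrative
    yield-response function."""
    rng = np.random.default_rng(seed)
    # daily mean air temperature (deg C): asymmetric triangular(min, mode, max)
    temp = rng.triangular(18.0, 24.0, 32.0, size=n)
    # daily solar radiation (MJ m^-2 day^-1): normal truncated to [5, 30]
    a, b = (5.0 - 18.0) / 4.0, (30.0 - 18.0) / 4.0
    rad = stats.truncnorm.rvs(a, b, loc=18.0, scale=4.0, size=n, random_state=rng)
    # hypothetical response: radiation-driven growth, discounted away from 25 C
    yield_ = 0.5 * rad * np.clip(1.0 - np.abs(temp - 25.0) / 15.0, 0.0, None)
    return float(np.mean(yield_)), np.percentile(yield_, [5, 95])
```

Repeating the simulation across sowing dates, with the distribution parameters elicited from specialists as the abstract describes, yields a productivity distribution rather than a single point estimate.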
|
239 |
Bayesian analysis of multinomial regression with gamma utilities. / CUHK electronic theses & dissertations collection. January 2012 (has links)
多項式回歸模型可用來模擬賽馬過程。不同研究者對模型中馬匹的效用的分佈採取不同的假設,包括指數分佈,它與Harville 模型(Harville, 1973)相同,伽馬分佈(Stern, 1990)和正態分佈(Henery, 1981)。Harville 模型無法模擬賽馬過程中競爭第二位和第三位等非冠軍位置時增加的隨機性(Benter, 1994)。Stern 模型假設效用服從形狀參數大於一的伽馬分佈,Henery 模型假設效用服從正態分佈。Bacon-Shone,Lo 和 Busche(1992),Lo 和 Bacon-Shone(1994)和 Lo(1994)研究證明了相較於Harville 模型,這兩個模型能更好地模擬賽馬過程。本文利用賽馬歷史數據,採用貝葉斯方法對賽馬結果中馬匹勝出的概率進行預測。本文假設效用服從伽馬分佈。本文針對多項式回歸模型,提出一個在Metropolis-Hastings 抽樣方法中選擇提議分佈的簡便方法。此方法由Scott(2008)首次提出。我們在似然函數中加入服從伽馬分佈的效用作為潛變量。通過將服從伽馬分佈的效用變換成一個服從Mihram(1975)所描述的廣義極值分佈的隨機變量,我們得到一個線性回歸模型。由此線性模型我們可得到最小二乘估計,本文亦討論最小二乘估計的漸進抽樣分佈。我們利用此估計的方差得到Metropolis-Hastings 抽樣方法中的提議分佈。最後,我們可以得到回歸參數的後驗分佈樣本。本文用香港賽馬數據做模擬賽馬投資以檢驗本文提出的估計方法。 / In multinomial regression models of racetrack betting, different distributions of utilities have been proposed: the exponential distribution, which is equivalent to Harville's model (Harville, 1973), the gamma distribution (Stern, 1990), and the normal distribution (Henery, 1981). Harville's model has the drawback that it ignores the increasing randomness of the competitions for second and third place (Benter, 1994). Stern's model using gamma utilities with shape parameter greater than 1 and Henery's model using normal utilities have been shown to produce a better fit (Bacon-Shone, Lo and Busche, 1992; Lo and Bacon-Shone, 1994; Lo, 1994). In this thesis, we use Bayesian methodology to predict the winning probabilities of horses from historical observed data. The gamma utility is adopted throughout the thesis. A convenient method of selecting Metropolis-Hastings proposal distributions for multinomial models is developed; a similar method was first exploited by Scott (2008). We augment the gamma-distributed utilities in the likelihood as latent variables. The gamma utility is transformed to a variable that follows the generalized extreme value distribution described by Mihram (1975), through which we obtain a linear regression model.
Least squares estimate of the parameters is easily obtained from this linear model. The asymptotic sampling distribution of the least squares estimate is discussed. The Metropolis-Hastings proposal distribution is generated conditioning on the variance of the estimator. Finally, samples from the posterior distribution of regression parameters are obtained. The proposed method is tested through betting simulations using data from Hong Kong horse racing market. / Detailed summary in vernacular field only. / Xu, Wenjun. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 46-48). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts also in Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Hong Kong Horse Racing Market and Models in Horse Racing --- p.4 / Chapter 2.1 --- Hong Kong Horse Racing Market --- p.4 / Chapter 2.2 --- Models in Horse Racing --- p.6 / Chapter 3 --- Metropolis-Hastings Algorithm in Multinomial Regression with Gamma Utilities --- p.10 / Chapter 3.1 --- Notations and Posterior Distribution --- p.10 / Chapter 3.2 --- Metropolis-Hastings Algorithm --- p.11 / Chapter 4 --- Application --- p.15 / Chapter 4.1 --- Variables --- p.16 / Chapter 4.2 --- Markov Chain Simulation --- p.17 / Chapter 4.3 --- Model Selection --- p.27 / Chapter 4.4 --- Estimation Result --- p.31 / Chapter 4.5 --- Betting Strategies and Comparisons --- p.33 / Chapter 5 --- Conclusion --- p.41 / Appendix A --- p.43 / Appendix B --- p.44 / Bibliography --- p.46
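The Harville/Stern model family discussed in this abstract can be illustrated by simulation: give horse i a gamma-distributed running time with rate θᵢ, and the winner is the horse with the smallest time. Shape 1 (exponential utilities) recovers Harville's winning probabilities θᵢ/Σⱼθⱼ, while larger shapes give Stern's model. This is a sketch of the model structure only, not the thesis's Metropolis-Hastings sampler.

```python
import numpy as np

def win_probs_gamma(theta, shape=1.0, n_sims=200_000, seed=0):
    """Estimate winning probabilities under a Stern-type model: horse i's
    running time ~ Gamma(shape, scale=1/theta_i); the smallest time wins.
    shape=1 reduces to Harville's model."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    times = rng.gamma(shape, 1.0 / theta, size=(n_sims, theta.size))
    winners = times.argmin(axis=1)
    return np.bincount(winners, minlength=theta.size) / n_sims
```

Changing the shape parameter alters the induced probabilities for second and third place while leaving the ordering of abilities intact, which is how Stern's model addresses the extra randomness in non-winning positions that Harville's model misses.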
|
240 |
"Volatility smile" of Hang Seng Index options: unlocking market information. January 1997 (has links)
by Wan Chi-Keung. / Thesis (M.B.A.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 33-34). / TABLE OF CONTENTS / ABSTRACT --- p.ii / TABLE OF CONTENTS --- p.iii / LIST OF FIGURES --- p.iv / LIST OF TABLE --- p.v / ACKNOWLEDGEMENT --- p.vi / Chapter / Chapter I. --- INTRODUCTION --- p.1 / Chapter II. --- VOLATILITY SMILE --- p.4 / The Black-Scholes Model --- p.6 / The Implied Tree --- p.7 / The Implied Probability Distribution --- p.10 / Chapter III. --- LITERATURE REVIEW --- p.12 / Chapter IV. --- RECOVERING PROBABILITY DISTRIBUTIONS OF HSI --- p.18 / Shimko's Method --- p.19 / Data Selection --- p.22 / Probability Distributions of HSI --- p.23 / Chapter V. --- CONCLUDING REMARKS --- p.27 / APPENDIX --- p.30 / BIBLIOGRAPHY --- p.33
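The implied probability distributions this thesis recovers rest on the Breeden-Litzenberger relation, pdf(K) = e^{rT} ∂²C/∂K², which Shimko's method applies after smoothing the volatility smile across strikes. A finite-difference sketch, using Black-Scholes prices with a flat volatility as stand-in data (so the recovered density should be lognormal):

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call price (vectorized over strikes K)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_density(strikes, calls, r, T):
    """Risk-neutral pdf via Breeden-Litzenberger: e^{rT} * d2C/dK2,
    approximated with a central second difference on a uniform strike
    grid. Returns the density at strikes[1:-1]."""
    dK = strikes[1] - strikes[0]
    d2C = (calls[2:] - 2.0 * calls[1:-1] + calls[:-2]) / dK**2
    return np.exp(r * T) * d2C
```

With actual Hang Seng Index option quotes, implied volatility varies with strike (the smile), so the recovered density departs from lognormal; that departure is the market information the thesis unlocks.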
|