111

Principal typings for interactive Ruby programming

Hnativ, Andriy 16 December 2009 (has links)
A novel and promising method of software development is the interactive style of development, in which code is written and tested incrementally at the same time. Interpreted dynamic languages such as Ruby, Python, and Lua support this interactive development style. However, because they lack semantic analysis as part of a compilation phase, they do not provide type-checking. The programmer is only informed of type errors when they are encountered during execution of the program, far too late and often at a less informative location in the code. We introduce a typing system for Ruby in which types are determined before execution by inferring principal typings. This system overcomes the obstacles that interactive and dynamic program development imposes on type checking, yielding an effective type-checking facility for dynamic programming languages. Our development is embodied as an extension to irb, the Ruby interactive mode, allowing us to evaluate principal typings for interactive development.
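The core idea, checking types before any code runs rather than at the point of failure, can be illustrated with a toy static checker. The sketch below is a hypothetical Python illustration, not the thesis's principal-typings system or its irb extension: it infers types for a tiny expression language and reports a mismatch before execution.

```python
# Minimal sketch: determine types before execution for a tiny expression
# language. All names here are illustrative, not from the thesis.
from dataclasses import dataclass

@dataclass
class Lit:        # an integer or string literal
    value: object

@dataclass
class Add:        # e1 + e2, only well-typed for two ints or two strings
    left: object
    right: object

def infer(expr):
    """Return 'int' or 'str', or raise TypeError before the program runs."""
    if isinstance(expr, Lit):
        return type(expr.value).__name__          # 'int' or 'str'
    if isinstance(expr, Add):
        lt, rt = infer(expr.left), infer(expr.right)
        if lt != rt:
            raise TypeError(f"cannot add {lt} and {rt}")
        return lt
    raise TypeError(f"unknown expression {expr!r}")

# A purely dynamic language would only report this when the inner Add is
# executed; static inference flags it before any code runs.
program = Add(Lit(1), Add(Lit("a"), Lit(2)))
try:
    infer(program)
except TypeError as err:
    print("type error found before execution:", err)
```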
112

Parametric inference for time series based upon goodness-of-fit

Woo, Pao-sun. January 2001 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2002. / Includes bibliographical references (leaves 127-132).
113

Statistical inference on some long memory volatility models

Li, Muyi., 李木易. January 2011 (has links)
Published or final version / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
114

Statistical inference of a threshold model in extreme value analysis

Lee, David., 李大為. January 2012 (has links)
In many data sets, a mixture distribution formulation applies when it is known that each observation comes from one of the underlying categories. Even if there are no apparent categories, an implicit categorical structure may justify a mixture distribution. This thesis concerns the modeling of extreme values in such a setting within the peaks-over-threshold (POT) approach. Specifically, the traditional POT modeling using the generalized Pareto distribution is augmented in the sense that, in addition to threshold exceedances, data below the threshold are also modeled by means of a mixture exponential distribution. In the first part of this thesis, the conventional frequentist approach is applied for data modeling. In view of the mixture nature of the problem, the EM algorithm is employed for parameter estimation, where closed-form expressions for the iterates are obtained. A simulation study is conducted to confirm the suitability of this method, and the observed increase in standard error due to the variability of the threshold is addressed. The model is applied to two real data sets, and it is demonstrated how computation time can be reduced through a multi-level modeling procedure. With the fitted density, it is possible to derive many useful quantities such as return periods and levels, value-at-risk, expected tail loss and bounds for ruin probabilities. A likelihood ratio test is then used to justify model choice against the simpler model where the thin-tailed distribution is a homogeneous exponential. The second part of the thesis deals with a fully Bayesian approach to the same model. It starts with the application of the Bayesian idea to a special case of the model where a closed-form posterior density is computed for the threshold parameter, which serves as an introduction. This is extended to the threshold mixture model by the use of the Metropolis-Hastings algorithm to simulate samples from a posterior distribution known up to a normalizing constant. The concept of depth functions is proposed in multidimensional inference, where a natural ordering does not exist. Such methods are then applied to real data sets. Finally, the issue of model choice is considered through the use of the posterior Bayes factor, a criterion that stems from the posterior density. / Published or final version / Statistics and Actuarial Science / Master / Master of Philosophy
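As a concrete illustration of the closed-form EM iterates mentioned above, the following hypothetical Python sketch fits a two-component exponential mixture, the kind of sub-threshold component described in the abstract; the thesis's full threshold/GPD model involves more structure than is shown here.

```python
# Minimal sketch (assumptions: two exponential components, fixed iteration
# count) of the closed-form EM updates for a mixture exponential density.
import numpy as np

def em_exp_mixture(x, n_iter=200):
    """Fit p*Exp(rate1) + (1-p)*Exp(rate2) to positive data x by EM."""
    x = np.asarray(x, dtype=float)
    p, r1, r2 = 0.5, 1.0 / x.mean(), 2.0 / x.mean()     # crude starting values
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        d1 = p * r1 * np.exp(-r1 * x)
        d2 = (1 - p) * r2 * np.exp(-r2 * x)
        w = d1 / (d1 + d2)
        # M-step: closed-form updates for the weight and the two rates
        p = w.mean()
        r1 = w.sum() / (w * x).sum()
        r2 = (1 - w).sum() / ((1 - w) * x).sum()
    return p, r1, r2

rng = np.random.default_rng(0)
data = np.concatenate([rng.exponential(1.0, 700), rng.exponential(5.0, 300)])
print(em_exp_mixture(data))
```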
115

Essays on Causal Inference for Public Policy

Zajonc, Tristan 07 August 2012 (has links)
Effective policymaking requires understanding the causal effects of competing proposals. Relevant causal quantities include proposals' expected effect on different groups of recipients, the impact of policies over time, the potential trade-offs between competing objectives, and, ultimately, the optimal policy. This dissertation studies causal inference for public policy, with an emphasis on applications in economic development and education. The first chapter introduces Bayesian methods for time-varying treatments that commonly arise in economics, health, and education. I present methods that account for dynamic selection on intermediate outcomes and can estimate the causal effect of arbitrary dynamic treatment regimes, recover the optimal regime, and characterize the set of feasible outcomes under different regimes. I demonstrate these methods through an application to optimal student tracking in ninth and tenth grade mathematics. The proposed estimands characterize outcomes, mobility, equity, and efficiency under different tracking regimes. The second chapter studies regression discontinuity designs with multiple forcing variables. Leading examples include education policies where treatment depends on multiple test scores and spatial treatment discontinuities arising from geographic borders. I give local linear estimators for both the conditional effect along the boundary and the average effect over the boundary. For two-dimensional RD designs, I derive an optimal, data-dependent bandwidth selection rule for the conditional effect. I demonstrate these methods using a summer school and grade retention example. The third chapter illustrates the central role of persistence in estimating and interpreting value-added models of learning. Using data from Pakistani public and private schools, I apply dynamic panel methods that address three key empirical challenges: imperfect persistence, unobserved student heterogeneity, and measurement error. After correcting for these difficulties, the estimates suggest that only a fifth to a half of learning persists between grades and that private schools increase average achievement by 0.25 standard deviations each year. In contrast, value-added models that assume perfect persistence yield severely downwardly biased and occasionally wrong-signed estimates of the private school effect.
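For the regression discontinuity material in the second chapter, a one-dimensional sketch conveys the basic local linear estimator; the thesis itself treats multiple forcing variables and effects along and over a boundary. The names and simulated data below are illustrative assumptions, not taken from the dissertation.

```python
# Minimal sketch: sharp RD effect at a cutoff via local linear regression
# with a triangular kernel, fit separately on each side of the cutoff.
import numpy as np

def local_linear_rd(x, y, cutoff=0.0, h=1.0):
    """Treatment effect at the cutoff: difference of the two local-linear fits."""
    def fit_at_cutoff(mask):
        xs, ys = x[mask], y[mask]
        w = np.clip(1 - np.abs(xs - cutoff) / h, 0, None)    # triangular kernel
        X = np.column_stack([np.ones_like(xs), xs - cutoff])
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ ys)
        return beta[0]                                        # intercept = fit at cutoff
    above = x >= cutoff
    return fit_at_cutoff(above) - fit_at_cutoff(~above)

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 2000)
y = 0.5 * x + 1.0 * (x >= 0) + rng.normal(0, 0.3, x.size)    # true effect = 1.0
print(local_linear_rd(x, y, h=0.8))
```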
116

The Method of Batch Inference for Multivariate Diffusions

Lysy, Martin January 2012 (has links)
Diffusion processes have been used to model a variety of continuous-time phenomena in finance, engineering, and the natural sciences. However, parametric inference has long been complicated by an intractable likelihood function, the solution of a partial differential equation. For many multivariate models, the most effective inference approach involves a large amount of missing data for which the typical Gibbs sampler can be arbitrarily slow. On the other hand, a recent method of joint parameter and missing data proposals can lead to a radical improvement, but its acceptance rate scales exponentially with the number of observations. We consider here a method of dividing the inference process into separate data batches, each small enough to benefit from joint proposals. A conditional independence argument allows batch-wise missing data to be sequentially integrated out. Although in practice the integration is only approximate, the batch posterior and the exact parameter posterior can often have similar performance under a frequentist evaluation, for which the true parameter value is fixed. We present an example using Heston's stochastic volatility model for financial assets, but much of the methodology extends to hidden Markov and other state-space models. / Statistics
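The conditional independence argument behind batching can be illustrated for a discretely observed Markov diffusion: the log-likelihood is a sum of transition terms, so it splits exactly across batches that share only their boundary observations. The Python sketch below uses an Euler approximation to the transition density purely for illustration and is not the thesis's method, which works with joint parameter and missing-data proposals within each batch.

```python
# Minimal sketch: for a discretely observed diffusion dX = mu*dt + sigma*dW,
# the (Euler-approximate) log-likelihood is a sum over transitions, so it
# decomposes over batches that overlap only at their boundary points.
import numpy as np

def euler_loglik(x, dt, mu, sigma):
    """Sum of Gaussian Euler transition log-densities for a stretch of observations."""
    incr = np.diff(x)
    var = sigma**2 * dt
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (incr - mu * dt) ** 2 / (2 * var))

def batched_loglik(x, dt, mu, sigma, batch_size=50):
    """Split observations into batches sharing boundary points; the per-batch
    terms add up to the full log-likelihood."""
    total = 0.0
    for start in range(0, len(x) - 1, batch_size):
        batch = x[start:start + batch_size + 1]    # include the right boundary
        total += euler_loglik(batch, dt, mu, sigma)
    return total

rng = np.random.default_rng(2)
dt, mu, sigma = 0.01, 0.3, 1.2
path = np.cumsum(np.r_[0.0, mu * dt + sigma * np.sqrt(dt) * rng.normal(size=500)])
print(euler_loglik(path, dt, mu, sigma), batched_loglik(path, dt, mu, sigma))
```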
117

Aging and inferencing ability : an examination of factors underlying text comprehension

Hancock, Holly Elizabeth 08 1900 (has links)
No description available.
118

Search, Inference and Opponent Modelling in an Expert-Caliber Skat Player

Long, Jeffrey Richard Unknown Date
No description available.
119

Bayesian Methods for On-Line Gross Error Detection and Compensation

Gonzalez, Ruben Unknown Date
No description available.
120

Nonlinear estimation techniques for target tracking

McGinnity, Shaun Joseph January 1998 (has links)
No description available.
