Search results for subject: "shape constraints"
1. Essays in Efficiency Analysis. Demchuk, Pavlo. 16 September 2013.
Today the standard procedure for analyzing the impact of environmental factors on the productive efficiency of a decision making unit is a two-stage approach: one first estimates efficiency and then uses regression techniques to explain the variation in efficiency across units. It has been argued that this method may produce doubtful results that distort what the data represent. In order to introduce economic intuition and to mitigate the problem of omitted variables, we introduce a matching procedure to be applied before the efficiency analysis. By working with comparable decision making units we implicitly control for the environmental factors while at the same time cleaning the sample of outliers. The main goal of the first part of the thesis is to compare a procedure that includes matching prior to the efficiency analysis with the straightforward two-stage procedure without matching, as well as with the alternative of a conditional efficiency frontier. We conduct a Monte Carlo study with different model specifications and, despite the reduced sample size, which may create some complications at the computational stage, we find the newly obtained results economically meaningful. We also compare the results obtained by the new method with those previously produced by Demchuk and Zelenyuk (2009), who compare the efficiencies of Ukrainian regions, and we find some differences between the two approaches.
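The match-first pipeline described above can be illustrated with a short sketch: pair each unit from one group with its nearest neighbour on an environmental variable, then compute efficiency scores on the matched sample only. This is a minimal illustration under stated assumptions, not the thesis's exact procedure; the nearest-neighbour rule, the single input and output, and the output-oriented constant-returns DEA model below are all stand-ins chosen for concreteness.

```python
# Sketch of "match first, then estimate efficiency": pair each treated DMU
# with its nearest untreated neighbour on an environmental variable z, then
# compute efficiency scores on the matched sample only. The matching rule,
# the single input/output, and the output-oriented CRS DEA model are
# assumptions for illustration, not the thesis's procedure.
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y):
    """Output-oriented CRS DEA: for each unit o, maximize phi subject to
    sum_j lam_j x_j <= x_o, sum_j lam_j y_j >= phi * y_o, lam >= 0."""
    n, m = X.shape                      # n units, m inputs
    s = Y.shape[1]                      # s outputs
    scores = np.empty(n)
    for o in range(n):
        c = np.r_[-1.0, np.zeros(n)]    # decision vector [phi, lam]; max phi
        A_in = np.c_[np.zeros(m), X.T]  # inputs:  sum_j lam_j x_j <= x_o
        A_out = np.c_[Y[o], -Y.T]       # outputs: phi*y_o - sum_j lam_j y_j <= 0
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[X[o], np.zeros(s)],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[o] = res.x[0]            # phi >= 1; phi = 1 is on the frontier
    return scores

rng = np.random.default_rng(1)
n = 60
z = rng.normal(size=n)                  # environmental variable
treated = rng.random(n) < 0.5           # two groups of DMUs
x = np.exp(rng.normal(size=(n, 1)))     # one input
y = x[:, 0] ** 0.7 * np.exp(0.3 * z + rng.normal(0, 0.2, n))

# Nearest-neighbour matching of treated units to untreated units on z.
t_idx, u_idx = np.where(treated)[0], np.where(~treated)[0]
matches = u_idx[np.abs(z[t_idx, None] - z[u_idx]).argmin(axis=1)]
sample = np.r_[t_idx, matches]          # matched, comparable subsample

scores = dea_output_efficiency(x[sample], y[sample, None])
```

Running the efficiency stage on the matched subsample rather than the full data is what implicitly controls for the environmental variable, at the cost of the smaller sample noted in the abstract.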
The second part deals with an empirical study of electricity generating power plants before and after the market reform in Texas. We compare private, public and municipal power generators using the method introduced in the first part. We find that municipal power plants mostly operate inefficiently, while private and public generators are very close in their production patterns. The new method allows us to compare decision making units from different groups, which may have different objectives and productive incentives. Although at a certain point after the reform private generators opted not to provide their data to the regulator, we were able to construct three different data samples comprising two and three groups of generators and to analyze their production and efficiency patterns.
In the third chapter we propose a semiparametric estimation approach that enforces shape constraints such as monotonicity and concavity. Penalized splines are used to maintain the shape constraints via nonlinear transformations of the spline basis expansion. Large sample properties, an effective algorithm and a method of smoothing parameter selection are presented, and Monte Carlo simulations and empirical examples demonstrate the finite sample performance and the usefulness of the proposed method.
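The abstract does not spell out the transformation, but the following sketch shows one standard construction in this spirit (close to shape-constrained P-splines): monotonicity is obtained by reparameterizing B-spline coefficients as a cumulative sum of exponentials, since nondecreasing B-spline coefficients yield a nondecreasing spline. The basis size, second-difference penalty and optimizer are assumptions made for the illustration.

```python
# Minimal sketch of a monotone penalized-spline fit via a nonlinear
# coefficient transformation; an illustration of the general idea, not the
# thesis's exact estimator. Basis size, penalty and optimizer are assumed.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.log1p(5 * x) + rng.normal(0, 0.1, x.size)     # monotone truth + noise

# Cubic B-spline basis with clamped ends on an equally spaced knot grid.
k, n_inner = 3, 15
knots = np.r_[[0.0] * (k + 1), np.linspace(0, 1, n_inner)[1:-1], [1.0] * (k + 1)]
n_coef = len(knots) - k - 1
B = BSpline.design_matrix(x, knots, k).toarray()     # (200, n_coef)

def beta_of(gamma):
    # beta_1 = gamma_1, beta_j = beta_{j-1} + exp(gamma_j):
    # nondecreasing coefficients give a nondecreasing spline.
    return np.cumsum(np.r_[gamma[0], np.exp(gamma[1:])])

def objective(gamma, lam=1.0):
    beta = beta_of(gamma)
    resid = y - B @ beta
    pen = np.sum(np.diff(beta, 2) ** 2)              # second-difference penalty
    return resid @ resid + lam * pen

fit = minimize(objective, np.zeros(n_coef), method="L-BFGS-B")
f_hat = B @ beta_of(fit.x)                           # monotone fitted values
```

In this construction the smoothing parameter `lam` would be chosen by a selection rule such as the one developed in the chapter; here it is simply fixed.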
2. Bayesian Modeling Using Latent Structures. Wang, Xiaojing. January 2012.
This dissertation is devoted to modeling complex data from the Bayesian perspective via constructing priors with latent structures. There are three major contexts in which this is done: strategies for the analysis of dynamic longitudinal data, estimating shape-constrained functions, and identifying subgroups. The methodology is illustrated in three different interdisciplinary contexts: (1) adaptive measurement testing in education; (2) emulation of computer models for vehicle crashworthiness; and (3) subgroup analyses based on biomarkers.

Chapter 1 presents an overview of the utilized latent structured priors and an overview of the remainder of the thesis. Chapter 2 is motivated by the problem of analyzing dichotomous longitudinal data observed at variable and irregular time points for adaptive measurement testing in education. One of its main contributions lies in developing a new class of Dynamic Item Response (DIR) models via specifying a novel dynamic structure on the prior of the latent trait. Bayesian inference for DIR models is undertaken, which permits borrowing strength from different individuals, allows the retrospective analysis of an individual's changing ability, and allows for online prediction of one's ability changes. Proof of posterior propriety is presented, ensuring that the objective Bayesian analysis is rigorous.

Chapter 3 deals with nonparametric function estimation under shape constraints, such as monotonicity, convexity or concavity. A motivating illustration is to generate an emulator to approximate a computer model for vehicle crashworthiness. Although Gaussian processes are very flexible and widely used in function estimation, they are not naturally amenable to the incorporation of such constraints. Gaussian processes with the squared exponential correlation function have the interesting property that their derivative processes are also Gaussian processes and are jointly Gaussian with the original process. This allows one to impose shape constraints through the derivative process. Two alternative ways of incorporating derivative information into Gaussian process priors are proposed, with one focusing on scenarios (important in the emulation of computer models) in which the function may have flat regions.

Chapter 4 introduces a Bayesian method to control for multiplicity in subgroup analyses through tree-based models that limit the subgroups under consideration to those that are a priori plausible. Once the prior modeling of the tree is accomplished, each tree yields a statistical model; Bayesian model selection analyses then complete the statistical computation for any quantity of interest, resulting in multiplicity-controlled inferences. This research is motivated by a problem of biomarker and subgroup identification to develop tailored therapeutics. Chapter 5 presents conclusions and some directions for future research.
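The derivative-process device of Chapter 3 can be sketched directly: for the squared exponential kernel, cov(f(x), f'(x')) and cov(f'(x), f'(x')) follow by differentiating the kernel, so function values and derivative values are jointly Gaussian. The sketch below biases the posterior toward monotonicity by conditioning on small positive "virtual" derivative observations; this virtual-observation device is a generic stand-in for the two constructions proposed in the chapter, and all numerical settings are assumptions.

```python
# Sketch: a GP with squared-exponential kernel is jointly Gaussian with its
# derivative process, so shape information can be injected through the
# derivative. Here we condition on noisy function values plus "virtual"
# derivative pseudo-observations f'(u_j) ~ 0.1 > 0 to bias the posterior
# mean toward monotonicity; a stand-in for the chapter's constructions.
import numpy as np

sig2, ell = 1.0, 0.3   # k(x, x') = sig2 * exp(-(x - x')^2 / (2 ell^2))

def k_ff(a, b):        # cov(f(a), f(b))
    r = a[:, None] - b[None, :]
    return sig2 * np.exp(-r**2 / (2 * ell**2))

def k_fd(a, b):        # cov(f(a), f'(b)) = d/db k(a, b)
    r = a[:, None] - b[None, :]
    return sig2 * (r / ell**2) * np.exp(-r**2 / (2 * ell**2))

def k_dd(a, b):        # cov(f'(a), f'(b)) = d^2/(da db) k(a, b)
    r = a[:, None] - b[None, :]
    return sig2 * (1 / ell**2 - r**2 / ell**4) * np.exp(-r**2 / (2 * ell**2))

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 30))
y = np.tanh(3 * x) + rng.normal(0, 0.05, 30)         # monotone truth + noise
u = np.linspace(0, 1, 10)                            # virtual derivative points
dy = np.full(10, 0.1)                                # pseudo-values f'(u) ~ 0.1

# Joint covariance of [f(x), f'(u)], then the posterior mean of f on a grid.
K = np.block([[k_ff(x, x) + 0.05**2 * np.eye(30), k_fd(x, u)],
              [k_fd(x, u).T,                      k_dd(u, u) + 1e-6 * np.eye(10)]])
xs = np.linspace(0, 1, 100)
Ks = np.hstack([k_ff(xs, x), k_fd(xs, u)])           # cov(f(xs), [f(x), f'(u)])
mean = Ks @ np.linalg.solve(K, np.r_[y, dy])         # shape-informed posterior mean
```

Conditioning on pseudo-values only nudges the fit toward positive slope; imposing the derivative constraint exactly, including over flat regions, is the harder problem the chapter addresses.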
3. Stochastic approximation and least-squares regression, with applications to machine learning. Flammarion, Nicolas. 24 July 2017.
Many problems in machine learning are naturally cast as the minimization of a smooth function defined on a Euclidean space. For supervised learning, this includes least-squares regression and logistic regression. While small problems are efficiently solved by classical optimization algorithms, large-scale problems are typically solved with first-order techniques based on gradient descent. In this manuscript, we consider the particular case of the quadratic loss. In the first part, we are interested in its minimization when its gradients are only accessible through a stochastic oracle. In the second part, we consider two applications of the quadratic loss in machine learning: clustering and estimation with shape constraints. In the first main contribution, we provide a unified framework for optimizing non-strongly convex quadratic functions which encompasses accelerated gradient descent and averaged gradient descent. This new framework suggests an alternative algorithm that exhibits the positive behavior of both averaging and acceleration. The second main contribution obtains the optimal prediction error rates for least-squares regression, both in terms of dependence on the noise of the problem and of forgetting the initial conditions. Our new algorithm rests upon averaged accelerated gradient descent. The third main contribution deals with the minimization of composite objective functions composed of the expectation of quadratic functions and a convex function. We extend earlier results on least-squares regression to any regularizer and any geometry represented by a Bregman divergence. As a fourth contribution, we consider the discriminative clustering framework. We propose its first theoretical analysis, a novel sparse extension, a natural extension to the multi-label scenario, and an efficient iterative algorithm with better running-time complexity than existing methods. The fifth main contribution deals with the seriation problem. We propose a statistical approach in which the matrix is observed with noise, study the corresponding minimax rate of estimation, and suggest a computationally efficient estimator whose performance is studied both theoretically and experimentally.
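The interplay of averaging and acceleration on a quadratic objective can be sketched in a few lines: run Nesterov-style accelerated gradient descent on the least-squares loss and keep a Polyak-Ruppert running average of the iterates. The step size and momentum schedule below are textbook choices, not the tuned algorithm of the thesis.

```python
# Minimal sketch of "averaging + acceleration" on the least-squares loss
# f(w) = ||X w - y||^2 / (2 n): Nesterov-style accelerated gradient steps
# with a running (Polyak-Ruppert) average of the iterates. Step size and
# momentum schedule are textbook choices, not the thesis's algorithm.
import numpy as np

rng = np.random.default_rng(3)
n, d = 500, 20
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + rng.normal(0, 0.5, n)

L = np.linalg.eigvalsh(X.T @ X / n).max()   # smoothness constant of f
step = 1.0 / L

w = z = np.zeros(d)                         # w: iterate, z: extrapolation point
w_bar = np.zeros(d)                         # running average of iterates
for t in range(1, 201):
    grad = X.T @ (X @ z - y) / n
    w_next = z - step * grad                         # gradient step
    z = w_next + (t - 1) / (t + 2) * (w_next - w)    # Nesterov momentum
    w = w_next
    w_bar += (w - w_bar) / t                         # online Polyak average
```

Acceleration drives fast forgetting of the initial condition while averaging tames the noise, which is the trade-off the second contribution resolves for least squares.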