271. Study on efficient sparse and low-rank optimization and its applications
Lou, Jian, 29 August 2018
Sparse and low-rank models have become fundamental machine learning tools with wide applications in areas including computer vision, data mining, and bioinformatics. It is of vital importance, yet of great difficulty, to develop efficient optimization algorithms for these models, especially under the practical computational, communication, and privacy restrictions of ever-larger problems. This thesis proposes a set of new algorithms to improve the efficiency of sparse and low-rank optimization.

First, when training empirical risk minimization (ERM) models with structured sparse regularization on a large number of samples, the gradient computation can be expensive and becomes the bottleneck. I therefore propose two gradient-efficient optimization algorithms that reduce the total or per-iteration cost of the gradient evaluation step; they are new variants of the widely used generalized conditional gradient (GCG) method and the incremental proximal gradient (PG) method, respectively. In detail, I propose a novel algorithm in the GCG framework that requires the same optimal number of gradient evaluations as proximal gradient, together with a refined variant for a class of gauge-regularized problems in which approximation techniques further accelerate the linear subproblem. Moreover, in the incremental proximal gradient framework, I propose to approximate the composite penalty by its proximal average, trading precision for efficiency. Theoretical analysis and empirical studies show the efficiency of the proposed methods.

Furthermore, large data dimension (e.g., the large frame size of high-resolution image and video data) can lead to high per-iteration computational complexity and hence poor scalability in practice. In particular, for spectral k-support norm regularized robust low-rank matrix and tensor optimization, the traditional proximal-map-based alternating direction method of multipliers (ADMM) must solve a subproblem of super-linear complexity in every iteration. I propose computationally efficient per-iteration alternatives that reduce this cost to linear and nearly linear in the input data dimension for the matrix and tensor cases, respectively. The proposed algorithms work with the dual of the original problem, which exposes the more cheaply computable linear oracle of the spectral k-support norm. Further, by studying the subgradient of the loss of the dual objective, a line-search strategy is adopted that lets the algorithm adapt to Hölder smoothness, and the overall convergence rate is provided. Experiments on various computer vision and image processing applications demonstrate the superior prediction performance and computational efficiency of the proposed algorithms.

In addition, since machine learning datasets often contain sensitive individual information, privacy preservation is increasingly important in sparse optimization. I provide two differentially private optimization algorithms for two common large-scale machine learning computing contexts, distributed and streaming optimization, respectively.
For the distributed setting, I develop a new algorithm with (1) a guaranteed strict differential privacy requirement, (2) nearly optimal utility, and (3) reduced uplink communication complexity, for the largely unexplored setting in which features are partitioned among different parties under privacy restrictions. For the streaming setting, I propose to improve the utility of the private algorithm by trading away the privacy of distant input instances while still satisfying the differential privacy restriction. I show that the proposed method can solve the private approximation function either by a projected gradient update for projection-friendly constraints or by a conditional gradient step for linear-oracle-friendly constraints, and in both cases the regret bound improves to match the non-private optimal counterpart.
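The listing does not reproduce the thesis's algorithms; as a minimal, self-contained sketch of the proximal-average idea the abstract refers to, assuming a weighted composite penalty whose individual proximal maps are cheap, one might write the following (all function names and the toy penalty choices are ours, not the thesis's):

```python
import numpy as np

def prox_l1(v, t):
    # Proximal map of t * ||x||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_box(v, t, lo=-1.0, hi=1.0):
    # Proximal map of the indicator of a box (a projection; t is unused).
    return np.clip(v, lo, hi)

def prox_average(v, t, proxes, weights):
    # Proximal-average approximation: the prox of a weighted sum of
    # penalties is replaced by the weighted average of their proxes.
    return sum(w * p(v, t) for p, w in zip(proxes, weights))

def proximal_gradient(grad, x0, step, proxes, weights, iters=200):
    x = x0.copy()
    for _ in range(iters):
        x = prox_average(x - step * grad(x), step, proxes, weights)
    return x

# Example: least squares with a composite penalty (illustrative data).
A = np.random.default_rng(0).standard_normal((50, 20))
b = A @ (0.1 * np.ones(20))
grad = lambda x: A.T @ (A @ x - b)
x = proximal_gradient(grad, np.zeros(20),
                      step=1.0 / np.linalg.norm(A, 2) ** 2,
                      proxes=[prox_l1, prox_box], weights=[0.5, 0.5])
```

The appeal of this construction is that each iteration costs only one gradient and a few cheap proximal maps, at the price of a controlled approximation error, which is the precision/efficiency trade-off the abstract mentions.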
272. Efficient Numerical Methods for High-Dimensional Approximation Problems
Wolfers, Sören, 06 February 2019
In the field of uncertainty quantification, the effects of parameter uncertainties on scientific simulations may be studied by integrating or approximating a quantity of interest as a function over the parameter space. If this is done numerically using regular grids with a fixed resolution, the required computational work increases exponentially with the number of uncertain parameters, a phenomenon known as the curse of dimensionality. We study two methods that can help break this curse: discrete least squares polynomial approximation and kernel-based approximation. For the former, we adaptively determine sparse polynomial bases and use evaluations at random, quasi-optimally distributed nodes; for the latter, we use evaluations on sparse grids, as introduced by Smolyak. To mitigate the additional cost of solving differential equations at each evaluation node, we extend multilevel methods to the approximation of response surfaces. For this purpose, we provide a general analysis that exhibits multilevel algorithms as special cases of an abstract version of Smolyak's algorithm.
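For reference, Smolyak's construction combines one-dimensional operators $U^{i}$ via the standard combination technique (notation from the general sparse-grid literature, not necessarily the thesis's):

$$A(q, d) \;=\; \sum_{q-d+1 \,\le\, |\mathbf{i}|_1 \,\le\, q} (-1)^{\,q - |\mathbf{i}|_1} \binom{d-1}{q - |\mathbf{i}|_1}\, \bigl(U^{i_1} \otimes \cdots \otimes U^{i_d}\bigr).$$

Multilevel approximation fits into this picture by letting one coordinate of the index $\mathbf{i}$ select the discretization level of the underlying differential equation solver.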
In financial mathematics, high-dimensional approximation problems occur in the pricing of derivatives with multiple underlying assets. The value function of American options can theoretically be determined backwards in time using the dynamic programming principle. Numerical implementations, however, face the curse of dimensionality because each asset corresponds to a dimension in the domain of the value function. Lack of regularity of the value function at the optimal exercise boundary further increases the computational complexity. As an alternative, we propose a novel method that determines an optimal exercise strategy as the solution of a stochastic optimization problem and subsequently computes the option value by simple Monte Carlo simulation. For this purpose, we represent the American option price as the supremum of the expected payoff over a set of randomized exercise strategies. Unlike the corresponding classical representation over subsets of Euclidean space, this relaxation gives rise to a well-behaved objective function that can be globally optimized using standard optimization routines.
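As a toy illustration of that pipeline (optimize an exercise rule, then price by plain Monte Carlo), consider a one-asset American-style put with a single threshold parameter. This is our drastically simplified sketch, not the thesis's randomized-strategy method:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def simulate_paths(s0, r, sigma, T, steps, n_paths):
    # Geometric Brownian motion paths under the risk-neutral measure.
    dt = T / steps
    z = rng.standard_normal((n_paths, steps))
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    return np.exp(log_s)

def value_for_threshold(b, paths, strike, r, T):
    # Exercise the put the first time the asset falls below b;
    # otherwise exercise at maturity. Discount the payoff accordingly.
    steps = paths.shape[1]
    dt = T / steps
    total = 0.0
    for path in paths:
        hit = np.nonzero(path <= b)[0]
        t_idx = hit[0] if hit.size else steps - 1
        total += np.exp(-r * (t_idx + 1) * dt) * max(strike - path[t_idx], 0.0)
    return total / len(paths)

paths = simulate_paths(s0=100.0, r=0.05, sigma=0.2, T=1.0, steps=50, n_paths=5000)
res = minimize_scalar(lambda b: -value_for_threshold(b, paths, 100.0, 0.05, 1.0),
                      bounds=(60.0, 100.0), method="bounded")
print("exercise threshold:", res.x, "estimated value:", -res.fun)
```

In practice the strategy should be optimized on one set of paths and the value estimated on an independent set, since reusing the same paths biases the estimate upward.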
273. An Algorithm for Symbolic Computing of Singular Limits of Dynamical Systems
Bjork, Dane Jordan, 01 July 2018
The manifold boundary approximation method (MBAM) is a new technique for approximating systems of equations via parameter reduction. This method and other approximation methods are introduced and described, and several issues that arise in carrying out MBAM are discussed in detail. These issues significantly slow down the MBAM process and create a barrier to entry for those wishing to use the method without a strong background in mathematics. A solution is proposed that automatically reparameterizes models and evaluates specific types of variables approaching limits, significantly speeding up the MBAM process. An implementation of the solution is discussed.
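For context, MBAM follows a geodesic of the model manifold's Fisher-information metric out to the manifold boundary, where a parameter combination reaches a limit. The sketch below is our own illustration of that geodesic step on a toy exponential model, under our own naming; it is not the thesis's implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: predictions a * exp(-b * t) at fixed times. MBAM follows a
# geodesic of the metric g = J^T J toward the manifold boundary.
times = np.linspace(0.5, 2.0, 4)

def predictions(theta):
    a, b = theta
    return a * np.exp(-b * times)

def jacobian(theta, eps=1e-6):
    # Finite-difference Jacobian of the model predictions.
    J = np.empty((times.size, 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = eps
        J[:, j] = (predictions(theta + d) - predictions(theta - d)) / (2 * eps)
    return J

def geodesic_rhs(_, state, eps=1e-4):
    theta, vel = state[:2], state[2:]
    J = jacobian(theta)
    # Second directional derivative of the predictions along vel.
    A = (predictions(theta + eps * vel) - 2 * predictions(theta)
         + predictions(theta - eps * vel)) / eps**2
    acc = -np.linalg.solve(J.T @ J, J.T @ A)  # geodesic acceleration
    return np.concatenate([vel, acc])

theta0 = np.array([1.0, 1.0])
w, V = np.linalg.eigh(jacobian(theta0).T @ jacobian(theta0))
# Start along the sloppiest direction (smallest eigenvalue of the metric).
sol = solve_ivp(geodesic_rhs, (0.0, 3.0), np.concatenate([theta0, V[:, 0]]),
                max_step=0.05)
print("parameters near the boundary:", sol.y[:2, -1])
```

Either sign of the initial velocity may be tried; in practice the integration is stopped once the metric becomes nearly singular, which signals a boundary and hence a candidate reduced model. Evaluating that limit symbolically is the step the thesis automates.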
274. Functional integration applied to the nuclear many-body problem
Troudet, Thierry, January 1982
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Physics, 1982. Includes bibliographical references.
275. Meta-analysis of safety data: approximation of arcsine transformation and application of mixture distribution modeling
Cheng, Hailong, 23 September 2015
Meta-analysis is frequently used in the analysis of safety data. For rare events, the commonly used risk measures, such as the odds ratio or the risk difference, or their variances, can become undefined when no events are observed in a study. The use of an arcsine transformation with the arcsine difference (AD) as the treatment-effect measure was shown to have desirable statistical properties (Rucker, 2009). However, the AD remains challenging to interpret, which may hamper its utility. To convert the AD to a measure on the scale of the risk difference, two previously proposed linear approximation methods, along with new linear and non-linear methods, are discussed and evaluated. The existing approximation methods are generally satisfactory when the event proportions lie between 0.15 and 0.85. We propose a new linear approximation method, the modified rationalized arcsine unit (MRAU), which improves the approximation when proportions fall outside that range, although it can still under- or over-estimate depending on the underlying proportion. We then propose a non-linear approximation method based on a Taylor series expansion (TRAUD), which shows the best approximation across the full range of risk levels; its variance, however, is less easily estimated and requires the bootstrap. Results from simulation studies confirm these findings under a wide array of scenarios.
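For concreteness, here is a sketch of the arcsine difference and a first-order (delta-method) conversion back to the risk-difference scale; it illustrates the kind of linear approximation discussed, not the MRAU or TRAUD formulas themselves:

```python
import numpy as np

def arcsine_difference(x1, n1, x2, n2):
    # Arcsine difference in the style of Rucker (2009); well defined
    # even when x = 0 or x = n, unlike the odds ratio.
    ad = np.arcsin(np.sqrt(x2 / n2)) - np.arcsin(np.sqrt(x1 / n1))
    var = 1.0 / (4 * n1) + 1.0 / (4 * n2)  # approximate variance
    return ad, var

def ad_to_risk_difference(ad, p_bar):
    # Delta method: d/dp arcsin(sqrt(p)) = 1 / (2 sqrt(p(1 - p))),
    # so RD is roughly 2 sqrt(p(1 - p)) * AD near a typical proportion p_bar.
    return 2.0 * np.sqrt(p_bar * (1 - p_bar)) * ad

ad, var = arcsine_difference(x1=3, n1=100, x2=0, n2=100)
print(ad, var, ad_to_risk_difference(ad, p_bar=0.015))
```

The linear conversion is accurate only where the arcsine curve is close to straight, which is exactly why it degrades for proportions outside the mid-range and motivates the non-linear correction.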
In the second section, heterogeneity in meta-analysis is discussed along with current methods that address it. To explore the nature of heterogeneity, finite mixture model (FMM) methods are presented and their application in meta-analysis discussed. The estimates derived from the FMM components indicate that, even with a pre-specified protocol, the studies included in a meta-analysis may come from different distributions, which causes heterogeneity. The estimated number of components may suggest the existence of multiple sub-populations that a single overall effect estimate would obscure. We propose that in the analysis of safety data, the estimates of the number of components and their respective means can provide valuable information for better patient care.
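A minimal sketch of the component-selection step, assuming hypothetical study-level effect estimates and using scikit-learn's GaussianMixture with BIC (the thesis's actual estimation procedure may differ):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical study-level effect estimates (e.g., log odds ratios).
effects = np.array([0.10, 0.15, 0.05, 0.90, 1.10, 0.95, 0.20, 1.00]).reshape(-1, 1)

# Fit mixtures with 1..4 components and pick the number by BIC.
fits = [GaussianMixture(n_components=k, random_state=0).fit(effects)
        for k in range(1, 5)]
best = min(fits, key=lambda m: m.bic(effects))
print("components:", best.n_components, "means:", best.means_.ravel())
```

A selected count greater than one is the signal described above: the pooled studies plausibly come from more than one underlying distribution.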
In the final section, the application of the approximation methods and the use of FMM are demonstrated in the analysis of two published meta-analysis examples from the medical literature.
276. Construction of approximate medial shape representations by continuous optimization
Rebain, Daniel, 23 December 2019
The Medial Axis Transform (MAT) is a powerful tool for shape analysis and manipulation. Traditional methods for working with shapes usually define shapes as boundaries between some "inside" and some "outside" region. While this definition is simple and intuitive, it does not lend itself well to the construction of algorithms for a number of seemingly simple tasks such as classification, deformation, and collision detection. The MAT is an alternative representation of shape that defines the "inside" region by its center and thickness. We present a method of constructing the MAT which overcomes a significant limitation of its use with real-world data: instability. As classically defined, the MAT is unstable with respect to the shape boundary that it represents. For data sources afflicted by noise this is a serious problem. We propose an algorithm, LSMAT, which constructs a stable least squares approximation to the MAT.
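The least-squares flavor of the problem can be illustrated by fitting a single medial sphere (center c, radius r) to noisy boundary samples. This toy sketch shows the kind of residual a least-squares MAT approximation minimizes; it is not the LSMAT algorithm itself, which fits many spheres with more carefully designed one-sided terms:

```python
import numpy as np
from scipy.optimize import least_squares

# Boundary samples of a 2D shape (here: a noisy unit circle).
rng = np.random.default_rng(1)
angles = rng.uniform(0, 2 * np.pi, 200)
pts = np.stack([np.cos(angles), np.sin(angles)], axis=1)
pts += 0.02 * rng.standard_normal(pts.shape)

def residual(params):
    # A medial sphere should touch the boundary: ||p - c|| - r ~ 0 for
    # nearby samples; least squares makes the fit robust to boundary noise.
    c, r = params[:2], params[2]
    return np.linalg.norm(pts - c, axis=1) - r

fit = least_squares(residual, x0=np.array([0.1, -0.1, 0.5]))
print("center:", fit.x[:2], "radius:", fit.x[2])
```

Averaging over many samples in this way is what buys stability: a single spurious boundary point perturbs the fitted sphere only slightly, whereas the classical MAT can sprout a whole spurious branch.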
277. Variations on the Matching Problem
Judkovich, David, 23 May 2019
No description available.
278. Structural modelling of tall buildings using generalized parameters
Salhi, Sana, January 1987
No description available.
279. Detection and Approximation of Function of Two Variables in High Dimensions
Pan, Minzhe, 01 January 2010
This thesis originates from the deterministic algorithm of DeVore, Petrova, and Wojtaszczyk for the detection and approximation of functions of one variable in high dimensions. We propose a deterministic algorithm for the detection and approximation of functions of two variables in high dimensions.
280. Effect of Inner Scale Atmospheric Spectrum Models on Scintillation in All Optical Turbulence Regimes
Mayer, Kenneth, 01 January 2007
Experimental studies have shown that a "bump" occurs in the atmospheric spectrum just prior to turbulence-cell dissipation [1, 3, 4]. In weak optical turbulence, this bump affects calculated scintillation. The purpose of this thesis was to determine whether a "non-bump" atmospheric power spectrum can be used to model scintillation for plane waves and spherical waves in moderate to strong optical turbulence regimes. Scintillation expressions were developed from an "effective" von Karman spectrum, using an approach similar to that of Andrews et al. [8, 14, 15] in developing expressions from an "effective" modified (bump) spectrum. The effective spectrum extends the Rytov approximation into all optical turbulence regimes, using filter functions to eliminate the effect of mid-range turbulent cell sizes on the scintillation index. Filter cutoffs were established by matching known weak and saturated scintillation results. The resulting new expressions track those derived from the effective bump spectrum fairly closely; in extremely strong turbulence, the differences are minimal.
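For reference, the conventional von Karman spectrum (no bump) and the modified "bump" spectrum, as given in the standard turbulence literature (e.g., Andrews and Phillips; the thesis's exact normalization may differ), are

$$\Phi_n(\kappa) = 0.033\,C_n^2\,\frac{\exp(-\kappa^2/\kappa_m^2)}{(\kappa^2+\kappa_0^2)^{11/6}}, \qquad \kappa_m=\frac{5.92}{l_0}, \quad \kappa_0=\frac{2\pi}{L_0},$$

$$\Phi_n(\kappa) = 0.033\,C_n^2\left[1 + 1.802\,\frac{\kappa}{\kappa_l} - 0.254\left(\frac{\kappa}{\kappa_l}\right)^{7/6}\right] \frac{\exp(-\kappa^2/\kappa_l^2)}{(\kappa^2+\kappa_0^2)^{11/6}}, \qquad \kappa_l=\frac{3.3}{l_0},$$

where l_0 and L_0 are the inner and outer scales of turbulence; the bracketed factor is the high-wavenumber bump whose effect on scintillation the thesis evaluates.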