101 |
Trigonometria do ensino médio e aproximação de funções por polinômios trigonométricos / High school trigonometry and approximation of functions by trigonometric polynomials
Oliveira, Carlos Eduardo, 1981-. 25 August 2018
Advisor: Ary Orozimbo Chiacchio / Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Made available in DSpace on 2018-08-25T03:11:09Z (GMT).
Previous issue date: 2014 / Abstract: In this work we present a proposed approach to the high school trigonometry curriculum, with theoretical presentation, exercises, and problems, both classified by level of difficulty in order to assist and measure each student's individual progress during the course. In addition to the content above, we also present the approximation of continuous functions by trigonometric polynomials, with the aid of the Weierstrass Approximation Theorem / Master's / Matemática em Rede Nacional - PROFMAT / Master in Matemática em Rede Nacional - PROFMAT
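The record above concerns approximating continuous periodic functions by trigonometric polynomials, as guaranteed by the Weierstrass approximation theorem. A minimal numerical sketch of that idea (a truncated Fourier series with coefficients computed by the trapezoid rule; the sample function and term counts are illustrative, not taken from the dissertation):

```python
import math

def fourier_coefficients(f, n_terms, n_samples=2048):
    """Approximate the Fourier coefficients of a 2*pi-periodic f on a uniform grid."""
    xs = [2.0 * math.pi * j / n_samples for j in range(n_samples)]
    fs = [f(x) for x in xs]
    a0 = sum(fs) / n_samples
    a = [2.0 * sum(fx * math.cos(k * x) for x, fx in zip(xs, fs)) / n_samples
         for k in range(1, n_terms + 1)]
    b = [2.0 * sum(fx * math.sin(k * x) for x, fx in zip(xs, fs)) / n_samples
         for k in range(1, n_terms + 1)]
    return a0, a, b

def trig_poly(x, a0, a, b):
    # Evaluate the trigonometric polynomial with the given coefficients.
    return a0 + sum(ak * math.cos(k * x) + bk * math.sin(k * x)
                    for k, (ak, bk) in enumerate(zip(a, b), start=1))

# A continuous 2*pi-periodic target that is not smooth everywhere: f(x) = |sin x|.
f = lambda x: abs(math.sin(x))
a0, a, b = fourier_coefficients(f, n_terms=20)
max_err = max(abs(f(x) - trig_poly(x, a0, a, b))
              for x in [2.0 * math.pi * t / 500 for t in range(500)])
```

With 20 terms the uniform error is already small (the mean value `a0` approximates 2/pi), illustrating uniform convergence of trigonometric approximation for continuous periodic functions.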
|
102 |
Study on efficient sparse and low-rank optimization and its applications
Lou, Jian. 29 August 2018
Sparse and low-rank models have become fundamental machine learning tools, with wide applications in areas including computer vision, data mining, and bioinformatics. It is of vital importance, yet of great difficulty, to develop efficient optimization algorithms for these models, especially under practical computational, communication, and privacy restrictions for ever-larger problems. This thesis proposes a set of new algorithms to improve the efficiency of sparse and low-rank model optimization. First, when training an empirical risk minimization (ERM) problem with structured sparse regularization on a large number of data samples, the gradient computation can be expensive and becomes the bottleneck. I therefore propose two gradient-efficient optimization algorithms that reduce the total or per-iteration cost of the gradient evaluation step; they are new variants of the widely used generalized conditional gradient (GCG) method and the incremental proximal gradient (PG) method, respectively. In detail, I propose a novel algorithm in the GCG framework that requires the same optimal number of gradient evaluations as proximal gradient. I also propose a refined variant for a class of gauge-regularized problems, in which approximation techniques are allowed to further accelerate the linear subproblem computation. Moreover, within the incremental proximal gradient framework, I propose to approximate the composite penalty by its proximal average, trading off precision against efficiency. Theoretical analysis and empirical studies show the efficiency of the proposed methods.
Furthermore, large data dimension (e.g. the large frame size of high-resolution image and video data) can lead to high per-iteration computational complexity, and thus to poor scalability of the optimization algorithm in practice. In particular, for spectral k-support norm regularized robust low-rank matrix and tensor optimization, the traditional proximal-map-based alternating direction method of multipliers (ADMM) must solve a subproblem of super-linear complexity in each iteration. I propose a set of per-iteration-efficient alternatives that reduce this cost to linear and nearly linear in the input data dimension for the matrix and tensor cases, respectively. The proposed algorithms work with the dual of the original problem, which can exploit the computationally cheaper linear oracle of the spectral k-support norm. Further, by studying the subgradient of the loss in the dual objective, a line-search strategy is adopted so that the algorithm adapts to Hölder smoothness. The overall convergence rate is also provided. Experiments on various computer vision and image processing applications demonstrate the superior prediction performance and computational efficiency of the proposed algorithms.
In addition, since machine learning datasets often contain sensitive individual information, privacy preservation becomes increasingly important in sparse optimization. I provide two differentially private optimization algorithms for two common large-scale machine learning settings, distributed and streaming optimization, respectively. For the distributed setting, I develop a new algorithm with 1) a guaranteed strict differential privacy requirement, 2) nearly optimal utility, and 3) reduced uplink communication complexity, for a nearly unexplored context in which features are partitioned among different parties under privacy restrictions. For the streaming setting, I propose to improve the utility of the private algorithm by trading off the privacy of distant input instances, under the differential privacy restriction. I show that the proposed method can solve the private approximation function either by a projected gradient update for projection-friendly constraints, or by a conditional gradient step for linear-oracle-friendly constraints, both of which improve the regret bound to match the nonprivate optimal counterpart.
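Record 102 builds on proximal gradient methods for sparse regularized ERM. As a generic, minimal sketch of the basic step such methods refine (ISTA for the lasso in plain Python; the data, step size, regularization weight, and iteration count are illustrative, and this is not the thesis's algorithm):

```python
import math
import random

def soft_threshold(v, t):
    # Proximal map of t * |.|: shrink v toward zero by t.
    return math.copysign(max(abs(v) - t, 0.0), v)

def ista(A, y, lam, step, iters):
    """Proximal gradient (ISTA) for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        residual = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        grad = [sum(A[i][j] * residual[i] for i in range(m)) for j in range(n)]
        x = [soft_threshold(x[j] - step * grad[j], step * lam) for j in range(n)]
    return x

# Synthetic sparse recovery: 30 Gaussian measurements of a 10-dim, 2-sparse signal.
random.seed(0)
A = [[random.gauss(0.0, 1.0) for _ in range(10)] for _ in range(30)]
x_true = [3.0, 0.0, 0.0, -2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
y = [sum(aij * xj for aij, xj in zip(row, x_true)) for row in A]
x_hat = ista(A, y, lam=0.1, step=0.005, iters=1500)
```

The recovered `x_hat` is sparse with the correct support, the behavior the thesis's gradient-efficient variants aim to obtain at lower gradient cost.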
|
103 |
Efficient Numerical Methods for High-Dimensional Approximation Problems
Wolfers, Sören. 06 February 2019
In the field of uncertainty quantification, the effects of parameter uncertainties on scientific simulations may be studied by integrating or approximating a quantity of interest as a function over the parameter space. If this is done numerically, using regular grids with a fixed resolution, the required computational work increases exponentially with respect to the number of uncertain parameters – a phenomenon known as the curse of dimensionality. We study two methods that can help break this curse: discrete least squares polynomial approximation and kernel-based approximation. For the former, we adaptively determine sparse polynomial bases and use evaluations in random, quasi-optimally distributed evaluation nodes; for the latter, we use evaluations in sparse grids, as introduced by Smolyak. To mitigate the additional cost of solving differential equations at each evaluation node, we extend multilevel methods to the approximation of response surfaces. For this purpose, we provide a general analysis that exhibits multilevel algorithms as special cases of an abstract version of Smolyak’s algorithm.
In financial mathematics, high-dimensional approximation problems occur in the pricing of derivatives with multiple underlying assets. The value function of American options can theoretically be determined backwards in time using the dynamic programming principle. Numerical implementations, however, face the curse of dimensionality because each asset corresponds to a dimension in the domain of the value function. Lack of regularity of the value function at the optimal exercise boundary further increases the computational complexity. As an alternative, we propose a novel method that determines an optimal exercise strategy as the solution of a stochastic optimization problem and subsequently computes the option value by simple Monte Carlo simulation. For this purpose, we represent the American option price as the supremum of the expected payoff over a set of randomized exercise strategies. Unlike the corresponding classical representation over subsets of Euclidean space, this relaxation gives rise to a well-behaved objective function that can be globally optimized using standard optimization routines.
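Record 103 mentions discrete least squares polynomial approximation with evaluations at random nodes. A one-variable toy sketch of that idea (monomial basis, normal equations solved by Gaussian elimination; the target function, node count, and degree are illustrative, not taken from the dissertation):

```python
import math
import random

def least_squares_poly(xs, ys, degree):
    """Fit sum_k c_k x^k by solving the normal equations (V^T V) c = V^T y."""
    m = degree + 1
    G = [[sum(x ** (j + k) for x in xs) for k in range(m)] for j in range(m)]
    rhs = [sum((x ** j) * y for x, y in zip(xs, ys)) for j in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            fct = G[r][col] / G[col][col]
            G[r] = [a - fct * b for a, b in zip(G[r], G[col])]
            rhs[r] -= fct * rhs[col]
    c = [0.0] * m
    for r in range(m - 1, -1, -1):
        c[r] = (rhs[r] - sum(G[r][k] * c[k] for k in range(r + 1, m))) / G[r][r]
    return c

# Approximate exp on [-1, 1] from evaluations at 100 random nodes.
random.seed(1)
xs = [random.uniform(-1.0, 1.0) for _ in range(100)]
c = least_squares_poly(xs, [math.exp(x) for x in xs], degree=6)
grid = [-1.0 + 2.0 * t / 200 for t in range(201)]
max_err = max(abs(math.exp(x) - sum(ck * x ** k for k, ck in enumerate(c)))
              for x in grid)
```

A degree-6 fit from random samples already approximates exp uniformly to high accuracy; the thesis studies how to keep this effective when the parameter space is high-dimensional.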
|
104 |
Detection And Approximation Of Function Of Two Variables In High Dimensions
Pan, Minzhe. 01 January 2010
This thesis originates from the deterministic algorithm of DeVore, Petrova, and Wojtaszczyk for the detection and approximation of functions of one variable in high dimensions. We propose a deterministic algorithm for the detection and approximation of functions of two variables in high dimensions.
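The setting here is a high-dimensional function that secretly depends on only two coordinates. As a generic illustration of the detection step (random probing with finite differences; this is not the DeVore, Petrova, and Wojtaszczyk algorithm or the thesis's method, and all parameters are illustrative):

```python
import math
import random

def detect_active_coordinates(f, dim, trials=50, h=1e-4, tol=1e-8):
    """Guess which coordinates f depends on, by one-coordinate perturbations.

    At random base points, perturb each coordinate in turn; a coordinate whose
    perturbation never changes f over all trials is declared inactive.
    """
    rng = random.Random(0)
    active = set()
    for _ in range(trials):
        x = [rng.random() for _ in range(dim)]
        fx = f(x)
        for i in range(dim):
            if i in active:
                continue  # already known to matter
            x[i] += h
            if abs(f(x) - fx) > tol:
                active.add(i)
            x[i] -= h  # restore the base point
    return sorted(active)

# A 20-dimensional function that really depends only on coordinates 3 and 11.
g = lambda u, v: math.sin(u) + u * v
f = lambda x: g(x[3], x[11])
found = detect_active_coordinates(f, dim=20)
```

Once the two active coordinates are found, the problem reduces to approximating the bivariate g, which is where the approximation half of such algorithms takes over.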
|
105 |
Estimates for the rate of approximation of functions of bounded variation by positive linear operators
Cheng, Fuhua. January 1982
No description available.
|
106 |
Some applications of Faber polynomials to approximation of functions of a complex variable
Mackenzie, Kenneth. January 1970
No description available.
|
107 |
Solved problems of M.A. Krasnoselʹskii and V. Ya. Stetsenko on the approximate solution of operator equations
Carling, Robert Laurence. January 1975
No description available.
|
108 |
Continued fractions in rational approximations, and number theory.
Edwards, David Charles. January 1971
No description available.
|
109 |
Modeling, Approximation, and Control for a Class of Nonlinear Systems
Bobade, Parag Suhas. 05 December 2017
This work investigates modeling, approximation, estimation, and control for classes of nonlinear systems whose state evolves in the space $\mathbb{R}^n \times H$, where $\mathbb{R}^n$ is an n-dimensional Euclidean space and $H$ is an infinite-dimensional Hilbert space. Specifically, two classes of nonlinear systems are studied in this dissertation. The first topic develops a novel framework for adaptive estimation of nonlinear systems using reproducing kernel Hilbert spaces. A nonlinear adaptive estimation problem is cast as a time-varying estimation problem in $\mathbb{R}^d \times H$. In contrast to most conventional strategies for ODEs, the approach here embeds the estimate of the unknown nonlinear function appearing in the plant in a reproducing kernel Hilbert space (RKHS), $H$. Furthermore, the well-posedness of the framework in the new formulation is established. We derive sufficient conditions for the existence, uniqueness, and stability of an infinite-dimensional adaptive estimation problem. A condition for persistence of excitation in an RKHS, stated in terms of an evaluation functional, is introduced to establish the convergence of finite-dimensional approximations of the unknown function in the RKHS. Lastly, a numerical validation of this framework is presented, which could have potential applications in terrain mapping algorithms.
The second topic delves into estimation and control of history-dependent differential equations. This study is motivated by the increasing interest in estimation and control techniques for robotic systems whose governing equations include history-dependent nonlinearities. The governing dynamics are modeled using a specific form of functional differential equations. The class of history-dependent differential equations in this work is constructed using integral operators that depend on distributed parameters. Consequently, the resulting estimation and control equations define a distributed parameter system whose state and distributed parameters evolve in finite- and infinite-dimensional spaces, respectively. The well-posedness of the governing equations is established by deriving sufficient conditions for existence, uniqueness, and stability for this class of functional differential equations. Error estimates for multiwavelet approximation of such history-dependent operators are derived. These estimates help determine the rate of convergence of finite-dimensional approximations of the online estimation equations to the infinite-dimensional solution of the distributed parameter system. Finally, we present an adaptive sliding mode control strategy developed for the history-dependent functional differential equations and numerically validate the results on a simplified pitch-plunge wing model. / Ph. D. / This dissertation aims to contribute towards our understanding of certain classes of estimation and control problems that arise in applications where the governing dynamics are modeled using nonlinear ordinary differential equations and certain functional differential equations. A common theme throughout this dissertation is to leverage ideas from approximation theory to extend the conventional adaptive estimation and control frameworks. The first topic develops a novel framework for adaptive estimation of nonlinear systems using reproducing kernel Hilbert spaces.
The numerical validation of the framework presented has potential applications in terrain mapping algorithms. The second topic delves into estimation and control of history dependent differential equations. This study is motivated by the increasing interest in estimation and control techniques for robotic systems whose governing equations include history dependent nonlinearities.
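Record 109's first topic embeds an unknown nonlinearity in an RKHS and studies finite-dimensional approximations of it. A generic kernel ridge regression sketch of that estimation pattern, via the representer theorem (plain Python; the Gaussian kernel, the target function sin, and all parameters are illustrative, not the dissertation's estimator):

```python
import math

def rbf(x, y, sigma=0.5):
    # Gaussian (RBF) reproducing kernel on the real line.
    return math.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def solve(M, b):
    """Solve M c = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] for row in M]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fct = M[r][col] / M[col][col]
            M[r] = [a - fct * v for a, v in zip(M[r], M[col])]
            b[r] -= fct * b[col]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (b[r] - sum(M[r][k] * c[k] for k in range(r + 1, n))) / M[r][r]
    return c

def kernel_ridge(xs, ys, lam=1e-4):
    """Representer theorem: f_hat(.) = sum_i alpha_i k(., x_i), (K + lam I) alpha = y."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    return lambda x: sum(ai * rbf(x, xi) for ai, xi in zip(alpha, xs))

# Recover an unknown nonlinearity (here sin) from 16 samples on [0, 3].
xs = [0.2 * i for i in range(16)]
f_hat = kernel_ridge(xs, [math.sin(x) for x in xs])
max_err = max(abs(math.sin(x) - f_hat(x)) for x in [0.05 * t for t in range(61)])
```

The finite expansion over kernel sections is exactly the kind of finite-dimensional approximation of an RKHS element whose convergence the dissertation analyzes under persistence-of-excitation conditions.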
|
110 |
Wiener's Approximation Theorem for Locally Compact Abelian Groups
Shu, Ven-shion. 08 1900
This study of classical and modern harmonic analysis extends the classical Wiener approximation theorem to locally compact abelian groups. The first chapter deals with harmonic analysis on the n-dimensional Euclidean space. Included in this chapter are some properties of functions in L1(Rn) and T1(Rn), the Wiener-Lévy theorem, and Wiener's approximation theorem. The second chapter introduces the notions of standard function algebra, cospectrum, and Wiener algebra. An abstract form of Wiener's approximation theorem and its generalization are obtained. The third chapter introduces the dual group of a locally compact abelian group, defines the Fourier transform of functions in L1(G), and establishes several properties of functions in L1(G) and T1(G). Wiener's approximation theorem and its generalization for L1(G) are established.
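On R, Wiener's approximation theorem says the translates of f in L1(R) span a dense subspace if and only if the Fourier transform of f never vanishes. A small numeric illustration of that non-vanishing condition (convention f_hat(xi) = integral of f(x) e^{-i xi x} dx, approximated by the trapezoid rule; the example functions are standard, not drawn from the thesis):

```python
import math

def fourier_transform(f, xi, a=-20.0, b=20.0, n=4000):
    """Trapezoid-rule approximation of f_hat(xi), returned as (real, imaginary)."""
    h = (b - a) / n
    re = im = 0.0
    for k in range(n + 1):
        x = a + k * h
        w = 0.5 if k in (0, n) else 1.0  # trapezoid endpoint weights
        re += w * f(x) * math.cos(xi * x)
        im -= w * f(x) * math.sin(xi * x)
    return re * h, im * h

gauss = lambda x: math.exp(-x * x)             # f_hat = sqrt(pi) e^{-xi^2/4}, never zero
box = lambda x: 1.0 if abs(x) <= 1.0 else 0.0  # f_hat = 2 sin(xi)/xi, zero at xi = pi

# The Gaussian satisfies Wiener's condition; the box function fails it at xi = pi.
g_re, g_im = fourier_transform(gauss, math.pi)
b_re, b_im = fourier_transform(box, math.pi)
```

Numerically, `g_re` stays bounded away from zero while `b_re` is approximately zero, so translates of the Gaussian are dense in L1(R) whereas translates of the indicator of [-1, 1] are not.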
|