1 |
Refined error estimates for matrix-valued radial basis functions /Fuselier, Edward J., January 2006 (has links)
Thesis (Ph. D.)--Texas A&M University, 2006. / "May 2006." "Major subject: Mathematics." Vita. Includes bibliographical references (p. 67-70). Also available online.
|
2 |
L^p Bernstein Inequalities and Radial Basis Function Approximation /Ward, John P. 2010 August 1900 (has links)
In approximation theory, three classical types of results are direct theorems, Bernstein inequalities, and inverse theorems. In this paper, we include results about radial basis function (RBF) approximation from all three classes. Bernstein inequalities are a recent development in the theory of RBF approximation, and on R^d, only L^2 results are known for RBFs with algebraically decaying Fourier transforms (e.g. the Sobolev splines and thin-plate splines). We will therefore extend what is known by establishing L^p Bernstein inequalities for RBF networks on R^d. These inequalities involve bounding a Bessel-potential norm of an RBF network by its corresponding L^p norm in terms of the separation radius associated with the network. While Bernstein inequalities have a variety of applications in approximation theory, they are most commonly used to prove inverse theorems. Therefore, using the L^p Bernstein inequalities for RBF approximants, we will establish the corresponding inverse theorems. The direct theorems of this paper relate to approximation in L^p(R^d) by RBFs which are perturbations of Green's functions. Results of this type are known for certain compact domains, and results have recently been derived for approximation in L^p(R^d) by RBFs that are Green's functions. Therefore, we will prove that known results for approximation in L^p(R^d) hold for a larger class of RBFs. We will then show how this result can be used to derive rates for approximation by Wendland functions.
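For orientation, the Bernstein inequalities discussed above take the following schematic form (a sketch only; the precise norms, smoothness orders, and constants are those established in the thesis and are not reproduced here): for an RBF network built on a set of centers X with separation radius q_X, a Bessel-potential (Sobolev-type) norm is bounded by the L^p norm at the cost of a negative power of q_X.

```latex
% Schematic L^p Bernstein inequality for an RBF network
%   s(x) = \sum_{x_j \in X} c_j \, \Phi(x - x_j)
% with centers X \subset \mathbb{R}^d of separation radius q_X
% (illustrative form; the exponent k and constant C are placeholders):
\[
  \lVert s \rVert_{W^{k,p}(\mathbb{R}^d)}
  \;\le\;
  C \, q_X^{-k} \, \lVert s \rVert_{L^p(\mathbb{R}^d)} .
\]
% Inverse theorems then follow by combining such an inequality with the
% corresponding direct (Jackson-type) estimates.
```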
|
3 |
Construction of Compact 3D Objects by Radial Basis Functions and Progressive Compression /Huang, Wei-Chiuan 02 February 2006 (has links)
Abstract
The representation of 3D computer graphics objects has been studied for a long time. Most 3D object models are obtained from 3D scanning systems. Such data are not only very large but also highly redundant, and processing them consumes a great deal of time and resources. For this reason, how to represent an object efficiently is always an important issue. The purpose of this study is to represent objects by implicit functions. Unlike a polygon-mesh representation, an implicit function is a very compact way to describe an object, because a single mathematical form can be obtained from different kinds of input data. The implicit functions used here are radial basis functions, and a BSP tree is used to partition the object and reduce the amount of computation. We also use progressive mesh compression to decrease storage and computing time. In addition, the object can be rendered from sample points on each implicit surface.
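As a rough illustration of the implicit-function idea (a generic RBF surface-reconstruction sketch, not the thesis's exact pipeline, which also uses BSP partitioning and progressive compression; the function name fit_rbf_implicit, the triharmonic kernel r^3, and the offset eps are illustrative choices): on-surface points are constrained to zero, points displaced along the normals are given a small positive value, and the RBF weights follow from one linear system.

```python
import numpy as np

def fit_rbf_implicit(points, normals, eps=0.01):
    """Fit an implicit function f(x) = sum_i w_i * phi(|x - c_i|) + (a . x) + b
    whose zero level set approximates the scanned surface.
    points:  (n, 3) on-surface samples; normals: (n, 3) unit normals.
    Off-surface constraints are placed a small distance eps along each normal."""
    on = points
    off = points + eps * normals                          # exterior points take the value +eps
    centers = np.vstack([on, off])
    values = np.concatenate([np.zeros(len(on)), np.full(len(off), eps)])

    def phi(r):                                           # triharmonic kernel phi(r) = r^3
        return r ** 3

    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = phi(d)
    P = np.hstack([centers, np.ones((len(centers), 1))])  # affine part for polynomial reproduction
    K = np.block([[A, P], [P.T, np.zeros((4, 4))]])
    rhs = np.concatenate([values, np.zeros(4)])
    sol = np.linalg.solve(K, rhs)
    w, poly = sol[:len(centers)], sol[len(centers):]

    def f(x):
        r = np.linalg.norm(x[None, :] - centers, axis=-1)
        return phi(r) @ w + np.append(x, 1.0) @ poly
    return f
```

The zero level set of the returned function then represents the object and can be sampled or polygonized (e.g. by marching cubes) for rendering.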
|
4 |
Refined error estimates for matrix-valued radial basis functions /Fuselier, Edward J., Jr. 17 September 2007 (has links)
Radial basis functions (RBFs) are probably best known for their applications to scattered data problems. Until the 1990s, RBF theory only involved functions that were scalar-valued. Matrix-valued RBFs were subsequently introduced by Narcowich and Ward in 1994, when they constructed divergence-free vector-valued functions that interpolate data at scattered points. In 2002, Lowitzsch gave the first error estimates for divergence-free interpolants. However, these estimates are only valid when the target function resides in the native space of the RBF. In this paper we develop Sobolev-type error estimates for cases where the target function is less smooth than functions in the native space. In the process of doing this, we give an alternate characterization of the native space, derive improved stability estimates for the interpolation matrix, and give divergence-free interpolation and approximation results for band-limited functions. Furthermore, we introduce a new class of matrix-valued RBFs that can be used to produce curl-free interpolants.
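For context, the divergence-free and curl-free matrix-valued kernels referred to above are commonly obtained from a smooth scalar RBF by differentiation; a standard construction of this type (stated schematically here, not as the thesis's precise definitions) is the following.

```latex
% Matrix-valued kernels built from a smooth scalar RBF \varphi : R^d -> R.
% Divergence-free construction (Narcowich-Ward type):
\[
  \Phi_{\mathrm{div}}(x) \;=\; \bigl(-\Delta\, I \;+\; \nabla \nabla^{\mathsf T}\bigr)\,\varphi(x),
\]
% Curl-free construction:
\[
  \Phi_{\mathrm{curl}}(x) \;=\; -\,\nabla \nabla^{\mathsf T}\,\varphi(x).
\]
% Interpolants of the form s(x) = \sum_j \Phi(x - x_j)\, c_j, with vector
% coefficients c_j \in R^d, are then automatically divergence-free
% (respectively curl-free), since each column of the kernel is.
```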
|
5 |
Radial parts of invariant differential operators on Grassmann manifolds /Kurgalina, Olga S. January 2004 (has links)
Thesis (Ph.D.)--Tufts University, 2004. / Adviser: Fulton B. Gonzalez. Submitted to the Dept. of Mathematics. Includes bibliographical references (leaves 72-73). Access restricted to members of the Tufts University community. Also available via the World Wide Web.
|
6 |
Multilevel collocation with radial basis functions /Farrell, Patricio January 2014 (has links)
In this thesis, we analyse multilevel collocation methods involving compactly supported radial basis functions. We focus on linear second-order elliptic boundary value problems as well as Darcy's problem. While in the former case we use scalar-valued positive definite functions for constructing multilevel approximants, in the latter case we use matrix-valued functions that are automatically divergence-free. A similar result is presented for interpolating divergence-free vector fields. Even though it had been observed more than a decade ago that the stationary setting, i.e. when the support radii shrink as fast as the mesh norm, does not lead to convergence, it was until now an open question how the support radii should depend on the mesh norm to ensure convergence. For each case above, we answer this question thoroughly. Furthermore, we analyse and improve the stability of the linear systems. Lastly, we examine the case when the approximant does not lie in the same space as the solution to the PDE.
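A rough sketch of the multilevel residual-correction idea follows (plain interpolation rather than the PDE collocation analysed in the thesis, in one dimension, and with illustrative names and parameter choices such as wendland_c2, mu, and the factor 2.0 in the support radius): each level approximates the residual left by the coarser levels with a compactly supported Wendland function whose support radius shrinks with that level's mesh norm, but more slowly than the mesh norm itself.

```python
import numpy as np

def wendland_c2(r):
    """Wendland C^2 function phi(r) = (1 - r)_+^4 (4r + 1), supported on [0, 1]."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def interpolate_level(centers, values, delta):
    """Solve for the weights of s(x) = sum_j w_j * phi(|x - c_j| / delta)."""
    d = np.abs(centers[:, None] - centers[None, :])   # 1-D example for brevity
    A = wendland_c2(d / delta)
    return np.linalg.solve(A, values)

def multilevel_interpolant(f, levels, mu=0.8):
    """Multilevel residual correction on [0, 1].
    levels: list of point counts per level, e.g. [9, 17, 33, 65].
    The support radius on each level scales like h**mu with mu < 1 (non-stationary),
    since a purely stationary choice (mu = 1) is known not to converge."""
    stages = []
    def current(x):
        return sum(w @ wendland_c2(np.abs(x - c[:, None]) / d) for c, w, d in stages)
    for n in levels:
        centers = np.linspace(0.0, 1.0, n)
        h = 1.0 / (n - 1)                              # mesh norm of this level
        delta = 2.0 * h ** mu                          # support radius tied to h
        residual = f(centers) - current(centers)       # correct what earlier levels missed
        w = interpolate_level(centers, residual, delta)
        stages.append((centers, w, delta))
    return current

# Example: approximate a smooth function and inspect the final uniform error.
approx = multilevel_interpolant(np.sin, levels=[9, 17, 33, 65])
x = np.linspace(0.0, 1.0, 200)
print(np.max(np.abs(np.sin(x) - approx(x))))
```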
|
7 |
Parametric shape and topology structure optimization with radial basis functions and level set method. January 2008 (has links)
Lui, Fung Yee. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 83-92). / Abstracts in English and Chinese. / Contents:
Acknowledgement --- p.iii
Abbreviation --- p.xii
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Background --- p.1
Chapter 1.2 --- Related Work --- p.6
Chapter 1.2.1 --- Parametric Optimization Method and Radial Basis Functions --- p.6
Chapter 1.3 --- Contribution and Organization of the Dissertation --- p.7
Chapter 2 --- Level Set Method for Structure Shape and Topology Optimization --- p.8
Chapter 2.1 --- Primary Ideas of Shape and Topology Optimization --- p.8
Chapter 2.2 --- Level Set Models of Implicit Moving Boundaries --- p.11
Chapter 2.2.1 --- Representation of the Boundary via Level Set Method --- p.11
Chapter 2.2.2 --- Hamilton-Jacobi Equations --- p.13
Chapter 2.3 --- Numerical Techniques --- p.13
Chapter 2.3.1 --- Signed-Distance Function --- p.14
Chapter 2.3.2 --- Discrete Computational Scheme --- p.14
Chapter 2.3.3 --- Level Set Surface Re-initialization --- p.16
Chapter 2.3.4 --- Velocity Extension --- p.16
Chapter 3 --- Structure Topology Optimization with Discrete Level Sets --- p.18
Chapter 3.1 --- A Level Set Method for Structural Shape and Topology Optimization --- p.18
Chapter 3.1.1 --- Problem Definition --- p.18
Chapter 3.2 --- Shape Derivative: an Engineering-oriented Deduction --- p.21
Chapter 3.2.1 --- Sensitivity Analysis --- p.23
Chapter 3.2.2 --- Optimization Algorithm --- p.28
Chapter 3.3 --- Limitations of Discrete Level Set Method --- p.30
Chapter 4 --- RBF-based Parametric Level Set Method --- p.32
Chapter 4.1 --- Introduction --- p.32
Chapter 4.2 --- Radial Basis Functions Modeling --- p.33
Chapter 4.2.1 --- Inverse Multiquadric (IMQ) Radial Basis Functions --- p.38
Chapter 4.3 --- Parameterized Level Set Method in Structure Topology Optimization --- p.39
Chapter 4.4 --- Parametric Shape and Topology Structure Optimization Method with Radial Basis Functions --- p.42
Chapter 4.4.1 --- Changing Coefficient Method --- p.43
Chapter 4.4.2 --- Moving Knot Method --- p.45
Chapter 4.4.3 --- Combination of Changing Coefficient and Moving Knot Method --- p.46
Chapter 4.5 --- Numerical Implementation --- p.48
Chapter 4.5.1 --- Sensitivity Calculation --- p.48
Chapter 4.5.2 --- Optimization Algorithms --- p.49
Chapter 4.5.3 --- Numerical Examples --- p.52
Chapter 4.6 --- Summary --- p.65
Chapter 5 --- Conclusion and Future Work --- p.80
Chapter 5.1 --- Conclusion --- p.80
Chapter 5.2 --- Future Work --- p.81
Bibliography --- p.83
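For orientation, the RBF-based parametric level set method of Chapter 4 is usually formulated as follows (a schematic of the standard formulation, assumed here rather than quoted from the thesis): the level-set function is written as an RBF expansion, which turns the Hamilton-Jacobi evolution into ordinary differential equations for the expansion coefficients.

```latex
% Level-set function parametrized by radial basis functions g_i centered at knots x_i:
\[
  \phi(x, t) \;=\; \sum_{i=1}^{N} \alpha_i(t)\, g_i\!\left(\lVert x - x_i \rVert\right).
\]
% Substituting into the Hamilton-Jacobi equation
%   \partial\phi/\partial t + V_n\,|\nabla\phi| = 0
% gives a system of equations for the coefficients alpha_i(t):
\[
  \sum_{i=1}^{N} \dot{\alpha}_i(t)\, g_i\!\left(\lVert x - x_i \rVert\right)
  \;+\; V_n(x)\,\bigl\lvert \nabla \phi(x, t) \bigr\rvert \;=\; 0 ,
\]
% so the structural boundary can be evolved by updating the coefficients
% ("changing coefficient" method) and/or the knot positions x_i
% ("moving knot" method).
```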
|
8 |
Priors Stabilizers and Basis Functions: From Regularization to Radial, Tensor and Additive Splines /Girosi, Federico, Jones, Michael, Poggio, Tomaso 01 June 1993 (has links)
We had previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, which are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
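The regularization viewpoint sketched in this abstract starts from a functional of the following standard form (general background, stated with notation that may differ from the paper's):

```latex
% Regularization functional: a data term plus a smoothness penalty defined by an
% operator P (the "stabilizer"); lambda > 0 controls the trade-off.
\[
  H[f] \;=\; \sum_{i=1}^{N} \bigl( y_i - f(x_i) \bigr)^2 \;+\; \lambda\, \lVert P f \rVert^2 .
\]
% Under suitable conditions the minimizer is a finite expansion over basis
% functions G determined by P (e.g. radial functions G(\lVert x - x_i \rVert)
% when the prior/stabilizer is rotation invariant), i.e. a network with one
% layer of hidden units plus a term p in the null space of P:
\[
  f(x) \;=\; \sum_{i=1}^{N} c_i\, G(x; x_i) \;+\; p(x),
  \qquad p \in \mathcal{N}(P).
\]
```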
|
9 |
Prediction of permeate flux decline in crossflow membrane filtration of colloidal suspension : a radial basis function neural network approach /Chen, Huaiqun. January 2005 (has links)
Thesis (M.S.)--University of Hawaii at Manoa, 2005. / Includes bibliographical references (leaves 63-67). Also available via World Wide Web.
|
10 |
A radial basis memory model for human maze learning /Drewell, Lisa Y. 30 June 2008 (has links)
This research develops a memory model capable of performing in a human-like fashion on a maze traversal task. The model is based on and retains the underlying ideas of Minerva 2, but is executed with different mathematical operations and with some added parameters and procedures that enable more capabilities. When applied to the same maze traversal task as was used in a previous experiment with human subjects, a maze traversal agent with the developed model as its memory emulated the error rates of the human data remarkably well. The agent and memory model also successfully emulated the human data when the data were divided into two groups, fast maze learners and slow maze learners, and so could account for individual differences in performance, specifically in the learning rate. Because forgetting was not applied, all experiences were flawlessly encoded in memory; the model therefore also demonstrates that errors can arise from interference between memories rather than from forgetting. / Thesis (Master, Computing) -- Queen's University, 2008-06-04 13:39:38.179
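To make the "radial basis memory" idea concrete, here is a minimal sketch of a Minerva 2-style trace memory in which retrieval uses a radial (Gaussian) function of the distance between a probe and each stored trace; this is a generic illustration with assumed names and parameters (RadialBasisMemory, width), not the model developed in the thesis.

```python
import numpy as np

class RadialBasisMemory:
    """Minerva 2-style instance memory with a radial basis activation.

    Every experience is stored verbatim as a trace vector. A probe activates
    each trace according to a Gaussian (radial) function of its distance to
    the probe, and the retrieved "echo" is the activation-weighted blend of
    traces. Errors can therefore arise from interference between similar
    traces even though nothing is ever forgotten."""

    def __init__(self, width=1.0):
        self.width = width          # radial basis width; smaller -> more selective retrieval
        self.traces = []            # flawless storage of all experiences

    def store(self, trace):
        self.traces.append(np.asarray(trace, dtype=float))

    def activations(self, probe):
        probe = np.asarray(probe, dtype=float)
        d = np.array([np.linalg.norm(probe - t) for t in self.traces])
        return np.exp(-(d ** 2) / (2.0 * self.width ** 2))

    def echo(self, probe):
        """Activation-weighted blend of all stored traces."""
        a = self.activations(probe)
        traces = np.vstack(self.traces)
        return a @ traces / (a.sum() + 1e-12)

# Toy usage: store (state, action) traces and read back the action slot for a probe state.
mem = RadialBasisMemory(width=0.5)
mem.store([0.0, 0.0, +1.0])         # at junction (0, 0) the correct turn was +1
mem.store([1.0, 0.0, -1.0])         # at junction (1, 0) the correct turn was -1
probe = np.array([0.1, 0.0, 0.0])   # current junction, with the action slot left blank
print(mem.echo(probe)[-1])          # positive (leaning toward +1), pulled toward 0 by interference
```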
|