21 |
Cut finite element methods on parametric multipatch surfaces. Jonsson, Tobias, January 2019.
No description available.
|
22 |
Real-Time Optimal Parametric Design of a Simple Infiltration-Evaporation Model Using the Assess-Predict-Optimize (APO) Strategy. Ali, S., Damodaran, Murali, Patera, Anthony T., 01 1900.
Optimal parametric design of a system must be able to respond quickly to short-term needs as well as long-term conditions. To this end, we present an Assess-Predict-Optimize (APO) strategy which allows for easy modification of a system's characteristics and constraints, enabling quick design adaptation. The APO strategy has three components: Assess, which extracts the necessary information from given data; Predict, which predicts the future behavior of the system; and Optimize, which obtains the optimal system configuration based on information from the other two components. The strategy relies on three key mathematical ingredients to yield real-time results that rigorously conform to the given constraints: dimension reduction of the model, a posteriori error estimation, and optimization methods. The resulting formulation resembles a bilevel optimization problem with an inherent nonconvexity in the inner level. Using a simple infiltration-evaporation model to simulate an irrigation system, we demonstrate the APO strategy's ability to yield real-time optimal results. The linearized model, described by a coercive elliptic partial differential equation, is discretized by the reduced-basis output bounds method. A primal-dual interior point method is then chosen to solve the resulting APO problem. / Singapore-MIT Alliance (SMA)
|
23 |
Reliable Real-Time Solution of Parametrized Elliptic Partial Differential Equations: Application to Elasticity. Veroy, K., Leurent, T., Prud'homme, C., Rovas, D.V., Patera, Anthony T., 01 1900.
The optimization, control, and characterization of engineering components or systems require fast, repeated, and accurate evaluation of a partial-differential-equation-induced input-output relationship. We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The method has three components: (i) rapidly convergent reduced-basis approximations; (ii) a posteriori error estimation; and (iii) off-line/on-line computational procedures. These components -- integrated within a special network architecture -- render partial differential equation solutions truly "useful": essentially real-time as regards operation count; "blackbox" as regards reliability; and directly relevant as regards the (limited) input-output data required. / Singapore-MIT Alliance (SMA)
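As a hedged illustration of the offline/online decomposition described above (not code from the paper), the sketch below builds a reduced basis from snapshots of a small affinely parametrized system A(mu) = A0 + mu*A1 and evaluates an output online by Galerkin projection; all matrices and the output functional are synthetic.

```python
import numpy as np

# Illustrative offline/online reduced-basis sketch; synthetic data, not code from the paper.
rng = np.random.default_rng(0)
n = 200                                   # full-order dimension
A0 = np.diag(2.0 + rng.random(n))         # affine components: A(mu) = A0 + mu * A1
A1 = np.diag(rng.random(n))
f = rng.random(n)                         # load vector
ell = rng.random(n)                       # output functional l(u) = ell @ u

def full_solve(mu):
    return np.linalg.solve(A0 + mu * A1, f)

# --- Offline: collect snapshots over training parameters, orthonormalize ---
train_mus = np.linspace(0.1, 10.0, 8)
snapshots = np.column_stack([full_solve(mu) for mu in train_mus])
Z, _ = np.linalg.qr(snapshots)            # reduced basis Z (n x N)

# Pre-project the affine components once (parameter-independent work)
A0_N, A1_N = Z.T @ A0 @ Z, Z.T @ A1 @ Z
f_N, ell_N = Z.T @ f, Z.T @ ell

# --- Online: cheap N x N solve for any new parameter value ---
def rb_output(mu):
    u_N = np.linalg.solve(A0_N + mu * A1_N, f_N)
    return ell_N @ u_N

mu_test = 3.7
print("RB output:  ", rb_output(mu_test))
print("Full output:", ell @ full_solve(mu_test))
```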
|
24 |
High-dimensional data mining: subspace clustering, outlier detection and applications to classification. Foss, Andrew, 06 1900.
Data mining in high dimensionality almost inevitably faces the consequences of increasing sparsity and declining differentiation between points. This is problematic because we usually exploit these differences for approaches such as clustering and outlier detection. In addition, the exponentially increasing sparsity tends to increase false negatives when clustering.
In this thesis, we address the problem of solving high-dimensional problems using low-dimensional solutions. In clustering, we provide a new framework, MAXCLUS, for finding candidate subspaces and the clusters within them using only two-dimensional clustering. We demonstrate this through an implementation, GCLUS, that outperforms many state-of-the-art clustering algorithms and is particularly robust with respect to noise. It also handles overlapping clusters and provides either 'hard' or 'fuzzy' clustering results as desired. In order to handle extremely high-dimensional problems, such as genome microarrays, given some sample-level diagnostic labels, we provide a simple but effective classifier, GSEP, which weights the features so that the most important can be fed to GCLUS. We show that this leads to small numbers of features (e.g. genes) that can distinguish the diagnostic classes and are thus candidates for research into developing therapeutic applications.
In the field of outlier detection, several novel algorithms suited to high-dimensional data are presented (T*ENT, T*ROF, FASTOUT). It is shown that these algorithms outperform the state-of-the-art outlier detection algorithms in ranking outlierness for many datasets, regardless of whether they contain rare classes or not. Our research into high-dimensional outlier detection has even shown that our approach can be a powerful means of classification for heavily overlapping classes given sufficiently high dimensionality, and that this phenomenon occurs solely due to the differences in variance among the classes. On some difficult datasets, this unsupervised approach yielded better separation than the very best supervised classifiers, and on other data the results were competitive with state-of-the-art supervised approaches. The elucidation of this novel approach to classification opens a new field in data mining: classification through differences in variance rather than spatial location.
As an appendix, we provide an algorithm for estimating false-negative and false-positive rates so that these can be compensated for.
|
25 |
Analysis and Optimization of Classifier Error Estimator Performance within a Bayesian Modeling Framework. Dalton, Lori Anne, 2012 May 1900.
With the advent of high-throughput genomic and proteomic technologies, in conjunction with the difficulty in obtaining even moderately sized samples, small-sample classifier design has become a major issue in the biological and medical communities. Training-data error estimation becomes mandatory, yet none of the popular error estimation techniques have been rigorously designed via statistical inference or optimization. In this investigation, we place classifier error estimation in a framework of minimum mean-square error (MMSE) signal estimation in the presence of uncertainty, where uncertainty is relative to a prior over a family of distributions. This results in a Bayesian approach to error estimation that is optimal and unbiased relative to the model. The prior addresses a trade-off between estimator robustness (modeling assumptions) and accuracy.
Closed-form representations for Bayesian error estimators are provided for two important models: discrete classification with Dirichlet priors (the discrete model) and linear classification of Gaussian distributions with fixed, scaled identity or arbitrary covariances and conjugate priors (the Gaussian model). We examine robustness to false modeling assumptions and demonstrate that Bayesian error estimators perform especially well for moderate true errors.
The Bayesian modeling framework facilitates both optimization and analysis. It naturally gives rise to a practical expected measure of performance for arbitrary error estimators: the sample-conditioned mean-square error (MSE). Closed-form expressions are provided for both Bayesian models. We examine the consistency of Bayesian error estimation and illustrate a salient application in censored sampling, where sample points are collected one at a time until the conditional MSE reaches a stopping criterion.
We address practical considerations for gene-expression microarray data, including the suitability of the Gaussian model, a methodology for calibrating normal-inverse-Wishart priors from unused data, and an approximation method for non-linear classification. We observe superior performance on synthetic high-dimensional data and real data, especially for moderate to high expected true errors and small feature sizes.
Finally, arbitrary error estimators may be optimally calibrated assuming a fixed Bayesian model, sample size, classification rule, and error estimation rule. Using a calibration function mapping error estimates to their optimally calibrated values off-line, error estimates may be calibrated on the fly whenever the assumptions apply.
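To make the flavor of such closed-form estimators concrete, the following sketch computes a posterior-expectation (Bayesian) error estimate for the discrete model with Dirichlet priors on the bin probabilities and a Beta prior on the class prior. The priors, conventions, and toy data are illustrative assumptions, not results reproduced from the dissertation.

```python
import numpy as np

# Illustrative Bayesian (posterior-expectation) error estimate for discrete
# classification with Dirichlet priors; hyperparameters are assumptions for
# illustration only.

def bayesian_error_estimate(U, V, psi, alpha=None, beta=None, a=1.0, b=1.0):
    """U[i], V[i]: training counts in bin i for class 0 and class 1.
    psi[i]: label (0 or 1) the classifier assigns to bin i.
    alpha, beta: Dirichlet hyperparameters for the class-conditional
    bin probabilities; a, b: Beta hyperparameters for the class-0 prior."""
    U, V, psi = map(np.asarray, (U, V, psi))
    bins = len(U)
    alpha = np.ones(bins) if alpha is None else np.asarray(alpha)
    beta = np.ones(bins) if beta is None else np.asarray(beta)
    n0, n1 = U.sum(), V.sum()

    # Posterior means of the class prior and the bin probabilities
    c_mean = (a + n0) / (a + b + n0 + n1)
    p_mean = (alpha + U) / (alpha.sum() + n0)   # class-0 bin probabilities
    q_mean = (beta + V) / (beta.sum() + n1)     # class-1 bin probabilities

    # Expected true error: class-0 mass labeled 1 plus class-1 mass labeled 0
    return c_mean * p_mean[psi == 1].sum() + (1 - c_mean) * q_mean[psi == 0].sum()

# Example: 4 bins, majority-vote (histogram) classifier
U = np.array([6, 2, 1, 0])
V = np.array([1, 1, 4, 5])
psi = (V > U).astype(int)
print(bayesian_error_estimate(U, V, psi))
```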
|
26 |
Adaptive finite element methods for multiphysics problems. Bengzon, Fredrik, January 2009.
In this thesis we develop and analyze the performance of adaptive finite element methods for multiphysics problems. In particular, we propose a methodology for deriving computable error estimates when solving unidirectionally coupled multiphysics problems using segregated finite element solvers. The error estimates are of a posteriori type and are derived using the standard framework of dual weighted residual estimates. A main feature of the methodology is its capability of automatically estimating the propagation of error between the involved solvers with respect to an overall computational goal. The a posteriori estimates are used to drive local mesh refinement, which concentrates the computational power where it is most needed. We have applied and numerically studied the methodology on several common multiphysics problems using various types of finite elements in both two and three spatial dimensions.
Multiphysics problems often involve convection-diffusion equations for which standard finite elements can be unstable. For such equations we formulate a robust discontinuous Galerkin method of optimal order with piecewise constant approximation. Sharp a priori and a posteriori error estimates are proved and verified numerically.
Fractional step methods are popular for simulating incompressible fluid flow. However, since they are not genuine Galerkin methods, but rather based on operator splitting, they do not fit into the standard framework for a posteriori error analysis. We formally derive an a posteriori error estimate for a prototype fractional step method by separating the error in a functional describing the computational goal into a finite element discretization residual, a time stepping residual, and an algebraic residual.
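The sketch below illustrates only the generic estimate-mark-refine loop that such a posteriori estimates drive, using a one-dimensional Poisson problem and a simple residual-type indicator; it does not implement the dual weighted residual or multiphysics machinery of the thesis.

```python
import numpy as np

# Minimal 1D sketch of the estimate-mark-refine loop behind adaptive FEM;
# purely illustrative, with a residual-type element indicator.

def solve_poisson_1d(x, f):
    """Linear FEM for -u'' = f on (0,1) with u(0) = u(1) = 0 on mesh nodes x."""
    n = len(x)
    h = np.diff(x)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n - 1):                 # assemble element by element
        A[k:k+2, k:k+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h[k]
        b[k:k+2] += f(0.5 * (x[k] + x[k+1])) * h[k] / 2.0
    A[0, :] = A[-1, :] = 0.0               # homogeneous Dirichlet boundary conditions
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

def adapt(f, n0=5, steps=6, frac=0.3):
    x = np.linspace(0.0, 1.0, n0)
    for _ in range(steps):
        u = solve_poisson_1d(x, f)
        h = np.diff(x)
        mids = 0.5 * (x[:-1] + x[1:])
        eta = h ** 1.5 * np.abs(f(mids))   # indicator ~ h_K * ||f||_{L2(K)}
        marked = eta >= frac * eta.max()   # mark elements with large indicators
        x = np.sort(np.concatenate([x, mids[marked]]))  # refine by bisection
    return x, solve_poisson_1d(x, f)

x, u = adapt(lambda s: 100.0 * np.exp(-100.0 * (s - 0.5) ** 2))
print(len(x), "nodes after adaptation")
```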
|
27 |
Finite Element Methods for Thin Structures with Applications in Solid Mechanics. Larsson, Karl, January 2013.
Thin and slender structures occur widely both in nature and in human creations. Clever geometries of thin structures can produce strong constructions while requiring a minimal amount of material. Computer modeling and analysis of thin and slender structures have their own set of problems, stemming from assumptions made when deriving the governing equations. This thesis deals with the derivation of numerical methods suitable for approximating solutions to problems on thin geometries. It consists of an introduction and four papers. In the first paper we introduce a thread model for use in interactive simulation. Based on a three-dimensional beam model, a corotational approach is used for interactive simulation speeds in combination with adaptive mesh resolution to maintain accuracy. In the second paper we present a family of continuous piecewise linear finite elements for thin plate problems. Patchwise reconstruction of a discontinuous piecewise quadratic deflection field allows us to use a discontinuous Galerkin method for the plate problem. Assuming a criterion on the reconstructions is fulfilled, we prove a priori error estimates in energy norm and L2-norm and provide numerical results to support our findings. The third paper deals with the biharmonic equation on a surface embedded in R3. We extend theory and formalism, developed for the approximation of solutions to the Laplace-Beltrami problem on an implicitly defined surface, to also cover the biharmonic problem. A priori error estimates for a continuous/discontinuous Galerkin method are proven in energy norm and L2-norm, and we support the theoretical results by numerical convergence studies for problems on a sphere and on a torus. In the fourth paper we consider finite element modeling of curved beams in R3. We let the geometry of the beam be implicitly defined by a vector distance function. Starting from the three-dimensional equations of linear elasticity, we derive a weak formulation for a linear curved beam expressed in global coordinates. Numerical results from a finite element implementation based on these equations are compared with classical results.
|
28 |
The Bootstrap in Supervised Learning and its Applications in Genomics/Proteomics. Vu, Thang, 2011 May 1900.
The small-sample size issue is a prevalent problem in Genomics and Proteomics today.
The bootstrap, a resampling method that aims to increase the efficiency of data usage,
is considered to be an effort to overcome the problem of limited sample size. This dissertation
studies the application of bootstrap to two problems of supervised learning with small
sample data: estimation of the misclassification error of Gaussian discriminant analysis,
and the bagging ensemble classification method.
Estimating the misclassification error of discriminant analysis is a classical problem in
pattern recognition and has many important applications in biomedical research. Bootstrap
error estimation has been shown empirically to be one of the best estimation methods in
terms of root mean squared error. In the first part of this work, we conduct a detailed
analytical study of bootstrap error estimation for the Linear Discriminant Analysis (LDA)
classification rule under Gaussian populations. We derive the exact formulas of the first
and the second moment of the zero bootstrap and the convex bootstrap estimators, as well
as their cross moments with the resubstitution estimator and the true error. Based on these
results, we obtain the exact formulas of the bias, the variance, and the root mean squared
error of the deviation from the true error of these bootstrap estimators. This includes the
moments of the popular .632 bootstrap estimator. Moreover, we obtain the optimal weight
for unbiased and minimum-RMS convex bootstrap estimators. In the univariate case, all
the expressions involve Gaussian distributions, whereas in the multivariate case, the results are written in terms of bivariate doubly non-central F distributions.
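For reference, the classical Monte Carlo versions of these estimators are easy to state: the zero bootstrap averages the error of classifiers trained on bootstrap samples and tested on the left-out points, and the convex bootstrap with weight w mixes this with resubstitution, with w = 0.632 giving the .632 estimator. The sketch below uses these standard weights with scikit-learn's LDA; it does not reproduce the exact-moment expressions derived in this work.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Illustrative Monte Carlo estimators: resubstitution, zero bootstrap, and
# convex combinations such as the .632 bootstrap (classical weights, not the
# exact-moment results of the dissertation).

def bootstrap_error_estimates(X, y, B=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    clf = LinearDiscriminantAnalysis().fit(X, y)
    resub = np.mean(clf.predict(X) != y)            # resubstitution estimate

    errs = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)            # bootstrap sample (with replacement)
        out = np.setdiff1d(np.arange(n), idx)       # points left out of the sample
        if len(out) == 0 or len(np.unique(y[idx])) < 2:
            continue                                # need both classes to train LDA
        b_clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        errs.append(np.mean(b_clf.predict(X[out]) != y[out]))
    zero = np.mean(errs)                            # zero bootstrap estimate

    def convex(w):                                  # convex bootstrap with weight w
        return (1.0 - w) * resub + w * zero

    return {"resub": resub, "zero": zero, ".632": convex(0.632)}

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (15, 2)), rng.normal(1, 1, (15, 2))])
y = np.repeat([0, 1], 15)
print(bootstrap_error_estimates(X, y))
```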
In the second part of this work, we conduct an extensive empirical investigation of
bagging, which is an application of bootstrap to ensemble classification. We investigate
the performance of bagging in the classification of small-sample gene-expression data and
protein-abundance mass spectrometry data, as well as the accuracy of small-sample error
estimation with this ensemble classification rule. We observed that, under t-test and
RELIEF filter-based feature selection, bagging generally does a good job of improving
the performance of unstable, overfitting classifiers, such as CART decision trees and neural
networks, but that improvement was not sufficient to beat the performance of single stable,
non-overfitting classifiers, such as diagonal and plain linear discriminant analysis, or
3-nearest neighbors. Furthermore, the ensemble method did not improve the performance
of these stable classifiers significantly. We give an explicit definition of the out-of-bag estimator
that is intended to remove estimator bias, by formulating carefully how the error
count is normalized, and investigate the performance of error estimation for bagging of
common classification rules, including LDA, 3NN, and CART, applied on both synthetic
and real patient data, corresponding to the use of common error estimators such as resubstitution,
leave-one-out, cross-validation, basic bootstrap, bootstrap 632, bootstrap 632 plus,
bolstering, semi-bolstering, in addition to the out-of-bag estimator. The results from the
numerical experiments indicated that the performance of the out-of-bag estimator is very
similar to that of leave-one-out; in particular, the out-of-bag estimator is slightly pessimistically
biased. The performance of the other estimators is consistent with their performance
with the corresponding single classifiers, as reported in other studies. The results of this
work are expected to provide helpful guidance to practitioners who are interested in applying
the bootstrap in supervised learning applications.
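As a concrete illustration of the out-of-bag idea discussed above, the sketch below bags CART trees by hand and normalizes the error count over the samples that received at least one out-of-bag vote; this is one common convention, stated as an assumption rather than the exact definition formulated in the dissertation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative out-of-bag (OOB) error estimate for a bagged CART ensemble.
# Labels are assumed to be integers 0..K-1.

def bagging_oob_error(X, y, B=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    votes = np.zeros((n, len(np.unique(y))))       # OOB votes per sample and class
    for _ in range(B):
        idx = rng.integers(0, n, size=n)           # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)      # out-of-bag points for this tree
        if len(oob) == 0 or len(np.unique(y[idx])) < 2:
            continue
        tree = DecisionTreeClassifier().fit(X[idx], y[idx])
        votes[oob, tree.predict(X[oob])] += 1      # record this tree's OOB votes
    has_vote = votes.sum(axis=1) > 0               # samples with at least one OOB vote
    oob_pred = votes[has_vote].argmax(axis=1)
    return np.mean(oob_pred != y[has_vote])        # normalize over voted samples only

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(0.8, 1, (20, 5))])
y = np.repeat([0, 1], 20)
print("OOB error estimate:", bagging_oob_error(X, y))
```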
|
29 |
A metamodeling approach for approximation of multivariate, stochastic and dynamic simulations. Hernandez Moreno, Andres Felipe, 04 April 2012.
This thesis describes the implementation of metamodeling approaches as a solution for approximating multivariate, stochastic and dynamic simulations. In statistics, metamodeling (or "model of a model") refers to the scenario where an empirical model is built from simulated data. In this thesis, this idea is exploited by using pre-recorded dynamic simulations as a source of simulated dynamic data. Based on this simulated dynamic data, an empirical model is trained to map the dynamic evolution of the system from the current discrete time step to the next discrete time step. It is therefore possible to approximate the dynamics of the complex dynamic simulation by iteratively applying the trained empirical model. The rationale for creating such an approximate dynamic representation is that the empirical models (metamodels) are much more affordable to compute than the original dynamic simulation, while having an acceptable prediction error.
The successful implementation of metamodeling approaches as approximations of complex dynamic simulations requires an understanding of the propagation of error during the iterative process. Prediction errors made by the empirical model at earlier times of the iterative process propagate into future predictions of the model. This propagation of error means that the trained empirical model will deviate from the expensive dynamic simulation because of its own errors. Based on this idea, a Gaussian process model is chosen as the metamodeling approach for the approximation of expensive dynamic simulations in this thesis. This empirical model was selected not only for its flexibility and error estimation properties, but also because it illustrates relevant issues to be considered if other metamodeling approaches were used for this purpose.
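A minimal sketch of this idea, assuming scikit-learn's Gaussian process regressor and a toy logistic map standing in for an expensive simulation: the metamodel is trained on recorded one-step transitions and then rolled forward iteratively, with the predictive standard deviation serving as a local error estimate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy "simulation": a logistic map in place of an expensive dynamic solver.
def simulate(x0, steps, r=3.6):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

# Pre-recorded simulation data: one-step transitions (x_t -> x_{t+1})
traj = simulate(0.2, 200)
X_train = traj[:-1].reshape(-1, 1)
y_train = traj[1:]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-8)
gp.fit(X_train, y_train)

# Iterative rollout: feed each prediction back in as the next input.
# The predictive standard deviation gives a local error estimate, though
# errors accumulate over the iteration as discussed above.
x = np.array([[0.35]])
for t in range(10):
    mean, std = gp.predict(x, return_std=True)
    print(f"t={t + 1}: predicted {mean[0]:.4f} +/- {std[0]:.1e}")
    x = mean.reshape(-1, 1)
```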
|
30 |
Anisotropic mesh construction and error estimation in the finite element method. Kunert, Gerd, 13 January 2000.
In an anisotropic adaptive finite element algorithm one usually needs an error estimator that yields not only the error size but also the stretching directions and stretching ratios of the elements of a (quasi) optimal anisotropic mesh.
However, these last two ingredients cannot be extracted from any of the known anisotropic a posteriori error estimators.
Therefore a heuristic approach is pursued here: the desired information is provided by the so-called Hessian strategy. This strategy produces favourable anisotropic meshes which result in a small discretization error.
The focus of this paper is on error estimation on anisotropic meshes.
It is known that such error estimation is reliable and efficient only
if the anisotropic mesh is aligned with the anisotropic solution.
The main result here is that the Hessian strategy produces anisotropic meshes that show the required alignment with the anisotropic solution.
The corresponding inequalities are proven, and the underlying heuristic assumptions are given in a stringent yet general form.
Hence the analysis provides further insight into a particular aspect of anisotropic error estimation.
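As a numerical illustration of the Hessian strategy (using the usual metric-based conventions as an assumption, not formulas quoted from the paper): the eigenvectors of a recovered Hessian give the stretching directions, the desired mesh sizes scale like |lambda_i|^(-1/2), and the stretching ratio follows from the eigenvalue ratio.

```python
import numpy as np

# Illustrative sketch: stretching directions and ratios from a Hessian,
# using standard metric-based conventions as an assumption.

def stretching_from_hessian(H, eps=1e-12):
    """H: 2x2 (or 3x3) symmetric Hessian of the solution at a point."""
    lam, vecs = np.linalg.eigh(H)
    lam = np.maximum(np.abs(lam), eps)        # work with |lambda|, avoid division by zero
    h = 1.0 / np.sqrt(lam)                    # relative mesh size along each eigenvector
    order = np.argsort(h)[::-1]               # longest direction first
    h, vecs = h[order], vecs[:, order]
    return vecs, h / h[0], h[0] / h[-1]       # directions, relative sizes, stretching ratio

# Example: a boundary-layer-like solution varies strongly in y and weakly in x,
# so the mesh should be stretched along the x-axis.
H = np.array([[1.0, 0.0],
              [0.0, 1.0e4]])                  # u_yy dominates u_xx
dirs, rel_h, ratio = stretching_from_hessian(H)
print("stretching directions (columns):\n", dirs)
print("relative sizes:", rel_h, " stretching ratio:", ratio)
```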
|