1 |
Adaptive learning in lasso models / Patnaik, Kaushik. 07 January 2016.
Regression with L1-regularization, the Lasso, is a popular algorithm for recovering the sparsity pattern (also known as model selection) in linear models from observations contaminated by noise. We examine a scenario where a fraction of the zero covariates are highly correlated with non-zero covariates, making sparsity recovery difficult. We propose two methods that adaptively increment the regularization parameter to prune the Lasso solution set. We prove that the algorithms achieve consistent model selection with high probability while using fewer samples than traditional Lasso. The algorithms can be extended to a broad class of L1-regularized M-estimators for linear statistical models.
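A minimal sketch of the regularization-increment idea, using scikit-learn's Lasso on synthetic data (the step factor, stopping rule, and data are illustrative assumptions, not the authors' exact algorithms):

import numpy as np
from sklearn.linear_model import Lasso

# Illustrative data: 100 samples, 20 covariates, only the first 3 non-zero
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.1 * rng.standard_normal(100)

alpha = 0.01                     # initial regularization parameter
support = set(range(20))
for _ in range(50):
    fit = Lasso(alpha=alpha, max_iter=10000).fit(X, y)
    new_support = set(np.flatnonzero(fit.coef_))
    if new_support == support:   # support has stabilized, stop pruning
        break
    support = new_support
    alpha *= 1.5                 # adaptively increment the penalty to prune covariates

print("selected covariates:", sorted(support))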
|
2 |
Accurate Finite Difference Methods for Option Pricing / Persson, Jonas. January 2006.
Stock options are priced numerically using space- and time-adaptive finite difference methods. European options on one and several underlying assets are considered. These are priced with adaptive numerical algorithms, including a second-order method and a more accurate method. For American options we use the adaptive technique to price options on one stock, with and without stochastic volatility. In all these methods, emphasis is put on controlling the errors to fulfill predefined tolerance levels. The adaptive second-order method is compared to an alternative discretization technique using radial basis functions. This method is not adaptive but shows potential in option pricing for one and several underlying assets. A finite difference method and a Monte Carlo method are applied to a new financial contract called the Turbo warrant. A comparison of these two methods shows that, for the case considered, the finite difference method is superior.
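As a point of reference for the finite difference approach, the sketch below prices a European call by solving the Black-Scholes PDE backwards in time with a plain (non-adaptive, first-order) implicit scheme on a uniform grid; all parameters and grid sizes are illustrative and none of the thesis's adaptive error control is reproduced:

import numpy as np

# Black-Scholes PDE, fully implicit time stepping on a uniform S-grid
S_max, K, r, sigma, T = 200.0, 100.0, 0.05, 0.2, 1.0
M, N = 200, 200                            # space and time steps
S = np.linspace(0.0, S_max, M + 1)
dS, dt = S_max / M, T / N

V = np.maximum(S - K, 0.0)                 # European call payoff at maturity

# Tridiagonal system for the interior nodes i = 1, ..., M-1
i = np.arange(1, M)
a = 0.5 * dt * (sigma**2 * i**2 - r * i)
b = 1.0 + dt * (sigma**2 * i**2 + r)
c = 0.5 * dt * (sigma**2 * i**2 + r * i)
A = np.diag(b) - np.diag(a[1:], -1) - np.diag(c[:-1], 1)

for n in range(N):                         # march backwards from T to 0
    tau = (n + 1) * dt                     # time to maturity after this step
    rhs = V[1:-1].copy()
    rhs[-1] += c[-1] * (S_max - K * np.exp(-r * tau))   # upper boundary value
    V[1:-1] = np.linalg.solve(A, rhs)
    V[0], V[-1] = 0.0, S_max - K * np.exp(-r * tau)

print("call price at S = 100:", np.interp(100.0, S, V))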
|
3 |
The WN adaptive method for numerical solution of particle transport problems / Watson, Aaron Michael. 12 April 2006.
The source, nature, and history of ray effects are described. A benchmark code, using piecewise constant functions in angle and diamond differencing in space, is derived in order to analyze four sample problems. The results of this analysis are presented, showing the ray effects and how increasing the angular resolution (the number of angles) eliminates them. The theory of wavelets is introduced and the use of wavelets in multiresolution analysis is discussed. This multiresolution analysis is applied to the transport equation, and equations are derived that can be solved for the coefficients in the wavelet expansion of the angular flux. The use of thresholding to eliminate wavelet coefficients that are not required to adequately solve a problem is then discussed. An iterative sweeping algorithm, called the SN-WN method, is derived to solve the wavelet-based equations, and its convergence is discussed. A second algorithm, called the CW-WN method, is derived by solving a matrix system within each cell directly for the expansion coefficients. The results of applying the CW-WN method to the benchmark problems are presented. These results show that more research is needed to improve the convergence of the SN-WN method, and that the CW-WN method is computationally too costly to be seriously considered.
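The multiresolution-plus-thresholding step can be illustrated on a one-dimensional angular flux profile with the Haar wavelet; the flux shape, number of levels, and threshold below are illustrative assumptions, not the SN-WN or CW-WN machinery itself:

import numpy as np

def haar_decompose(f, levels):
    # Discrete orthonormal Haar wavelet decomposition of a 1-D signal
    coeffs = []
    approx = f.copy()
    for _ in range(levels):
        avg = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)   # scaling coefficients
        det = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)   # wavelet (detail) coefficients
        coeffs.append(det)
        approx = avg
    return approx, coeffs

def haar_reconstruct(approx, coeffs):
    for det in reversed(coeffs):
        out = np.empty(2 * approx.size)
        out[0::2] = (approx + det) / np.sqrt(2.0)
        out[1::2] = (approx - det) / np.sqrt(2.0)
        approx = out
    return approx

# Illustrative "angular flux" on 64 discrete angles, forward peaked
mu = np.linspace(-1.0, 1.0, 64)
psi = np.exp(-5.0 * (mu - 0.3) ** 2)

approx, details = haar_decompose(psi, levels=4)
threshold = 1e-2
details = [np.where(np.abs(d) > threshold, d, 0.0) for d in details]   # drop small coefficients
psi_t = haar_reconstruct(approx, details)

kept = sum(int(np.count_nonzero(d)) for d in details) + approx.size
print(f"kept {kept} of {psi.size} coefficients, max error {np.max(np.abs(psi - psi_t)):.2e}")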
|
4 |
Adaptive methodologies in multi-arm dose response and biosimilarity clinical trials / Wu, Joseph Moon Wai. 12 March 2016.
As most adaptive clinical trial designs are implemented in stages, well-understood methods of sequential trial monitoring are needed. In the frequentist paradigm, examples of sequential monitoring methodologies include p-value combination tests, conditional error, conditional power, and alpha spending approaches. Within the Bayesian framework, posterior and predictive probabilities are used as monitoring criteria, the latter being analogous to the conditional power approach. In a placebo- or active-controlled dose response clinical trial, we are interested in achieving two objectives: selecting the best therapeutic dose and confirming this selected dose. The traditional approach uses a parallel group design with Dunnett's adjustment. Recently, some two-stage seamless Phase II/III designs have been proposed. The drop-the-losers design selects the dose with the highest empirical mean after the first stage, while another design assumes a dose-response model to aid dose selection. These designs, however, do not consider prioritizing the doses or adaptively inserting new doses. We propose an adaptive staggered dose design for a normal endpoint that makes minimal assumptions regarding the dose response and sequentially adds doses to the trial. An alpha spending function is applied in a novel way to monitor the doses across the trial. Through numerical and simulation studies, we confirm that optimistic alpha spending coupled with informative dose ordering jointly produce desirable operating characteristics when compared to drop-the-losers and model-based seamless designs. In addition, we show how the design parameters can be flexibly varied to further improve its performance and how it can be extended to binary and survival endpoints.

In a biosimilarity trial, we are interested in establishing evidence of comparable efficacy between a follow-on biological product and a reference innovator product. So far, no standard method for biosimilarity has been endorsed by regulatory agencies. We propose a Bayesian hierarchical bias model and a non-inferiority hypothesis framework to establish biosimilarity. A two-stage adaptive design using predictive probability as an early stopping criterion is proposed. Through a simulation study, the proposed design controls the type I error better than the frequentist approach, and Bayesian power is superior when biosimilarity is plausible. The two-stage design further reduces the expected sample size.
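As one illustration of predictive-probability monitoring (the Bayesian stopping criterion mentioned above), the sketch below computes the predictive probability of final success for a single-arm binary endpoint under a Beta-Binomial model; all numbers, the prior, and the success criterion are illustrative assumptions, not the proposed designs:

from scipy.stats import beta, betabinom

a0, b0 = 1.0, 1.0                  # Beta(1, 1) prior on the response rate
n_interim, x_interim = 40, 26      # interim data: responses out of patients enrolled
n_total = 100                      # planned total sample size
p0 = 0.5                           # null response rate
success_cut = 0.975                # final criterion: P(p > p0 | all data) > 0.975

a_post, b_post = a0 + x_interim, b0 + (n_interim - x_interim)
n_remain = n_total - n_interim

# Sum over all possible future outcomes, weighted by their predictive probability
pred_prob = 0.0
for y in range(n_remain + 1):
    w = betabinom.pmf(y, n_remain, a_post, b_post)                     # predictive weight
    post_success = 1.0 - beta.cdf(p0, a_post + y, b_post + n_remain - y)
    if post_success > success_cut:
        pred_prob += w

print(f"predictive probability of trial success: {pred_prob:.3f}")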
|
5 |
Wavelet based fast solution of boundary integral equations / Harbrecht, Helmut; Schneider, Reinhold. 11 April 2006.
This paper presents a wavelet Galerkin scheme for the fast solution of boundary integral equations. Wavelet Galerkin schemes employ appropriate wavelet bases for the discretization of boundary integral operators, which yields quasi-sparse system matrices. These matrices can be compressed such that the complexity of solving a boundary integral equation scales linearly with the number of unknowns, without compromising the accuracy of the underlying Galerkin scheme. Based on the wavelet Galerkin scheme, we also present an adaptive algorithm. Numerical experiments demonstrate the performance of our algorithm.
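The compression effect can be sketched with an orthonormal Haar basis applied to a smooth model kernel; the kernel, threshold, and basis below are illustrative stand-ins for the tailored wavelet bases and rigorous compression estimates of the paper:

import numpy as np
from scipy.sparse import csr_matrix

def haar_matrix(n):
    # Orthonormal Haar transform matrix of size n (n a power of two)
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0]) / np.sqrt(2.0)                  # scaling functions
    bot = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)    # wavelets
    return np.vstack([top, bot])

n = 256
x = (np.arange(n) + 0.5) / n
K = 1.0 / (np.abs(x[:, None] - x[None, :]) + 1.0 / n)            # smooth off-diagonal kernel

W = haar_matrix(n)
A = W @ K @ W.T                                                  # system matrix in the wavelet basis
A_compressed = np.where(np.abs(A) > 1e-3 * np.abs(A).max(), A, 0.0)

sparse_A = csr_matrix(A_compressed)
print(f"nonzeros kept: {sparse_A.nnz} of {n * n} ({100.0 * sparse_A.nnz / n**2:.1f}%)")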
|
6 |
Adaptive space-time discontinuous Galerkin method for the solution of non-stationary problems (Adaptivní časoprostorová nespojitá Galerkinova metoda pro řešení nestacionárních úloh) / Vu Pham, Quynh Lan. January 2015.
This thesis studies the numerical solution of non-linear convection-diffusion problems using the space-time discontinuous Galerkin method, which naturally accommodates local adaptation in both space and time. We aim to develop a posteriori error estimates reflecting the spatial, temporal, and algebraic errors. These estimates are based on the measurement of the residuals in dual norms. We derive these estimates and numerically verify their properties. Finally, we derive an adaptive algorithm and apply it to the numerical simulation of non-stationary viscous compressible flows.
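A schematic version of such an adaptive loop, driven by element-wise error indicators and a maximum marking strategy, is sketched below for a one-dimensional model problem; the indicator, marking rule, and tolerance are illustrative and do not reproduce the dual-norm residual estimators of the thesis:

import numpy as np

def solve_and_estimate(nodes):
    # Placeholder error indicators ~ h^2 |u''| per element for u(x) = x**0.6
    h = np.diff(nodes)
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    return h**2 * np.abs(0.6 * (-0.4) * mid**(-1.4))

nodes = np.linspace(0.01, 1.0, 6)
tolerance = 1e-4
for sweep in range(20):
    eta = solve_and_estimate(nodes)
    if eta.sum() < tolerance:
        break
    marked = eta > 0.5 * eta.max()            # maximum strategy: mark the worst elements
    new_nodes = [nodes[0]]
    for k, flagged in enumerate(marked):
        if flagged:
            new_nodes.append(0.5 * (nodes[k] + nodes[k + 1]))   # bisect marked element
        new_nodes.append(nodes[k + 1])
    nodes = np.array(new_nodes)

print(f"final mesh: {nodes.size - 1} elements, estimated error {eta.sum():.2e}")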
|
7 |
Adaptive Sparse Grid Approaches to Polynomial Chaos Expansions for Uncertainty Quantification / Winokur, Justin Gregory. January 2015.
Polynomial chaos expansions provide an efficient and robust framework to analyze and quantify uncertainty in computational models. This dissertation explores the use of adaptive sparse grids to reduce the computational cost of determining a polynomial model surrogate while examining and implementing new adaptive techniques.

Determination of chaos coefficients using traditional tensor product quadrature suffers from the so-called curse of dimensionality, where the number of model evaluations scales exponentially with dimension. Previous work used a sparse Smolyak quadrature to temper this dimensional scaling, and was applied successfully to an expensive ocean general circulation model, HYCOM, during the September 2004 passing of Hurricane Ivan through the Gulf of Mexico. Results from this investigation suggested that adaptivity could yield great gains in efficiency. However, efforts at adaptivity are hampered by quadrature accuracy requirements.

We explore the implementation of a novel adaptive strategy to design sparse ensembles of oceanic simulations suitable for constructing polynomial chaos surrogates. We use a recently developed adaptive pseudo-spectral projection (aPSP) algorithm that is based on a direct application of Smolyak's sparse grid formula and that allows for the use of arbitrary admissible sparse grids. Such a construction ameliorates the severe restrictions posed by insufficient quadrature accuracy. The adaptive algorithm is tested using an existing simulation database of the HYCOM model during Hurricane Ivan. The a priori tests demonstrate that sparse and adaptive pseudo-spectral constructions lead to substantial savings over isotropic sparse sampling.

In order to provide a finer degree of resolution control along two distinct subsets of model parameters, we investigate two methods to build polynomial approximations. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids. The control of the error along different subsets of parameters may be needed in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid pseudo-spectral projection is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of aPSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, adaptive PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error.

In order to increase efficiency even further, a subsampling technique is developed to allow for local adaptivity within the aPSP algorithm. The local refinement is achieved by exploiting the hierarchical nature of nested quadrature grids to determine regions of estimated convergence. In order to achieve global representations with local refinement, synthesized model data from a lower-order projection is used for the final projection. The final subsampled grid was also tested with two more robust sparse projection techniques: compressed sensing and hybrid least-angle regression. These methods are evaluated on two sample test functions and then in an a priori analysis of the HYCOM simulations and the shock-tube ignition model investigated earlier. Small but non-trivial efficiency gains were found in some cases; in others, a large reduction in model evaluations with only a small loss of model fidelity was realized. Further extensions and capabilities are recommended for future investigations.
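For readers unfamiliar with pseudo-spectral projection, the sketch below computes polynomial chaos coefficients for a toy two-dimensional model with uniform inputs by projecting onto Legendre polynomials with a full tensor Gauss-Legendre rule; the model, order, and grid are illustrative assumptions, and the adaptive Smolyak construction of the dissertation replaces exactly this full tensor quadrature:

import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def model(x1, x2):
    return np.exp(0.5 * x1) * np.cos(2.0 * x2)

order = 6                                        # polynomial order per dimension
pts, wts = leggauss(order + 1)                   # 1-D Gauss-Legendre rule on [-1, 1]
X1, X2 = np.meshgrid(pts, pts, indexing="ij")
W = np.outer(wts, wts)
F = model(X1, X2)

coeffs = np.zeros((order + 1, order + 1))
for i in range(order + 1):
    for j in range(order + 1):
        Pi = legval(pts, [0.0] * i + [1.0])      # Legendre polynomial P_i at the nodes
        Pj = legval(pts, [0.0] * j + [1.0])
        norm = (2.0 / (2 * i + 1)) * (2.0 / (2 * j + 1))
        coeffs[i, j] = np.sum(W * F * np.outer(Pi, Pj)) / norm

# Evaluate the surrogate at a test point and compare with the true model
x1_t, x2_t = 0.3, -0.7
Pi_t = np.array([legval(x1_t, [0.0] * i + [1.0]) for i in range(order + 1)])
Pj_t = np.array([legval(x2_t, [0.0] * j + [1.0]) for j in range(order + 1)])
print(f"surrogate {Pi_t @ coeffs @ Pj_t:.6f} vs model {model(x1_t, x2_t):.6f}")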
|
8 |
Adaptive Wavelet Galerkin BEM / Harbrecht, Helmut; Schneider, Reinhold. 06 April 2006.
The wavelet Galerkin scheme for the fast solution of boundary integral equations produces approximate solutions within the discretization error accuracy offered by the underlying Galerkin method, at a computational expense that stays proportional to the number of unknowns. In this paper we present an adaptive version of the scheme which preserves the super-convergence of the Galerkin method.
|
9 |
Stable evaluation of the Jacobians for curved triangles / Meyer, Arnd. 11 April 2006.
In the adaptive finite element method, the solution of a PDE is approximated on finer and finer meshes, which are controlled by error estimators. Starting from a given coarse mesh, some elements are subdivided a couple of times. We investigate how to avoid the instabilities that limit this process, which arise because the nodal coordinates of one element coincide in more and more leading digits. In a previous paper, the stable calculation of the Jacobian matrices of the element mapping was given for straight-line triangles, quadrilaterals, and hexahedra. Here, we generalize these ideas to linear and quadratic triangles on curved boundaries.
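A toy floating-point illustration of the instability (not the paper's curved-element formulas): for a tiny triangle far from the origin, forming the Jacobian from absolute nodal coordinates cancels all leading digits, whereas forming it from stored edge vectors stays exact:

import numpy as np

p0 = np.array([1.0e9, 1.0e9])     # a vertex after many refinement steps, far from the origin
e1 = np.array([1.0e-8, 0.0])      # edge vectors of the small triangle
e2 = np.array([0.0, 1.0e-8])

p1, p2 = p0 + e1, p0 + e2         # absolute coordinates: the edges are rounded away

J_from_coords = np.column_stack([p1 - p0, p2 - p0])   # differences of nearly equal numbers
J_from_edges = np.column_stack([e1, e2])              # stored edge vectors

print("det from coordinates :", np.linalg.det(J_from_coords))   # 0.0 -- information lost
print("det from edge vectors:", np.linalg.det(J_from_edges))    # 1e-16, the true value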
|
10 |
A New Efficient Preconditioner for Crack Growth Problems / Meyer, Arnd. 11 September 2006.
A new preconditioner for the quick solution of a crack growth problem in 2D adaptive finite element analysis is proposed. Numerical experiments demonstrate the power of the method.
|