1. Efficient Knot Optimization for Accurate B-spline-based Data Approximation. Yo-Sing Yeh (9757565), 14 December 2020
Many practical applications benefit from the reconstruction of a smooth multivariate function from discrete data, for purposes such as reducing file size or improving analysis and visualization performance. Among the different reconstruction methods, the tensor-product B-spline has a number of advantageous properties over alternative data representations. However, constructing a best-fit B-spline approximation presents many roadblocks. Among the many free parameters of the B-spline model, the choice of knot vectors, which define the separation between the piecewise polynomial patches of a B-spline construction, has a major influence on the resulting reconstruction quality. Yet existing knot placement methods are ineffective, computationally expensive, or impose limitations on the dataset format or the B-spline order. Moving beyond the 1D case (curves) to higher-dimensional datasets (surfaces, volumes, hypervolumes) introduces additional computational challenges. Further complications arise for undersampled data points, where the approximation problem can become ill-posed and existing regularization proves unsatisfactory.

This dissertation is concerned with improving the efficiency and accuracy of constructing a B-spline approximation of discrete data. Specifically, we present a novel B-spline knot placement approach for accurate reconstruction of discretely sampled data, first in 1D and then extended to higher dimensions for both structured and unstructured formats. Our knot placement methods take into account the features and complexity of the input data by estimating its high-order derivatives, so that the resulting approximation is highly accurate with a low number of control points. We demonstrate our method on various 1D to 3D structured and unstructured datasets, including synthetic, simulation, and captured data. We compare our method with state-of-the-art knot placement methods and show that our approach achieves higher accuracy while requiring fewer B-spline control points. We discuss a regression approach to selecting the number of knots for multivariate data given a target error threshold. For the reconstruction of irregularly sampled data, where the linear system often becomes ill-posed, we propose a locally varying regularization scheme to address cases in which a straightforward regularization fails to produce a satisfactory reconstruction.
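The abstract does not give the algorithm itself; the following is a minimal 1D sketch of the general idea of derivative-driven knot placement, assuming SciPy is available. The function name feature_based_knots, the knot count, and the use of repeated finite differences as the derivative estimate are illustrative choices, not the dissertation's method.

```python
# Minimal 1D illustration (not the dissertation's algorithm): place interior knots where an
# estimate of a high-order derivative is large, then solve a fixed-knot least-squares B-spline fit.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def feature_based_knots(x, y, n_knots, k=3):
    """Distribute interior knots according to an estimated k-th derivative magnitude."""
    d = y.copy()
    for _ in range(k):                       # repeated finite differences ~ high-order derivative
        d = np.gradient(d, x)
    density = np.abs(d) + 1e-12              # avoid a degenerate all-zero density
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    targets = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
    return np.interp(targets, cdf, x)        # invert the CDF: more knots where derivatives are large

x = np.linspace(0.0, 1.0, 400)
y = np.exp(-200.0 * (x - 0.3) ** 2) + 0.5 * np.sin(6.0 * x)   # synthetic data with a sharp feature
t = feature_based_knots(x, y, n_knots=12, k=3)
spline = LSQUnivariateSpline(x, y, t, k=3)                     # least-squares fit with fixed knots
print("max abs error:", np.abs(spline(x) - y).max())
```

The point mirrored from the abstract is that knots concentrate where the estimated high-order derivative is large, so sharp features receive more polynomial pieces for the same total number of control points.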
2. Smoothing stochastic bang-bang problems. Eichmann, Katrin, 24 July 2013
Motivated by the problem of how to optimally execute a large stock position, this thesis considers a stochastic control problem with two special properties. First, the control problem has an exponential delay in the control variable, so the present value of the state process depends on a moving average of past control decisions. Second, the coefficients are assumed to be linear in the control variable. A control problem with these properties is mathematically challenging: it becomes a degenerate stochastic control problem whose solution, if one exists, has a bang-bang nature. The resulting discontinuity of the optimal control makes it difficult both to prove the existence of an optimal solution and to solve the problem numerically. A sequence of stochastic control problems is constructed whose state processes have invertible diffusion matrices approximating the original degenerate diffusion matrix, and whose cost functionals are convex approximations of the original linear cost functional. To prove the convergence of the solutions, the control problems are written as forward-backward stochastic differential equations (FBSDEs). It is then shown that the solutions of the FBSDEs corresponding to the constructed sequence of control problems converge in law, at least along a subsequence.
By assuming convexity of the coefficients, it is possible to construct from this limit an admissible control process which, for an appropriate reference stochastic system, is optimal for the original stochastic control problem. Besides proving the existence of an optimal (bang-bang) solution, this also yields a smooth approximation of the discontinuous bang-bang solution, which can be used for the numerical solution of the problem. The results are finally applied, in the form of numerical simulations, to the optimal execution problem.
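The abstract does not state the approximating sequence explicitly; the LaTeX sketch below records one standard way such a convexification can look, assuming a running cost that is linear in a bounded control, a state drift linear in the control with unit coefficient, and writing p_t for the adjoint (backward) component of the associated FBSDE. It illustrates the smoothing mechanism only, not the thesis's exact construction.

```latex
% Illustrative sketch only; the thesis's exact construction may differ.
% Bounded control u_t \in [-1,1], running cost linear in u:
\begin{align*}
  J(u) &= \mathbb{E}\!\left[\int_0^T \bigl(f(X_t) + c\,u_t\bigr)\,dt + g(X_T)\right],
  \qquad u^{*}_t = -\operatorname{sign}(c + p_t) \quad \text{(bang-bang)},\\
  J_\varepsilon(u) &= J(u) + \frac{\varepsilon}{2}\,
  \mathbb{E}\!\left[\int_0^T u_t^2\,dt\right],
  \qquad u^{\varepsilon}_t
  = -\operatorname{proj}_{[-1,1]}\!\left(\frac{c + p_t}{\varepsilon}\right)
  \xrightarrow[\varepsilon \downarrow 0]{} u^{*}_t,
\end{align*}
% where p_t denotes the adjoint (backward) variable of the FBSDE; the quadratic perturbation
% makes the pointwise minimizer of the Hamiltonian a smooth, saturated function of c + p_t
% instead of a sign function.
```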
3. Direct optimization of dose-volume histogram metrics in intensity modulated radiation therapy treatment planning. Zhang, Tianfang, January 2018
In optimization of intensity-modulated radiation therapy treatment plans, dose-volume histogram (DVH) functions are often used as objective functions to minimize the violation of dose-volume criteria. Neither DVH functions nor dose-volume criteria, however, are ideal for gradient-based optimization: the former are not continuously differentiable and the latter are discontinuous functions of dose, apart from both being nonconvex. In particular, DVH functions often work poorly when used in constraints because they are identically zero when feasible and have vanishing gradients on the boundary of feasibility.

In this work, we present a general mathematical framework allowing for direct optimization of all DVH-based metrics. By regarding voxel doses as sample realizations of an auxiliary random variable and using kernel density estimation to obtain explicit formulas, one arrives at formulations of volume-at-dose and dose-at-volume which are infinitely differentiable functions of dose. This is extended to DVH functions and so-called volume-based DVH functions, as well as to min/max-dose functions and mean-tail-dose functions. Explicit expressions for the evaluation of function values and corresponding gradients are presented. The proposed framework has the advantages of depending on only one smoothness parameter, of approximation errors relative to the conventional counterparts being negligible for practical purposes, and of a general consistency between the derived functions. Numerical tests, performed for illustrative purposes, show that smooth dose-at-volume works better than quadratic penalties when used in constraints and that smooth DVH functions in certain cases have a significant advantage over their conventional counterparts.
The results of this work have been successfully applied to lexicographic optimization in a fluence map optimization setting.
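As an illustration of the kernel-smoothing idea described above, the sketch below replaces the indicator function in the empirical volume-at-dose with a Gaussian CDF, which makes the metric infinitely differentiable in the voxel doses and gives a closed-form gradient. The kernel choice, the bandwidth h, and the function names are assumptions for the example, not the thesis's exact formulation.

```python
# Illustrative sketch of KDE-smoothed DVH metrics (not the thesis's exact formulation).
# Volume-at-dose V(d) = fraction of voxels with dose >= d; smoothing the indicator with a
# Gaussian CDF makes V differentiable in the voxel doses, with a closed-form gradient.
import numpy as np
from scipy.stats import norm

def smooth_volume_at_dose(doses, d, h=0.5):
    """Smooth fraction of voxels receiving at least dose d (bandwidth h in Gy, assumed)."""
    return norm.cdf((doses - d) / h).mean()

def smooth_volume_at_dose_grad(doses, d, h=0.5):
    """Gradient of smooth_volume_at_dose with respect to each voxel dose."""
    return norm.pdf((doses - d) / h) / (h * doses.size)

def smooth_dose_at_volume(doses, v, h=0.5, lo=0.0, hi=100.0, iters=60):
    """Invert the smooth volume-at-dose curve by bisection: dose received by a fraction v."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if smooth_volume_at_dose(doses, mid, h) > v:   # V is decreasing in d
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
doses = rng.normal(60.0, 3.0, size=10_000)     # synthetic voxel doses in Gy
print(smooth_volume_at_dose(doses, 58.0))       # ~ fraction of voxels above 58 Gy
print(smooth_dose_at_volume(doses, 0.95))       # ~ D95, the dose covering 95% of the volume
```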