241 |
Three Essays on Misintermediation. Feng, Guo. 19 July 2012
No description available.
|
242 |
Efficient 𝐻₂-Based Parametric Model Reduction via Greedy Search. Cooper, Jon Carl. 19 January 2021
Dynamical systems are mathematical models of physical phenomena widely used throughout the world today. When a dynamical system is too large to use effectively, we turn to model reduction to obtain a smaller dynamical system that preserves the behavior of the original. In many cases these models depend on one or more parameters other than time, which leads to the field of parametric model reduction.
Constructing a parametric reduced-order model (ROM) is not an easy task, and for very large parametric systems it can be difficult to know how well a ROM models the original system, since assessing this usually involves many computations with the full-order system, which is precisely what we want to avoid. Building on efficient 𝐻-infinity approximations, we develop a greedy algorithm for efficiently modeling large-scale parametric dynamical systems in an 𝐻₂-sense.
We demonstrate the effectiveness of this greedy search on a fluid problem, a mechanics problem, and a thermal problem. We also investigate Bayesian optimization for solving the optimization subproblem, and end by extending this algorithm to work with MIMO systems. / Master of Science / In the past century, mathematical modeling and simulation have become the third pillar of scientific discovery and understanding, alongside theory and experimentation. Mathematical models are used every day, and are essential to modern engineering problems. Some of these mathematical models depend on quantities other than just time, parameters such as the viscosity of a fluid or the strength of a spring. These models can sometimes become so large and complicated that it can take a very long time to run simulations with them. In such a case, we use parametric model reduction to come up with a much smaller and faster model that behaves like the original model. But when these large models vary strongly with the parameters, it can also become very expensive to reduce them accurately.
Algorithms already exist for quickly computing reduced-order models (ROMs) with respect to one measure of how "good" the ROM is. In this thesis we develop an algorithm for quickly computing the ROM with respect to a different measure, one that is more closely tied to how the models are simulated.
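For orientation, a generic greedy parameter-sampling loop of the kind sketched in this abstract might look as follows. This is an illustrative sketch only, not the algorithm developed in the thesis; build_rom, error_estimate, and the training grid params are hypothetical placeholders that would be supplied by the full-order model and an inexpensive error indicator.

    import numpy as np

    def greedy_parametric_rom(params, build_rom, error_estimate, tol=1e-6, max_iter=20):
        """Generic greedy sampling loop for parametric model reduction (sketch).

        params         : array of candidate parameter values (the training grid)
        build_rom      : callable mapping a list of sampled parameters to a ROM
        error_estimate : callable (rom, parameter) -> cheap estimate of the ROM error
        """
        sampled = [params[0]]                 # start from an arbitrary parameter
        rom = build_rom(sampled)
        for _ in range(max_iter):
            # Estimate the error of the current ROM over the whole training grid.
            errors = np.array([error_estimate(rom, p) for p in params])
            worst = int(np.argmax(errors))
            if errors[worst] < tol:           # ROM is good enough everywhere
                break
            sampled.append(params[worst])     # enrich where the estimate is worst
            rom = build_rom(sampled)          # rebuild or update the ROM
        return rom, sampled

The key design point, reflected in the abstract, is that only the cheap error estimate touches the full parameter grid; the expensive full-order computations are confined to the few sampled parameters.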
|
243 |
Realistic Motion Estimation Using Accelerometers. Xie, Liguang. 04 August 2009
A challenging goal for both the game industry and the computer graphics research community is the generation of 3D virtual avatars that automatically perform realistic human motions, at high speed and at low monetary cost. So far, estimating full-body human motion in all its complexity remains an important open problem. We propose a realistic motion estimation framework to control the animation of 3D avatars. Instead of relying on a motion capture device as the control signal, we use low-cost and ubiquitously available 3D accelerometer sensors. The framework is developed in a data-driven fashion and includes two phases: model learning from an existing high-quality motion database, and motion synthesis from the control signal. In the model-learning phase, we build a reduced-complexity, high-quality motion model learned from a large motion capture database. Then, taking the 3D accelerometer sensor signal as input, we synthesize high-quality motion from the learned model.
In this thesis, we present two different techniques for model learning and motion synthesis, respectively. Linear and nonlinear dimensionality reduction techniques are applied to search for a proper low-dimensional representation of the motion data. Two motion synthesis methods, interpolation and optimization, are compared on the highly noisy 3D acceleration signals. We evaluate the results visually against the real video and quantitatively against the ground-truth motion. The system performs well, which makes it suitable for a wide range of interactive applications, such as character control in 3D virtual environments and occupational training. / Master of Science
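As a rough illustration of the interpolation-style synthesis mentioned above (not the exact method of the thesis), one could blend the poses of the nearest database examples in accelerometer-feature space. The database arrays and the feature extraction below are hypothetical placeholders.

    import numpy as np

    def synthesize_pose(accel_feature, db_features, db_poses, k=4, eps=1e-8):
        """Blend the k nearest motion-capture poses, weighted by similarity of
        their stored accelerometer features to the current sensor reading.

        accel_feature : (d,) current accelerometer feature vector
        db_features   : (N, d) accelerometer features stored with the database
        db_poses      : (N, m) corresponding full-body pose parameters
        """
        dists = np.linalg.norm(db_features - accel_feature, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nearest] + eps)      # closer examples weigh more
        weights /= weights.sum()
        return weights @ db_poses[nearest]          # weighted-average pose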
|
244 |
A Motion Graph Approach for Interactive 3D Animation using Low-cost Sensors. Kumar, Mithilesh. 14 August 2008
Interactive 3D animation of human figures is very common in video games, animation studios and virtual environments. However, it is difficult to produce full-body animation that looks realistic enough to be comparable to studio-quality human motion data. Commercial motion capture systems are expensive and not suitable for capture in everyday environments, and real-time requirements tend to reduce the quality of animation. We present a motion graph based framework to produce high-quality motion sequences in real time using a set of inertial sensor based controllers. The user's actions generate signals from the controllers that provide constraints for selecting an appropriate sequence of motions from a structured database of human motions, namely a motion graph. Our local search algorithm uses the noise-prone and rapidly varying input sensor signals to query a large database in real time. The ability to wave the controllers to produce high-quality animation provides a simple 3D user interface that is intuitive to use. The proposed framework is low cost and easy to set up. / Master of Science
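A minimal sketch of the kind of local, sensor-driven graph traversal described above (illustrative only; the node, edge, and feature structures are hypothetical and not taken from the thesis):

    import numpy as np

    def next_clip(graph, current_node, sensor_feature):
        """Pick the outgoing motion-graph edge whose stored sensor signature
        best matches the current controller reading (greedy local search).

        graph[node] is assumed to be a list of (next_node, clip, feature) tuples,
        where 'feature' is a numpy array summarizing the expected sensor signal.
        """
        best = min(graph[current_node],
                   key=lambda edge: np.linalg.norm(edge[2] - sensor_feature))
        next_node, clip, _ = best
        return next_node, clip   # play 'clip', then continue the search from 'next_node'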
|
245 |
Galerkin Projections Between Finite Element Spaces. Thompson, Ross Anthony. 17 June 2015
Adaptive mesh refinement schemes are used to find accurate low-dimensional approximating spaces when solving elliptic PDEs with Galerkin finite element methods. For nonlinear PDEs, solving the nonlinear problem with Newton's method requires an initial guess of the solution on a refined space, which can be found by interpolating the solution from a previous refinement. Improving the accuracy with which the converged solution computed on a coarse mesh is represented for use as an initial guess on the refined mesh may reduce the number of Newton iterations required for convergence. In this thesis, we present an algorithm to compute an orthogonal L^2 projection between two-dimensional finite element spaces constructed from a triangulation of the domain. Furthermore, we present numerical studies that investigate the efficiency of using this algorithm to solve various nonlinear elliptic boundary value problems. / Master of Science
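To make the underlying operation concrete, here is a minimal one-dimensional sketch of an orthogonal L^2 projection onto a piecewise-linear (P1) finite element space: solve M u = b, where M is the mass matrix of the target space and b_i is the integral of the source function against each basis function. This is a simplified 1D illustration, not the thesis algorithm, which works with two-dimensional spaces built on triangulations.

    import numpy as np

    def l2_projection_p1(f, nodes):
        """L^2 projection of a callable f onto continuous P1 finite elements
        on the 1D mesh 'nodes': solves M u = b with b_i = int f(x) phi_i(x) dx."""
        n = len(nodes)
        M = np.zeros((n, n))
        b = np.zeros(n)
        gauss_pts = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss rule on [-1, 1]
        for e in range(n - 1):                             # loop over elements
            x0, x1 = nodes[e], nodes[e + 1]
            h = x1 - x0
            # exact P1 element mass matrix
            M[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
            for xi in gauss_pts:                           # quadrature for the load vector
                x = 0.5 * (x0 + x1) + 0.5 * h * xi
                w = 0.5 * h                                # mapped Gauss weight
                phi0, phi1 = (x1 - x) / h, (x - x0) / h    # local hat functions
                b[e]     += w * f(x) * phi0
                b[e + 1] += w * f(x) * phi1
        return np.linalg.solve(M, b)                       # coefficients in the target space

    # Hypothetical usage: project the coarse-mesh interpolant of sin(pi x) onto a refined mesh.
    coarse = np.linspace(0.0, 1.0, 5)
    fine   = np.linspace(0.0, 1.0, 17)
    coarse_fun = lambda x: np.interp(x, coarse, np.sin(np.pi * coarse))
    u_fine = l2_projection_p1(coarse_fun, fine)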
|
246 |
Mathematical Software for Multiobjective Optimization Problems. Chang, Tyler Hunter. 15 June 2020
In this thesis, two distinct problems in data-driven computational science are considered. The main problem of interest is the multiobjective optimization problem, where the tradeoff surface (called the Pareto front) between multiple conflicting objectives must be approximated in order to identify designs that balance real-world tradeoffs. In order to solve multiobjective optimization problems that are derived from computationally expensive blackbox functions, such as engineering design optimization problems, several methodologies are combined, including surrogate modeling, trust region methods, and adaptive weighting. The result is a numerical software package that finds approximately Pareto optimal solutions that are evenly distributed across the Pareto front, using a minimal number of cost function evaluations. The second problem of interest is the closely related problem of multivariate interpolation, where an unknown response surface representing an underlying phenomenon is approximated by finding a function that exactly matches available data. To solve the interpolation problem, a novel algorithm is proposed for computing only a sparse subset of the elements in the Delaunay triangulation, as needed to compute the Delaunay interpolant. For high-dimensional data, this reduces the time and space complexity of Delaunay interpolation from exponential to polynomial in practice. For each of the above problems, both serial and parallel implementations are described. Additionally, both solutions are demonstrated on real-world problems in computer system performance modeling. / Doctor of Philosophy / Science and engineering are full of multiobjective tradeoff problems. For example, a portfolio manager may seek to build a financial portfolio with low risk, high return rates, and minimal transaction fees; an aircraft engineer may seek a design that maximizes lift, minimizes drag force, and minimizes aircraft weight; a chemist may seek a catalyst with low viscosity, low production costs, and high effective yield; or a computational scientist may seek to fit a numerical model that minimizes the fit error while also minimizing a regularization term that leverages domain knowledge. Often, these criteria are conflicting, meaning that improved performance in one criterion must come at the expense of decreased performance in another. The solution to a multiobjective optimization problem allows decision makers to balance the inherent tradeoff between conflicting objectives. A related problem is the multivariate interpolation problem, where the goal is to predict the outcome of an event based on a database of past observations, while exactly matching all observations in that database. Multivariate interpolation problems are as prevalent and impactful as multiobjective optimization problems. For example, a pharmaceutical company may seek a prediction of the costs and effects of a proposed drug; an aerospace engineer may seek a prediction of the lift and drag of a new aircraft design; or a search engine may seek a prediction of the classification of an unlabeled image. Delaunay interpolation offers a unique solution to this problem, backed by decades of rigorous theory and analytical error bounds, but it does not scale to high-dimensional "big data" problems. In this thesis, novel algorithms and software are proposed for solving both of these extremely difficult problems.
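As a small illustration of the tradeoff structure described above (a generic utility, not part of the software developed in the thesis), the Pareto-optimal points of a finite set of objective vectors can be filtered as follows, assuming every objective is to be minimized:

    import numpy as np

    def nondominated(points):
        """Return the Pareto-optimal (nondominated) rows of an (N, m) array of
        objective vectors, assuming all objectives are minimized."""
        points = np.asarray(points, dtype=float)
        keep = []
        for i, p in enumerate(points):
            dominated = False
            for j, q in enumerate(points):
                # q dominates p if it is no worse in every objective and
                # strictly better in at least one.
                if j != i and np.all(q <= p) and np.any(q < p):
                    dominated = True
                    break
            if not dominated:
                keep.append(i)
        return points[keep]

    # Hypothetical example: two conflicting objectives (say, risk and cost) for four designs.
    front = nondominated([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])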
|
247 |
The Complete Pick Property and Reproducing Kernel Hilbert Spaces. Marx, Gregory. 03 January 2014
We present two approaches towards a characterization of the complete Pick property. We first discuss the lurking isometry method used in a paper by J.A. Ball, T.T. Trent, and V. Vinnikov. They show that a nondegenerate, positive kernel has the complete Pick property if $1/k$ has one positive square. We also look at the one-point extension approach developed by P. Quiggin, which leads to a necessary and sufficient condition for a positive kernel to have the complete Pick property. We conclude by connecting the two characterizations of the complete Pick property. / Master of Science
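A standard example illustrating the "one positive square" criterion mentioned above (recalled here for convenience, not taken from the thesis) is the Szegő kernel of the Hardy space on the unit disk:

    k(z, w) \;=\; \frac{1}{1 - z\overline{w}}, \qquad
    \frac{1}{k(z, w)} \;=\; 1 - z\overline{w},

so $1/k$ is one positive square (the constant function 1) minus the single square $z\overline{w}$, and the Szegő kernel is therefore a complete Pick kernel.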
|
248 |
An Improved Effective Method for Generating 3D Printable Models from Medical Imaging. Rathod, Gaurav Dilip. 16 November 2017
Medical practitioners rely heavily on visualization of medical imaging to get a better understanding of the patient's anatomy. Most cancer treatment and surgery today are performed using medical imaging. Medical imaging is therefore of great importance to the medical industry.
Medical imaging continues to depend heavily on a series of 2D scans, resulting in a series of 2D photographs being displayed using light boxes and/or computer monitors. Today, these 2D images are increasingly combined into 3D solid models using software. These 3D models can be used for improved visualization and understanding of the problem at hand, including fabricating physical 3D models using additive manufacturing technologies.
Generating precise 3D solid models automatically from 2D scans is non-trivial. Geometric and/or topological errors are common, and costly manual editing is often required to produce 3D solid models that sufficiently reflect the actual underlying human geometry. These errors arise from the ambiguity of converting from 2D data to 3D data, and also from inherent limitations of the .STL file format used in additive manufacturing.
This thesis proposes a new, robust method for automatically generating 3D models from 2D scanned data (e.g., computed tomography (CT) or magnetic resonance imaging (MRI)), where the resulting 3D solid models are specifically generated for use with additive manufacturing. This new method does not rely on complicated procedures such as contour evolution and geometric spline generation, but uses volume reconstruction instead. The advantage of this approach is that the original scan data values are kept intact longer, so that the resulting surface is more accurate. This new method is demonstrated using medical CT data of the human nasal airway system, resulting in physical 3D models fabricated via additive manufacturing. / Master of Science / Medical practitioners rely heavily on medical imaging to get a better understanding of the patient’s anatomy. Most cancer treatment and surgery today are performed using medical imaging. Medical imaging is therefore of great importance to the medical industry.
Medical imaging continues to depend heavily on a series of 2D scans, resulting in a series of 2D photographs being displayed using light boxes and/or computer monitors. With additive manufacturing technologies (also known as 3D printing), it is now possible to fabricate real-size physical 3D models of the human anatomy. These physical models enable surgeons to practice ahead of time, using a realistic, true-scale model, to increase the likelihood of a successful surgery. These physical models can potentially also be used to develop organ implants that are tailored specifically to each patient's anatomy.
Generating precise 3D solid models automatically from 2D scans is non-trivial. Automated processing often causes geometric and topological (logical) errors, while manual editing is frequently too labor intensive and time consuming to be considered a practical solution.
This thesis proposes a new, robust method for automatically generating 3D models from 2D scanned data (e.g., computed tomography (CT) or magnetic resonance imaging (MRI)), where the resulting 3D solid models are specifically generated for use with additive manufacturing. The advantage of this proposed method is that the resulting fabricated surfaces are more accurate.
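For orientation only, a generic volume-based pipeline from a scanned volume to a printable mesh might look like the following sketch. It uses standard isosurface extraction (marching cubes) rather than the specific reconstruction method developed in the thesis, and it assumes that scikit-image and numpy-stl are available; the threshold, spacing, and file name in the usage note are hypothetical.

    import numpy as np
    from skimage import measure      # assumes scikit-image is installed
    from stl import mesh             # assumes numpy-stl is installed

    def volume_to_stl(volume, iso_level, voxel_spacing, out_path):
        """Extract an isosurface from a 3D scalar volume (e.g., stacked CT slices)
        and write it as a binary STL file for additive manufacturing."""
        verts, faces, _, _ = measure.marching_cubes(volume, level=iso_level,
                                                    spacing=voxel_spacing)
        solid = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
        for i, tri in enumerate(faces):
            solid.vectors[i] = verts[tri]    # copy the three vertices of each triangle
        solid.save(out_path)

    # Hypothetical usage: an airway surface at a chosen intensity threshold.
    # volume_to_stl(ct_array, iso_level=-400.0, voxel_spacing=(1.0, 0.5, 0.5),
    #               out_path="airway.stl")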
|
249 |
A Comparison of Kansa and Hermitian RBF Interpolation Techniques for the Solution of Convection-Diffusion Problems. Rodriguez, Erik. 01 January 2010
Mesh-free modeling techniques are a promising alternative to traditional meshed methods for solving computational fluid dynamics problems. These techniques aim to solve for the field variable using solely the values at nodes and therefore do not require the generation of a mesh. This results in a process that can be much more reliably automated and is therefore attractive. Radial basis functions (RBFs) are one type of "meshless" method that has shown considerable growth in the past 50 years. Using these RBFs to directly solve a partial differential equation is known as Kansa's method and has been used to successfully solve many flow problems. The problem with Kansa's method is that there is no formal guarantee that its solution matrix will be non-singular. More recently, an extension of Kansa's method was proposed that incorporates the boundary and PDE operators into the solution of the field variable. This method, known as the Hermitian method, has been shown to be non-singular provided certain nodal criteria are met. This work performs a comparison between the Kansa and Hermitian methods to aid in future selection of a method. The two methods were used to solve steady and transient one-dimensional convection-diffusion problems, and are compared in terms of accuracy (error) and computational complexity (condition number) in order to evaluate overall performance. Results suggest that the Hermitian method slightly outperforms the Kansa method, at the cost of a more ill-conditioned collocation matrix.
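A minimal sketch of Kansa's (unsymmetric) collocation for a steady 1D convection-diffusion problem of the kind compared in this work, using multiquadric RBFs. The velocity, diffusivity, shape parameter, and node count below are illustrative choices, not those of the thesis.

    import numpy as np

    # Steady 1D convection-diffusion: v*u' - D*u'' = 0 on [0, 1], u(0) = 0, u(1) = 1.
    v, D, c = 1.0, 0.05, 0.2                 # velocity, diffusivity, MQ shape parameter
    x = np.linspace(0.0, 1.0, 41)            # collocation nodes (also the RBF centers)
    r = x[:, None] - x[None, :]

    phi    = np.sqrt(r**2 + c**2)            # multiquadric phi(r) = sqrt(r^2 + c^2)
    phi_x  = r / phi                         # first derivative in x
    phi_xx = c**2 / phi**3                   # second derivative in x

    A = v * phi_x - D * phi_xx               # PDE operator collocated at every node
    A[0, :], A[-1, :] = phi[0, :], phi[-1, :]   # boundary rows enforce u(0) and u(1)
    rhs = np.zeros_like(x)
    rhs[-1] = 1.0

    alpha = np.linalg.solve(A, rhs)          # RBF expansion coefficients
    u = phi @ alpha                          # numerical solution at the nodes

    exact = (np.exp(v * x / D) - 1.0) / (np.exp(v / D) - 1.0)
    print("max error:", np.abs(u - exact).max())

The Hermitian variant discussed above additionally builds the boundary and PDE operators into the basis itself, which is what yields its non-singularity guarantee at the price of a denser, typically more ill-conditioned collocation matrix.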
|
250 |
Interpolation libre et opérateurs de Toeplitz (Free Interpolation and Toeplitz Operators). Hartmann, Andreas. 14 December 2005
The work presented in this habilitation is organized around a unifying theme: interpolation. The most classical case consists of determining the trace of a set of functions on a subset of the common domain of definition of the initial set of functions. In particular, the following aspects are studied.
1) Simple interpolation: interpolation of values at points;
2) Generalized interpolation: e.g., interpolation of derivatives, interpolation at nearby points, tangential interpolation, etc.;
3) Classical interpolation: the interpolation is defined in terms of a trace space fixed a priori;
4) Free interpolation: the interpolation is defined in terms of a property of the trace (namely, being an order ideal);
5) Free interpolation and extremal functions: characterization of interpolation in terms of extremal functions;
6) Free interpolation and Toeplitz operators.
The last point takes us somewhat away from interpolation problems. Even though there is a close link with free interpolation problems (in particular in spaces of Paley-Wiener type and, more generally, in model spaces, see Section 4.1), we take a closer look at certain properties of Toeplitz operators that turn out to be important in the context of interpolation. Our study is nevertheless carried out detached from the interpolation setting. This is an opportunity to encounter extremal functions again: we study the extremal functions of kernels of Toeplitz operators (assumed nontrivial), which turn out to possess many interesting properties.
A remark concerning the techniques used. Since the interpolation problems are approached in very varied settings (Hilbert and Banach spaces such as Bergman and Hardy spaces, Fréchet algebras, and even vector spaces that are not topological; classical, free, and generalized interpolation), they require very different methods. Moreover, the related problems are motivated by interpolation but are considered in a context disconnected from it. We thus encounter classical complex analysis (Hardy spaces, Riesz-Nevanlinna factorization, Carleson measures, harmonic majorants) and harmonic analysis (always present in the context of interpolation and sampling), the geometry of Banach spaces (bases, unconditional bases, interpolation spaces, Boyd indices), functional analysis (variational principles, certain topological aspects) and convex analysis (the Minkowski-Farkas lemma), by way of operator theory (the commutant lifting theorem, invariant subspaces), as well as complex analysis in one and several variables (d-bar methods), up to de Branges-Rovnyak spaces.
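For readers unfamiliar with the objects in the last item, the standard definition of a Toeplitz operator on the Hardy space of the unit circle is recalled here for convenience (this is background, not a statement from the habilitation):

    T_{\varphi} f \;=\; P_{+}(\varphi f), \qquad f \in H^{2},\ \ \varphi \in L^{\infty}(\mathbb{T}),

where $P_{+}$ denotes the orthogonal (Riesz) projection from $L^{2}(\mathbb{T})$ onto $H^{2}$; the kernels studied above are the subspaces $\ker T_{\varphi} = \{ f \in H^{2} : P_{+}(\varphi f) = 0 \}$, assumed nontrivial.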
|