  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Development of hybrid explicit/implicit and adaptive h and p refinement for the finite element time domain method

Srisukh, Yudhapoom 06 January 2006 (has links)
No description available.
22

A comparison of Kansa and Hermitian RBF interpolation techniques for the solution of convection-diffusion problems

Rodriguez, Erik 01 January 2010 (has links)
Mesh-free modeling techniques are a promising alternative to traditional meshed methods for solving computational fluid dynamics problems. These techniques aim to solve for the field variable using solely the values at nodes and therefore do not require the generation of a mesh. This results in a process that can be much more reliably automated and is therefore attractive. Radial basis functions (RBFs) are one type of "meshless" method that has shown considerable growth in the past 50 years. Using these RBFs to directly solve a partial differential equation is known as Kansa's method, and it has been used to successfully solve many flow problems. The drawback of Kansa's method is that there is no formal guarantee that its solution matrix will be non-singular. More recently, an expansion of Kansa's method was proposed that incorporates the boundary and PDE operators into the solution of the field variable. This method, known as the Hermitian method, has been shown to be non-singular provided certain nodal criteria are met. This work performs a comparison between the Kansa and Hermitian methods to aid in future selection of a method. The two methods were used to solve steady and transient one-dimensional convection-diffusion problems, and they are compared in terms of accuracy (error) and conditioning (condition number) in order to evaluate overall performance. Results suggest that the Hermitian method slightly outperforms Kansa's method, at the cost of a more ill-conditioned collocation matrix.
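As a concrete illustration of the method being compared above, the following minimal sketch applies Kansa's method to a steady 1D convection-diffusion problem with multiquadric basis functions. The node count, shape parameter, and boundary values are illustrative assumptions rather than values from the thesis; note the unregularized collocation matrix, whose condition number the comparison is concerned with.

```python
import numpy as np

# Steady 1D convection-diffusion:  u*c'(x) - D*c''(x) = 0 on [0, 1],
# with c(0) = 0 and c(1) = 1. Multiquadric basis phi(r) = sqrt(r^2 + eps^2).
# All parameter values here are illustrative assumptions.
u, D, eps = 1.0, 0.1, 0.5
x = np.linspace(0.0, 1.0, 21)
r = x[:, None] - x[None, :]          # signed distance to each center

phi = np.sqrt(r**2 + eps**2)         # basis evaluated between all node pairs
dphi = r / phi                       # d/dx of phi(|x - xj|)
d2phi = eps**2 / phi**3              # d2/dx2 of phi(|x - xj|)

# Kansa collocation: PDE rows at interior nodes, Dirichlet rows at the ends.
A = u * dphi - D * d2phi
A[0, :], A[-1, :] = phi[0, :], phi[-1, :]
b = np.zeros_like(x)
b[-1] = 1.0

alpha = np.linalg.solve(A, b)        # no formal non-singularity guarantee
c = phi @ alpha                      # approximate solution at the nodes

exact = (np.exp(u * x / D) - 1.0) / (np.exp(u / D) - 1.0)
print("max abs error   :", np.abs(c - exact).max())
print("condition number:", np.linalg.cond(A))
```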
23

A Multiscale Meshless Method for Simulating Cardiovascular Flows

Beggs, Kyle 01 January 2024 (has links) (PDF)
The rapid increase in computational power over the last decade has unlocked the possibility of providing patient-specific healthcare via simulation and data assimilation. In the past two decades, computational approaches to simulating cardiovascular flows have advanced significantly due to intense research and the adoption of these methods by medical device companies. A significant source of friction in porting these tools to the hospital and getting them into the hands of surgeons is the expertise required to handle the geometry pre-processing and meshing of models. Meshless methods reduce the number of corner cases, which makes it easier to develop the robust tools surgeons need. To accurately simulate modifications to a region of vasculature, as in surgical planning, the entire system must be modeled. Unfortunately, this is computationally prohibitive even on today's machines. To circumvent this issue, the Radial-Basis Function Finite Difference (RBF-FD) method for solution of the higher-dimensional (2D/3D) region of interest is tightly coupled to a 0D Lumped-Parameter Model (LPM) for solution of the peripheral circulation. The incompressible flow equations are updated by an explicit time-marching scheme based on a pressure-velocity correction algorithm. The inlets and outlets of the domain are tightly coupled with the LPM, which contains elements drawn from a fluid-electrical analogy: resistors, capacitors, and inductors that represent viscous resistance, vessel compliance, and flow inertia, respectively. The localized RBF meshless approach is well suited for modeling complicated non-Newtonian hemodynamics due to its ease of spatial discretization, ease of adding multi-physics interactions such as fluid-structure interaction of the vessel wall, and ease of parallelization for fast computations. This work introduces the tight coupling of meshless methods and LPMs for fast and accurate hemodynamic simulations. The results show the efficacy of the method for building robust tools to inform surgical decisions and motivate further development.
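The building block of the RBF-FD method named above is a small local linear solve for derivative weights on each stencil. The sketch below computes such weights in 1D with a cubic polyharmonic spline plus linear polynomial augmentation; the basis choice, stencil size, and test function are assumptions for illustration, not details taken from the dissertation.

```python
import numpy as np

def rbf_fd_laplacian_weights(xs, xc):
    """Weights w such that sum_j w[j] * f(xs[j]) ~ f''(xc).

    Cubic polyharmonic spline phi(r) = r^3 with linear polynomial
    augmentation; one small saddle-point system is solved per stencil.
    """
    n = len(xs)
    r = np.abs(xs[:, None] - xs[None, :])
    A = r**3                                      # phi between stencil nodes
    P = np.column_stack([np.ones(n), xs])         # polynomial block
    M = np.block([[A, P], [P.T, np.zeros((2, 2))]])
    rc = np.abs(xs - xc)
    rhs = np.concatenate([6.0 * rc, [0.0, 0.0]])  # d2/dx2 of r^3 is 6|x - xj|
    return np.linalg.solve(M, rhs)[:n]

# Sanity check on f(x) = exp(x): the exact second derivative at 0 is 1.
xs = np.sort(np.random.default_rng(0).uniform(-0.2, 0.2, 7))
w = rbf_fd_laplacian_weights(xs, 0.0)
print(w @ np.exp(xs), "~ 1.0")
```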
24

Novel methods for species distribution mapping including spatial models in complex regions

Scott-Hayward, Lindesay Alexandra Sarah January 2013 (has links)
Species Distribution Modelling (SDM) plays a key role in a number of biological applications: assessment of temporal trends in distribution, environmental impact assessment and spatial conservation planning. From a statistical perspective, this thesis develops two methods for increasing the accuracy and reliability of maps of density surfaces and provides a solution to the problem of how to collate multiple density maps of the same region, obtained from differing sources. From a biological perspective, these statistical methods are used to analyse two marine mammal datasets to produce accurate maps for use in spatial conservation planning and temporal trend assessment.

The first new method, the Complex Region Spatial Smoother [CReSS; Scott-Hayward et al., 2013], improves smoothing in areas where the real distance an animal must travel ('as the animal swims') between two points may be greater than the straight-line distance between them, a problem that occurs in complex domains with coastline or islands. CReSS uses estimates of the geodesic distance between points, model averaging and local radial smoothing. Simulation is used to compare its performance with other traditional and recently developed smoothing techniques: Thin Plate Splines (TPS; Harder and Desmarais [1972]), Geodesic Low-rank TPS (GLTPS; Wang and Ranalli [2007]) and the Soap film smoother (SOAP; Wood et al. [2008]). GLTPS cannot be used in areas with islands and SOAP can be very hard to parametrise. CReSS outperforms all of the other methods on a range of simulations, based on their fit to the underlying function as measured by mean squared error, particularly for sparse data sets.

Smoothing functions need to be flexible when they are used to model density surfaces that are highly heterogeneous, in order to avoid biases due to under- or over-fitting. This issue was addressed using an adaptation of a Spatially Adaptive Local Smoothing Algorithm (SALSA; Walker et al. [2010]) in combination with the CReSS method (CReSS-SALSA2D). Unlike traditional methods, such as Generalised Additive Modelling, the adaptive knot selection approach used in SALSA2D naturally accommodates local changes in the smoothness of the density surface that is being modelled. At the time of writing, there are no other methods available to deal with this issue in topographically complex regions. Simulation results show that CReSS-SALSA2D performs better than CReSS (based on MSE scores), except at very high noise levels where there is an issue with over-fitting.

There is an increasing need for a facility to combine multiple density surface maps of individual species in order to make best use of meta-databases, to maintain existing maps, and to extend their geographical coverage. This thesis develops a framework and methods for combining species distribution maps as new information becomes available. The methods use Bayes' Theorem to combine density surfaces, taking account of the levels of precision associated with the different sets of estimates, and kernel smoothing to alleviate artefacts that may be created where pairs of surfaces join. The methods were used as part of an algorithm (the Dynamic Cetacean Abundance Predictor) designed for BAE Systems to aid in risk mitigation for naval exercises.

Two case studies show the capabilities of CReSS and CReSS-SALSA2D when applied to real ecological data.
In the first case study, CReSS was used in a Generalised Estimating Equation framework to identify a candidate Marine Protected Area for the Southern Resident Killer Whale population to the south of San Juan Island, off the Pacific coast of the United States. In the second case study, changes in the spatial and temporal distribution of harbour porpoise and minke whale in north-western European waters over a period of 17 years (1994-2010) were modelled. CReSS and CReSS-SALSA2D performed well in a large, topographically complex study area. Based on simulation results, maps produced using these methods are more accurate than if a traditional GAM-based method is used. The resulting maps identified particularly high densities of both harbour porpoise and minke whale in an area off the west coast of Scotland in 2010, that might be a candidate for inclusion into the Scottish network of Nature Conservation Marine Protected Areas.
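To make the 'as the animal swims' idea concrete, the sketch below approximates geodesic distances by shortest paths on a nearest-neighbour graph and uses them in a simple Gaussian local smoother. This is only a schematic of the distance-substitution idea, not the CReSS estimator itself: the toy domain has no coastline (in practice, edges crossing land would be removed), and the kernel, bandwidth, and synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import cKDTree

# Replace straight-line distance by an approximate geodesic distance:
# shortest paths on a k-nearest-neighbour graph over the survey locations.
rng = np.random.default_rng(1)
n, k = 200, 8
pts = rng.uniform(0.0, 1.0, size=(n, 2))                # survey locations
z = np.sin(3 * pts[:, 0]) + rng.normal(0.0, 0.1, n)     # noisy response

dist, idx = cKDTree(pts).query(pts, k=k + 1)            # self + k neighbours
rows = np.repeat(np.arange(n), k)
G = csr_matrix((dist[:, 1:].ravel(), (rows, idx[:, 1:].ravel())), shape=(n, n))
geo = shortest_path(G, method="D", directed=False)      # geodesic estimate

def smooth(i, bandwidth=0.15):
    """Local radial (Nadaraya-Watson) estimate at node i, geodesic kernel."""
    w = np.exp(-(geo[i] / bandwidth) ** 2)
    return w @ z / w.sum()

print("smoothed vs raw at node 0:", smooth(0), z[0])
```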
25

Predictor development for controlling real-time applications over the Internet

Kommaraju, Mallik 25 April 2007 (has links)
Over the past decade there has been a growing demand for interactive multimedia applications deployed over public IP networks. To achieve acceptable Quality of Service (QoS) without significantly modifying the existing infrastructure, the end-to-end applications need to optimize their behavior and adapt according to network characteristics. Most existing application optimization techniques are based on reactive strategies, i.e. reacting to occurrences of congestion. We propose the use of predictive control to address the problem in an anticipatory manner. This research deals with developing models to predict end-to-end single-flow characteristics of Wide Area Networks (WANs). A novel signal, in the form of single-flow packet accumulation, is proposed for feedback purposes. This thesis presents a variety of effective predictors for the above signal using Auto-Regressive (AR) models, Radial Basis Functions (RBF) and Sparse Basis Functions (SBF). The study consists of three sections. We first develop time-series models to predict the accumulation signal. Since encoder bit-rate is the most logical and generic control input, a statistical analysis is conducted to analyze the effect of input bit-rate on end-to-end delay and the accumulation signal. Finally, models are developed using this bit-rate as an input to predict the resulting accumulation signal. The predictors are evaluated based on Noise-to-Signal Ratio (NSR) along with their accuracy with increasing accumulation levels. In time-series models, RBF gave the best NSR, closely followed by AR models. Analysis based on accuracy with increasing accumulation levels showed AR to be better in some cases. The study on the effect of bit-rate revealed that bit-rate may not be a good control input on all paths. Models such as Auto-Regressive with Exogenous input (ARX) and RBF were used to predict the accumulation signal using bit-rate as a modeling input. ARX and RBF models were found to give comparable accuracy, with RBF being slightly better.
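Of the predictor families listed above, the AR model is the simplest to sketch. The following example fits a least-squares AR(p) predictor to a synthetic accumulation-like signal and scores it by noise-to-signal ratio; the signal, model order, and the NSR definition as residual-to-signal variance ratio are assumptions for the demonstration.

```python
import numpy as np

# One-step-ahead AR(p) prediction of a synthetic accumulation-like signal.
rng = np.random.default_rng(0)
t = np.arange(2000)
acc = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(2000)

p = 4                                             # AR model order
X = np.column_stack([acc[p - i - 1:-i - 1] for i in range(p)])  # lagged rows
y = acc[p:]                                       # next value to predict
coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares AR fit

pred = X @ coef                                   # one-step-ahead predictions
nsr = np.var(y - pred) / np.var(y)                # noise-to-signal ratio
print(f"AR({p}) one-step NSR: {nsr:.4f}")
```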
26

Bayesian numerical analysis: global optimization and other applications

Fowkes, Jaroslav Mrazek January 2011 (has links)
We present a unifying framework for the global optimization of functions which are expensive to evaluate. The framework is based on a Bayesian interpretation of radial basis function interpolation which incorporates existing methods such as Kriging, Gaussian process regression and neural networks. This viewpoint enables the application of Bayesian decision theory to derive a sequential global optimization algorithm which can be extended to include existing algorithms of this type in the literature. By posing the optimization problem as a sequence of sampling decisions, we optimize a general cost function at each stage of the algorithm. An extension to multi-stage decision processes is also discussed. The key idea of the framework is to replace the underlying expensive function by a cheap surrogate approximation. This enables the use of existing branch and bound techniques to globally optimize the cost function. We present a rigorous analysis of the canonical branch and bound algorithm in this setting as well as newly developed algorithms for other domains including convex sets. In particular, by making use of Lipschitz continuity of the surrogate approximation, we develop an entirely new algorithm based on overlapping balls. An application of the framework to the integration of expensive functions over rectangular domains and spherical surfaces in low dimensions is also considered. To assess performance of the framework, we apply it to canonical examples from the literature as well as an industrial model problem from oil reservoir simulation.
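A minimal sketch of the surrogate-based sequential optimization idea described above: fit a Gaussian-process-style radial basis surrogate to samples of the expensive function and choose each new sample by maximizing expected improvement. The squared-exponential kernel, grid-based acquisition search, and toy objective are simplifying assumptions; the thesis develops branch-and-bound algorithms for this step instead.

```python
import numpy as np
from scipy.stats import norm

def expensive_f(x):                 # stand-in for an expensive function
    return np.sin(3.0 * x) + 0.5 * x**2

def kernel(a, b, ell=0.3):          # squared-exponential covariance
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

X = np.array([-1.5, 0.0, 1.5])      # initial designs
y = expensive_f(X)
grid = np.linspace(-2.0, 2.0, 401)

for _ in range(10):
    K = kernel(X, X) + 1e-8 * np.eye(len(X))      # jittered Gram matrix
    Ks = kernel(grid, X)                          # cross-covariances
    mu = Ks @ np.linalg.solve(K, y)               # posterior mean
    V = np.linalg.solve(K, Ks.T)                  # K^{-1} Ks^T
    sd = np.sqrt(np.clip(1.0 - np.sum(Ks * V.T, axis=1), 1e-12, None))
    imp = y.min() - mu                            # improvement over best
    ei = imp * norm.cdf(imp / sd) + sd * norm.pdf(imp / sd)
    x_next = grid[np.argmax(ei)]                  # next sampling decision
    X, y = np.append(X, x_next), np.append(y, expensive_f(x_next))

print("estimated minimizer:", X[np.argmin(y)], "value:", y.min())
```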
27

Modes de représentation pour l'éclairage en synthèse d'images

Pacanowski, Romain 25 September 2009 (has links)
In image synthesis, the main computation required to generate an image has been formalized in an equation called the rendering equation [Kajiya1986]. This equation expresses the conservation of energy in light transport: the light energy leaving the objects of a scene in a given direction is the sum of the energy they emit and the energy they reflect. Moreover, the energy reflected by a surface element is defined as the convolution of the incident illumination with a reflectance function. The reflectance function models the object's material (in the physical sense) and acts, in the rendering equation, as a directional and energetic filter that describes how the surface behaves under reflection. In this thesis, we introduce new representations for the reflectance function and for the incident illumination.

In the first part of the thesis, we propose two new models for the reflectance function. The first model is aimed at artists and is intended to ease the creation and editing of specular highlights: its principle is to let the user paint and sketch the highlight's characteristics (shape, color, gradient and texture) in a drawing plane parametrized by the mirror-reflection direction of the light. The second model is designed to represent measured isotropic materials compactly and efficiently. To achieve this, we introduce a new representation based on rational polynomials, whose coefficients are obtained by a fitting process that guarantees an optimal solution with respect to convergence.

In the second part of the thesis, we introduce a new volumetric representation for indirect illumination, represented directionally with irradiance vectors. We show that our representation is compact and robust to geometric variations, that it can be used as a caching system for real-time or offline rendering, and that it can be transmitted progressively (streaming). Finally, we propose two modifications of the incident illumination to emphasize the details and shape of a surface: the first perturbs the incident light directions, while the second modifies their intensity.
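For reference, the rendering equation the abstract paraphrases is conventionally written as follows (standard notation, assumed rather than taken from the thesis):

```latex
% Outgoing radiance = emitted + reflected: the incident lighting L_i is
% convolved with the reflectance function f_r over the hemisphere Omega.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```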
28

A Radial Basis Function Approach to Financial Time Series Analysis

Hutchinson, James M. 01 December 1993 (has links)
Nonlinear multivariate statistical techniques on fast computers offer the potential to capture more of the dynamics of the high-dimensional, noisy systems underlying financial markets than traditional models, while making fewer restrictive assumptions. This thesis presents a collection of practical techniques to address important estimation and confidence issues for Radial Basis Function networks arising from such a data-driven approach, including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data mining" problem. Novel applications in the finance area are described, including customized, adaptive option pricing and stock price prediction.
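A toy version of the core tool, offered as an illustration only: an RBF network with randomly chosen centers and least-squares output weights, trained to map lagged log-returns of a synthetic price path to the next return. The data, network size, and bandwidth rule are assumptions, and none of the thesis's estimation, pruning, or error-estimation machinery is included.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(500)))
rets = np.diff(np.log(prices))                    # synthetic log returns

lags = 5
X = np.column_stack([rets[i:len(rets) - lags + i] for i in range(lags)])
y = rets[lags:]                                   # next-step return target

centers = X[rng.choice(len(X), size=20, replace=False)]   # RBF centers
width = np.median(np.linalg.norm(X[:, None] - centers[None], axis=-1))

def design(A):
    """Gaussian RBF design matrix for inputs A against the fixed centers."""
    d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width**2))

Phi = design(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # output-layer weights
pred = Phi @ w
# In-sample fit only; guarding against "data mining" requires out-of-sample
# evaluation, which the thesis's methodology addresses.
print("in-sample R^2:", 1.0 - np.var(y - pred) / np.var(y))
```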
29

Calibration of Flush Air Data Sensing Systems Using Surrogate Modeling Techniques

January 2011 (has links)
In this work the problem of calibrating Flush Air Data Sensing (FADS) systems has been addressed. The inverse problem of extracting freestream wind speed and angle of attack from pressure measurements has been solved. The aim of this work was to develop machine learning and statistical tools to optimize the design and calibration of FADS systems. Experimental and Computational Fluid Dynamics (EFD and CFD) solve the forward problem of determining the pressure distribution given the wind velocity profile and bluff body geometry. In this work three ways are presented in which machine learning techniques can improve the calibration of FADS systems. First, a scattered data approximation scheme, called Sequential Function Approximation (SFA), was developed that successfully solved the current inverse problem. The proposed scheme is a greedy and self-adaptive technique that constructs reliable and robust estimates without any user interaction. Wind speed and direction prediction algorithms were developed for two FADS problems: one where pressure sensors are installed on a surface vessel, and another where sensors are installed on the Runway Assisted Landing Site (RALS) control tower. Second, a Tikhonov regularization based data-model fusion technique with SFA was developed to fuse low-fidelity CFD solutions with noisy and sparse wind tunnel data. The purpose of this data-model fusion approach was to obtain high-fidelity, smooth and noiseless flow field solutions by using only a few discrete experimental measurements and a low-fidelity numerical solution. This physics-based regularization technique gave better flow field solutions than smoothness-based solutions when wind tunnel data are sparse and incomplete. Third, a sequential design strategy was developed with SFA using Active Learning techniques from machine learning theory and Optimal Design of Experiments from statistics for regression and classification problems. Uncertainty Sampling was used with SFA to demonstrate the effectiveness of active learning versus passive learning on a cavity flow classification problem. A sequential G-optimal design procedure was also developed with SFA for regression problems. The effectiveness of this approach was demonstrated on a simulated problem and the above-mentioned FADS problem.
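The data-model fusion step can be illustrated with a small least-squares sketch: correct a biased low-fidelity field toward a few noisy point measurements while penalizing rough departures from the low-fidelity solution. The 1D grid, second-difference penalty, and regularization weight are illustrative assumptions; the thesis uses a physics-based regularizer with Sequential Function Approximation rather than this generic smoothness term.

```python
import numpy as np

n = 100
xg = np.linspace(0.0, 1.0, n)
u_true = np.sin(2.0 * np.pi * xg)                 # "truth" for the demo
u_lo = 0.8 * u_true + 0.1                         # biased low-fidelity field

rng = np.random.default_rng(0)
obs = rng.choice(n, size=8, replace=False)        # sparse sensor locations
d = u_true[obs] + 0.02 * rng.standard_normal(8)   # noisy measurements

S = np.zeros((8, n)); S[np.arange(8), obs] = 1.0  # sampling operator
Dmat = np.zeros((n - 2, n))                       # second-difference operator
for i in range(n - 2):
    Dmat[i, i:i + 3] = (1.0, -2.0, 1.0)

lam = 1e-3
# Minimize ||S u - d||^2 + lam * ||Dmat (u - u_lo)||^2 in closed form:
# data fidelity at the sensors, smooth correction of the model elsewhere.
A = S.T @ S + lam * Dmat.T @ Dmat
u = np.linalg.solve(A, S.T @ d + lam * Dmat.T @ (Dmat @ u_lo))
print("rms error, low-fidelity:", np.sqrt(np.mean((u_lo - u_true) ** 2)))
print("rms error, fused       :", np.sqrt(np.mean((u - u_true) ** 2)))
```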
30

Surface reconstruction using variational interpolation

Joseph Lawrence, Maryruth Pradeepa 24 November 2005
Surface reconstruction of anatomical structures is an integral part of medical modeling. Contour information is extracted from serial cross-sections of tissue data and is stored as "slice" files. Although there are several reasonably efficient triangulation algorithms that reconstruct surfaces from slice data, the models generated from them have a jagged or faceted appearance due to the large inter-slice distance created by the sectioning process. Moreover, inconsistencies in user input aggravate the problem. We therefore created a method that reduces the inter-slice distance and also tolerates inconsistencies in the user input. Our method, called piecewise weighted implicit functions, is based on the approach of weighting smaller implicit functions. It takes only a few slices at a time to construct each implicit function. This method is based on a technique called variational interpolation.

Other approaches based on variational interpolation have the disadvantage of becoming unstable when the model is quite large, with more than a few thousand constraint points. Furthermore, tracing the intermediate contours becomes expensive for large models. Even though some fast fitting methods handle such instability problems, there is no apparent improvement in contour tracing time, because the value of each data point on the contour boundary is evaluated using a single large implicit function that essentially uses all constraint points. Our method handles both of these problems using a sliding window approach. Because our method uses only a local domain to construct each implicit function, it achieves a considerable run-time saving over the other methods. The resulting software produces interpolated models from large data sets in a few minutes on an ordinary desktop computer.
