21

M-ary orthogonal modulation using wavelet basis functions

Pan, Xiaoyun January 2000 (has links)
No description available.
22

Development of hybrid explicit/implicit and adaptive h and p refinement for the finite element time domain method

Srisukh, Yudhapoom 06 January 2006 (has links)
No description available.
23

Novel methods for species distribution mapping including spatial models in complex regions

Scott-Hayward, Lindesay Alexandra Sarah January 2013 (has links)
Species Distribution Modelling (SDM) plays a key role in a number of biological applications: assessment of temporal trends in distribution, environmental impact assessment and spatial conservation planning. From a statistical perspective, this thesis develops two methods for increasing the accuracy and reliability of maps of density surfaces and provides a solution to the problem of how to collate multiple density maps of the same region, obtained from differing sources. From a biological perspective, these statistical methods are used to analyse two marine mammal datasets to produce accurate maps for use in spatial conservation planning and temporal trend assessment.

The first new method, the Complex Region Spatial Smoother [CReSS; Scott-Hayward et al., 2013], improves smoothing in areas where the real distance an animal must travel ('as the animal swims') between two points may be greater than the straight-line distance between them, a problem that occurs in complex domains with coastline or islands. CReSS uses estimates of the geodesic distance between points, model averaging and local radial smoothing. Simulation is used to compare its performance with other traditional and recently developed smoothing techniques: Thin Plate Splines (TPS; Harder and Desmarais [1972]), Geodesic Low-rank TPS (GLTPS; Wang and Ranalli [2007]) and the Soap film smoother (SOAP; Wood et al. [2008]). GLTPS cannot be used in areas with islands and SOAP can be very hard to parametrise. CReSS outperforms all of the other methods on a range of simulations, based on their fit to the underlying function as measured by mean squared error (MSE), particularly for sparse data sets.

Smoothing functions need to be flexible when they are used to model density surfaces that are highly heterogeneous, in order to avoid biases due to under- or over-fitting. This issue was addressed using an adaptation of a Spatially Adaptive Local Smoothing Algorithm (SALSA; Walker et al. [2010]) in combination with the CReSS method (CReSS-SALSA2D). Unlike traditional methods, such as Generalised Additive Modelling, the adaptive knot selection approach used in SALSA2D naturally accommodates local changes in the smoothness of the density surface being modelled. At the time of writing, there are no other methods available to deal with this issue in topographically complex regions. Simulation results show that CReSS-SALSA2D performs better than CReSS (based on MSE scores), except at very high noise levels where there is an issue with over-fitting.

There is an increasing need for a facility to combine multiple density surface maps of individual species in order to make best use of meta-databases, to maintain existing maps, and to extend their geographical coverage. This thesis develops a framework and methods for combining species distribution maps as new information becomes available. The methods use Bayes' Theorem to combine density surfaces, taking account of the levels of precision associated with the different sets of estimates, and kernel smoothing to alleviate artefacts that may be created where pairs of surfaces join. The methods were used as part of an algorithm (the Dynamic Cetacean Abundance Predictor) designed for BAE Systems to aid in risk mitigation for naval exercises.

Two case studies show the capabilities of CReSS and CReSS-SALSA2D when applied to real ecological data. In the first, CReSS was used in a Generalised Estimating Equation framework to identify a candidate Marine Protected Area for the Southern Resident Killer Whale population to the south of San Juan Island, off the Pacific coast of the United States. In the second, changes in the spatial and temporal distribution of harbour porpoise and minke whale in north-western European waters over a period of 17 years (1994-2010) were modelled. CReSS and CReSS-SALSA2D performed well in a large, topographically complex study area. Based on simulation results, maps produced using these methods are more accurate than those from a traditional GAM-based method. The resulting maps identified particularly high densities of both harbour porpoise and minke whale in an area off the west coast of Scotland in 2010 that might be a candidate for inclusion in the Scottish network of Nature Conservation Marine Protected Areas.
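The abstract does not give CReSS's formulas, but its core idea of local radial smoothing with a distance that respects the domain can be illustrated loosely. The sketch below is a minimal Nadaraya-Watson kernel smoother where the distance matrix is supplied by the caller, so a geodesic ('as the animal swims') distance matrix could be swapped in for the Euclidean one; the Gaussian kernel, bandwidth, and all names are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def kernel_smooth(d, y, h):
    """Nadaraya-Watson smoother. d[i, j] is the distance from prediction
    point i to data point j -- Euclidean here, but a geodesic distance
    matrix could be passed instead, as CReSS does for complex coastlines."""
    w = np.exp(-0.5 * (d / h) ** 2)   # Gaussian radial weights
    return (w @ y) / w.sum(axis=1)

# Toy 1-D example with Euclidean distances (purely illustrative).
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)
d = np.abs(x[:, None] - x[None, :])
y_hat = kernel_smooth(d, y, h=0.1)
```

Because the smoother only sees the distance matrix `d`, replacing Euclidean with geodesic distances changes nothing else in the code.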
24

Predictor development for controlling real-time applications over the Internet

Kommaraju, Mallik 25 April 2007 (has links)
Over the past decade there has been a growing demand for interactive multimedia applications deployed over public IP networks. To achieve acceptable Quality of Service (QoS) without significantly modifying the existing infrastructure, end-to-end applications need to optimize their behavior and adapt according to network characteristics. Most existing application optimization techniques are based on reactive strategies, i.e. reacting to occurrences of congestion. We propose the use of predictive control to address the problem in an anticipatory manner.

This research deals with developing models to predict end-to-end single-flow characteristics of Wide Area Networks (WANs). A novel signal, in the form of single-flow packet accumulation, is proposed for feedback purposes. This thesis presents a variety of effective predictors for the above signal using Auto-Regressive (AR) models, Radial Basis Functions (RBF) and Sparse Basis Functions (SBF). The study consists of three sections. We first develop time-series models to predict the accumulation signal. Since encoder bit-rate is the most logical and generic control input, a statistical analysis is conducted to analyze the effect of input bit-rate on end-to-end delay and the accumulation signal. Finally, models are developed using this bit-rate as an input to predict the resulting accumulation signal.

The predictors are evaluated based on Noise-to-Signal Ratio (NSR) along with their accuracy at increasing accumulation levels. Among the time-series models, RBF gave the best NSR, closely followed by AR models. Analysis based on accuracy at increasing accumulation levels showed AR to be better in some cases. The study of the effect of bit-rate revealed that bit-rate may not be a good control input on all paths. Models such as Auto-Regressive with Exogenous input (ARX) and RBF were used to predict the accumulation signal with bit-rate as a modeling input. ARX and RBF models were found to give comparable accuracy, with RBF being slightly better.
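As a hedged sketch of the time-series side of this work, the following fits an AR(p) predictor by least squares to a synthetic stand-in for the accumulation signal and computes an NSR-style score. The thesis's signals come from real WAN measurements; the AR(1) generating process and all constants below are invented for illustration.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: x[t] ~ sum_k a[k] * x[t-1-k]."""
    n = len(x)
    X = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def predict_ar(x, a):
    """One-step-ahead predictions for t = p .. n-1."""
    p, n = len(a), len(x)
    X = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
    return X @ a

# Synthetic "accumulation" signal: an AR(1) process with known coefficient.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

a = fit_ar(x, p=2)
nsr = np.var(x[2:] - predict_ar(x, a)) / np.var(x[2:])  # Noise-to-Signal Ratio
```

With data truly generated by an AR(1) process, the fitted first coefficient lands near 0.8 and the second near zero, and the residual NSR reflects the irreducible innovation noise.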
25

Bayesian numerical analysis : global optimization and other applications

Fowkes, Jaroslav Mrazek January 2011 (has links)
We present a unifying framework for the global optimization of functions which are expensive to evaluate. The framework is based on a Bayesian interpretation of radial basis function interpolation which incorporates existing methods such as Kriging, Gaussian process regression and neural networks. This viewpoint enables the application of Bayesian decision theory to derive a sequential global optimization algorithm which can be extended to include existing algorithms of this type in the literature. By posing the optimization problem as a sequence of sampling decisions, we optimize a general cost function at each stage of the algorithm. An extension to multi-stage decision processes is also discussed. The key idea of the framework is to replace the underlying expensive function by a cheap surrogate approximation. This enables the use of existing branch and bound techniques to globally optimize the cost function. We present a rigorous analysis of the canonical branch and bound algorithm in this setting as well as newly developed algorithms for other domains including convex sets. In particular, by making use of Lipschitz continuity of the surrogate approximation, we develop an entirely new algorithm based on overlapping balls. An application of the framework to the integration of expensive functions over rectangular domains and spherical surfaces in low dimensions is also considered. To assess performance of the framework, we apply it to canonical examples from the literature as well as an industrial model problem from oil reservoir simulation.
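The loop described above -- replace the expensive function by a cheap probabilistic surrogate, then choose each new sample by optimizing a cost over the surrogate -- can be sketched in a few lines. This toy uses a Gaussian-process surrogate with an RBF kernel and a lower-confidence-bound sampling rule on a fixed grid; the thesis derives its sampling decisions from Bayesian decision theory and globally optimizes the acquisition with branch and bound, so the LCB rule, the grid search, and all constants here are simplifying assumptions.

```python
import numpy as np

def rbf_kernel(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(xs, ys, xq, noise=1e-6):
    """Posterior mean/variance of a zero-mean GP surrogate at query points."""
    K = rbf_kernel(xs, xs) + noise * np.eye(len(xs))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ys))
    Ks = rbf_kernel(xq, xs)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v ** 2, axis=0)   # prior variance is 1 on the diagonal
    return mu, np.maximum(var, 0.0)

def bayes_opt(f, n_iter=10):
    xs = np.array([0.1, 0.5, 0.9])       # initial design
    ys = f(xs)
    grid = np.linspace(0.0, 1.0, 201)
    for _ in range(n_iter):
        mu, var = gp_posterior(xs, ys, grid)
        xn = grid[np.argmin(mu - 2.0 * np.sqrt(var))]  # LCB acquisition
        if np.any(np.isclose(xn, xs)):   # stop once we would re-sample
            break
        xs, ys = np.append(xs, xn), np.append(ys, f(xn))
    return xs[np.argmin(ys)]

x_best = bayes_opt(lambda x: (x - 0.3) ** 2)  # cheap stand-in "expensive" f
```

Swapping the kernel for the covariance of a Kriging or neural-network model changes only `rbf_kernel`, which is the unification the abstract points at.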
26

Modes de représentation pour l'éclairage en synthèse d'images

Pacanowski, Romain 25 September 2009 (has links)
In image synthesis, the main computation involved in generating an image is characterized by an equation named the rendering equation [Kajiya1986]. This equation expresses the conservation of energy in light transport: the light energy returned by the objects of a scene in a given direction equals the sum of the energy they emit and the energy they reflect. Moreover, the energy reflected by a surface element is defined as the convolution of the incoming lighting with a reflectance function. The reflectance function models the object's material (in the physical sense) and plays, in the rendering equation, the role of a directional and energetic filter describing how the surface behaves under reflection. In this thesis, we introduce new representations both for the reflectance function and for the incoming lighting.

In the first part of this thesis, we propose two new models for the reflectance function. The first model is aimed at artists, to ease the creation and editing of specular highlights. Its principle is to let the user paint and sketch the characteristics of the highlight (shape, color, gradient and texture) in a drawing plane parametrized by the mirror-reflection direction of the light. The goal of the second model is to represent measured isotropic materials compactly and efficiently. To do so, we introduce a new representation based on rational polynomials, whose coefficients are obtained through a fitting process that guarantees an optimal solution with respect to convergence.

In the second part of this thesis, we introduce a new volumetric representation for indirect illumination, represented directionally using irradiance vectors. We show that our representation is compact and robust to geometric variations, that it can serve as a caching system for real-time or offline rendering, and that it can also be transmitted progressively (streaming). Finally, we propose two kinds of modification of the incoming lighting to emphasize the details and shape of a surface: the first perturbs the directions of the incoming lighting, while the second modifies its intensity.
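The abstract's second reflectance model fits measured data with rational polynomials. As a loose stand-in for that fitting process (the thesis guarantees an optimal solution; the classical linearized least-squares trick below does not), here is a minimal rational fit of a 1-D curve, with the denominator's constant term pinned to 1 so the system is linear in the coefficients. The degrees, the test curve, and all names are illustrative assumptions.

```python
import numpy as np

def fit_rational(x, f, m=2, n=2):
    """Linearized least-squares fit of f ~ p/q with deg p = m, deg q = n
    and q(0) = 1: solve p(x_i) - f_i * (q(x_i) - 1) = f_i."""
    P = np.vander(x, m + 1, increasing=True)
    Q = np.vander(x, n + 1, increasing=True)[:, 1:]   # drop constant term
    A = np.hstack([P, -f[:, None] * Q])
    c, *_ = np.linalg.lstsq(A, f, rcond=None)
    return c[: m + 1], np.r_[1.0, c[m + 1 :]]         # p coeffs, q coeffs

def eval_rational(x, p, q):
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# A reflectance-like curve that is itself rational, so the fit can be exact.
x = np.linspace(0.0, 1.0, 50)
f = 1.0 / (1.0 + x)
p, q = fit_rational(x, f)
err = np.max(np.abs(eval_rational(x, p, q) - f))
```

Rational forms capture sharp specular lobes with far fewer coefficients than plain polynomials, which is why they suit compact material storage.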
27

A Radial Basis Function Approach to Financial Time Series Analysis

Hutchinson, James M. 01 December 1993 (has links)
Nonlinear multivariate statistical techniques on fast computers offer the potential to capture more of the dynamics of the high dimensional, noisy systems underlying financial markets than traditional models, while making fewer restrictive assumptions. This thesis presents a collection of practical techniques to address important estimation and confidence issues for Radial Basis Function networks arising from such a data driven approach, including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data mining" problem. Novel applications in the finance area are described, including customized, adaptive option pricing and stock price prediction.
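An RBF network of the kind this thesis builds on reduces, once the centers are fixed, to a linear least-squares problem for the output weights. The sketch below fits such a network to a noisy 1-D curve; the fixed uniform centers, the Gaussian basis, and the toy data are simplifying assumptions, not the thesis's estimation or pruning machinery.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Design matrix of Gaussian radial basis functions."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Noisy 1-D regression as a stand-in for a financial estimation problem.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 100)
y = np.tanh(3 * x) + 0.05 * rng.standard_normal(100)

centers = np.linspace(-1.0, 1.0, 10)          # fixed centers for simplicity
Phi = rbf_design(x, centers, width=0.25)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights
mse = np.mean((Phi @ w - y) ** 2)
```

Pruning, in this framing, amounts to deleting columns of `Phi` whose weights contribute little and refitting.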
28

Calibration of Flush Air Data Sensing Systems Using Surrogate Modeling Techniques

January 2011 (has links)
In this work the problem of calibrating Flush Air Data Sensing (FADS) systems has been addressed. The inverse problem of extracting freestream wind speed and angle of attack from pressure measurements has been solved. The aim of this work was to develop machine learning and statistical tools to optimize the design and calibration of FADS systems. Experimental and Computational Fluid Dynamics (EFD and CFD) solve the forward problem of determining the pressure distribution given the wind velocity profile and bluff body geometry. This work presents three ways in which machine learning techniques can improve the calibration of FADS systems. First, a scattered data approximation scheme, called Sequential Function Approximation (SFA), was developed that successfully solved the current inverse problem. The proposed scheme is a greedy and self-adaptive technique that constructs reliable and robust estimates without any user interaction. Wind speed and direction prediction algorithms were developed for two FADS problems: one where pressure sensors are installed on a surface vessel, and the other where sensors are installed on the Runway Assisted Landing Site (RALS) control tower. Second, a Tikhonov regularization based data-model fusion technique with SFA was developed to fuse low fidelity CFD solutions with noisy and sparse wind tunnel data. The purpose of this data-model fusion approach was to obtain high fidelity, smooth and noiseless flow field solutions by using only a few discrete experimental measurements and a low fidelity numerical solution. This physics based regularization technique gave better flow field solutions than smoothness based solutions when wind tunnel data is sparse and incomplete. Third, a sequential design strategy was developed with SFA using Active Learning techniques from machine learning theory and Optimal Design of Experiments from statistics for regression and classification problems.
Uncertainty Sampling was used with SFA to demonstrate the effectiveness of active learning versus passive learning on a cavity flow classification problem. A sequential G-optimal design procedure was also developed with SFA for regression problems. The effectiveness of this approach was demonstrated on a simulated problem and the above mentioned FADS problem.
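The Tikhonov data-model fusion idea in this abstract -- pull a low-fidelity field toward a few trusted measurements while penalizing departure from the low-fidelity solution -- has a minimal closed-form version shown below. The quadratic objective, the 1-D field, and all names are illustrative assumptions; the thesis couples the regularization with SFA and flow physics, which this sketch omits.

```python
import numpy as np

def fuse(x_lofi, idx, d, lam):
    """argmin_x  sum_i (x[idx[i]] - d[i])^2 + lam * ||x - x_lofi||^2
    -- a diagonal Tikhonov problem, solved in closed form."""
    n = len(x_lofi)
    A = lam * np.eye(n)
    b = lam * x_lofi.copy()
    for i, di in zip(idx, d):
        A[i, i] += 1.0
        b[i] += di
    return np.linalg.solve(A, b)

# Low-fidelity "CFD" field with a constant bias; sparse exact "wind tunnel" data.
x_lofi = np.sin(np.linspace(0.0, np.pi, 20))
truth = x_lofi + 0.2
idx = [3, 10, 16]
d = truth[np.array(idx)]
x = fuse(x_lofi, idx, d, lam=0.5)
```

Measured points move toward the data by a factor 1/(1 + lam); unmeasured points stay at the low-fidelity value, which is where a physics- or smoothness-coupling term would spread the correction.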
29

Surface reconstruction using variational interpolation

Joseph Lawrence, Maryruth Pradeepa 24 November 2005
Surface reconstruction of anatomical structures is an integral part of medical modeling. Contour information is extracted from serial cross-sections of tissue data and is stored as "slice" files. Although there are several reasonably efficient triangulation algorithms that reconstruct surfaces from slice data, the models generated from them have a jagged or faceted appearance due to the large inter-slice distance created by the sectioning process. Moreover, inconsistencies in user input aggravate the problem. So, we created a method that reduces inter-slice distance, as well as ignores the inconsistencies in the user input. Our method, called piecewise weighted implicit functions, is based on the approach of weighting smaller implicit functions. It takes only a few slices at a time to construct the implicit function. This method is based on a technique called variational interpolation.

Other approaches based on variational interpolation have the disadvantage of becoming unstable when the model is quite large, with more than a few thousand constraint points. Furthermore, tracing the intermediate contours becomes expensive for large models. Even though some fast fitting methods handle such instability problems, there is no apparent improvement in contour tracing time, because the value of each data point on the contour boundary is evaluated using a single large implicit function that essentially uses all constraint points. Our method handles both these problems using a sliding window approach. As our method uses only a local domain to construct each implicit function, it achieves a considerable run-time saving over the other methods. The resulting software produces interpolated models from large data sets in a few minutes on an ordinary desktop computer.
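The interpolation step underlying this kind of method can be sketched as an RBF fit to scattered constraints: contour points get value 0 and an interior point gets a positive value, so the zero level set of the fitted function traces the surface. Variational interpolation proper minimizes a thin-plate energy; the sketch below substitutes a Gaussian kernel for numerical simplicity, and every name and constant is an illustrative assumption.

```python
import numpy as np

def gauss(r, s=0.7):
    return np.exp(-(r / s) ** 2)

def fit_implicit(pts, vals):
    """Interpolate scattered constraints with one RBF weight per point."""
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return np.linalg.solve(gauss(r), vals)

def eval_implicit(x, pts, w):
    r = np.linalg.norm(np.atleast_2d(x)[:, None, :] - pts[None, :, :], axis=-1)
    return gauss(r) @ w

# Contour constraints (value 0) on a circle, plus one interior point (value 1).
t = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
pts = np.vstack([np.column_stack([np.cos(t), np.sin(t)]), [[0.0, 0.0]]])
vals = np.r_[np.zeros(12), 1.0]
w = fit_implicit(pts, vals)

inside = eval_implicit([[0.1, 0.0]], pts, w)[0]
outside = eval_implicit([[1.6, 0.0]], pts, w)[0]
```

The sliding-window idea in the abstract amounts to solving many small systems like this one, over a few slices of constraints at a time, instead of one system over all constraint points.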
30

3D Model of Fuel Tank for System Simulation : A methodology for combining CAD models with simulation tools

Wikström, Jonas January 2011 (has links)
Engineering aircraft systems is a complex task. Models and computer simulations are therefore needed to test the functions and behavior of systems that do not yet exist, reduce testing time and cost, reduce the risk involved, and detect problems early, which reduces the number of implementation errors. At the Vehicle Simulation and Thermal Analysis section of Saab Aeronautics in Linköping, every basic aircraft system is designed and simulated, for example the fuel system. Currently, 2-dimensional rectangular blocks are used in the simulation model to represent the fuel tanks. However, this is too simplistic to allow a more detailed analysis. The model needs to be extended with a more complex description of the tank geometry in order to become more accurate.

This report explains the steps of the developed methodology for combining 3-dimensional geometry models of any fuel tank created in CATIA with dynamic simulation of the fuel system in Dymola. The new 3-dimensional representation of the tank in Dymola should be able to calculate the fuel surface location during simulation of a maneuvering aircraft. The first step of the methodology is to create a solid model of the fuel contents of the tank. Then the area of validity for the model is specified; in this step, all possible orientations of the fuel acceleration vector within the area of validity are generated. All of these orientations are used in the automated volume analysis in CATIA. For each orientation, CATIA splits the fuel body into a specified number of volumes and records the volume, the location of the fuel surface and the location of the center of gravity. This recorded data is then approximated using radial basis functions implemented in MATLAB, producing a surrogate model that is implemented in Dymola. In this way, any fuel surface location and center of gravity can be calculated efficiently from the orientation of the fuel acceleration vector and the amount of fuel.

The new 3-dimensional tank model is simulated in Dymola and the results are compared with measurements from the model in CATIA and with results from the simulation of the old 2-dimensional tank model. The results show that the 3-dimensional tank gives a better approximation of reality and is a big improvement over the 2-dimensional tank model. The downside is that it takes approximately 24 hours to develop such a model.
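The surrogate-modeling step of this workflow -- sample an expensive geometric analysis on a grid of inputs, then fit a cheap radial-basis-function interpolant to query inside the simulation loop -- can be sketched as follows. The smooth function standing in for the CATIA volume sweep, the sample counts, and all names are hypothetical; the real pipeline uses MATLAB and Dymola rather than Python.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_mat(A, B, s=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * s * s))

# Hypothetical stand-in for the CATIA sweep: a smooth response
# h(fill fraction, tilt angle) playing the role of fuel-surface height.
def h_true(x):
    return x[:, 0] * np.cos(x[:, 1]) + 0.1 * np.sin(3.0 * x[:, 1])

X = rng.uniform([0.0, -0.5], [1.0, 0.5], size=(80, 2))   # "sweep" samples
w = np.linalg.solve(rbf_mat(X, X) + 1e-6 * np.eye(80), h_true(X))

def surrogate(xq):
    """Fast RBF surrogate, cheap enough for a system-simulation loop."""
    return rbf_mat(np.asarray(xq), X) @ w

Xq = rng.uniform([0.1, -0.4], [0.9, 0.4], size=(20, 2))  # held-out queries
err = np.max(np.abs(surrogate(Xq) - h_true(Xq)))
```

Evaluating the surrogate is one small matrix product, which is what makes it viable to call at every time step of a maneuvering-aircraft simulation where re-running the geometric analysis would not be.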
