About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

GPU Based Scattered Data Modeling

Vinjarapu, Saranya S. 16 May 2012 (has links)
No description available.
12

Real-Time Visualizations of Ocean Data Collected by the NORUS Glider

Medina, Daniel M 01 June 2010 (has links) (PDF)
Scientific visualization applications generate visual representations of large and complex sets of science data, allowing scientists to gain greater knowledge and insight into their data. For example, the visualization of environmental data is of particular interest to biologists trying to understand how complex variables interact. Modern robotics and sensors have expanded the ability to collect environmental data, and the size and variety of these datasets have grown accordingly. Oftentimes the collected data are deposited into files and databases where they sit in their separate and unique formats. Without easy-to-use visualization tools, it is difficult to understand and interpret the information within these datasets. NORUS, the North America-Norway educational program, has a scientific focus on how climate-induced changes impact the living resources and ecosystems in the Arctic. In order to obtain the necessary science data, the NORUS program utilizes the Slocum Glider, a form of Autonomous Underwater Vehicle (AUV). This thesis aims to create a compelling, efficient, and easy-to-use interactive system for visualizing large sets of science data collected by the Slocum Glider. This goal is achieved through the implementation of various methods taken from scientific visualization, real-time rendering, and scattered data interpolation. Methods include visualizations of the surrounding terrain, the ability to map various science data to glyphs, control over color mapping, scattered data interpolation, and interactive camera control.
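As a rough illustration of the scattered-data step such a pipeline needs before color mapping, the sketch below resamples hypothetical scattered glider measurements onto a regular grid with inverse-distance (Shepard) weighting. It assumes only NumPy; the data and parameter names are invented and this is not code from the thesis.

```python
# Minimal sketch (not the thesis implementation): Shepard / inverse-distance
# weighting to resample scattered measurements onto a regular grid.
import numpy as np

def idw_grid(points, values, grid_x, grid_y, power=2.0, eps=1e-12):
    """Interpolate scattered (x, y) -> value samples onto a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)                 # grid coordinates
    d2 = ((gx[..., None] - points[:, 0]) ** 2 +
          (gy[..., None] - points[:, 1]) ** 2)           # squared distances to samples
    w = 1.0 / (d2 ** (power / 2.0) + eps)                # inverse-distance weights
    return (w * values).sum(axis=-1) / w.sum(axis=-1)    # weighted average per grid cell

# Hypothetical data: 200 scattered samples of, say, water temperature.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 2))
temp = np.sin(pts[:, 0]) + 0.1 * pts[:, 1]
field = idw_grid(pts, temp, np.linspace(0, 10, 64), np.linspace(0, 10, 64))
print(field.shape)  # (64, 64) grid ready for color mapping
```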
13

Neural Networks Satisfying Stone-Weierstrass Theorem and Approximating Scattered Data by Kohonen Neural Networks

Thakkar, Pinal 01 January 2004 (has links)
Neural networks are an attempt to build computer networks of artificial neurons that imitate the activity of the human brain. Their origin dates back to 1943, when the neurophysiologist Warren McCulloch and the logician Walter Pitts produced the first artificial neuron. Since then there has been tremendous development of neural networks and their applications to pattern and optical character recognition, speech processing, time series prediction, image processing, and scattered data approximation. Since it has been shown that neural nets can approximate all but pathological functions, Neil Cotter considered neural network architectures based on the Stone-Weierstrass Theorem. Using exponential functions, polynomials, rational functions, and Boolean functions, one can follow the method given by Cotter to obtain neural networks that can approximate bounded measurable functions. Another problem of current research in computer graphics is to construct curves and surfaces from scattered spatial points using B-splines, NURBS, or Bezier surfaces. Hoffman and Varady used Kohonen neural networks to construct appropriate grids. This thesis is concerned with two types of neural networks: those which satisfy the conditions of the Stone-Weierstrass Theorem, and Kohonen neural networks. We have used self-organizing maps for scattered data approximation. The Neural Network Toolbox from MATLAB is used to develop the required grids for approximating scattered data in one and two dimensions.
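For intuition, the sketch below shows a minimal one-dimensional Kohonen self-organizing map in NumPy that pulls an ordered chain of nodes toward hypothetical scattered samples of a noisy curve. The thesis itself uses MATLAB's Neural Network Toolbox, so this is only an illustration of the idea, with invented data and parameters.

```python
# Minimal 1-D Kohonen SOM sketch: a chain of nodes is pulled toward scattered
# samples, yielding an ordered "grid" that approximates the data.
import numpy as np

def som_1d(samples, n_nodes=20, epochs=200, lr0=0.5, sigma0=4.0, seed=0):
    rng = np.random.default_rng(seed)
    nodes = samples[rng.choice(len(samples), n_nodes)]   # initialize nodes on the data
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                    # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / epochs), 0.5)    # shrinking neighborhood width
        for x in samples[rng.permutation(len(samples))]:
            winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
            h = np.exp(-((np.arange(n_nodes) - winner) ** 2) / (2 * sigma ** 2))
            nodes += lr * h[:, None] * (x - nodes)       # pull winner's neighborhood toward x
    return nodes

# Hypothetical scattered points along a noisy curve.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 2 * np.pi, 300))
pts = np.column_stack([t, np.sin(t)]) + rng.normal(0, 0.05, (300, 2))
grid = som_1d(pts)
print(grid[:3])  # ordered nodes tracing the underlying curve
```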
14

Applications of Generic Interpolants In the Investigation and Visualization of Approximate Solutions of PDEs on Coarse Unstructured Meshes

Goldani Moghaddam, Hassan 12 August 2010 (has links)
In scientific computing, it is very common to visualize the approximate solution obtained by a numerical PDE solver by drawing surface or contour plots of all or some components of the associated approximate solutions. These plots are used to investigate the behavior of the solution and to display important properties or characteristics of the approximate solutions. In this thesis, we consider techniques for drawing such contour plots for the solution of two- and three-dimensional PDEs. We first present three fast contouring algorithms in two dimensions over an underlying unstructured mesh. Unlike standard contouring algorithms, our algorithms do not require a fine structured approximation. We assume that the underlying PDE solver generates approximations at some scattered data points in the domain of interest. We then generate a piecewise cubic polynomial interpolant (PCI) which approximates the solution of a PDE at off-mesh points based on the DEI (Differential Equation Interpolant) approach. The DEI approach assumes that accurate approximations to the solution and first-order derivatives exist at a set of discrete mesh points. The extra information required to uniquely define the associated piecewise polynomial is determined by almost satisfying the PDE at a set of collocation points. In the process of generating contour plots, the PCI is used whenever we need an accurate approximation at a point inside the domain. The direct extension of both the DEI-based interpolant and the contouring algorithms to three dimensions is also investigated. The DEI-based interpolant we introduce for visualization can also be used to develop effective Adaptive Mesh Refinement (AMR) techniques and global error estimates. In particular, we introduce and investigate four AMR techniques along with a hybrid mesh refinement technique. Our interest is in investigating how well such a 'generic' mesh selection strategy, based on properties of the problem alone, can perform compared with a special-purpose strategy designed for a specific PDE method. We also introduce an a posteriori global error estimator based on the solution of a companion PDE defined in terms of the associated PCI.
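The following simplified sketch extracts a contour on an unstructured triangular mesh using plain linear interpolation along triangle edges; the thesis replaces this linear evaluation with the more accurate piecewise-cubic DEI-based interpolant. The two-triangle mesh and nodal values below are hypothetical.

```python
# Simplified contour extraction on an unstructured triangular mesh
# ("marching triangles" with linear edge interpolation).
import numpy as np

def contour_segments(verts, tris, values, level):
    """Return line segments approximating the `level` contour."""
    segments = []
    for tri in tris:
        pts = verts[tri]               # 3 x 2 vertex coordinates
        vals = values[tri]             # 3 nodal solution values
        crossings = []
        for i in range(3):
            a, b = i, (i + 1) % 3
            va, vb = vals[a], vals[b]
            if (va - level) * (vb - level) < 0:          # edge crosses the level
                t = (level - va) / (vb - va)             # linear interpolation along the edge
                crossings.append(pts[a] + t * (pts[b] - pts[a]))
        if len(crossings) == 2:
            segments.append((crossings[0], crossings[1]))
    return segments

# Hypothetical two-triangle mesh with nodal values.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
vals = np.array([0.0, 1.0, 2.0, 1.0])
print(contour_segments(verts, tris, vals, level=0.5))
```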
16

Radial basis function interpolation

Du Toit, Wilna 03 1900 (has links)
Thesis (MSc (Applied Mathematics))--Stellenbosch University, 2008. A popular method for interpolating multidimensional scattered data is the use of radial basis functions. In this thesis we present the basic theory of radial basis function interpolation and also consider the solvability and stability of the method. Solving for the interpolant directly has a high computational cost for large datasets, hence numerical methods that approximate the interpolant are necessary. We consider some recent numerical algorithms. Software to implement radial basis function interpolation and to display the resulting 3D interpolants is developed. We present results obtained from applying our radial basis function implementation to GIS and 3D face data, as well as to an image warping application.
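A basic sketch of the direct approach the abstract refers to: assemble the symmetric interpolation matrix for a Gaussian radial basis function and solve for the weights. The test function, shape parameter, and data are hypothetical; the O(n^3) direct solve is exactly what becomes too expensive for large datasets and motivates the approximate algorithms the thesis surveys.

```python
# Direct radial basis function interpolation with a Gaussian kernel.
import numpy as np

def rbf_fit(centers, values, shape=1.0):
    """Solve the symmetric interpolation system A w = f."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = np.exp(-(shape * d) ** 2)            # Gaussian basis phi(r) = exp(-(c r)^2)
    return np.linalg.solve(A, values)        # interpolation weights

def rbf_eval(x, centers, weights, shape=1.0):
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(shape * d) ** 2) @ weights

# Hypothetical 2-D scattered data from a smooth test function.
rng = np.random.default_rng(0)
centers = rng.uniform(-1, 1, size=(100, 2))
f = np.sin(np.pi * centers[:, 0]) * centers[:, 1]
w = rbf_fit(centers, f)
test = rng.uniform(-1, 1, size=(5, 2))
print(rbf_eval(test, centers, w))            # interpolated values at new points
```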
17

Numerische Methoden zur Analyse hochdimensionaler Daten / Numerical Methods for Analyzing High-Dimensional Data

Heinen, Dennis 01 July 2014 (has links)
This dissertation addresses two of the main challenges that arise when working with large datasets: dimensionality reduction and data denoising. The first part of the dissertation gives an overview of dimensionality reduction. The goal of dimensionality reduction is a meaningful low-dimensional representation of a given high-dimensional dataset. In particular, we discuss and compare established manifold learning methods. The central assumption of manifold learning is that the high-dimensional dataset lies (approximately) on a low-dimensional manifold. Noise in the data hampers all dimensionality reduction methods. The second part of the dissertation presents a new denoising method for high-dimensional data: a wavelet shrinkage method for smoothing noisy samples of an underlying multivariate piecewise continuous function, where the sample points may be scattered. The method is a generalization and further development of the "Easy Path Wavelet Transform" (EPWT), which was originally introduced for image compression. It is based on a one-dimensional wavelet transform along (adaptively) constructed paths through the sample points. Suitable adaptive path constructions are essential for the success of the method. The dissertation also contains a brief discussion of the theoretical properties of wavelets along paths as well as numerical results, and it concludes with possible modifications of the denoising method.
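The following rough sketch illustrates the path-based idea only, not the actual generalized EPWT: order hypothetical scattered samples along a greedy nearest-neighbour path, then apply one level of Haar wavelet shrinkage along that path. The real method relies on adaptive path constructions and a full multilevel transform; the data and threshold here are invented.

```python
# Toy path-based denoising: greedy path through scattered points,
# then one-level Haar shrinkage along the path.
import numpy as np

def greedy_path(points):
    """Visit all points, always jumping to the nearest unvisited one."""
    n = len(points)
    order, visited = [0], np.zeros(n, dtype=bool)
    visited[0] = True
    for _ in range(n - 1):
        d = np.linalg.norm(points - points[order[-1]], axis=1)
        d[visited] = np.inf
        nxt = int(np.argmin(d))
        order.append(nxt)
        visited[nxt] = True
    return np.array(order)

def haar_shrink(signal, threshold):
    """One-level Haar transform, soft-threshold details, reconstruct."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)       # approximation coefficients
    dcoef = (signal[0::2] - signal[1::2]) / np.sqrt(2)   # detail coefficients
    dcoef = np.sign(dcoef) * np.maximum(np.abs(dcoef) - threshold, 0.0)
    out = np.empty_like(signal)
    out[0::2] = (a + dcoef) / np.sqrt(2)
    out[1::2] = (a - dcoef) / np.sqrt(2)
    return out

# Hypothetical noisy samples of a piecewise-constant function at scattered 2-D points.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(256, 2))
noisy = np.where(pts[:, 0] < 0.5, 1.0, 0.0) + rng.normal(0, 0.1, 256)
order = greedy_path(pts)
denoised = np.empty_like(noisy)
denoised[order] = haar_shrink(noisy[order], threshold=0.15)
```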
18

Compression et inférence des opérateurs intégraux : applications à la restauration d’images dégradées par des flous variables / Approximation and estimation of integral operators : applications to the restoration of images degraded by spatially varying blurs

Escande, Paul 26 September 2016 (has links)
The restoration of images degraded by spatially varying blurs is a problem of increasing importance, encountered in applications such as astronomy, computer vision, and light-sheet fluorescence microscopy, where images can contain a billion pixels. Spatially varying blurs can be modelled by linear integral operators H that map a sharp image u to its blurred version Hu. After discretization of the image on a grid of N pixels, H can be viewed as a matrix of size N x N. For the targeted applications, storing this matrix in memory would require on the order of an exabyte. This simple observation illustrates the difficulties associated with the restoration problem: i) the storage of a huge volume of data, and ii) the prohibitive computational cost of matrix-vector products. The problem suffers from the curse of dimensionality. In addition, in many applications the blur operator is unknown or only partially known. There are therefore two complementary but closely related problems, the approximation and the estimation of blurring operators, which have to be addressed together. Most of the work of this thesis is dedicated to developing new models and computational methods to address these issues.
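A back-of-the-envelope sketch of the storage issue, together with a toy spatially varying blur built by blending two stationary Gaussian blurs. The blend is a crude stand-in for the structured operator approximations studied in the thesis, not the thesis method; the image size, sigmas, and blend weights are hypothetical, and the code assumes NumPy and SciPy.

```python
# Dense-operator storage estimate plus a toy spatially varying blur.
import numpy as np
from scipy.ndimage import gaussian_filter

N = 10**9                                   # one-billion-pixel image
dense_bytes = N * N * 8                     # dense N x N operator stored in float64
print(f"dense operator: {dense_bytes / 1e18:.0f} exabytes")

def varying_blur(u, sigma_left=1.0, sigma_right=5.0):
    """Blur that gets stronger from left to right, as a convex blend of two stationary blurs."""
    w = np.linspace(0.0, 1.0, u.shape[1])[None, :]       # per-column blend weight
    return (1 - w) * gaussian_filter(u, sigma_left) + w * gaussian_filter(u, sigma_right)

u = np.zeros((128, 128))
u[64, 16] = u[64, 112] = 1.0                # two point sources
blurred = varying_blur(u)                   # the right-hand source is blurred more strongly
```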
