1

Coupled spring equations

Fay, TH, Graham, SD January 2003 (has links)
Coupled spring equations for modelling the motion of two springs with weights attached, hung in series from the ceiling, are described. For the linear model using Hooke’s Law, the motion of each weight is described by a fourth-order linear differential equation. A nonlinear model is also described, and damping and external forcing are considered. The model has many features that permit the meaningful introduction of many concepts, including: accuracy of numerical algorithms, dependence on parameters and initial conditions, phase and synchronization, periodicity, beats, linear and nonlinear resonance, limit cycles, and harmonic and subharmonic solutions. These solutions produce a wide variety of interesting motions, and the model is suitable for study as a computer laboratory project in a beginning course on differential equations or as an individual or small-group undergraduate research project.
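For reference, the linear undamped model is the pair m1·x1'' = −k1·x1 + k2·(x2 − x1), m2·x2'' = −k2·(x2 − x1); eliminating one unknown from this pair of second-order equations yields the fourth-order equation mentioned above. Below is a minimal sketch that integrates the system numerically with SciPy; the masses, spring constants, and initial conditions are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the linear two-mass, two-spring model (Hooke's Law,
# no damping or forcing); parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

m1, m2 = 1.0, 1.0        # masses attached to the two springs
k1, k2 = 6.0, 4.0        # spring constants (upper and lower spring)

def rhs(t, y):
    # State y = [x1, x1', x2, x2']: displacements from equilibrium.
    x1, v1, x2, v2 = y
    a1 = (-k1 * x1 + k2 * (x2 - x1)) / m1   # upper mass feels both springs
    a2 = (-k2 * (x2 - x1)) / m2             # lower mass feels the lower spring
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 2.0, 0.0])
print(sol.y[0, -1], sol.y[2, -1])          # displacements at t = 20
```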
2

Algorithms for trigonometric polynomial and rational approximation

Javed, Mohsin January 2016 (has links)
This thesis presents new numerical algorithms for approximating functions by trigonometric polynomials and trigonometric rational functions. We begin by reviewing trigonometric polynomial interpolation and the barycentric formula for trigonometric polynomial interpolation in Chapter 1. Another feature of this chapter is the use of the complex plane, contour integrals and phase portraits for visualising various properties and relationships between periodic functions and their Laurent and trigonometric series. We also derive a periodic analogue of the Hermite integral formula, which enables us to analyze interpolation error using contour integrals; we have not been able to find such a formula in the literature. Chapter 2 discusses trigonometric rational interpolation and trigonometric linearized rational least-squares approximation. To our knowledge, this is the first attempt to solve these problems numerically. The contribution of this chapter is a robust algorithm for computing trigonometric rational interpolants of prescribed numerator and denominator degrees at an arbitrary grid of interpolation points. The algorithm can also be used to compute trigonometric linearized rational least-squares and trigonometric polynomial least-squares approximations. Chapter 3 deals with the problem of trigonometric minimax approximation of functions, first in a space of trigonometric polynomials and then in a set of trigonometric rational functions. The contribution of this chapter is an algorithm which, to our knowledge, is the first Remez-like algorithm for numerically computing trigonometric minimax polynomial and rational approximations. Our algorithm also uses trigonometric barycentric interpolation and Chebyshev eigenvalue-based rootfinding. Chapter 4 discusses the Fourier-Padé (also called trigonometric Padé) approximation of a function. We review two existing approaches to the problem, both of which are based on rational approximations of a Laurent series. We present a numerical algorithm with examples and compute various type (m, n) trigonometric Padé approximants.
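For equispaced points, the barycentric formula mentioned above takes an especially simple form. The sketch below implements the classical odd-N version with weights (−1)^k and a cosecant kernel; the node count and test function are illustrative assumptions, not examples from the thesis.

```python
# Minimal sketch: barycentric trigonometric interpolation at an odd number
# N of equispaced nodes on [0, 2*pi), using the classical weights (-1)^k
# and cosecant kernel. Node count and test function are illustrative.
import numpy as np

def trig_interp(fk, tk, t):
    """Evaluate the trigonometric interpolant of data (tk, fk) at points t."""
    w = (-1.0) ** np.arange(len(tk))             # barycentric weights, N odd
    d = t[:, None] - tk[None, :]
    s = np.sin(d / 2)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = w / s                                # cosecant kernel
        p = (K @ fk) / K.sum(axis=1)
    hit = np.isclose(s, 0)                       # evaluation point equals a node
    rows = hit.any(axis=1)
    p[rows] = fk[hit[rows].argmax(axis=1)]       # use the node value directly
    return p

N = 29
tk = 2 * np.pi * np.arange(N) / N
f = lambda t: np.exp(np.sin(t))                  # smooth 2*pi-periodic test function
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
print(np.abs(trig_interp(f(tk), tk, t) - f(t)).max())  # error decays spectrally in N
```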
3

Numerical algorithms for three dimensional computational fluid dynamic problems

Mora Acosta, Josue 20 December 2001 (has links)
The target of this work is to contribute to the enhancement of numerical methods for the simulation of complex thermal systems. Frequently, the factor that limits the accuracy of the simulations is the computing power: accurate simulations of complex devices require fine three-dimensional discretizations and the solution of large linear equation systems. Their efficient solution is one of the central aspects of this work. Low-cost parallel computers, for instance PC clusters, are used to do so. The main bottleneck of these computers is the network, which is too slow compared with their floating-point performance. Before considering linear solution algorithms, an overview of the mathematical models used and of the discretization techniques on staggered Cartesian and cylindrical meshes is provided. The governing Navier-Stokes equations are solved using an implicit finite control volume method. Pressure-velocity coupling is solved with segregated approaches such as SIMPLEC. Different algorithms for the solution of the linear equation systems are reviewed: from incomplete factorizations such as MSIP, and Krylov solvers such as BiCGSTAB and GMRESR, to acceleration techniques such as algebraic multigrid and multiresolution analysis with wavelets. Special attention is paid to preconditioned Krylov solvers for their application to parallel CFD problems. The fundamentals of parallel computing on distributed-memory computers, as well as implementation details of these algorithms in combination with the domain decomposition method, are given. Two different distributed-memory computers, a Cray T3E and a PC cluster, are used for several performance measures, including network throughput, the performance of algebraic subroutines that affect the overall efficiency of the algorithms, and solver performance. These measures are intended to show the capabilities and drawbacks of the parallel solvers for several processor counts and partitioning configurations on a model problem. Finally, in order to illustrate the potential of the different techniques presented, a three-dimensional CFD problem is solved using a PC cluster. The numerical results obtained are validated by comparison with other authors. The speedup for up to 12 processors is measured. An analysis of the computing time shows that, as expected, most of the computational effort is due to the pressure-correction equation, here solved with BiCGSTAB. The computing time of the algorithm, for different problem sizes, is compared with Schur-complement and multigrid approaches. / The thesis focuses on the numerical solution of the Navier-Stokes equations in the transient, three-dimensional, laminar regime. The algorithms used are of the segregated type (SIMPLEC) and are based on finite volume techniques, with structured staggered meshes and implicit temporal discretizations. In this context, the main problem is the high computing time of the simulations, which is largely due to the solution of the linear equation systems. Different methods typically used on sequential computers are reviewed: GMRES, BiCGSTAB, ACM, MSIP. In order to reduce computing times, distributed-memory parallel computers based on clusters of conventional personal computers (PC clusters) are employed.
With respect to the computing power per processor, these systems are comparable to conventional distributed-memory parallel computers (such as the Cray T3E), their main problem being the low communication capacity (high latency, low bandwidth). This point conditions the entire computational strategy, forcing the number and size of the exchanged messages to be reduced as much as possible. This aspect is quantified in detail in the thesis by measuring computing times on both computers for several operations critical to the linear algorithms. The computing times and speedups obtained in the solution of the linear systems with different parallel algorithms (Jacobi, MSIP, GMRES, BiCGSTAB) and for different mesh sizes are also measured and compared. Finally, the above techniques are used to solve the driven-cavity case in three-dimensional configurations with Reynolds numbers up to 8000. The results obtained are used to validate the developed codes against results from other codes and against experimental results from the literature. Up to 12 processors are used, obtaining speedups of up to 9.7 on the PC cluster. The computing times of each phase of the code are analyzed, pointing out areas for future improvement. The computing times are compared with the algorithms implemented in other works. The final conclusion is that PC clusters are a very powerful platform for computational fluid dynamics calculations.
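As a point of reference for the pressure-correction bottleneck mentioned in both abstracts, the sketch below assembles a 3-D Poisson-type system on a small uniform grid and solves it with preconditioned BiCGSTAB from SciPy; the grid size, right-hand side, and incomplete-LU preconditioner are illustrative assumptions, not the thesis configuration.

```python
# Sketch: 3-D Poisson-type (pressure-correction-like) system solved with
# incomplete-LU preconditioned BiCGSTAB. All sizes/values are illustrative.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 20                                          # cells per direction, N = n^3
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.kron(T, I), I) +
     sp.kron(sp.kron(I, T), I) +
     sp.kron(sp.kron(I, I), T)).tocsc()         # 7-point Laplacian stencil

b = np.random.default_rng(0).standard_normal(n**3)   # stand-in right-hand side
ilu = spla.spilu(A, drop_tol=1e-4)                   # incomplete LU factorization
M = spla.LinearOperator(A.shape, ilu.solve)          # preconditioner as an operator

x, info = spla.bicgstab(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))          # info == 0 means converged
```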
4

Mathematical Software for Multiobjective Optimization Problems

Chang, Tyler Hunter 15 June 2020 (has links)
In this thesis, two distinct problems in data-driven computational science are considered. The main problem of interest is the multiobjective optimization problem, where the tradeoff surface (called the Pareto front) between multiple conflicting objectives must be approximated in order to identify designs that balance real-world tradeoffs. In order to solve multiobjective optimization problems that are derived from computationally expensive blackbox functions, such as engineering design optimization problems, several methodologies are combined, including surrogate modeling, trust region methods, and adaptive weighting. The result is a numerical software package that finds approximately Pareto optimal solutions that are evenly distributed across the Pareto front, using minimal cost function evaluations. The second problem of interest is the closely related problem of multivariate interpolation, where an unknown response surface representing an underlying phenomenon is approximated by finding a function that exactly matches available data. To solve the interpolation problem, a novel algorithm is proposed for computing only a sparse subset of the elements in the Delaunay triangulation, as needed to compute the Delaunay interpolant. For high-dimensional data, this reduces the time and space complexity of Delaunay interpolation from exponential to polynomial in practice. For each of the above problems, both serial and parallel implementations are described. Additionally, both solutions are demonstrated on real-world problems in computer system performance modeling. / Doctor of Philosophy / Science and engineering are full of multiobjective tradeoff problems. For example, a portfolio manager may seek to build a financial portfolio with low risk, high return rates, and minimal transaction fees; an aircraft engineer may seek a design that maximizes lift, minimizes drag force, and minimizes aircraft weight; a chemist may seek a catalyst with low viscosity, low production costs, and high effective yield; or a computational scientist may seek to fit a numerical model that minimizes the fit error while also minimizing a regularization term that leverages domain knowledge. Often, these criteria are conflicting, meaning that improved performance by one criterion must come at the expense of decreased performance in another criterion. The solution to a multiobjective optimization problem allows decision makers to balance the inherent tradeoff between conflicting objectives. A related problem is the multivariate interpolation problem, where the goal is to predict the outcome of an event based on a database of past observations, while exactly matching all observations in that database. Multivariate interpolation problems are as prevalent and impactful as multiobjective optimization problems. For example, a pharmaceutical company may seek a prediction for the costs and effects of a proposed drug; an aerospace engineer may seek a prediction for the lift and drag of a new aircraft design; or a search engine may seek a prediction for the classification of an unlabeled image. Delaunay interpolation offers a unique solution to this problem, backed by decades of rigorous theory and analytical error bounds, but does not scale to high-dimensional "big data" problems. In this thesis, novel algorithms and software are proposed for solving both of these extremely difficult problems.
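To make the dominance relation underlying the Pareto front concrete, here is a minimal sketch that filters a finite set of objective vectors down to its nondominated subset (all objectives minimized); the random data are an illustrative assumption, and this shows only the dominance test, not the surrogate-based solver developed in the thesis.

```python
# Sketch: extract the nondominated subset of a finite set of objective
# vectors (all objectives minimized). Illustrates Pareto dominance only.
import numpy as np

def nondominated(F):
    """Return a boolean mask of rows of F not dominated by any other row."""
    keep = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        if not keep[i]:
            continue
        # Row j dominates row i if F[j] <= F[i] everywhere and < somewhere.
        if ((F <= F[i]).all(axis=1) & (F < F[i]).any(axis=1)).any():
            keep[i] = False
    return keep

rng = np.random.default_rng(1)
F = rng.random((200, 3))                 # 200 random points, 3 objectives
front = F[nondominated(F)]
print(front.shape)                       # the approximate Pareto front
```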
5

Sobre a escolha da relaxação e ordenação das projeções no método de Kaczmarz com ênfase em implementações altamente paralelas e aplicações em reconstrução tomográfica / On the choice of relaxation and ordering of projections in the Kaczmarz method with emphasis on highly parallel implementations and applications in tomographic reconstruction

Estácio, Leonardo Bravo 16 May 2014 (has links)
The Kaczmarz method is an iterative algorithm that solves linear systems of the form Ax = b by projecting onto hyperplanes; it is widely used in applications involving computerized tomography. It returned to prominence after the publication of a randomized version by Strohmer and Vershynin in 2009, which was proved to have an exponential expected rate of convergence. Subsequently, in 2011, Eldar and Needell suggested a modified version of the Strohmer-Vershynin algorithm in which, at each iteration, the optimal projection is selected from a random set, using the Johnson-Lindenstrauss lemma. Neither of these articles presents a technique for choosing the relaxation parameter; however, the proper selection of this parameter can have a substantial influence on the speed of the method. In this work we present a methodology for choosing the relaxation parameter, as well as parallel implementations of the Kaczmarz algorithm using the ideas of Eldar and Needell. Our methodology for parameter selection uses a new generalization of Strohmer and Vershynin's results that takes the relaxation parameter λ into account, from which we obtain an estimate of the convergence rate as a function of λ; we then use in the algorithm the value of λ that optimizes this estimate. The methods were parallelized on the CUDA platform, with very promising results: a significant gain in convergence speed was obtained.
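A minimal sketch of the randomized Kaczmarz iteration with a relaxation parameter λ, sampling rows with probability proportional to their squared norms as in Strohmer and Vershynin; the value λ = 1 and the random test problem are illustrative assumptions, not the selection rule derived in the thesis.

```python
# Sketch: randomized Kaczmarz with relaxation parameter lam, rows sampled
# proportionally to squared row norms (Strohmer-Vershynin). Values illustrative.
import numpy as np

def randomized_kaczmarz(A, b, lam=1.0, iters=5000, seed=0):
    m, n = A.shape
    x = np.zeros(n)
    rownorm2 = (A * A).sum(axis=1)
    p = rownorm2 / rownorm2.sum()           # sampling distribution over rows
    rng = np.random.default_rng(seed)
    for i in rng.choice(m, size=iters, p=p):
        # Relaxed projection of x onto the hyperplane a_i . x = b_i.
        x += lam * (b[i] - A[i] @ x) / rownorm2[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 50))
xstar = rng.standard_normal(50)
b = A @ xstar                               # consistent system Ax = b
x = randomized_kaczmarz(A, b, lam=1.0)
print(np.linalg.norm(x - xstar))            # error decays exponentially in expectation
```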
6

Bibliotheken zur Entwicklung paralleler Algorithmen - Basisroutinen für Kommunikation und Grafik / Libraries for the development of parallel algorithms - basic routines for communication and graphics

Pester, Matthias 04 April 2006 (has links) (PDF)
The purpose of this paper is to supply a summary of library subroutines and functions for parallel MIMD computers. The subroutines have been developed and continuously extended at the University of Chemnitz since the end of the eighties. In detail, they are concerned with vector operations, inter-processor communication, and simple graphical output to workstations. One of the most valuable features is the machine independence of the communication subroutines proposed in this paper for a hypercube topology of the parallel processors (except for a kernel of only two primitive system-dependent operations). They were implemented and tested for different hardware and operating systems, including PARIX for transputers and PowerPC, nCube, PVM, and MPI. The vector subroutines are optimized by the use of the C language and unrolled loops (BLAS1-like); hardware-optimized BLAS1 routines may be integrated. The paper includes hints for programmers on how to use the libraries with both Fortran and C programs.
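The dimension-by-dimension hypercube exchange that such libraries build on can be expressed in a few lines on top of MPI. The sketch below uses mpi4py as an illustrative stand-in for the paper's Fortran/C kernels and accumulates a global vector sum across 2^d ranks.

```python
# Sketch: hypercube (dimension-by-dimension) exchange over 2^d MPI ranks,
# accumulating a global vector sum. mpi4py is an illustrative stand-in for
# the Fortran/C communication kernels described in the paper.
# Run with a power-of-two rank count, e.g.: mpiexec -n 8 python hypercube.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
d = size.bit_length() - 1                 # hypercube dimension, size == 2^d

x = np.full(4, float(rank))               # each rank's local contribution

for k in range(d):
    partner = rank ^ (1 << k)             # flip bit k to find the neighbor
    recv = np.empty_like(x)
    comm.Sendrecv(x, dest=partner, recvbuf=recv, source=partner)
    x += recv                             # after d steps every rank holds the sum

print(rank, x[0])                         # == sum(range(size)) on every rank
```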
7

Visualization Tools for 2D and 3D Finite Element Programs - User's Manual

Pester, Matthias 04 April 2006 (has links) (PDF)
This paper deals with the visualization of numerical results as a very convenient method to understand and evaluate a solution which has been calculated as a set of millions of numerical values. One of the central research fields of the Chemnitz SFB 393 is the analysis of parallel numerical algorithms for large systems of linear equations arising from differential equations (e.g. in solid and fluid mechanics). When solving large problems on massively parallel computers, it becomes increasingly impractical to write numerical data from the distributed memory of the parallel computer to disk for later postprocessing. However, the developer of algorithms is interested in an on-line response from his algorithms. Both the visual and the numerical response of the running program may be evaluated by the user to decide how to switch or interactively adjust certain parameters that may influence the solution process. The paper gives a survey of the current programmer and user interfaces that are used in our various 2D and 3D parallel finite element programs for the visualization of the solution.
8

Multi-time Scales Stochastic Dynamic Processes: Modeling, Methods, Algorithms, Analysis, and Applications

Pedjeu, Jean-Claude 01 January 2012 (has links)
By introducing a concept of dynamic processes operating under multiple time scales in science and engineering, a mathematical model is formulated that leads to a system of multi-time scale stochastic differential equations. The classical Picard-Lindelöf successive approximation scheme is extended to the model validation problem, namely, the existence and uniqueness of the solution process. Naturally, this leads to the problem of finding closed-form solutions of both linear and nonlinear multi-time scale stochastic differential equations. To illustrate the scope of the ideas and the presented results, multi-time scale stochastic models for ecological and epidemiological processes in population dynamics are exhibited. Without loss of generality, the modeling and analysis of three-time-scale fractional stochastic differential equations is followed by the development of a numerical algorithm for multi-time scale dynamic equations, based on the idea of numerical integration in the context of the notion of multi-time scale integration. The multi-time scale approach is then applied to the study of higher-order stochastic differential equations (HOSDE). This study utilizes the variation-of-parameters technique to develop a method for finding closed-form solution processes of classes of HOSDE. The probability distribution of the solution processes in the context of second-order equations is then investigated.
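To make the multi-time scale idea concrete, here is a minimal Euler-Maruyama sketch for a toy system with one slow and one fast component; the drift and diffusion terms and the scale-separation parameter eps are illustrative assumptions, not the models analyzed in the dissertation.

```python
# Sketch: Euler-Maruyama for a toy slow-fast SDE pair
#   dx = -x dt           + 0.1 dW1                (slow component)
#   dy = -(y - x)/eps dt + 0.1/sqrt(eps) dW2      (fast component)
# eps and the coefficients are illustrative assumptions.
import numpy as np

eps, dt, T = 0.01, 1e-4, 1.0
n = int(T / dt)
rng = np.random.default_rng(0)
x, y = 1.0, 0.0
for _ in range(n):
    dW1, dW2 = rng.standard_normal(2) * np.sqrt(dt)
    x += -x * dt + 0.1 * dW1
    y += -(y - x) / eps * dt + 0.1 / np.sqrt(eps) * dW2
print(x, y)   # the fast variable y tracks the slow variable x
```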
9

Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

January 2016 (has links)
We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin’s procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most off-diagonal entries, reducing the matrix fill effort from O(N^2) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting the multiresolution levels of the total transformation matrix, the solution precision can be enhanced without noticeably sacrificing matrix sparsity or memory consumption. The unified fast scattering algorithm for dielectric random rough surfaces reduces asymptotically to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posedness. Compared with previous publications and laboratory measurements, good agreement is observed. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2016
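The sparsification step described above — transform a dense, smooth kernel matrix with a fast wavelet transform and drop small coefficients — can be illustrated in a few lines. The sketch below uses PyWavelets with a Coiflet ("coif2") on a model kernel; the kernel, wavelet order, level, and threshold are illustrative assumptions, not the SFWT configuration of the thesis.

```python
# Sketch: sparsifying a smooth dense kernel matrix with a 2-D Coiflet
# transform and hard thresholding. Kernel, wavelet, and threshold are
# illustrative; the thesis applies this to MoM impedance matrices.
import numpy as np
import pywt

n = 256
t = np.linspace(0, 1, n)
A = 1.0 / (1.0 + 50.0 * np.abs(t[:, None] - t[None, :]))  # smooth model kernel

coeffs = pywt.wavedec2(A, "coif2", level=4)        # 2-D fast wavelet transform
arr, slices = pywt.coeffs_to_array(coeffs)
thresh = 1e-6 * np.abs(arr).max()
arr[np.abs(arr) < thresh] = 0.0                    # hard thresholding
print("kept:", np.count_nonzero(arr) / arr.size)   # fraction of nonzeros retained

A2 = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                   "coif2")
print("relative error:", np.linalg.norm(A2 - A) / np.linalg.norm(A))
```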
