291

Towards an adaptive solver for high-dimensional PDE problems on clusters of multicore processors

Gustafsson, Magnus January 2012 (has links)
Accurate numerical simulation of time-dependent phenomena in many spatial dimensions is a challenging computational task apparent in a vast range of application areas, for instance quantum dynamics, financial mathematics, systems biology and plasma physics. Particularly problematic is that the number of unknowns in the governing equations (the number of grid points) grows exponentially with the number of spatial dimensions introduced, often referred to as the curse of dimensionality. This limits the range of problems that we can solve, since the computational effort and requirements on memory storage directly depend on the number of unknowns for which to solve the equations. In order to push the limit of tractable problems, we are developing an implementation framework, HAParaNDA, for high-dimensional PDE problems. By using high-order accurate schemes and adaptive mesh refinement (AMR) in space, we aim at reducing the number of grid points used in the discretization, thereby enabling the solution of larger and higher-dimensional problems. Within the framework, we use structured grids for spatial discretization and a block-decomposition of the spatial domain for parallelization and load balancing. For integration in time, we use exponential integration, although the framework allows the flexibility of other integrators to be implemented as well. Exponential integrators using the Lanczos or the Arnoldi algorithm have proven a successful and efficient approach for large problems. Using a truncation of the Magnus expansion, we can attain high levels of accuracy in the solution. As an example application, we have implemented a solver for the time-dependent Schrödinger equation using this framework. We provide scaling results for small and medium-sized clusters of multicore nodes, and show that the solver fulfills the expected rate of convergence. / eSSENCE / UPMARC
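As a rough illustration of Krylov-based exponential integration of the kind mentioned above (a minimal sketch, not the HAParaNDA implementation), the Python snippet below approximates one time step psi <- exp(-i*dt*H) psi of the time-dependent Schrödinger equation using a short Lanczos recurrence; the Hamiltonian H, time step dt and basis size m are placeholders chosen for illustration.

    import numpy as np
    from scipy.linalg import expm

    def lanczos_expv(H, v, dt, m=20):
        """Approximate exp(-1j*dt*H) @ v using an m-step Lanczos basis (H Hermitian)."""
        n = v.shape[0]
        V = np.zeros((n, m), dtype=complex)
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        b0 = np.linalg.norm(v)
        V[:, 0] = v / b0
        w = H @ V[:, 0]
        alpha[0] = np.real(np.vdot(V[:, 0], w))
        w = w - alpha[0] * V[:, 0]
        for j in range(1, m):
            beta[j - 1] = np.linalg.norm(w)
            if beta[j - 1] < 1e-14:              # invariant subspace reached: truncate the basis
                m, V, alpha, beta = j, V[:, :j], alpha[:j], beta[:j - 1]
                break
            V[:, j] = w / beta[j - 1]
            w = H @ V[:, j] - beta[j - 1] * V[:, j - 1]
            alpha[j] = np.real(np.vdot(V[:, j], w))
            w = w - alpha[j] * V[:, j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)   # projected tridiagonal operator
        return b0 * (V @ expm(-1j * dt * T)[:, 0])                  # lift the small exponential back

The appeal of this class of integrators is that only matrix-vector products with H are needed, so H can remain a large sparse or matrix-free operator distributed over a cluster.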
292

Cut finite element methods for incompressible flows with unfitted interfaces

Holmberg, Carl January 2018 (has links)
Problems with time-evolving domains occur frequently in computational fluid dynamics and many other fields of science and engineering. Unfitted methods, where the computational mesh does not conform to the geometry, are of great interest for handling such problems, since they remove the burden of mesh generation. We work towards the goal of developing an unfitted solver for the Navier-Stokes equations on time-evolving domains by developing and presenting cut finite element (CutFEM) splitting methods for solving the Navier-Stokes equations. These CutFEM splitting methods use Nitsche's method for incorporating boundary conditions and employ patch-based ghost penalty stabilization of the cut elements to achieve stability and optimal order error estimates. Numerical benchmarks are used to verify the methods and implementations. The methods are tested against a problem with known analytical solution, the Taylor-Green vortex, and also compared to the classical Deutsche Forschungsgemeinschaft (DFG) benchmark problem with channel flow around a cylinder. For both benchmarks, the methods were shown to be stable when satisfying the parabolic Courant–Friedrichs–Lewy (CFL) condition, and to produce optimal convergence rates.
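For orientation only, Nitsche's method is easiest to state for a scalar model problem: for -Δu = f with u = g on an unfitted boundary Γ, the boundary condition is imposed weakly through boundary integrals rather than through the function space. A standard, textbook-style form (assumed here, not the thesis's exact Navier-Stokes formulation), with mesh size h and user-chosen penalty parameters γ and γ_g, reads

    a_h(u, v) = \int_\Omega \nabla u \cdot \nabla v \, dx - \int_\Gamma (\partial_n u)\, v \, ds - \int_\Gamma u\, (\partial_n v) \, ds + \frac{\gamma}{h} \int_\Gamma u\, v \, ds,
    L(v) = \int_\Omega f\, v \, dx - \int_\Gamma g\, (\partial_n v) \, ds + \frac{\gamma}{h} \int_\Gamma g\, v \, ds,
    j_h(u, v) = \sum_{F \in \mathcal{F}_G} \gamma_g\, h \int_F [\partial_n u]\, [\partial_n v] \, ds,

where one seeks u_h such that a_h(u_h, v) + j_h(u_h, v) = L(v) for all test functions v, and the ghost penalty j_h acts on jumps [·] of normal derivatives across the faces F of cut elements to control the conditioning and stability problems caused by small cut cells.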
293

Parallelism and efficiency in discrete-event simulation

Bauer, Pavol January 2015 (has links)
Discrete-event models depict systems where a discrete state is repeatedly altered by instantaneous changes in time, the events of the model. Such models have gained popularity in fields such as Computational Systems Biology or Computational Epidemiology due to the high modeling flexibility and the possibility to easily combine stochastic and deterministic dynamics. However, the system size of modern discrete-event models is growing and/or they need to be simulated at long time periods. Thus, efficient simulation algorithms are required, as well as the possibility to harness the compute potential of modern multicore computers. Due to the sequential design of simulators, parallelization of discrete event simulations is not trivial. This thesis discusses event-based modeling and sensitivity analysis and also examines ways to increase the efficiency of discrete-event simulations and to scale models involving deterministic and stochastic spatial dynamics on a large number of processor cores. / UPMARC / eSSENCE
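As a minimal, self-contained example of the event-driven style described above (a toy model, not taken from the thesis), the Python sketch below advances a stochastic birth-death process one event at a time with Gillespie's algorithm; the rate constants are invented for illustration.

    import numpy as np

    def gillespie_birth_death(x0=10, k_birth=1.0, k_death=0.1, t_end=100.0, seed=0):
        """Exact stochastic simulation: each loop iteration executes one event."""
        rng = np.random.default_rng(seed)
        t, x = 0.0, x0
        times, states = [t], [x]
        while t < t_end:
            rates = np.array([k_birth, k_death * x])    # propensities of the two event types
            total = rates.sum()
            if total == 0.0:                            # no event can fire any more
                break
            t += rng.exponential(1.0 / total)           # waiting time to the next event
            x += 1 if rng.random() < rates[0] / total else -1   # choose which event fires
            times.append(t)
            states.append(x)
        return np.array(times), np.array(states)

The strictly sequential loop, in which each event depends on the state left by the previous one, is exactly what makes parallelizing such simulations nontrivial.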
294

Quasi-Arithmetic Filters for Topology Optimization / Kvasiaritmetiska filter för topologioptimering

Hägg, Linus January 2016 (has links)
Topology optimization is a framework for finding the optimal layout of material within a given region of space. In material distribution topology optimization, a material indicator function determines the material state at each point within the design domain. It is well known that naive formulations of continuous material distribution topology optimization problems often lack solutions. To obtain numerical solutions, the continuous problem is approximated by a finite-dimensional problem. The finite-dimensional approximation is typically obtained by partitioning the design domain into a finite number of elements and assigning to each element a design variable that determines the material state of that element. Although the finite-dimensional problem generally is solvable, a sequence of solutions corresponding to ever finer partitions of the design domain may not converge; that is, the optimized designs may exhibit mesh-dependence. Filtering procedures are amongst the most popular methods used to handle the existence issue related to the continuous problem as well as the mesh-dependence related to the finite-dimensional approximation. Over the years, a variety of filters for topology optimization have been presented. To harmonize the use and analysis of filters within the field of topology optimization, we introduce the class of fW-mean filters, which is based on the weighted quasi-arithmetic mean, also known as the weighted generalized f-mean, over some neighborhoods. We also define the class of generalized fW-mean filters that contains the vast majority of filters for topology optimization. In particular, the class of generalized fW-mean filters includes the fW-mean filters, as well as the projected fW-mean filters that are formed by adding a projection step to the fW-mean filters. If the design variables are located in a regular grid, uniform weights are used within each neighborhood, and equal-sized polytope-shaped neighborhoods are used, then a cascade of generalized fW-mean filters can be applied with a computational complexity that is linear in the number of design variables. Detailed algorithms for octagon-shaped neighborhoods in 2D and rhombicuboctahedron-shaped neighborhoods in 3D are provided. The theoretically obtained computational complexity of the algorithm for octagon-shaped neighborhoods in 2D has been numerically verified. By using the same type of algorithm as for filtering, the additional computational complexity for computing derivatives needed in gradient-based optimization is also linear in the number of design variables. To exemplify the use of generalized fW-mean filters in topology optimization, we consider minimization of compliance (maximization of global stiffness) of linearly elastic continuum bodies. We establish the existence of solutions to a version of the continuous minimal compliance problem when a cascade of projected continuous fW-mean filters is included in the formulation. Bourdin's classical existence result for the linear density filter is a special case of this general theorem for projected continuous fW-mean filters. Inspired by the works of Svanberg & Svärd and Sigmund, we introduce the harmonic open-close filter, which is a cascade of four fW-mean filters. We present large-scale numerical experiments indicating that, for minimal compliance problems, the harmonic open-close filter produces almost binary designs, provides independent size control on both material and void regions, and yields mesh-independent designs.
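For reference, in the notation assumed here (design variables ρ_j, neighborhood N_i, weights w_ij summing to one, and a strictly monotone function f), the weighted quasi-arithmetic mean underlying the fW-mean filters is

    (F(\rho))_i = f^{-1}\Big( \sum_{j \in N_i} w_{ij}\, f(\rho_j) \Big),

so the choice f(x) = x recovers the linear density filter, while other monotone choices of f yield, for example, harmonic- or exponential-type means; a projected fW-mean filter additionally composes this with a (possibly smoothed) projection of the filtered values towards 0/1.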
295

Skew-symmetric matrix pencils : stratification theory and tools

Dmytryshyn, Andrii January 2014 (has links)
Investigating the properties, explaining, and predicting the behaviour of a physical system described by a system (matrix) pencil often require the understanding of how canonical structure information of the system pencil may change, e.g., how eigenvalues coalesce or split apart, due to perturbations in the matrix pencil elements. Often these system pencils have different block-partitioning and/or symmetries. We study changes of the congruence canonical form of a complex skew-symmetric matrix pencil under small perturbations. The problem of computing the congruence canonical form is known to be ill-posed: both the canonical form and the reduction transformation depend discontinuously on the entries of a pencil. Thus it is important to know the canonical forms of all such pencils that are close to the investigated pencil. One way to investigate this problem is to construct the stratification of orbits and bundles of the pencils. To be precise, for any problem dimension we construct the closure hierarchy graph for congruence orbits or bundles. Each node (vertex) of the graph represents an orbit (or a bundle) and each edge represents the cover/closure relation. Such a relation means that there is a path from one node to another node if and only if a skew-symmetric matrix pencil corresponding to the first node can be transformed by an arbitrarily small perturbation to a skew-symmetric matrix pencil corresponding to the second node. From the graph it is straightforward to identify more degenerate and more generic nearby canonical structures. A necessary (but not sufficient) condition for one orbit being in the closure of another is that the first orbit has larger codimension than the second one. Therefore we compute the codimensions of the congruence orbits (or bundles). This is done via the solutions of an associated homogeneous system of matrix equations. The complete stratification is done by proving the relation between equivalence and congruence for the skew-symmetric matrix pencils. This relation allows us to use the known result about the stratifications of general matrix pencils (under strict equivalence) in order to stratify skew-symmetric matrix pencils under congruence. Matlab functions to work with skew-symmetric matrix pencils and a number of other types of symmetries for matrices and matrix pencils are developed and included in the Matrix Canonical Structure (MCS) Toolbox.
296

A Drucker-Prager model for elastic contact with friction

Wu, Yunxian, Wang, Yiyun January 2011 (has links)
In numerical contact simulations with friction, the simple Coulomb law is usually employed. Standard plasticity models are difficult to use since the balance enforced on the contact surface typically only involves balance of traction vectors, and does not use the full stress tensor on the interface. In this work we describe an approach that allows for the use of the stress tensor, thus opening up the possibility of using more advanced plasticity models. We exemplify this approach by implementing the Drucker-Prager pressure-sensitive plasticity model.
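For context, the Drucker-Prager criterion referred to above is usually written in terms of the first stress invariant I_1 = tr(σ) and the second deviatoric stress invariant J_2 (the exact parameterization of the material constants α and k varies between references):

    F(\sigma) = \sqrt{J_2} + \alpha I_1 - k \le 0,

which depends on the full stress tensor rather than only the traction vector, hence the need for the interface treatment described above.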
297

Applying Autonomous Methods for Signal Analysis and Correction with Applications in the Ship Industry

EL OUARDI, Abdelghafour January 2018 (has links)
The manufacturing and transportation industries generate large numbers of data sets, which are often of inconsistent quality. The goal of this project is to find the mathematical principles of a system that automatically learns the essential statistical and analytical properties of data sets in order to detect and correct certain classes of faults in real time.
298

Evaluation of SPH for hydrodynamic modeling, using DualSPHysics

Eriksson, Jonas January 2018 (has links)
Computational methods are constantly being invented, improved and adapted to new kinds of problems. This study evaluates a method called Smoothed Particle Hydrodynamics (SPH) for modeling fluid flows around ship hulls. This has been done mainly using an open source code called DualSPHysics. The SPH method has been applied to complex problems as well as simple problems for comparison to well-known phenomena. It is an early study of the method, aimed at discovering how to proceed when studying the method in the future. The results seem promising, especially when computations are made using Graphics Processing Units (GPUs). The DualSPHysics code used in the study shows promise but might need additional functionality before being practically applicable for simulation of ship hulls.
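For orientation, SPH approximates a field A at position r as a kernel-weighted sum over neighboring particles j with masses m_j, densities ρ_j and smoothing length h (standard SPH notation, not anything specific to DualSPHysics):

    A(\mathbf{r}) \approx \sum_j \frac{m_j}{\rho_j}\, A_j\, W(\mathbf{r} - \mathbf{r}_j, h),

where W is a compactly supported smoothing kernel; spatial derivatives in the flow equations are shifted onto W, which is what makes the method mesh-free and well suited to GPU implementations that loop over particle pairs.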
299

Leveraging multicore processors for scientific computing

Tillenius, Martin January 2012 (has links)
This thesis deals with how to develop scientific computing software that runs efficiently on multicore processors. The goal is to find building blocks and programming models that increase the productivity and reduce the probability of programming errors when developing parallel software. In our search for new building blocks, we evaluate the use of hardware transactional memory for constructing atomic floating point operations. Using benchmark applications from scientific computing, we show in which situations this achieves better performance than other approaches. Driven by the needs of scientific computing applications, we develop a programming model and implement it as a reusable library. The library provides a run-time system for executing tasks on multicore architectures, with efficient and user-friendly management of dependencies. Our results from scientific computing benchmarks show excellent scaling up to at least 64 cores. We also investigate how the execution time depends on the task granularity, and build a model for the performance of the task library. / UPMARC / eSSENCE
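To illustrate the task-with-dependencies programming model in general terms (a sketch only, assuming nothing about the API of the thesis's own library), the Python snippet below submits tasks to a thread pool and releases each task only once the tasks it depends on have finished; it assumes the dependencies form a DAG.

    from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

    def run_task_graph(tasks, deps, max_workers=4):
        """tasks: {name: callable}; deps: {name: set of names it depends on}."""
        futures = {}
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            remaining = dict(deps)
            while remaining:
                # A task is ready when every task it depends on has been submitted and finished.
                ready = [t for t, d in remaining.items()
                         if all(p in futures and futures[p].done() for p in d)]
                if not ready:
                    # Nothing can start yet: block until at least one running task finishes.
                    wait(list(futures.values()), return_when=FIRST_COMPLETED)
                    continue
                for t in ready:
                    futures[t] = pool.submit(tasks[t])
                    del remaining[t]
            wait(list(futures.values()))
        return {t: f.result() for t, f in futures.items()}

For example, run_task_graph({'a': fa, 'b': fb, 'c': fc}, {'a': set(), 'b': {'a'}, 'c': {'a'}}) runs fa first and then fb and fc concurrently. A production run-time system of the kind described above would instead trigger tasks from dependency counters as predecessors complete, rather than re-scanning the graph from a central loop.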
300

Parallel algorithms and implementations for genetic analysis of quantitative traits

Jayawardena, Mahen January 2007 (has links)
Many important traits in plants, animals and humans are quantitative, and most such traits are generally believed to be regulated by multiple genetic loci. Standard computational tools for analysis of quantitative traits use linear regression models for relating the observed phenotypes to the genetic composition of individuals in a population. However, using these tools to simultaneously search for multiple genetic loci is very computationally demanding. The main reason for this is the complex nature of the optimization landscape for the multidimensional global optimization problems that must be solved. This thesis describes parallel algorithms and implementation techniques for such optimization problems. The new computational tools will eventually enable genetic analysis exploiting new classes of multidimensional statistical models, potentially resulting in interesting results in genetics. We first describe how the algorithm used for global optimization in the standard, serial software is parallelized and implemented on a grid system. Then, we also describe a parallelized version of the more elaborate global optimization algorithm DIRECT and show how this can be deployed on grid systems and other loosely-coupled architectures. The parallel DIRECT scheme is further developed to exploit both coarse-grained parallelism in grids or clusters and fine-grained, tightly-coupled parallelism in multi-core nodes. The results show that excellent speedup and performance can be achieved on grid systems and clusters, even when using a tightly-coupled algorithm such as DIRECT. Finally, a pilot implementation of a grid portal providing a graphical front-end for our code is implemented. After some further development, this portal can be utilized by geneticists for performing multidimensional genetic analysis of quantitative traits on a regular basis.
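As a small usage illustration of the DIRECT algorithm mentioned above (serial, and unrelated to the parallel implementations developed in the thesis), recent SciPy releases (1.8 and later, assumed available here) ship a DIRECT solver; the objective below is an invented multimodal stand-in for a model-fit criterion.

    import numpy as np
    from scipy.optimize import Bounds, direct

    def objective(x):
        # Toy multimodal surface standing in for a QTL model-fit criterion.
        return np.sum(x**2) + 2.0 * np.sin(5.0 * x[0]) * np.cos(3.0 * x[1])

    result = direct(objective, Bounds([-2.0, -2.0], [2.0, 2.0]), maxfun=2000)
    print(result.x, result.fun)

DIRECT subdivides the search box into hyperrectangles deterministically, so the many independent function evaluations per iteration are natural candidates for the coarse- and fine-grained parallelism explored in the thesis.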
