21

An Approach for the Adaptive Solution of Optimization Problems Governed by Partial Differential Equations with Uncertain Coefficients

Kouri, Drew 05 September 2012 (has links)
Using derivative-based numerical optimization routines to solve optimization problems governed by partial differential equations (PDEs) with uncertain coefficients is computationally expensive due to the large number of PDE solves required at each iteration. In this thesis, I present an adaptive stochastic collocation framework for the discretization and numerical solution of these PDE-constrained optimization problems. This adaptive approach is based on dimension-adaptive sparse grid interpolation and employs trust regions to manage the adapted stochastic collocation models. Furthermore, I prove the convergence of sparse grid collocation methods applied to these optimization problems, as well as the global convergence of the retrospective trust region algorithm under weakened assumptions on gradient inexactness. In fact, if one can bound the error between actual and modeled gradients using reliable and efficient a posteriori error estimators, then the global convergence of the proposed algorithm follows. Moreover, I describe a high-performance implementation of my adaptive collocation and trust region framework in the C++ programming language with the Message Passing Interface (MPI). Many PDE solves are required to accurately quantify the uncertainty in such optimization problems; it is therefore essential to choose appropriately inexpensive approximate models and large-scale nonlinear programming techniques throughout the optimization routine. Numerical results for the adaptive solution of these optimization problems are presented.
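The trust-region management of inexact surrogate models described here can be illustrated with a minimal sketch. This is a generic textbook-style step, not Kouri's retrospective algorithm; `f` stands for the expensive objective (requiring many PDE solves) and `model` for the cheap sparse-grid surrogate, both hypothetical names:

```python
import numpy as np

def trust_region_step(x, f, model, grad_model, radius, eta=0.1):
    """One accept/reject step of a basic trust-region method driven by a
    cheap surrogate model (e.g. a sparse-grid collocation approximation)."""
    g = grad_model(x)
    step = -radius * g / (np.linalg.norm(g) + 1e-14)  # Cauchy-type step
    predicted = model(x) - model(x + step)            # reduction promised by surrogate
    actual = f(x) - f(x + step)                       # reduction actually achieved
    rho = actual / max(predicted, 1e-14)              # guard against non-descent models
    if rho >= eta:                                    # surrogate was trustworthy here
        return x + step, min(2.0 * radius, 10.0)      # accept step, widen region
    return x, 0.5 * radius                            # reject step, shrink region
```

When the ratio rho is poor, refining the sparse grid itself (rather than only shrinking the radius) is where the adaptivity of the framework enters.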
22

Investigation Into Adaptive Structure In Software-embedded Products From Cybernetic Perspective

Yurdakul, Ertugrul Emin 01 May 2007 (has links) (PDF)
This study investigates the concept of adaptivity in relation to the evolution of software and hence of software-embedded products. While laying out the benefits of adaptivity in products, it discusses the potential future threats engendered by the change observed in the functionality principles of adaptive products. The discussion is based on cybernetic theory, which redefined control technology in the 20th century. Accordingly, a literature survey on cybernetic theory and on the evolution of software from conventional to adaptive structures is presented. The changes in the functionality principles of adaptive systems, and the similarities these changes show with living autonomous systems, are also investigated. The roles of product and user are redefined in relation to changing control mechanisms. Then, the new direction that the conventional product-user relationship has taken with adaptive products is examined. Finally, the potential future threats this new direction might bring are discussed with the help of two control-conflict situations.
23

Adaptive modeling of plate structures

Bohinc, Uroš 05 May 2011 (has links) (PDF)
The primary goal of the thesis is to provide some answers to the questions related to the key steps in the process of adaptive modeling of plates. Since adaptivity depends on reliable error estimates, a large part of the thesis is devoted to the derivation of computational procedures for discretization error estimates as well as model error estimates. A practical comparison of some of the established discretization error estimates is made. Special attention is paid to the equilibrated residuum method, which has the potential to be used both for discretization error and model error estimates. It should be emphasized that model error estimates are quite hard to obtain, in contrast to discretization error estimates. The concept of model adaptivity for plates is implemented in this work on the basis of the equilibrated residuum method and a hierarchic family of plate finite element models. The finite elements used in the thesis range from thin plate to thick plate finite elements; the latter are based on a newly derived higher-order plate theory, which includes through-the-thickness stretching. The model error is estimated by local element-wise computations. As all the finite elements representing the chosen plate mathematical models are re-derived to share the same interpolation bases, the difference between the local computations can be attributed mainly to the model error. This choice of finite elements enables effective computation of the model error estimate and improves the robustness of the adaptive modeling; the discretization error can thus be computed by an independent procedure. Many numerical examples are provided to illustrate the performance of the derived plate elements, the discretization error procedures, and the modeling error procedure. Since the basic goal of modeling in engineering is to produce an effective model that yields the most accurate results from the minimum input data, the need for adaptive modeling will always be present. In this view, the present work is a contribution to the final goal of finite element modeling of plate structures: a fully automatic adaptive procedure for the construction of an optimal computational model (an optimal finite element mesh and an optimal choice of plate model for each element of the mesh) for a given plate structure.
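To make the element-wise model error idea concrete, here is a minimal sketch, assuming hypothetical local solvers for a coarse (thin) and a richer (thick/higher-order) plate model that share the same interpolation basis, so the local discrepancy can be attributed to the model error:

```python
import numpy as np

def model_error_indicators(elements, solve_coarse, solve_rich):
    # Local, element-wise comparison of two plate models; because both use
    # the same interpolation basis, the difference reflects the model error
    # rather than the discretization error.
    return {e: np.linalg.norm(solve_rich(e) - solve_coarse(e)) for e in elements}

def select_plate_models(indicators, tol):
    # Upgrade to the richer plate model only where the indicator is large.
    return {e: "rich" if err > tol else "coarse" for e, err in indicators.items()}
```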
24

Adaptive Spline-based Finite Element Method with Application to Phase-field Models of Biomembranes

Jiang, Wen January 2015 (has links)
Interfaces play a dominant role in governing the response of many biological systems, and they pose many challenges to traditional finite element methods. Sharp-interface models require the finite element mesh to align with surfaces of discontinuity, while diffuse-interface models replace the sharp interface with continuous variations of an order parameter, at significant computational cost. To overcome these difficulties, we focus on developing a computationally efficient spline-based finite element method for interface problems.

A key challenge when employing B-spline basis functions in finite element methods is the robust imposition of Dirichlet boundary conditions. We begin by examining the weak enforcement of such conditions for B-spline basis functions, with application to both second- and fourth-order problems, based on Nitsche's approach. The use of spline-based finite elements is further examined along with a Nitsche technique for enforcing constraints on an embedded interface. We show how the choice of weights and stabilization parameters in the Nitsche consistency terms greatly influences the accuracy and robustness of the method. In the presence of a curved interface, we employ a hierarchical local refinement approach to improve the geometric representation of the interface and obtain optimal rates of convergence.

In multiple dimensions, a spline basis is obtained as a tensor product of the one-dimensional basis. This necessitates a rectangular grid that cannot be refined locally in regions of embedded interfaces. To address this issue, we develop an adaptive spline-based finite element method that employs hierarchical refinement and coarsening techniques. The refinement and coarsening process guarantees linear independence and maintains the regularity of the basis functions. We further propose an efficient data transfer algorithm for both refinement and coarsening that yields accurate results.

The adaptive approach is applied to vesicle modeling, allowing three-dimensional simulations to proceed efficiently. In this work, we employ a continuum approach to model the evolution of microdomains on the surface of Giant Unilamellar Vesicles. The chemical energy is described by a Cahn-Hilliard-type density functional that characterizes the line energy between domains of different species. The generalized Canham-Helfrich-Evans model provides a description of the mechanical energy of the vesicle membrane. This coupled model is cast in diffuse-interface form using the phase-field framework. The effect of the coupling is seen through several numerical examples of domain formation coupled to vesicle shape changes.
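For reference, the classical Nitsche formulation for weakly imposing a Dirichlet condition u = g on the boundary of the Poisson problem -Δu = f takes the following standard textbook form (the thesis's spline-specific weights and stabilization choices refine this):

```latex
a_h(u, v) = \int_\Omega \nabla u \cdot \nabla v \, dx
          - \int_{\partial\Omega} (\partial_n u)\, v \, ds
          - \int_{\partial\Omega} (\partial_n v)\, u \, ds
          + \frac{\gamma}{h} \int_{\partial\Omega} u\, v \, ds

\ell_h(v) = \int_\Omega f\, v \, dx
          - \int_{\partial\Omega} (\partial_n v)\, g \, ds
          + \frac{\gamma}{h} \int_{\partial\Omega} g\, v \, ds
```

The method is consistent for any stabilization parameter γ, but γ must be chosen large enough to ensure coercivity — precisely the sensitivity to weights and stabilization parameters that the abstract highlights.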
25

Finite Element Methods for Interface Problems with Mesh Adaptivity

Zhang, Ziyu January 2015 (has links)
This dissertation addresses interface problems simulated with the finite element method (FEM) with mesh adaptivity. More specifically, we concentrate on strategies that adaptively modify the mesh and on the associated data transfer issues.

In finite element simulations there often arises the need to change the mesh and continue the simulation on a new mesh. Analysts encounter this when they adaptively refine the mesh to reduce the computational cost, smooth distorted elements to improve system conditioning, or introduce new surfaces and change the domain in simulations of fracture problems. In such circumstances, the transfer of data from the old mesh to the new one is of crucial importance, especially for nonlinear problems. We are concerned in this work with contact problems with adaptive re-meshing and with fracture problems modeled with the eXtended finite element method (X-FEM). For the former, the transfer of surface data is built upon the technique of parallel transport, and the error of this transfer strategy is investigated through classic benchmark tests. A transfer scheme based on a least squares problem is also proposed to transfer the bulk data when nearly incompressible hyperelastic materials are employed. For the latter, we facilitate the transfer of internal variables by making partial elements use the same quadrature points as their uncut parent elements while adjusting the quadrature weights via the solution of moment fitting equations. The proposed scheme avoids the complicated remapping of internal variables between two different sets of quadrature points. A number of numerical examples demonstrate the robustness and accuracy of the proposed approaches.

Another renowned technique for simulating fracture problems is based on the phase-field formulation, where a set of coupled mechanics and phase-field equations is solved via FEM without modeling crack geometries. However, losing the ability to model distinct surfaces in the phase-field formulation has drawbacks, such as difficulty simulating contact on crack surfaces and poorly conditioned stiffness matrices. On the other hand, using the pure X-FEM in fracture simulations mandates the calculation of the direction and increment of crack surfaces at each step, introducing the intricacies of tracing crack evolution. We therefore propose combining the phase-field and X-FEM approaches, via a novel medial-axis algorithm, to exploit their individual benefits: complex crack geometries can still be captured while crack surfaces are explicitly modeled by modifying the mesh with the X-FEM.
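The moment-fitting step mentioned above admits a compact sketch: keep the parent element's quadrature points and solve a small linear system so that each basis polynomial integrates to its known moment over the cut subdomain. The 1-D toy check below is illustrative only; the points, basis, and moments are hypothetical stand-ins, not the dissertation's actual test cases:

```python
import numpy as np

def moment_fitted_weights(points, basis, moments):
    # A[i, j] = p_i(x_j); solve A w = moments (least squares if non-square).
    A = np.array([[p(x) for x in points] for p in basis])
    w, *_ = np.linalg.lstsq(A, np.asarray(moments), rcond=None)
    return w

# Toy 1-D check: keep three parent Gauss points, but re-weight them so that
# 1, x, x^2 integrate exactly over the "cut" sub-interval [0, 0.5].
points = [-0.774596669, 0.0, 0.774596669]     # parent-element Gauss points
basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]
moments = [0.5, 0.125, 1.0 / 24.0]            # exact integrals over [0, 0.5]
weights = moment_fitted_weights(points, basis, moments)
# sum_j weights[j] * p(points[j]) now reproduces each moment above.
```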
26

Developing a modular extendable tool for Serious Games content creation: Combining existing techniques with a focus on narrative generation and player adaptivity

Declercq, Julian January 2018 (has links)
A large part of any game development process consists of content creation, which costs both time and effort. Procedural generation techniques exist to help with narrative generation, but they are scattered and require extensive manual labour to set up. On top of that, Serious Games content created with these techniques tends to be uninteresting and to lack variety, which can ultimately lead to the Serious Games missing their intended purpose. This paper delivers a prototype of a modular tool that aims to solve these problems by combining existing narrative generation techniques with commonsense database knowledge and player adaptivity techniques. The prototype tool implements Ceptre as its core module for the generation of stories and ConceptNet as a commonsense knowledge database. Two studies were conducted with content created by the tool: one tested whether generation rules created from commonsense knowledge can be used to flesh out stories, while the other evaluated whether adapted stories yield better scores. The results of the first test show that adding rules retrieved through commonsense knowledge did not improve story quality, but that such rules can be used to extend stories without compromising story quality. They also show that, ideally, an extensive natural language processing module should be used to present the stories rather than a basic implementation. The statistically insignificant result of the second test was potentially caused by the compromises made when conducting it; repeating the test with real game data, rather than data from the compromised personality test, might be preferable.
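As a hedged illustration of pulling commonsense relations for rule generation, ConceptNet's public REST API can be queried as below. The endpoint and field names follow the public API's documented shape and may differ from what the prototype tool actually uses:

```python
import requests

def related_concepts(term, limit=10):
    # Query ConceptNet's public REST API (api.conceptnet.io) for
    # commonsense edges around an English-language term.
    url = f"http://api.conceptnet.io/c/en/{term}"
    data = requests.get(url, params={"limit": limit}).json()
    return [(e["rel"]["label"], e["start"]["label"], e["end"]["label"])
            for e in data.get("edges", [])]

# e.g. related_concepts("sword") might yield edges such as
# ("UsedFor", "sword", "fighting"), usable as raw material for story rules.
```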
27

ASBJOIN: an adaptive strategy for queries involving join operators on Linked Data

Macedo Sousa Maia 31 October 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Motivated by the success of Linked Data and driven by the growing number of RDF data sources available on the Web, new challenges for query processing are emerging, especially in distributed settings. The Linked Data environment makes it possible to execute federated queries, which join data provided by multiple, often unstable, sources. In this sense, the design of new algorithms and adaptive strategies for executing joins efficiently is a major challenge. In this work, we present a solution for the adaptive execution of join operations in federated queries. The adaptive join between data held by distributed sources is driven by statistics collected at runtime; one such statistic for a given source is, for example, the elapsed time taken to return a result. To keep these statistics current, a module collects them during query execution, working in parallel with the query processor, and stores them in a local database that we call the statistics catalog.
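A minimal sketch of the statistics-catalog idea (names hypothetical, not ASBJOIN's actual implementation): record per-source elapsed times at runtime and let the current statistics drive the order in which sources are probed for the join:

```python
import time

class StatisticsCatalog:
    # Local catalog of per-source statistics collected while queries run.
    def __init__(self):
        self.elapsed = {}  # source -> smoothed response time (seconds)

    def record(self, source, seconds, alpha=0.5):
        prev = self.elapsed.get(source, seconds)
        self.elapsed[source] = alpha * seconds + (1 - alpha) * prev

    def fastest_first(self, sources):
        # Unknown sources default to 0.0, i.e. they are probed optimistically.
        return sorted(sources, key=lambda s: self.elapsed.get(s, 0.0))

def adaptive_fetch(sources, fetch, catalog):
    # Probe sources in the order suggested by current statistics and
    # keep the catalog updated as results arrive.
    results = {}
    for s in catalog.fastest_first(sources):
        t0 = time.perf_counter()
        results[s] = fetch(s)
        catalog.record(s, time.perf_counter() - t0)
    return results
```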
28

Enrichment methods for Navier-Stokes-type problems

Krust, Arnaud 31 October 2012 (has links)
We are interested in fluid dynamics problems exhibiting a boundary layer, and we investigate enriched finite element methods for this kind of problem. In particular, we present a new adaptive enrichment algorithm in which the enrichment functions are built without a priori knowledge of the solution. This approach is compared to both p-adaptivity and h-adaptivity; we show that it can outperform the former and serve as an effective complement to the latter. Numerical experiments are presented on 2D scalar problems (advection-diffusion, Burgers) and on the Navier-Stokes equations.
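The enriched approximation underlying such methods has the standard partition-of-unity form; the novelty here lies in how the enrichment function is built adaptively rather than chosen from a priori knowledge of the boundary layer:

```latex
u_h(x) = \sum_{i} N_i(x)\, u_i \;+\; \sum_{j \in \mathcal{E}} N_j(x)\, \psi(x)\, a_j
```

Here the N_i are the standard finite element shape functions, ψ is the enrichment function, and the extra unknowns a_j live only on the enriched node set ℰ.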
29

Energy-Aware Data Management on NUMA Architectures

Kissinger, Thomas 29 May 2017 (has links) (PDF)
The ever-increasing need for more computing and data processing power demands a continuous and rapid growth of power-hungry data center capacities all over the world. As a first study in 2008 revealed, the energy consumption of such data centers is becoming a critical problem, with their power consumption projected to double every five years. However, a follow-up study released in 2016 points out that this threatening trend was dramatically throttled within the past years, due to the increased energy efficiency actions taken by data center operators. Furthermore, the authors of the study emphasize that making and keeping data centers energy-efficient is a continuous task, because more and more computing power is demanded from the same or an even lower energy budget, and the threatening energy consumption trend will resume as soon as energy efficiency research efforts and their market adoption are reduced. An important class of applications running in data centers are data management systems, which are a fundamental component of nearly every application stack. While these systems were traditionally designed as disk-based databases optimized for keeping disk accesses as low as possible, modern state-of-the-art database systems are main-memory-centric and store the entire data pool in main memory, which replaces the disk as the main bottleneck. To scale up such in-memory database systems, non-uniform memory access (NUMA) hardware architectures are employed, which face decreased bandwidth and increased latency when accessing remote memory compared to local memory. In this thesis, we investigate energy awareness aspects of large scale-up NUMA systems in the context of in-memory data management systems. To do so, we pick up the idea of a fine-grained data-oriented architecture and improve the concept so that it keeps pace with the increased absolute performance of a pure in-memory DBMS and scales up on large NUMA systems. To achieve this goal, we design and build ERIS, the first scale-up in-memory data management system designed from scratch to implement a data-oriented architecture. With the help of the ERIS platform, we explore our novel core concept for energy awareness, Energy Awareness by Adaptivity: software, and especially database systems, must respond quickly to environmental changes (i.e., workload changes) by adapting themselves so as to enter a state of low energy consumption. We present the hierarchically organized Energy-Control Loop (ECL), a reactive control loop, and provide two concrete implementations of our Energy Awareness by Adaptivity concept, namely the hardware-centric Resource Adaptivity and the software-centric Storage Adaptivity. Finally, we give an exhaustive evaluation of the scalability of ERIS as well as of our adaptivity facilities.
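The Energy Awareness by Adaptivity concept can be caricatured as a reactive feedback loop. The sketch below is a generic illustration (function names and thresholds are hypothetical), not the ECL's actual hierarchical design:

```python
import time

def energy_control_loop(read_load, set_active_cores, max_cores,
                        high=0.8, low=0.3, interval=1.0):
    # Reactive control loop in the spirit of the ECL: observe the workload
    # periodically and adapt the resources (here, the number of active
    # cores) to steer the system toward a state of low energy consumption.
    cores = max_cores
    while True:
        load = read_load()                  # e.g. utilization or queue length
        if load > high and cores < max_cores:
            cores += 1                      # scale up under pressure
        elif load < low and cores > 1:
            cores -= 1                      # scale down to save energy
        set_active_cores(cores)
        time.sleep(interval)
```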
30

Cartesian grid FEM (cgFEM): High performance h-adaptive FE analysis with efficient error control. Application to structural shape optimization

Nadal Soriano, Enrique 14 February 2014 (has links)
More and more challenging designs are required every day in today's industries. The traditional trial-and-error procedure commonly used for designing mechanical parts is no longer valid, since it slows down the design process and yields suboptimal designs. For structural components, one alternative consists in using shape optimization processes, which provide optimal solutions. However, these techniques require a high computational effort and extremely efficient and robust Finite Element (FE) programs. FE software companies are aware that their current commercial products must improve in this sense and devote considerable resources to improving their codes. In this work we propose the Cartesian Grid Finite Element Method (cgFEM) as a tool for efficient and robust numerical analysis. The cgFEM methodology developed in this thesis uses the synergy of a variety of techniques to achieve this purpose, but the two main ingredients are the use of Cartesian FE grids independent of the geometry of the component to be analyzed and an efficient hierarchical data structure. These two features give the cgFEM code the means to outperform commercial FE codes in efficiency. As indicated in [1, 2], in order to guarantee the convergence of a structural shape optimization process we need to control the error of each geometry analyzed; the cgFEM code therefore also incorporates appropriate error estimators, specifically adapted to the cgFEM framework to further increase its efficiency. This work introduces a solution recovery technique, denoted SPR-CD, that in combination with the Zienkiewicz-Zhu error estimator [3] provides very accurate error measures of the FE solution. Additionally, we have developed error estimators and numerical bounds in quantities of interest based on the SPR-CD technique to allow for efficient control of the quality of the numerical solution. Regarding error estimation, we also present three new upper error bounding techniques for the error in the energy norm of the FE solution, based on recovery processes. Furthermore, this work presents an error estimation procedure to control the quality of the recovered stress field provided by the SPR-CD technique. Since the recovered stress field is commonly more accurate and has a higher convergence rate than the FE solution, we propose substituting the recovered solution for the raw FE solution to decrease the computational cost of the numerical analysis. All these improvements are reflected in the numerical examples of structural shape optimization problems presented in this thesis. These numerical analyses clearly show the improved behavior of the cgFEM technology over the classical FE implementations commonly used in industry.
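The Zienkiewicz-Zhu estimator referenced here measures, element by element, the energy-norm distance between the raw finite element stress σ_h and a recovered stress σ* (here produced by the SPR-CD technique), in the standard form:

```latex
\eta_e^2 = \int_{\Omega_e} (\sigma^* - \sigma_h)^T D^{-1} (\sigma^* - \sigma_h)\, d\Omega,
\qquad
\eta^2 = \sum_{e} \eta_e^2
```

with D the elasticity matrix; the global estimate η is what drives the h-adaptive refinement and the error control of each geometry in the optimization loop.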
Nadal Soriano, E. (2014). Cartesian grid FEM (cgFEM): High performance h-adaptive FE analysis with efficient error control. Application to structural shape optimization [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/35620
