21

An Approach for the Adaptive Solution of Optimization Problems Governed by Partial Differential Equations with Uncertain Coefficients

Kouri, Drew 05 September 2012
Using derivative-based numerical optimization routines to solve optimization problems governed by partial differential equations (PDEs) with uncertain coefficients is computationally expensive due to the large number of PDE solves required at each iteration. In this thesis, I present an adaptive stochastic collocation framework for the discretization and numerical solution of these PDE-constrained optimization problems. This adaptive approach is based on dimension-adaptive sparse grid interpolation and employs trust regions to manage the adapted stochastic collocation models. Furthermore, I prove the convergence of sparse grid collocation methods applied to these optimization problems, as well as the global convergence of the retrospective trust region algorithm under weakened assumptions on gradient inexactness. In fact, if one can bound the error between actual and modeled gradients using reliable and efficient a posteriori error estimators, then the global convergence of the proposed algorithm follows. Moreover, I describe a high-performance implementation of my adaptive collocation and trust region framework in the C++ programming language with the Message Passing Interface (MPI). Many PDE solves are required to accurately quantify the uncertainty in such optimization problems; it is therefore essential to choose appropriate inexpensive approximate models and large-scale nonlinear programming techniques throughout the optimization routine. Numerical results for the adaptive solution of these optimization problems are presented.
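To make the interplay between model adaptivity and trust regions concrete, here is a minimal Python sketch (not the thesis's C++/MPI code) of a trust-region loop that only trusts the surrogate gradient when its estimated error is small relative to the trust-region radius. The objective, the simulated error estimator, and all parameter values are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def f(x):                      # stand-in for the expensive PDE-constrained objective
    return 0.5 * np.dot(x, x) + np.cos(x[0])

def true_grad(x):              # "true" gradient, normally unavailable cheaply
    g = x.copy()
    g[0] -= np.sin(x[0])
    return g

def model_grad(x, tol):
    # Surrogate gradient from the adapted collocation model, simulated here by
    # perturbing the true gradient by exactly `tol` (playing the role of a
    # reliable a posteriori bound on the gradient error).
    rng = np.random.default_rng(0)
    e = rng.standard_normal(x.size)
    return true_grad(x) + tol * e / np.linalg.norm(e)

def trust_region(x, delta=1.0, kappa=0.5, max_iter=100):
    for _ in range(max_iter):
        g = model_grad(x, kappa * delta)      # adapt model until error <= kappa*delta
        if np.linalg.norm(g) < 1e-8:
            break
        s = -delta * g / np.linalg.norm(g)    # Cauchy-type step to the TR boundary
        pred = -np.dot(g, s)                  # predicted decrease (curvature omitted)
        rho = (f(x) - f(x + s)) / pred        # actual vs. predicted decrease
        if rho > 0.1:                         # accept step, maybe widen the region
            x, delta = x + s, (2.0 * delta if rho > 0.75 else delta)
        else:                                 # reject: shrink region, re-adapt model
            delta *= 0.5
    return x

print(trust_region(np.array([2.0, -1.0])))
```

Tightening the gradient tolerance proportionally to the shrinking radius is what lets a cheap, coarse collocation model be used far from a stationary point and a finer one only near convergence.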
22

Investigation Into Adaptive Structure In Software-embedded Products From Cybernetic Perspective

Yurdakul, Ertugrul Emin 01 May 2007
This study investigates the concept of adaptivity in relation to the evolution of software and hence of software-embedded products. While laying out the benefits of adaptivity in products, it discusses the potential future threats engendered by the changes observed in the functionality principles of adaptive products. The discussion is based upon cybernetic theory, which redefined control technology in the 20th century. Accordingly, a literature survey on cybernetic theory and on the evolution of software from conventional to adaptive structure is presented. The changes in the functionality principles of adaptive systems, and the similarities these changes show with living autonomous systems, are also investigated. The roles of product and user are redefined in relation to changing control mechanisms. Then, the new direction that the conventional product-user relationship has taken with adaptive products is examined. Finally, the potential future threats this new direction might bring are discussed with the help of two control-conflict situations.
23

Adaptive modeling of plate structures

Bohinc, Uroš 05 May 2011
The primary goal of the thesis is to provide some answers to the questions related to the key steps in the process of adaptive modeling of plates. Since adaptivity depends on reliable error estimates, a large part of the thesis is devoted to the derivation of computational procedures for discretization error estimates as well as model error estimates. A practical comparison of several established discretization error estimates is made. Special attention is paid to the equilibrated residuum method, which has the potential to be used both for discretization error and model error estimates. It should be emphasized that model error estimates are quite hard to obtain, in contrast to discretization error estimates. The concept of model adaptivity for plates is implemented in this work on the basis of the equilibrated residuum method and a hierarchic family of plate finite element models. The finite elements used in the thesis range from thin-plate to thick-plate finite elements. The latter are based on a newly derived higher-order plate theory that includes through-the-thickness stretching. The model error is estimated by local element-wise computations. Since all the finite elements representing the chosen plate mathematical models are re-derived to share the same interpolation bases, the difference between the local computations can be attributed mainly to the model error. This choice of finite elements enables effective computation of the model error estimate and improves the robustness of the adaptive modeling; the discretization error can thus be computed by an independent procedure. Many numerical examples are provided as an illustration of the performance of the derived plate elements, the discretization error procedures, and the modeling error procedure. Since the basic goal of modeling in engineering is to produce an effective model that yields the most accurate results with the minimum input data, the need for adaptive modeling will always be present. In this view, the present work is a contribution to the final goal of finite element modeling of plate structures: a fully automatic adaptive procedure for the construction of an optimal computational model (an optimal finite element mesh and an optimal choice of a plate model for each element of the mesh) for a given plate structure.
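As a schematic illustration of the adaptive-modeling decision described above, the sketch below (an assumption-laden reading of the abstract, not the thesis code) lets each element carry a discretization-error estimate and a model-error estimate, and lets the dominant one decide between mesh refinement and model enrichment. The model hierarchy and attribute names are made up for the example.

```python
MODELS = ["thin (Kirchhoff)", "thick (Mindlin)", "higher-order"]   # hierarchic family

def adapt(elements, tol):
    """elements: dicts with 'eta_h' (discretization-error estimate),
    'eta_m' (model-error estimate, e.g. from the equilibrated residuum
    method) and 'model' (index into MODELS)."""
    for e in elements:
        if max(e["eta_h"], e["eta_m"]) < tol:
            continue                          # element is accurate enough
        if e["eta_m"] > e["eta_h"] and e["model"] + 1 < len(MODELS):
            e["model"] += 1                   # model error dominates: enrich model
        else:
            e["refine"] = True                # otherwise refine the mesh locally
    return elements
```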
24

Adaptive Spline-based Finite Element Method with Application to Phase-field Models of Biomembranes

Jiang, Wen January 2015
Interfaces play a dominant role in governing the response of many biological systems, and they pose many challenges to traditional finite element methods. For sharp-interface models, traditional finite element methods require the mesh to align with surfaces of discontinuity. Diffuse-interface models replace the sharp interface with continuous variations of an order parameter, at significant computational cost. To overcome these difficulties, we focus on developing a computationally efficient spline-based finite element method for interface problems.

A key challenge when employing B-spline basis functions in finite element methods is the robust imposition of Dirichlet boundary conditions. We begin by examining the weak enforcement of such conditions for B-spline basis functions, with application to both second- and fourth-order problems, based on Nitsche's approach. The use of spline-based finite elements is further examined along with a Nitsche technique for enforcing constraints on an embedded interface. We show how the choice of weights and stabilization parameters in the Nitsche consistency terms greatly influences the accuracy and robustness of the method. In the presence of a curved interface, we employ a hierarchical local refinement approach to improve the geometric representation of the interface and obtain optimal rates of convergence.

In multiple dimensions, a spline basis is obtained as a tensor product of the one-dimensional basis. This necessitates a rectangular grid that cannot be refined locally in regions of embedded interfaces. To address this issue, we develop an adaptive spline-based finite element method that employs hierarchical refinement and coarsening techniques. The refinement and coarsening process guarantees linear independence and preserves the regularity of the basis functions. We further propose an efficient data transfer algorithm for both refinement and coarsening that yields accurate results.

The adaptive approach is applied to vesicle modeling, which allows three-dimensional simulations to proceed efficiently. In this work, we employ a continuum approach to model the evolution of microdomains on the surface of giant unilamellar vesicles. The chemical energy is described by a Cahn-Hilliard-type density functional that characterizes the line energy between domains of different species. The generalized Canham-Helfrich-Evans model provides a description of the mechanical energy of the vesicle membrane. This coupled model is cast in diffuse-interface form using the phase-field framework. The effect of coupling is seen through several numerical examples of domain formation coupled to vesicle shape changes.
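For readers unfamiliar with Nitsche's approach, the following minimal 1D sketch shows the symmetric Nitsche imposition of Dirichlet conditions for the Poisson problem with plain linear elements. It is only an illustration of the general technique, with an arbitrary penalty value, and not the thesis's spline-based, higher-order implementation.

```python
import numpy as np

def poisson_nitsche(N=32, g0=0.0, g1=0.0, gamma=10.0,
                    f=lambda x: np.pi**2 * np.sin(np.pi * x)):
    # Solve -u'' = f on (0,1) with u(0)=g0, u(1)=g1 imposed weakly.
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    K = np.zeros((N + 1, N + 1)); F = np.zeros(N + 1)
    for e in range(N):                            # standard linear-FEM assembly
        i = [e, e + 1]
        K[np.ix_(i, i)] += (1 / h) * np.array([[1, -1], [-1, 1]])
        F[i] += f(0.5 * (x[e] + x[e + 1])) * h / 2   # midpoint quadrature

    # Boundary traces in dof form: u(0), u'(0), u(1), u'(1).
    v0 = np.zeros(N + 1); v0[0] = 1.0
    d0 = np.zeros(N + 1); d0[0], d0[1] = -1 / h, 1 / h
    vn = np.zeros(N + 1); vn[N] = 1.0
    dn = np.zeros(N + 1); dn[N - 1], dn[N] = -1 / h, 1 / h

    # Symmetric Nitsche terms: consistency, symmetry, and penalty.
    K += np.outer(v0, d0) + np.outer(d0, v0)      # +u'(0)v(0) + v'(0)u(0)
    K -= np.outer(vn, dn) + np.outer(dn, vn)      # -u'(1)v(1) - v'(1)u(1)
    K += (gamma / h) * (np.outer(v0, v0) + np.outer(vn, vn))
    F += g0 * d0 + (gamma / h) * g0 * v0          # matching right-hand side
    F += -g1 * dn + (gamma / h) * g1 * vn
    return x, np.linalg.solve(K, F)

x, u = poisson_nitsche()                          # compare u with sin(pi*x)
```

The penalty gamma must exceed a mesh-independent constant for stability; the abstract's point is that such weights matter even more with spline bases and fourth-order operators.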
25

Finite Element Methods for Interface Problems with Mesh Adaptivity

Zhang, Ziyu January 2015
This dissertation addresses interface problems simulated with the finite element method (FEM) with mesh adaptivity. More specifically, we concentrate on strategies that adaptively modify the mesh and on the associated data transfer issues.

In finite element simulations, the need often arises to change the mesh and continue the simulation on a new one. Analysts encounter this issue when they adaptively refine the mesh to reduce computational cost, smooth distorted elements to improve system conditioning, or introduce new surfaces and change the domain in simulations of fracture problems. In such circumstances, the transfer of data from the old mesh to the new one is of crucial importance, especially for nonlinear problems. This work is concerned with contact problems with adaptive re-meshing and with fracture problems modeled with the eXtended finite element method (X-FEM). For the former, the transfer of surface data is built upon the technique of parallel transport, and the error of this transfer strategy is investigated through classic benchmark tests. A transfer scheme based on a least-squares problem is also proposed to transfer the bulk data when nearly incompressible hyperelastic materials are employed. For the latter, we facilitate the transfer of internal variables by letting partial elements reuse the quadrature points of the uncut parent elements while adjusting the quadrature weights via the solution of moment-fitting equations. The proposed scheme avoids the complicated remapping of internal variables between two different sets of quadrature points. A number of numerical examples demonstrate the robustness and accuracy of the proposed approaches.

Another well-known technique for simulating fracture problems is the phase-field formulation, where a set of coupled mechanics and phase-field equations is solved via FEM without modeling crack geometries. However, losing the ability to model distinct surfaces in the phase-field formulation has drawbacks, such as difficulty simulating contact on crack surfaces and poorly conditioned stiffness matrices. On the other hand, using the pure X-FEM in fracture simulations mandates calculating the direction and increment of the crack surfaces at each step, introducing the intricacies of tracking crack evolution. We therefore propose combining the phase-field and X-FEM approaches, based on a novel medial-axis algorithm, to exploit their individual benefits: complex crack geometries can still be captured while crack surfaces are explicitly modeled by modifying the mesh with the X-FEM.
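The moment-fitting idea for cut elements can be illustrated in a few lines: keep the quadrature points of the uncut parent element and recompute the weights so that they reproduce the exact moments of the cut region. The sketch below is a toy example, not the dissertation's scheme; it uses the unit square cut by the triangle x + y <= 1, whose monomial moments are known in closed form.

```python
import numpy as np
from math import factorial

# 2x2 tensor-product Gauss points of the *uncut* parent element [0,1]^2.
gp = (np.array([-1.0, 1.0]) / np.sqrt(3.0) + 1.0) / 2.0
pts = np.array([(px, py) for px in gp for py in gp])

# Fit weights that reproduce the exact monomial moments of the cut part
# (here the triangle x + y <= 1): integral of x^a y^b = a! b! / (a+b+2)!.
exps = [(0, 0), (1, 0), (0, 1)]               # degree <= 1 for this tiny demo
A = np.array([[x**a * y**b for (x, y) in pts] for (a, b) in exps])
m = np.array([factorial(a) * factorial(b) / factorial(a + b + 2)
              for (a, b) in exps])
w, *_ = np.linalg.lstsq(A, m, rcond=None)     # moment-fitting weights

# Sanity check: integrate f(x, y) = x + y over the triangle (exact: 1/3).
print(w @ (pts[:, 0] + pts[:, 1]))            # -> 0.3333...
```

In practice the monomial degree matches the finite element basis and the moments of the cut region come from a geometric integration routine; internal variables stored at the parent points then never need remapping.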
26

Developing a modular extendable tool for Serious Games content creation: Combining existing techniques with a focus on narrative generation and player adaptivity

Declercq, Julian January 2018
A large part of any game development process consists of content creation, which costs both time and effort. Procedural generation techniques exist to help with narrative generation, but they are scattered and require extensive manual labour to set up. On top of that, Serious Games content created with these techniques tends to be uninteresting and lack variety, which can ultimately lead to the games missing their intended purpose. This paper delivers a prototype of a modular tool that aims to solve these problems by combining existing narrative generation techniques with commonsense database knowledge and player adaptivity techniques. The prototype implements Ceptre as the core story generation module and ConceptNet as the commonsense knowledge database. Two studies were conducted with content created by the tool: one tested whether generation rules created from commonsense knowledge can be used to flesh out stories, while the other evaluated whether adapted stories yield better scores. The results of the first test indicate that adding rules retrieved through commonsense knowledge did not improve story quality, but such rules can be used to extend stories without compromising quality. They also show that, ideally, an extensive natural language processing module should be used to present the stories rather than a basic implementation. The statistically insignificant result of the second test was potentially caused by the compromises made when conducting it; repeating the test with real game data, rather than data from the compromised personality test, would be preferable.
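As a hedged sketch of the commonsense-lookup step, the snippet below queries ConceptNet's public REST API for relations about a story entity, the kind of facts that could be turned into additional generation rules. The endpoint and JSON shape follow the public api.conceptnet.io service as I understand it; error handling and rate limiting are omitted.

```python
import requests

def related_facts(concept, limit=5):
    # Fetch edges about an English-language concept from ConceptNet.
    url = f"http://api.conceptnet.io/c/en/{concept}"
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    return [(e["rel"]["label"], e["start"]["label"], e["end"]["label"])
            for e in edges]

for rel, start, end in related_facts("sword"):
    print(f"{start} --{rel}--> {end}")        # e.g. "sword --IsA--> weapon"
```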
27

ASBJOIN: uma estratégia adaptativa para consultas envolvendo operadores de junção em Linked Data / ASBJOIN: an adaptive strategy for queries involving join operators on Linked Data

Macedo Sousa Maia 31 October 2013
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Motivated by the success of Linked Data and driven by the growing number of RDF data sources available on the Web, new challenges for query processing are emerging, especially in distributed settings. In the Linked Data environment it is possible to execute federated queries, which join data provided by multiple, often unstable, sources. The term federated query is used when solutions must be built from information obtained from different sources. In this sense, the design of new algorithms and adaptive strategies for executing joins efficiently is a major challenge. This work presents a solution for the adaptive execution of join operations in federated queries. The adaptive execution of joins over distributed data sources is based on statistics collected at runtime; one such statistic for a given source is, for example, the elapsed time needed to obtain a result. To keep the statistics current, a module collects this information during query execution, working in parallel with the query processor, and stores it in a local database referred to as the catalog of statistical information.
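The following sketch is an illustrative reading of the abstract, not the ASBJOIN implementation: a catalog of per-source statistics, updated at runtime from observed elapsed times, lets the join operator probe the historically faster source first. All names and the smoothing factor are assumptions.

```python
import time

catalog = {}   # source name -> moving average of observed elapsed time

def timed_fetch(source, fetch):
    t0 = time.perf_counter()
    rows = fetch()                     # e.g. a SPARQL request to the endpoint
    dt = time.perf_counter() - t0
    catalog[source] = 0.8 * catalog.get(source, dt) + 0.2 * dt   # update stats
    return rows

def adaptive_join(left, right, key):
    # Probe the historically faster source first and bind its results into
    # lookups against the slower one (a bind-join-style strategy).
    fast, slow = sorted((left, right), key=lambda s: catalog.get(s["name"], 0.0))
    out = []
    for row in timed_fetch(fast["name"], fast["scan"]):
        for match in timed_fetch(slow["name"], lambda: slow["lookup"](row[key])):
            out.append({**row, **match})
    return out
```

Because the catalog is refreshed on every fetch, the join order can flip mid-query if a previously fast endpoint degrades, which is the adaptive behavior the abstract describes.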
28

Méthodes d'enrichissement pour les problèmes de type Navier-Stokes / Enrichment methods for Navier-Stokes-type problems

Krust, Arnaud 31 October 2012
This work addresses fluid dynamics problems exhibiting a boundary layer. We investigate enriched finite element methods for this kind of problem. In particular, we present a new adaptive enrichment algorithm in which the enrichment functions are built without a priori knowledge of the solution. This approach is compared to both p-adaptivity (polynomial degree adaptation) and h-adaptivity (mesh adaptation). We show that it can be more competitive than the former and can be used effectively as a complement to the latter. Numerical experiments are presented for 2D scalar problems (advection-diffusion, Burgers) and for the Navier-Stokes equations.
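A small numerical experiment (not the thesis algorithm) shows why enrichment pays off for boundary layers: the best approximation of a layer profile improves by orders of magnitude once a single layer-shaped function is added to a coarse piecewise-linear basis. The profile and the value of epsilon below are chosen arbitrarily for the demonstration.

```python
import numpy as np

eps = 0.01
xs = np.linspace(0.0, 1.0, 2001)                  # fine evaluation grid
# Typical advection-diffusion layer profile: ~0 away from x=1, sharp rise at x=1.
u = (np.exp((xs - 1) / eps) - np.exp(-1 / eps)) / (1 - np.exp(-1 / eps))

nodes = np.linspace(0.0, 1.0, 11)                 # coarse mesh, 10 elements
hats = np.maximum(0.0, 1 - np.abs((xs[:, None] - nodes) / (nodes[1] - nodes[0])))

def l2_best_fit_error(basis):
    coef, *_ = np.linalg.lstsq(basis, u, rcond=None)
    return np.sqrt(np.mean((basis @ coef - u) ** 2))

enriched = np.column_stack([hats, np.exp((xs - 1) / eps)])   # add layer function
print("plain   :", l2_best_fit_error(hats))
print("enriched:", l2_best_fit_error(enriched))   # orders of magnitude smaller
```

The thesis's point is subtler: its enrichment functions are constructed adaptively, without assuming the layer shape in advance, whereas this demo hands the exact shape to the basis.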
29

Energy-Aware Data Management on NUMA Architectures

Kissinger, Thomas 29 May 2017
The ever-increasing need for more computing and data processing power demands continuous and rapid growth of power-hungry data center capacities all over the world. As a first study revealed in 2008, the energy consumption of such data centers is becoming a critical problem, since their power consumption was then about to double every 5 years. However, a follow-up study released in 2016 points out that this threatening trend was dramatically throttled in the intervening years by the increased energy-efficiency measures taken by data center operators. The authors of that study emphasize that making and keeping data centers energy-efficient is a continuous task, because more and more computing power is demanded from the same or an even lower energy budget, and that the threatening consumption trend will resume as soon as energy-efficiency research efforts and their market adoption are reduced. An important class of applications running in data centers are data management systems, which are a fundamental component of nearly every application stack. While those systems were traditionally designed as disk-based databases optimized to keep disk accesses as low as possible, modern state-of-the-art database systems are main-memory-centric and store the entire data pool in main memory, which replaces the disk as the main bottleneck. To scale up such in-memory database systems, non-uniform memory access (NUMA) hardware architectures are employed, which face decreased bandwidth and increased latency when accessing remote memory compared to local memory. In this thesis, we investigate energy-awareness aspects of large scale-up NUMA systems in the context of in-memory data management. To do so, we pick up the idea of a fine-grained data-oriented architecture and improve the concept so that it keeps pace with the absolute performance of a pure in-memory DBMS and scales up on large NUMA systems. To achieve this goal, we design and build ERIS, the first scale-up in-memory data management system designed from scratch to implement a data-oriented architecture. With the help of the ERIS platform, we explore our novel core concept for energy awareness: Energy Awareness by Adaptivity. The concept holds that software, and especially database systems, must respond quickly to environmental changes (i.e., workload changes) by adapting themselves so as to enter a state of low energy consumption. We present the hierarchically organized Energy-Control Loop (ECL), a reactive control loop, and provide two concrete implementations of the Energy Awareness by Adaptivity concept: the hardware-centric Resource Adaptivity and the software-centric Storage Adaptivity. Finally, we give an exhaustive evaluation of the scalability of ERIS as well as of our adaptivity facilities.
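A schematic sketch of the Energy Awareness by Adaptivity idea, as one might read it from the abstract: a reactive control loop watches a workload signal and scales the active resources up or down to reach a low-energy state. The thresholds, the metric, and the actuation hook are all illustrative assumptions rather than ERIS internals.

```python
import time

def energy_control_loop(measure_load, set_active_workers,
                        max_workers=64, period=1.0):
    workers = max_workers
    while True:
        load = measure_load()                 # e.g. queue length or utilization
        if load > 0.8 and workers < max_workers:
            workers = min(max_workers, workers * 2)   # scale up under pressure
        elif load < 0.3 and workers > 1:
            workers = max(1, workers // 2)    # consolidate so idle cores can sleep
        set_active_workers(workers)           # actuation hook (illustrative)
        time.sleep(period)                    # reactive, periodic control
```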
30

Model adaptivnog web baziranog sistema za učenje / Model of an adaptive web-based learning system

Brtka Eleonora 20 October 2015
The dissertation deals with adaptive Web-based systems in the field of e-learning. A model is defined whose basic components are the student, the teacher, and the learning materials; the model is extensible and domain-independent. The interaction between the components of the model is examined, especially between students and learning materials. A module is developed for assessing the conformity between the needs of students on the one hand and the content of the learning materials on the other. Distance and similarity measures are used, achieving partial adaptability of the model. The adaptability of the model is extended by a module that uses If-Then rules generated by a system based on rough set theory; these rules estimate the impact of learning materials on students, and the adaptation is performed according to that estimate. The model was implemented, tested, and used to carry out experiments on a test set of learning materials and students, showing how the adaptation is performed within the system.
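As a hedged sketch of the two adaptivity mechanisms mentioned above, the snippet below combines a similarity measure between a student's needs vector and each material's content vector with a simple If-Then rule of the kind rough-set analysis might induce. The attribute names and thresholds are invented for the example.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(student_needs, materials, history):
    """materials: list of (name, feature_vector); history: name -> last score."""
    ranked = sorted(materials, key=lambda m: cosine(student_needs, m[1]),
                    reverse=True)
    picks = []
    for name, _ in ranked:
        # If-Then rule (of the kind rough-set analysis might induce):
        # IF the student already mastered this material THEN skip it.
        if history.get(name, 0.0) >= 0.8:
            continue
        picks.append(name)
    return picks
```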
