  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Adaptable and Adaptive Questionnaires for Virtual Organizations

Lorz, Alexander 15 September 2010 (has links)
This dissertation presents new scientific concepts and solutions for creating, conducting, and evaluating surveys that can be adapted to different usage scenarios more easily and are better suited for use in virtual organizations than conventional online surveys. The adaptation aspects considered comprise the content and scope of the survey, its realization in different presentation media, formats, and survey modes, as well as adaptive behavior during the interaction. An essential foundation is the content-oriented description of adaptive and adaptable surveys by means of the declarative description language AXSML proposed here. It takes particular account of the interactions between the different adaptation aspects, in conjunction with the requirement that the results of multimodal surveys remain comparable across media and modes. For this description language, transformation rules are presented that enable an adequate realization of a survey in different presentation media and survey forms. The accompanying adaptation of the content to the deployment scenario is automated and requires no special expertise in survey design. The evaluation of the survey returns is likewise described declaratively, takes adaptation-induced missing values into account, and allows a wide range of computation models to be used for aggregating the response data. Since creating and maintaining adaptive and adaptable surveys is highly complex, concepts and solutions supporting the authoring process are presented that reduce the necessary effort. To allow a large number of studies to be conducted simultaneously across many different teams, and to enable non-experts to adapt the surveys, IT support for the survey process was designed and implemented that meets the requirements for organizationally embedding surveys in virtual enterprises.
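The core mechanism described above (declarative item descriptions, condition-driven adaptation, and aggregation that tolerates adaptation-induced missing values) can be illustrated with a small sketch; the item schema and field names below are hypothetical, not actual AXSML syntax:

```python
# Hypothetical sketch of a declaratively described, adaptable survey.
# The schema and field names are illustrative -- not actual AXSML.
items = [
    {"id": "q1", "text": "How clear are team goals?", "scale": 5,
     "applies_if": lambda ctx: True},
    {"id": "q2", "text": "How well do remote tools work?", "scale": 5,
     "applies_if": lambda ctx: ctx["team_distributed"]},
]

def instantiate(items, ctx):
    """Adapt the survey to a usage scenario by filtering items."""
    return [it for it in items if it["applies_if"](ctx)]

def aggregate(responses):
    """Mean per item, tolerating adaptation-induced missing values (None)."""
    out = {}
    for item_id in {k for r in responses for k in r}:
        vals = [r[item_id] for r in responses if r.get(item_id) is not None]
        out[item_id] = sum(vals) / len(vals) if vals else None
    return out

survey = instantiate(items, {"team_distributed": False})  # q2 is skipped
print(aggregate([{"q1": 4}, {"q1": 5, "q2": None}]))
```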
52

Adaptive Energy-Control for In-Memory Database Systems

Kissinger, Thomas, Habich, Dirk, Lehner, Wolfgang 30 May 2022 (has links)
The ever-increasing demand for scalable database systems is limited by their energy consumption, which is one of the major challenges in research today. While existing approaches mainly focused on transaction-oriented disk-based database systems, we investigate and optimize the energy consumption and performance of data-oriented scale-up in-memory database systems, which make heavy use of the two main power consumers: processors and main memory. We give an in-depth energy analysis of a current mainstream server system and show that modern processors provide a rich set of energy-control features but lack the capability of controlling them appropriately, because application-specific knowledge is missing. Thus, we propose the Energy-Control Loop (ECL) as a DBMS-integrated approach for adaptive energy control on scale-up in-memory database systems that obeys a query latency limit as a soft constraint and actively optimizes energy efficiency and performance of the DBMS. The ECL relies on adaptive workload-dependent energy profiles that are continuously maintained at runtime. In our evaluation, we observed energy savings ranging from 20% to 40% for a real-world load profile.
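As a rough illustration of the kind of DBMS-side control loop the abstract describes (the frequency levels, latency limit, and workload model below are assumptions for the sketch, not the ECL's actual interface):

```python
import random

# Minimal sketch of an adaptive energy-control loop: pick the lowest
# processor frequency whose observed latency still meets the limit.
# The latency model is simulated; a real DBMS would read its own query
# statistics and drive hardware knobs such as DVFS.
FREQ_LEVELS_KHZ = [1_200_000, 1_800_000, 2_400_000, 3_000_000]
LATENCY_LIMIT_MS = 50.0

def p95_latency_ms(level):
    # Simulated workload: lower frequency -> higher latency (+ noise)
    return 30.0 * FREQ_LEVELS_KHZ[-1] / FREQ_LEVELS_KHZ[level] + random.uniform(-2, 2)

level = len(FREQ_LEVELS_KHZ) - 1            # start at full speed
for tick in range(20):                      # one iteration per control interval
    latency = p95_latency_ms(level)
    if latency > LATENCY_LIMIT_MS and level < len(FREQ_LEVELS_KHZ) - 1:
        level += 1                          # limit violated: raise frequency
    elif latency < 0.7 * LATENCY_LIMIT_MS and level > 0:
        level -= 1                          # ample headroom: save energy
    print(f"tick={tick:2d} freq={FREQ_LEVELS_KHZ[level]} kHz p95={latency:.1f} ms")
```

The loop settles at the lowest frequency that keeps the 95th-percentile latency under the soft limit, which is the intuition behind trading performance headroom for energy savings.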
53

A quasicontinuum approach towards mechanical simulations of periodic lattice structures

Chen, Li 16 November 2020 (has links) (PDF)
Thanks to the advancement of additive manufacturing, periodic metallic lattice structures are gaining more and more attention. A major attraction is that their design can be tailored to specific applications by changing the basic repetitive pattern of the lattice, called the unit cell. This may involve the selection of optimal strut diameters and orientations, as well as the connectivity and strut lengths. Numerical simulation plays a vital role in understanding the mechanical behavior of metallic lattices, and it enables the optimization of design parameters. However, conventional numerical modeling strategies in which each strut is represented by one or more beam finite elements yield prohibitively time-consuming simulations for metallic lattices in engineering-scale applications. The reasons are that millions of struts are involved and that geometrical and material nonlinearities at the strut level need to be incorporated. The aim of this thesis is the development of multi-scale quasicontinuum (QC) frameworks to substantially reduce the simulation time of nonlinear mechanical models of metallic lattices. For this purpose, this thesis generalizes the QC method by a multi-field interpolation enabling, amongst others, the representation of varying diameters in the struts' axial directions (as a consequence of the manufacturing process). The efficiency is further increased by a new adaptive scheme that automatically adjusts the model reduction whilst controlling the (elastic or elastoplastic) model's accuracy. The capabilities of the proposed methodology are demonstrated using numerical examples, such as indentation tests and scratch tests, in which the lattice is modeled using geometrically nonlinear elastic and elastoplastic beam finite elements. They show that the multi-scale framework combines high accuracy with a substantial model reduction that is out of reach of direct numerical simulations.
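The model reduction at the heart of QC-style methods can be sketched as a summation rule: only a few sampling unit cells are evaluated exactly, and their energies are weighted to estimate the energy of the full lattice. A minimal illustration, assuming a uniform lattice where every cell shares one toy energy function:

```python
# Minimal sketch of a quasicontinuum-style summation rule: instead of
# evaluating all n_cells unit cells, evaluate a few sampling cells and
# weight their energies. The quadratic energy is a stand-in.
def cell_energy(strain):
    return 0.5 * 210e3 * strain**2          # toy elastic cell energy

def full_energy(strains):
    return sum(cell_energy(e) for e in strains)            # O(n_cells)

def qc_energy(strains, sampling_ids, weights):
    # weights[i] = number of cells represented by sampling cell i
    return sum(w * cell_energy(strains[i])                 # O(n_samples)
               for i, w in zip(sampling_ids, weights))

strains = [1e-3] * 10_000                                  # uniform field
print(full_energy(strains), qc_energy(strains, [0], [10_000]))  # identical here
```

For non-uniform fields the adaptive scheme described above would add sampling cells where the interpolation error grows, trading cost against accuracy.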
54

Adaptive Middleware for Self-Configurable Embedded Real-Time Systems: Experiences from the DySCAS Project and Remaining Challenges

Persson, Magnus January 2009 (has links)
Development of software for embedded real-time systems poses several challenges. Hard and soft constraints on timing, and usually considerable resource limitations, put important constraints on the development. The traditional way of coping with these issues is to produce a fully static design, i.e. one that is fully fixed already during design time.

Current trends in the area of embedded systems, including the emerging openness in these types of systems, are providing new challenges for their designers – e.g. integration of new software during runtime, software upgrade or run-time adaptation of application behavior to facilitate better performance combined with more efficient resource usage. One way to reach these goals is to build self-configurable systems, i.e. systems that can resolve such issues without human intervention. Such mechanisms may be used to promote increased system openness.

This thesis covers some of the challenges involved in that development. An overview of the current situation is given, with an extensive review of different concepts that are applicable to the problem, including adaptivity mechanisms (including QoS and load balancing), middleware, and relevant design approaches (component-based, model-based and architectural design). A middleware is a software layer that can be used in distributed systems, with the purpose of abstracting away distribution, and possibly other aspects, for the application developers. The DySCAS project had as a major goal the development of middleware for self-configurable systems in the automotive sector. Such development is complicated by the special requirements that apply to these platforms.

Work on the implementation of an adaptive middleware, DyLite, providing self-configurability to small-scale microcontrollers, is described and covered in detail. DyLite is a partial implementation of the concepts developed in DySCAS.

Another area given significant focus is formal modeling of QoS and resource management. Currently, applications in these types of systems are not given a fully formal definition, at least not one also covering real-time aspects. Using formal modeling would extend the possibilities for verification of not only system functionality, but also of resource usage, timing and other extra-functional requirements. This thesis includes a proposal of a formalism to be used for these purposes.

Several challenges in providing methodology and tools that are usable in production development still remain. Several key issues in this area are described, e.g. version/configuration management, access control, and integration between different tools, together with proposals for future work in the other areas covered by the thesis.
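To make the surveyed adaptivity mechanisms concrete, a toy self-configuration step in the style of QoS-aware load balancing might look as follows; the node/task structures and the feasibility rule are illustrative assumptions, not DyLite's API:

```python
# Toy self-configuration step: move tasks away from overloaded nodes
# while respecting each task's QoS (deadline) requirement.
nodes = {"ecu1": {"capacity": 100, "load": 95},
         "ecu2": {"capacity": 100, "load": 40}}
tasks = {"t1": {"node": "ecu1", "cost": 30, "deadline_ms": 10},
         "t2": {"node": "ecu1", "cost": 20, "deadline_ms": 50}}

def feasible(task, node):
    # QoS check: enough spare capacity; tight deadlines need extra headroom
    spare = nodes[node]["capacity"] - nodes[node]["load"]
    headroom = 10 if tasks[task]["deadline_ms"] < 20 else 0
    return spare >= tasks[task]["cost"] + headroom

def rebalance():
    for t, info in tasks.items():
        src = info["node"]
        if nodes[src]["load"] > 0.9 * nodes[src]["capacity"]:   # overloaded?
            for dst in nodes:
                if dst != src and feasible(t, dst):
                    nodes[src]["load"] -= info["cost"]
                    nodes[dst]["load"] += info["cost"]
                    info["node"] = dst
                    break

rebalance()
print(tasks, nodes)   # t1 migrates to ecu2; both nodes end below threshold
```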
55

Optimization of Partially Observable Systems in Wireless Networks: Game Theory, Self-adaptivity and Learning

Habachi, Oussama 28 September 2012 (has links)
Since delay-sensitive and bandwidth-intense multimedia applications have emerged in the Internet, the demand for network resources has seen a steady increase during the last decade. Specifically, wireless networks have become pervasive and highly populated. These motivations are behind the problems considered in this dissertation. The topic of my PhD is the application of game theory, queueing theory and learning techniques in wireless networks under QoS constraints, especially in partially observable environments. We consider different layers of the protocol stack. In fact, we study Opportunistic Spectrum Access (OSA) at the Medium Access Control (MAC) layer through Cognitive Radio (CR) approaches. Thereafter, we focus on congestion control at the transport layer, and we develop congestion control mechanisms under the TCP protocol. The roadmap of the research is as follows. Firstly, we focus on the MAC layer, and we seek optimal OSA strategies in CR networks. We consider that Secondary Users (SUs) take advantage of opportunities in licensed channels while ensuring a minimum level of QoS. In fact, SUs have the possibility to sense and access licensed channels, or to transmit their packets using a dedicated access (like 3G). Therefore, a SU has two conflicting goals: seeking opportunities in licensed channels, but spending energy for sensing those channels, or transmitting over the dedicated channel without sensing, but with higher transmission delay. We model the slotted and the non-slotted systems using a queueing framework. Thereafter, we analyze the non-cooperative behavior of SUs, and we prove the existence of a Nash equilibrium (NE) strategy. Moreover, we measure the gap in performance between the centralized and the decentralized systems using the Price of Anarchy (PoA). Even if OSA at the MAC layer was deeply investigated in the last decade, the performance of SUs, such as energy consumption or Quality of Service (QoS) guarantees, was somewhat ignored. Therefore, we study OSA taking into account energy consumption and delay. We consider, first, one SU that accesses licensed channels opportunistically, or transmits its packets through a dedicated channel. Due to partial spectrum sensing, the state of the spectrum is partially observable. Therefore, we use the Partially Observable Markov Decision Process (POMDP) framework to design an optimal OSA policy for SUs. Specifically, we derive some structural properties of the value function, and we prove that the optimal OSA policy has a threshold structure. Thereafter, we extend the model to the context of multiple SUs. We study the non-cooperative behavior of SUs and we prove the existence of a NE. Moreover, we highlight a paradox in this situation: more opportunities in the licensed spectrum may lead to worse performance for SUs. Thereafter, we focus on spectrum management issues. In fact, we introduce a spectrum manager to the model, and we analyze the hierarchical game between the network manager and the SUs. Finally, we focus on the transport layer and we study congestion control for wireless networks under QoS and Quality of Experience (QoE) constraints. Firstly, we propose a congestion control algorithm that takes into account applications' parameters and multimedia quality. In fact, we consider that network users maximize their expected multimedia quality by choosing the congestion control strategy. Since users ignore the congestion status at bottleneck links, we use a POMDP framework to determine the optimal congestion control strategy. Thereafter, we consider a subjective measure of multimedia quality, and we propose a QoE-based congestion control algorithm. This algorithm relies on QoE feedback from receivers to adapt the congestion window size. Note that the proposed algorithms are designed using learning methods in order to cope with the complexity of solving POMDP problems.
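The threshold structure of such a policy can be illustrated on a single two-state Markov ("Gilbert-Elliott") channel: the SU keeps a belief that the licensed channel is idle, updates it after each sensing result, and spends sensing energy only when the belief exceeds a threshold. All numbers below are illustrative assumptions, not the thesis's calibrated POMDP solution:

```python
# Sketch: belief-threshold OSA policy for one licensed channel.
P_II, P_BI = 0.9, 0.4      # P(idle -> idle), P(busy -> idle)
THRESHOLD = 0.6            # sense only if belief of "idle" is high enough

def predict(belief_idle):
    """Propagate the belief one slot through the Markov chain."""
    return belief_idle * P_II + (1.0 - belief_idle) * P_BI

def step(belief_idle, sensed_idle=None):
    b = predict(belief_idle)
    if b >= THRESHOLD:                      # worth spending sensing energy
        action = "sense_licensed"
        b = 1.0 if sensed_idle else 0.0     # sensing reveals the true state
    else:
        action = "use_dedicated"            # skip sensing, pay extra delay
    return action, b

b = 0.5
for observed in [True, True, False, None]:
    action, b = step(b, observed)
    print(action, round(b, 3))
```

After a "busy" observation the belief collapses and the policy falls back to the dedicated channel, which is exactly the threshold behavior the abstract proves optimal.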
56

Advanced Numerical Modelling of Discontinuities in Coupled Boundary Value Problems

Kästner, Markus 18 August 2016 (has links) (PDF)
Industrial development processes as well as research in physics, materials and engineering science rely on computer modelling and simulation techniques today. With increasing computer power, computations are carried out on multiple scales and involve the analysis of coupled problems. In this work, continuum modelling is therefore applied at different scales in order to facilitate a prediction of the effective material or structural behaviour based on the local morphology and the properties of the individual constituents. This provides valuable insight into the structure-property relations which are of interest for any design process. In order to obtain reasonable predictions for the effective behaviour, numerical models which capture the essential fine-scale features are required. In this context, the efficient representation of discontinuities as they arise at, e.g., material interfaces or cracks becomes more important than in purely phenomenological macroscopic approaches. In this work, two different approaches to the modelling of discontinuities are discussed: (i) a sharp interface representation which requires the localisation of interfaces by the mesh topology. Since many interesting macroscopic phenomena are related to the temporal evolution of certain microscopic features, (ii) diffuse interface models which regularise the interface in terms of an additional field variable, and therefore avoid topological mesh updates, are considered as an alternative. With the two combinations (i) Extended Finite Element Method (XFEM) + sharp interface model, and (ii) Isogeometric Analysis (IGA) + diffuse interface model, two fundamentally different approaches to the modelling of discontinuities are investigated in this work. XFEM reduces the continuity of the approximation by introducing suitable enrichment functions according to the discontinuity to be modelled. Diffuse models instead regularise the interface, which in many cases even requires an increased continuity that is provided by the spline-based approximation. To further increase the efficiency of isogeometric discretisations of diffuse interfaces, adaptive mesh refinement and coarsening techniques based on hierarchical splines are presented. The adaptive meshes are found to significantly reduce the number of degrees of freedom required for a certain accuracy of the approximation. Selected discretisation techniques are applied to solve a coupled magneto-mechanical problem for particulate microstructures of magnetorheological elastomers (MRE). In combination with a computational homogenisation approach, these microscopic models allow for the prediction of the effective coupled magneto-mechanical response of MRE. Moreover, finite element models of generic MRE microstructures are coupled with a BEM domain that represents the surrounding free space, in order to take finite sample geometries into account. The macroscopic behaviour is analysed in terms of actuation stresses, magnetostrictive deformations, and magnetorheological effects. The results obtained for different microstructures and various loadings are in qualitative agreement with experiments on MRE as well as analytical results.
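The contrast between the two interface representations can be shown in one dimension: an XFEM-style approximation adds a discontinuous enrichment (a sign/Heaviside function) to the standard smooth part, while a diffuse model replaces the jump by a smooth profile of finite width. A minimal sketch, with all functions chosen purely for illustration:

```python
import math

# 1D contrast between a sharp (XFEM-like) and a diffuse interface at x = 0.
def sharp_jump(x):
    # XFEM idea: smooth standard part + sign enrichment carries the jump
    smooth_part = 0.5 * x
    enrichment = 1.0 if x > 0 else -1.0     # discontinuous enrichment function
    return smooth_part + 0.25 * enrichment

def diffuse_jump(x, eps=0.1):
    # Diffuse/phase-field idea: regularise the jump over a length scale eps
    return 0.5 * x + 0.25 * math.tanh(x / eps)

for x in [-0.3, -0.05, 0.0, 0.05, 0.3]:
    print(f"{x:+.2f}  sharp={sharp_jump(x):+.3f}  diffuse={diffuse_jump(x):+.3f}")
```

As eps shrinks, the diffuse profile approaches the sharp jump, but resolving its steep gradient is what drives the need for the adaptive hierarchical-spline refinement described above.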
57

Meshless Methods and Generalized Finite Element Method in Structural Nonlinear Analysis

Barros, Felício Bruzzi 27 March 2002 (has links)
The Generalized Finite Element Method, GFEM, shares several features with the so-called meshless methods. The approximation functions used in the GFEM are associated with nodal points, as in meshless methods. In addition, the enrichment of the approximation spaces can be done in the same fashion as in the meshless hp-Cloud method. On the other hand, the partition of unity used in the GFEM is provided by Lagrangian finite element shape functions. Therefore, this method can also be understood as a variation of the Finite Element Method. Indeed, both interpretations of the GFEM are valid and give unique insights into the method. The meshless character of the GFEM justified the investigation of meshless methods in this work. Among them, the Element Free Galerkin Method and the hp-Cloud Method are described, aiming to introduce key concepts of the GFEM formulation. Following that, several linear problems are solved using these three methods. Such linear analyses demonstrate several features of the GFEM and its suitability to simulate propagating discontinuities. Next, damage models employed to model the nonlinear behavior of concrete structures are discussed, and numerical analyses using the hp-Cloud Method and the GFEM are presented. The results motivate the implementation of a p-adaptive procedure tailored to the GFEM. The technique adopted is the Equilibrated Element Residual Method. The estimator is modified to take into account nonlinear peculiarities of the problems considered. The hypotheses assumed in the definition of the error measure are sometimes violated. Nonetheless, it is shown that the proposed error indicator is effective for the class of p-adaptive nonlinear analyses investigated. Finally, several suggestions are enumerated considering future applications of the GFEM, especially for the simulation of damage and crack propagation.
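The GFEM construction described here (a finite element partition of unity multiplied by local enrichment functions) can be sketched in 1D: hat functions phi_i provide the partition of unity, and products phi_i(x) * E(x) enlarge the approximation space. A minimal illustration with an assumed polynomial enrichment:

```python
# 1D partition-of-unity enrichment in the spirit of GFEM:
# hat functions sum to one; multiplying them by an enrichment E(x)
# enlarges the approximation space without remeshing.
nodes = [0.0, 0.5, 1.0]

def hat(i, x):
    h = 0.5                               # uniform spacing assumed
    return max(0.0, 1.0 - abs(x - nodes[i]) / h)

def enrichment(x):
    return x * x                          # illustrative polynomial enrichment

def gfem(x, u, a):
    # u[i]: standard dofs; a[i]: enriched dofs attached to the same nodes
    return sum(hat(i, x) * (u[i] + a[i] * enrichment(x))
               for i in range(len(nodes)))

x = 0.3
print(sum(hat(i, x) for i in range(len(nodes))))   # == 1.0 (partition of unity)
print(gfem(x, u=[0.0, 1.0, 0.0], a=[0.5, 0.5, 0.5]))
```

Because the enrichment multiplies a partition of unity, the enriched space inherits the conformity of the underlying mesh, which is why p-adaptivity can be applied node by node.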
58

Multiscale and multilevel numerical investigation of microscopic contact problems

Du, Shuimiao 05 October 2018 (has links)
The ultimate goal of this work is to provide computationally efficient and robust methodologies for the modelling and solution of a class of Lennard-Jones (LJ) potential-based adhesive contact problems. To alleviate theoretical and numerical pitfalls of the LJ model related to its non-defined and unbounded characteristics, a model-adaptivity method is proposed that solves the pure-LJ problem as the limit of a sequence of adaptively constructed multilevel problems. Each member of the sequence consists of a model partition between the microscopic LJ model and the macroscopic Signorini model. The convergence of the model-adaptivity method is proved mathematically under some physical and realistic assumptions. On the other hand, the asymptotic numerical method (ANM) is adapted to accurately track instabilities in soft contact problems. Both methods are incorporated in the Arlequin multiscale framework to achieve an accurate resolution at a reasonable computational cost. In the model-adaptivity method, to accurately capture the localization of the zones of interest (ZOI), a two-step strategy is suggested: a macroscopic resolution is used as the first guess of the ZOI localization, and the Arlequin method is then used there to achieve a fine-scale resolution. In the ANM strategy, the Arlequin method is also used to suppress numerical oscillations and improve accuracy.
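The model partition at the heart of this approach can be sketched with the standard 12-6 LJ potential: the microscopic LJ law is kept only where the gap is small enough to matter, and the far field is treated with the macroscopic non-penetration (Signorini) model. The cutoff and parameters below are illustrative, not the thesis's calibration:

```python
# Sketch of an LJ/Signorini model partition for adhesive contact.
SIGMA, EPS = 1.0, 1.0
CUTOFF = 2.5 * SIGMA                       # illustrative partition radius

def lj_potential(gap):
    # Standard 12-6 Lennard-Jones potential: diverges as gap -> 0
    # (the "non-defined" pitfall) and decays to zero far away.
    s6 = (SIGMA / gap) ** 6
    return 4.0 * EPS * (s6 * s6 - s6)

def model_for(gap):
    # Model partition: microscopic LJ near contact, macroscopic
    # Signorini (pure non-penetration, no adhesive traction) far away.
    return "LJ" if gap < CUTOFF else "Signorini"

for gap in [0.9, 1.1, 2.0, 5.0]:
    m = model_for(gap)
    v = lj_potential(gap) if m == "LJ" else 0.0
    print(f"gap={gap:<4} model={m:<9} potential={v:+.4f}")
```

The adaptive method then enlarges or shrinks the LJ zone until the partitioned problem converges to the pure-LJ limit.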
59

hp-Adaptive Simulation and Inversion of Magnetotelluric Measurements

Alvarez Aramberri, Julen 18 December 2015 (has links)
The magnetotelluric (MT) method is a passive exploration technique that aims at estimating the resistivity distribution of the Earth's subsurface, and therefore at providing an image of it. This process is divided into two steps. The first one consists in recording the data. In the second step, the recorded measurements are analyzed by employing numerical methods. This dissertation focuses on this second task. We provide a rigorous mathematical setting in the context of the Finite Element Method (FEM) that helps to understand the MT problem and its inversion process. In order to recover a map of the subsurface based on 2D MT measurements, we employ, for the first time in MT, a multi-goal-oriented self-adaptive hp-Finite Element Method (FEM). We accurately solve both the full formulation and a secondary field formulation, where the primary field is given by the solution of a 1D layered medium. To truncate the computational domain, we design a Perfectly Matched Layer (PML) that automatically adapts to the high-contrast material properties that appear within the subsurface and on the air-ground interface, where the conductivity contrast can reach up to sixteen orders of magnitude. For the inversion process, we develop a first step of a Dimensionally Adaptive Method (DAM) by considering the dimension of the problem as a variable in the inversion: the inversion starts from a 1D model, and the lower-dimensional solutions are reused as a priori information and as regularization terms for the higher-dimensional problems, increasing the robustness of the inversion. Additionally, this dissertation supplies a rigorous numerical analysis of the forward and inverse problems. Regarding the forward modelization, we perform a frequency sensitivity analysis, and we study the effect of the source, the convergence of the hp-adaptivity, and the effect of the PML on the computation of the electromagnetic fields and the impedance. As far as the inversion is concerned, we study the impact of the variable selected for the inversion process, the different information that each mode provides, and the gains of the DAM approach.
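The 1D layered-medium solution that serves as the primary field can be sketched with the classical impedance recursion for a layered half-space; the three-layer model below is an illustrative example, and the sign of the phase depends on the assumed time convention:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def mt_1d_apparent_resistivity(freq_hz, resistivities, thicknesses):
    """Classical 1D MT impedance recursion for a layered half-space.

    resistivities: one value per layer (last = basement half-space);
    thicknesses: one value per layer except the basement.
    """
    omega = 2.0 * np.pi * freq_hz
    k = np.sqrt(-1j * omega * MU0 / np.asarray(resistivities, float))
    z = -1j * omega * MU0 / k[-1]              # basement intrinsic impedance
    for j in range(len(thicknesses) - 1, -1, -1):
        zj = -1j * omega * MU0 / k[j]          # intrinsic impedance of layer j
        t = np.tanh(k[j] * thicknesses[j])
        z = zj * (z + zj * t) / (zj + z * t)   # recurse upward to the surface
    rho_app = abs(z) ** 2 / (omega * MU0)      # apparent resistivity (Ohm-m)
    phase_deg = np.degrees(np.angle(z))
    return rho_app, phase_deg

# Illustrative model: 10 Ohm-m conductor buried in a 100 Ohm-m host
print(mt_1d_apparent_resistivity(1.0, [100.0, 10.0, 100.0], [1000.0, 2000.0]))
```

For a uniform half-space the recursion collapses to rho_app equal to the true resistivity, a handy sanity check for any forward code.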
60

Schemes and Strategies to Propagate and Analyze Uncertainties in Computational Fluid Dynamics Applications

Geraci, Gianluca 05 December 2013 (has links)
In this manuscript, three main contributions are illustrated concerning the propagation and the analysis of uncertainty for computational fluid dynamics (CFD) applications. First, two novel numerical schemes are proposed: one based on a collocation approach, and the other based on a finite-volume-like representation in the stochastic space. In both approaches, the key element is the introduction of a non-linear multiresolution representation in the stochastic space. The aim is twofold: reducing the dimensionality of the discrete solution and applying a time-dependent refinement/coarsening procedure in the combined physical/stochastic space. Finally, an innovative strategy, based on variance-based analysis, is proposed for handling problems with a moderately large number of uncertainties in the context of robust design optimization. Aiming to make this novel optimization strategy more robust, the common ANOVA-like approach is also extended to high-order central moments (up to fourth order). The new approach is more robust than the original variance-based one, since the analysis relies on new sensitivity indices associated with a more complete statistical description.
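The variance-based analysis underlying this last contribution can be illustrated with a brute-force estimate of first-order Sobol indices for a toy model; the model and sample sizes are illustrative, and production codes use smarter estimators (e.g. Saltelli-type sampling):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy model with two uncertain inputs, each uniform on [0, 1]
    return x[:, 0] + 0.5 * x[:, 1] ** 2

def first_order_sobol(model, dim, n_outer=200, n_inner=200):
    """Brute-force S_i = Var(E[Y|X_i]) / Var(Y); illustrative only."""
    var_y = model(rng.random((n_outer * n_inner, dim))).var()
    indices = []
    for i in range(dim):
        cond_means = []
        for xi in rng.random(n_outer):       # freeze X_i, average the rest
            x = rng.random((n_inner, dim))
            x[:, i] = xi
            cond_means.append(model(x).mean())
        indices.append(np.var(cond_means) / var_y)
    return indices

print(first_order_sobol(model, dim=2))       # X_1 dominates the variance
```

The extension discussed in the abstract applies the same decomposition idea to skewness- and kurtosis-level (third- and fourth-order central moment) contributions instead of the variance alone.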
