151

A computational framework for multidimensional parameter space screening of reaction-diffusion models in biology

Solomatina, Anastasia 16 March 2022 (has links)
Reaction-diffusion models have been widely successful in explaining a large variety of patterning phenomena in biology, ranging from embryonic development to cancer growth and angiogenesis. First proposed by Alan Turing in 1952 and applied to a simple two-component system, reaction-diffusion models describe spontaneous spatial pattern formation driven purely by the interactions of the system components and their diffusion in space. Today, access to unprecedented amounts of quantitative biological data allows us to build and test biochemically accurate reaction-diffusion models of intracellular processes. However, any increase in model complexity increases the number of unknown parameters and thus the computational cost of model analysis. Efficiently characterizing the behavior and robustness of models with many unknown parameters is therefore a key challenge in systems biology. Here, we propose a novel computational framework for efficient high-dimensional parameter space characterization of reaction-diffusion models. The method leverages the $L_p$-Adaptation algorithm, an adaptive-proposal statistical method for approximate high-dimensional design centering and robustness estimation. Our approach is based on an oracle function, which describes for each point in parameter space whether the corresponding model fulfills given specifications. We propose specific oracles to estimate four parameter-space characteristics: bistability, instability, capability of spontaneous pattern formation, and capability of pattern maintenance. We benchmark the method and demonstrate that it allows exploring the ability of a model to undergo pattern-forming instabilities and quantifying model robustness for model selection in time that scales polynomially with dimensionality. We present an application of the framework to reconstituted membrane domains bearing the small GTPase Rab5 and propose molecular mechanisms that potentially drive pattern formation.
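As a concrete illustration of such an oracle, the sketch below (a simplified stand-in, not the framework's implementation) tests whether a hypothetical two-component reaction-diffusion system can undergo a diffusion-driven (Turing) instability: the homogeneous steady state must be stable without diffusion yet unstable for some spatial mode. The Jacobian entries `a, b, c, d` and diffusion coefficients `d1, d2` are assumed inputs, not values from the thesis.

```python
def turing_oracle(a, b, c, d, d1, d2, k2_grid=None):
    """Oracle for a two-component system with Jacobian J = [[a, b], [c, d]]
    and diffusion coefficients d1, d2: True iff the homogeneous steady
    state is stable without diffusion but unstable for some mode k."""
    if k2_grid is None:
        k2_grid = [0.01 * i for i in range(1, 2001)]   # k^2 in (0, 20]
    # Without diffusion (2x2 case): stable iff trace < 0 and det > 0.
    if not (a + d < 0 and a * d - b * c > 0):
        return False
    # Mode k has Jacobian J - k^2 * diag(d1, d2); its trace stays negative,
    # so an instability requires its determinant to become negative.
    for k2 in k2_grid:
        if (a - d1 * k2) * (d - d2 * k2) - b * c < 0:
            return True
    return False
```

A classic activator-inhibitor parameterization with a fast-diffusing inhibitor passes the oracle, while the same kinetics with equal diffusion coefficients do not.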
152

A Genetic-Based Search for Adaptive Table Recognition in Spreadsheets

Lehner, Wolfgang, Koci, Elvis, Thiele, Maik, Romero, Oscar 22 June 2023 (has links)
Spreadsheets are very successful content generation tools, used in almost every enterprise to create a wealth of information. However, this information is often intermingled with various formatting, layout, and textual metadata, making it hard to identify and interpret the tabular payload. Previous works proposed to solve this problem mainly using heuristics. Although fast to implement, these approaches fail to capture the high variability of user-generated spreadsheet tables. Therefore, in this paper, we propose a supervised approach that is able to adapt to arbitrary spreadsheet datasets. We use a graph model to represent the contents of a sheet, which carries layout and spatial features. Subsequently, we apply genetic-based approaches for graph partitioning to recognize the parts of the graph corresponding to tables in the sheet. The search for tables is guided by an objective function, which is tuned to match the specific characteristics of a given dataset. We demonstrate the feasibility of this approach with an experimental evaluation on a large, real-world spreadsheet corpus.
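The genetic search can be pictured with a generic sketch like the one below; it is a toy, not the paper's method. Genomes are bit vectors (bit i might, for instance, mark a hypothetical table boundary after row i of a sheet), and the fitness function is a placeholder for the dataset-tuned objective described above.

```python
import random

def evolve(fitness, genome_len, pop_size=30, generations=80,
           mutation_rate=0.02, seed=7):
    """Toy genetic search over bit vectors: tournament selection,
    one-point crossover, bit-flip mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    history = []
    for _ in range(generations):
        nxt = [best[:]]                              # elitism keeps the best
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, genome_len)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(genome_len):                 # bit-flip mutation
                if rng.random() < mutation_rate:
                    child[i] = 1 - child[i]
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
        history.append(fitness(best))
    return best, history
```

With elitism, the best fitness recorded per generation can never decrease, which makes the search easy to monitor.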
153

Multi-Quality Auto-Tuning by Contract Negotiation

Götz, Sebastian 17 July 2013 (has links)
A characteristic challenge of software development is the management of omnipresent change. Classically, this constant change is driven by customers changing their requirements. The wish to optimally leverage available resources opens another source of change: the software system's environment. Software is tailored to specific platforms (e.g., hardware architectures), resulting in many variants of the same software optimized for different environments. If the environment changes, a different variant should be used, i.e., the system has to reconfigure to the variant optimized for the new situation. The automation of such adjustments is the subject of research on self-adaptive systems. The basic principle is a control loop, as known from control theory: the system (and its environment) is continuously monitored, the collected data is analyzed, and decisions for or against a reconfiguration are computed and realized. Central problems in this field, which are addressed in this thesis, are the management of interdependencies between non-functional properties of the system, the handling of multiple decision criteria, and scalability. In this thesis, a novel approach to self-adaptive software, Multi-Quality Auto-Tuning (MQuAT), is presented, which provides design and operation principles for software systems that automatically provide the best possible utility to the user while producing the least possible cost. For this purpose, a component model has been developed, enabling the software developer to design and implement self-optimizing software systems in a model-driven way. This component model allows for the specification of the structure as well as the behavior of the system and is capable of covering the runtime state of the system. The notion of quality contracts is utilized to cover the non-functional behavior and, especially, the dependencies between non-functional properties of the system.
At runtime, the component model covers the runtime state of the system. This runtime model is used in combination with the contracts to generate optimization problems in different formalisms: Integer Linear Programming (ILP), Pseudo-Boolean Optimization (PBO), Ant Colony Optimization (ACO), and Multi-Objective Integer Linear Programming (MOILP). Standard solvers are applied to derive solutions to these problems, which represent reconfiguration decisions if the identified configuration differs from the current one. Each approach is empirically evaluated in terms of its scalability, showing the feasibility of all approaches except ACO, the superiority of ILP over PBO, and the limits of each approach: 100 component types for ILP, 30 for PBO, 10 for ACO, and 30 for 2-objective MOILP. In the presence of more than two objective functions, the MOILP approach is shown to be infeasible.
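The generated problems can be pictured as variant-selection problems: choose one implementation variant per component so that total utility is maximized under a resource budget. The sketch below solves a tiny invented instance by exhaustive enumeration instead of an ILP solver; all components, utilities, and resource figures are illustrative assumptions.

```python
from itertools import product

def best_configuration(variants, capacity):
    """variants: {component: [(utility, resource_demand), ...]}.
    Pick one variant per component maximizing total utility under a
    shared resource capacity (brute-force stand-in for the ILP)."""
    names = sorted(variants)
    best_util, best_cfg = None, None
    for choice in product(*(variants[n] for n in names)):
        res = sum(v[1] for v in choice)
        util = sum(v[0] for v in choice)
        if res <= capacity and (best_util is None or util > best_util):
            best_util, best_cfg = util, dict(zip(names, choice))
    return best_util, best_cfg
```

A real system would hand the same model to an ILP solver; enumeration only works for a handful of components, which is exactly the scalability limit the thesis measures.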
154

Analytical solution of a linear, elliptic, inhomogeneous partial differential equation in the context of a special rotationally symmetric problem of linear elasticity

Eschke, Andy January 2014 (has links)
Extending previous publications, this paper presents the analytical solution of a special boundary value problem which arises in the context of elasticity theory for an extended constitutive law and a non-conservative symmetric ansatz. Besides the general analytical solution, a specific form for linear boundary conditions is given for user convenience.
155

Bringing Linear Algebra Objects to Life in a Column-Oriented In-Memory Database

Kernert, David, Köhler, Frank, Lehner, Wolfgang 27 January 2023 (has links)
Large numeric matrices and multidimensional data arrays appear in many science domains, as well as in applications of financial and business warehousing. Common applications include eigenvalue determination of large matrices, which decomposes into a set of linear algebra operations. With the rise of in-memory databases, it is now feasible to execute these complex analytical queries directly in a relational database system without the need to transfer data out of the system and without being restricted by hard disk latencies for random accesses. In this paper, we present a way to integrate linear algebra operations and large matrices as first-class citizens into an in-memory database following a two-layered architectural model. The architecture consists of a logical component receiving manipulation statements and linear algebra expressions, and of a physical layer, which autonomously administrates multiple matrix storage representations. A cost-based hybrid storage representation is presented, and an experimental implementation is evaluated for matrix-vector multiplications.
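A cost-based hybrid storage choice can be sketched as follows; the density threshold and the two formats are illustrative assumptions, not the system's actual cost model.

```python
def make_matrix(rows, threshold=0.3):
    """Choose a storage representation by density: a coordinate dict
    when the matrix is sparse, nested lists otherwise (toy cost model)."""
    n, m = len(rows), len(rows[0])
    nnz = sum(1 for r in rows for v in r if v != 0)
    if nnz / (n * m) < threshold:
        coo = {(i, j): v for i, r in enumerate(rows)
               for j, v in enumerate(r) if v != 0}
        return ('sparse', n, m, coo)
    return ('dense', n, m, rows)

def matvec(mat, x):
    """Matrix-vector product dispatched on the chosen representation."""
    kind, n, m, data = mat
    y = [0.0] * n
    if kind == 'sparse':
        for (i, j), v in data.items():   # touch only stored entries
            y[i] += v * x[j]
    else:
        for i in range(n):
            y[i] = sum(data[i][j] * x[j] for j in range(m))
    return y
```

The point of the dispatch is that the caller never sees the representation: the logical layer issues `matvec`, and the physical layer picks the cheaper kernel.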
156

Auswirkungen der Kopplung von Strom- und Wärmemarkt auf die künftige Integration der erneuerbaren Energien und die CO2-Emissionen in Deutschland

Deac, Gerda 20 November 2020 (has links)
This dissertation examines the interaction between the electricity and heat markets, with a particular focus on heat pumps and district heating networks. Against the background of the growing expansion of renewable energy and long-term climate targets, the question arises what effect the coupling of the electricity and heat markets has on the reduction of CO2 emissions, on energy system costs, and on the integration of renewables. To answer this research question, the linear optimization model Enertile is extended by two heat modules that account for heat pumps and district heating networks. In contrast to other models, the implementation developed for this work optimizes the expansion and dispatch of renewables, combined heat and power (CHP), and the remaining fossil power plant capacities simultaneously, which makes it possible to analyze the interactions between the expansion of renewables and the coupling of the electricity and heat markets. The model-based analysis carried out in this work shows the great importance of the interaction between the two markets. In the context of a long-term decarbonization of the energy supply through an increased expansion of renewables, this interaction presents both opportunities and challenges. The modeling of heat pumps shows significantly lower specific CO2 emissions over the entire period from 2020 onward compared with heat generation in modern gas condensing boilers. The results also show that bivalent systems, i.e. the combined use of different heat generation technologies such as CHP, gas boilers, and electric boilers, play an important role against the background of the restructuring of the electricity sector. In the long term, flexible heat provision by electric heating technologies represents a cost-effective and low-CO2 alternative to fossil heat generation, especially at high shares of renewable energy.
Contents: 1 Introduction; 2 Framework conditions in the German electricity and heat markets; 3 Modeling the interactions of the electricity and heat markets (state of research, the electricity system optimization model Enertile, model extensions for heat pumps and district heating networks); 4 Uncertainties in energy system models; 5 Definition of scenarios for analyzing the interactions between the electricity and heat markets; 6 Model-based analysis of the interactions (influence on CO2 emissions, development of the power plant fleet and generation mix, change in system costs); 7 Sensitivities (stable fuel prices, renewable potentials, isolated effects of electric boilers and CHP, high heat pump flexibility); 8 Summary.
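The merit-order idea behind such electricity-heat coupling can be illustrated with a toy hourly dispatch between a heat pump (whose specific emissions track the grid's CO2 intensity divided by its COP) and a gas boiler; all figures are invented and this is not the Enertile formulation.

```python
def dispatch_heat(demand, hp_cap, cop, grid_ci, gas_ef):
    """Hour-by-hour dispatch of heat demand (kWh_th) between a heat pump
    and a gas boiler, always using the lower-emission option first.
    grid_ci: grid CO2 intensity per hour (kg/kWh_el); gas_ef: boiler
    emissions per kWh of heat.  Returns total CO2 in kg."""
    total_co2 = 0.0
    for q, ci in zip(demand, grid_ci):
        hp_spec = ci / cop               # kg CO2 per kWh heat from the pump
        q_hp = min(q, hp_cap) if hp_spec <= gas_ef else 0.0
        total_co2 += q_hp * hp_spec + (q - q_hp) * gas_ef
    return total_co2
```

Even this toy shows the coupling effect the thesis analyzes: as the grid's CO2 intensity falls (second hour below), the same heat pump dispatch produces far less CO2.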
157

Towards Higher Precision Lattice QCD Results: Improved Scale Setting and Domain Decomposition Solvers

Straßberger, Ben 24 May 2023 (has links)
Lattice QCD simulations strive for higher precision. Here, we study two critical points in the generation of high-precision lattice results. In the first part, we calibrate the lattice spacings of QCD simulations with 2 + 1 flavors of dynamical fermions. We incorporate new measurements and use additional models for the chiral and continuum extrapolations to refine the result obtained in 2017 [1]. The second part focuses on simulation algorithms. We test an algorithm that promises a faster solution of the Dirac equation. We analyze the application of the Finite Element Tearing and Interconnecting (FETI) algorithm in the context of lattice QCD simulations and compare it to other state-of-the-art domain decomposition solvers. We examine various preconditioners and their effects on the convergence of the solution.
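The flavor of such domain decomposition solvers can be sketched with a multiplicative (alternating) Schwarz iteration on a 1D discrete Laplacian, a much simplified stand-in for FETI-type methods; the grid size and subdomain split below are arbitrary choices for illustration.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main-, c: super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def schwarz_solve(b, subdomains, sweeps=50):
    """Alternating Schwarz for A x = b with A = tridiag(-1, 2, -1):
    each sweep solves every overlapping subdomain exactly, using the
    current neighboring values as boundary data."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for lo, hi in subdomains:            # hi inclusive
            m = hi - lo + 1
            rhs = [b[i] for i in range(lo, hi + 1)]
            if lo > 0:
                rhs[0] += x[lo - 1]          # left interface value
            if hi < n - 1:
                rhs[-1] += x[hi + 1]         # right interface value
            x[lo:hi + 1] = thomas([-1.0] * m, [2.0] * m, [-1.0] * m, rhs)
    return x
```

With generous overlap the iteration contracts geometrically, which is the behavior one hopes to preserve while the subdomain solves run in parallel.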
158

Topology-aware optimization of big sparse matrices and matrix multiplications on main-memory systems

Lehner, Wolfgang, Kernert, David, Köhler, Frank 12 January 2023 (has links)
Since the data sizes of analytical applications are continuously growing, many data scientists are switching from customized micro-solutions to scalable alternatives, such as statistical and scientific databases. However, many algorithms in data mining and science are expressed in terms of linear algebra, which is barely supported by major database vendors and big data solutions. On the other hand, conventional linear algebra algorithms and legacy matrix representations are often not suitable for very large matrices. We propose a strategy for large matrix processing on modern multicore systems that is based on a novel, adaptive tile matrix representation (AT MATRIX). Our solution utilizes multiple techniques inspired by database technology, such as multidimensional data partitioning, cardinality estimation, indexing, dynamic rewrites, and more, in order to optimize the execution time. Based on this representation, we present a matrix multiplication operator ATMULT, which outperforms alternative approaches. The aim of our solution is to relieve data scientists of the burden of selecting appropriate algorithms and matrix storage representations. We evaluated AT MATRIX together with ATMULT on several real-world and synthetic random matrices.
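The tile idea can be sketched as follows: partition both operands into fixed-size tiles and skip any tile product whose inputs are entirely zero. This toy version (not ATMULT itself) omits the cost-based choice between dense and sparse tile kernels and keeps only the skipping.

```python
def tiled_matmul(A, B, t=2):
    """Multiply square matrices tile by tile, skipping all-zero tiles
    (a toy analogue of density-aware tile kernels)."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]

    def tile_nonzero(M, bi, bj):
        return any(M[i][j] != 0
                   for i in range(bi, min(bi + t, n))
                   for j in range(bj, min(bj + t, n)))

    for bi in range(0, n, t):
        for bk in range(0, n, t):
            if not tile_nonzero(A, bi, bk):
                continue                     # empty A-tile: skip whole row of products
            for bj in range(0, n, t):
                if not tile_nonzero(B, bk, bj):
                    continue                 # empty B-tile: skip this product
                for i in range(bi, min(bi + t, n)):
                    for k in range(bk, min(bk + t, n)):
                        if A[i][k] == 0:
                            continue
                        for j in range(bj, min(bj + t, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```

On block-sparse inputs most tile products are skipped entirely, which is where such a representation pays off.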
159

Maybe Eventually? Towards Combining Temporal and Probabilistic Description Logics and Queries: Extended Version

Koopmann, Patrick 20 June 2022 (has links)
We present some initial results on ontology-based query answering with description logic ontologies that may employ temporal and probabilistic operators on concepts and axioms. Specifically, we consider description logics extended with operators from linear temporal logic (LTL), as well as subjective probability operators, and an extended query language in which conjunctive queries can be combined using these operators. We first show some complexity results for the setting in which either only temporal operators or only probabilistic operators may be used, both in the ontology and in the query, and then show a 2ExpSpace lower bound for the setting in which both types of operators can be used together. / This is an extended version of an article accepted at Description Logics 2019.
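On finite traces, the interplay of the two operator types can be illustrated with a toy semantics: the subjective probability of "eventually phi" is the total weight of the traces in which phi holds at some state. This is a simplification for illustration, not the semantics studied in the paper.

```python
def prob_eventually(weighted_traces, phi):
    """weighted_traces: list of (weight, trace) pairs, where a trace is
    a list of states; phi is a predicate on states.  Returns the total
    weight of traces satisfying 'eventually phi' (toy finite-trace
    reading of P(<>phi))."""
    return sum(w for w, trace in weighted_traces
               if any(phi(s) for s in trace))
```

Here states are simply sets of atomic names, and the weights play the role of subjective probabilities over possible worlds.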
160

Metric Temporal Description Logics with Interval-Rigid Names: Extended Version

Baader, Franz, Borgwardt, Stefan, Koopmann, Patrick, Ozaki, Ana, Thost, Veronika 20 June 2022 (has links)
In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics make it possible to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. These allow one to state that elements in the extension of certain names stay in that extension for at least some specified amount of time.
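A finite-trace reading of interval-rigidity can be sketched as follows: a name is interval-rigid with parameter k if every maximal run of time points at which an element carries the name lasts at least k steps. The paper works over all natural numbers; this toy checker treats a too-short run at the end of the finite trace as a violation.

```python
def interval_rigid(trace, name, k):
    """trace: list of states (sets of names holding at each time point).
    True iff every maximal run of states containing `name` has length
    at least k (finite-trace reading of interval-rigidity)."""
    run = 0
    for state in trace:
        if name in state:
            run += 1
        else:
            if 0 < run < k:        # a run ended before reaching length k
                return False
            run = 0
    return run == 0 or run >= k    # also check a run ending at the trace end
```

For example, a name that holds for two consecutive points, drops out, and then holds for three points is 2-rigid but not 3-rigid on this reading.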
