About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Obere und untere Schranken für eingeschränkte Parity-Branchingprogramme / Upper and Lower Bounds for Restricted Parity Branching Programs

Brosenne, Henrik 18 April 2006 (has links)
No description available.
2

Aspects of guaranteed error control in computations for partial differential equations

Merdon, Christian 17 September 2013 (has links)
This thesis studies guaranteed error control for elliptic partial differential equations on the basis of the Poisson model problem, the Stokes equations and the obstacle problem. The error control derives guaranteed upper bounds for the energy error between the exact solution and different finite element discretisations, namely conforming and nonconforming first-order approximations. The unified approach expresses the energy error by dual norms of one or more residuals plus computable extra terms, such as oscillations of the given data, with explicit constants. Various techniques exist for estimating the dual norms of such residuals. This thesis focuses on equilibration error estimators based on Raviart-Thomas finite elements, which permit efficient guaranteed upper bounds. The postprocessing proposed in this thesis considerably increases their efficiency at almost no additional computational cost. Nonconforming finite element methods also give rise to a nonconsistency residual that permits alternative treatment by conforming interpolations. A side aspect concerns the explicit residual-based error estimator, which usually yields cheap and optimal refinement indicators for adaptive mesh refinement but not very sharp guaranteed upper bounds. A novel variant of the residual-based error estimator, based on the Luce-Wohlmuth equilibration design, leads to highly improved reliability constants. A large number of numerical experiments compares all implemented error estimators and provides evidence that efficient and guaranteed error control in the energy norm is indeed possible in all model problems under consideration. In particular, one model problem demonstrates how to extend the error estimators to obtain guaranteed error control on domains with curved boundaries.
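As an illustration of the kind of bound described in this abstract, a schematic equilibration estimate for the Poisson model problem -Δu = f with a conforming first-order approximation u_h reads as follows (a standard Prager-Synge-type sketch; the exact assumptions and constants in the thesis may differ):

  \[
    \|\nabla(u - u_h)\|_{L^2(\Omega)}
    \;\le\; \|q_h - \nabla u_h\|_{L^2(\Omega)}
    \;+\; \Big( \sum_{T \in \mathcal{T}} \frac{h_T^2}{\pi^2}
          \, \|f - \Pi_0 f\|_{L^2(T)}^2 \Big)^{1/2},
  \]

where q_h is a Raviart-Thomas flux equilibrated in the sense that div q_h + Π_0 f = 0 holds on each element T, and Π_0 denotes the elementwise integral mean. The first term is fully computable, and the second is the data oscillation term with its explicit constant h_T/π.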
3

Polynomial growth of concept lattices, canonical bases and generators: extremal set theory in Formal Concept Analysis

Junqueira Hadura Albano, Alexandre Luiz 30 June 2017 (has links)
We prove that there exist three distinct, comprehensive classes of (formal) contexts with polynomially many concepts, namely contexts which are nowhere dense, of bounded breadth, or highly convex. Already present in G. Birkhoff's classic monograph is the notion of breadth of a lattice; it equals the number of atoms of a largest Boolean suborder. Even though it is natural to define the breadth of a context as that of its concept lattice, this idea had not been exploited before. We do this and establish many equivalences. Amongst them, it is shown that the breadth of a context equals the size of its largest minimal generator, its largest contranominal-scale subcontext, as well as the Vapnik-Chervonenkis dimension of both its system of extents and its system of intents.

The polynomiality of the aforementioned classes is proven via upper bounds (also known as majorants) for the number of maximal bipartite cliques in bipartite graphs, results obtained by various authors in recent decades. The fact that they yield statements about formal contexts is a reward for investigating how two established fields interact, specifically Formal Concept Analysis (FCA) and graph theory.

We considerably improve the breadth bound. The improvement is twofold: besides giving a much tighter expression, we prove that it limits the number of minimal generators. This is strictly more general than upper bounding the quantity of concepts: it automatically implies a bound on the concepts, as well as on the number of proper premises. A corollary is that this improved result also bounds the number of implications in the canonical basis. With respect to the quantity of concepts, this sharper majorant is shown to be best possible. This fact is established by constructing contexts whose concept lattices exhibit exactly that many elements. These structures are termed, respectively, extremal contexts and extremal lattices. The usual procedure of taking the standard context allows one to work interchangeably with either of these two extremal structures. Extremal lattices are equivalently defined as finite lattices which have as many elements as possible, subject to two upper limits: one for the number of join-irreducibles, the other for the breadth.

Subsequently, these structures are characterized in two ways. Our first characterization is done from the lattice perspective. Initially, we construct extremal lattices by the iterated operation of finding smaller extremal subsemilattices and duplicating their elements. Then, it is shown that every extremal lattice must be obtained through a recursive application of this construction principle. A byproduct of this contribution is that extremal lattices are always meet-distributive. Although this approach is revealing, it leaves relevant combinatorial questions unanswered; most notably, the number of meet-irreducibles of extremal lattices escapes control when this construction is conducted.

Aiming to get a grip on the number of meet-irreducibles, we prove an alternative characterization of these structures. This second approach is based on implication logic and exposes an interesting link between the numbers of proper premises, pseudo-extents and concepts. A guiding idea in this scenario is to use implications to construct lattices. It turns out that constructing extremal structures with this method is simpler, in the sense that a recursive application of the construction principle is not needed. Moreover, we obtain with ease a general, explicit formula for the Whitney numbers of extremal lattices, which reveals that they are unimodal, too. Like the first, this second construction method is shown to be characteristic. A particular case of the construction is able to force, with precision, an exponentially high number of meet-irreducibles.

This occasional explosion of meet-irreducibles motivates a generalization of the notion of extremal lattices by means of a more refined partition of the class of all finite lattices. In this finer-grained setting, each extremal class consists of lattices with bounded breadth, number of join-irreducibles and number of meet-irreducibles. The generalized problem of finding the maximum number of concepts reveals itself to be challenging. Instead of attempting to classify these structures completely, we pose questions inspired by Turán's seminal result in extremal combinatorics, most prominently: do extremal lattices (in this more general sense) have the maximum permitted breadth? We show a general statement in this setting: for every choice of limits (breadth, number of join-irreducibles and meet-irreducibles), we produce some extremal lattice with the maximum permitted breadth. The tools which underpin all the intuitions in this scenario are hypergraphs and exact set covers. In a rather unexpected but interesting turn of events, we obtain for free a simple and interesting theorem about the general existence of "rich" subcontexts: every context contains an object/attribute pair whose removal results in a context with at least half the original number of concepts.
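Since the abstract revolves around counting formal concepts, the following minimal Python sketch enumerates the concepts of a tiny, made-up context by brute force; the toy data and the helper names intent/extent are illustrative assumptions, not material from the thesis. Every concept is a pair (extent, intent) closed under the two derivation operators, and the brute-force enumeration is exponential in the number of objects, which is precisely the growth the polynomiality results above rule out for the three context classes.

    from itertools import combinations

    # A tiny formal context: objects, attributes, and an incidence relation.
    objects = ["o1", "o2", "o3"]
    attributes = ["a", "b", "c"]
    incidence = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "c")}

    def intent(A):
        """Attributes shared by every object in A (the derivation A')."""
        return frozenset(m for m in attributes
                         if all((g, m) in incidence for g in A))

    def extent(B):
        """Objects possessing every attribute in B (the derivation B')."""
        return frozenset(g for g in objects
                         if all((g, m) in incidence for m in B))

    # Every concept arises as (extent(intent(A)), intent(A)) for some object
    # subset A; collecting the pairs in a set removes duplicates.
    concepts = set()
    for r in range(len(objects) + 1):
        for A in combinations(objects, r):
            B = intent(frozenset(A))
            concepts.add((extent(B), B))

    print(len(concepts), "formal concepts")
    for ext, itt in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
        print(sorted(ext), "<->", sorted(itt))

For this toy context the script prints six concepts, including the top concept (all objects, no common attributes) and the bottom concept (no objects, all attributes).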
4

Stabilised finite element approximation for degenerate convex minimisation problems

Boiger, Wolfgang Josef 19 August 2013 (has links)
Infimising sequences of nonconvex variational problems often do not converge strongly in Sobolev spaces due to fine oscillations. These oscillations are physically meaningful; finite element approximations, however, fail to resolve them in general. Relaxation methods replace the nonconvex energy with its (semi)convex hull. This leads to a macroscopic model which is degenerate in the sense that it is not strictly convex and possibly admits multiple minimisers. The lack of control of the primal variable leads to difficulties in the a priori and a posteriori finite element error analysis, such as the reliability-efficiency gap and the lack of strong convergence. To overcome these difficulties, stabilisation techniques add a discrete positive definite term to the relaxed energy. Bartels et al. (IFB, 2004) apply stabilisation to two-dimensional problems and thereby prove strong convergence of gradients. This result is restricted to smooth solutions and quasi-uniform meshes, which prohibits adaptive mesh refinement. This thesis concerns a modified stabilisation term and proves convergence of the stress and, for smooth solutions, strong convergence of gradients, even on unstructured meshes. Furthermore, the thesis derives the so-called flux error estimator and proves its reliability and efficiency. For interface problems with piecewise smooth solutions, a refined version of this error estimator is developed, which provides control of the error of the primal variable and its gradient and thus yields strong convergence of gradients. The refined error estimator converges faster than the flux error estimator and therefore narrows the reliability-efficiency gap. Numerical experiments with five benchmark examples from computational microstructure and topology optimisation complement and confirm the theoretical results.
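The stabilisation mentioned in this abstract can be sketched generically as an edge-jump penalty added to the relaxed energy; the following schematic form is an assumption for illustration, not the thesis's exact term (in particular the weight h_E may carry a different exponent):

  \[
    E_h(v_h) \;=\; \int_\Omega W^{**}(\nabla v_h)\,\mathrm{d}x
    \;-\; \int_\Omega f\, v_h\,\mathrm{d}x
    \;+\; \frac{1}{2} \sum_{E \in \mathcal{E}} h_E
          \int_E \big|\, [\nabla v_h]_E \,\big|^2 \,\mathrm{d}s,
  \]

where W^{**} is the convex envelope of the nonconvex energy density W, the sum runs over the interior edges E of the triangulation, and [∇v_h]_E denotes the jump of the piecewise gradient across E. The added sum is the discrete positive definite term that restores some control of the primal variable on the discrete level.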
5

Complexity of Normal Forms on Structures of Bounded Degree

Heimberg, Lucas 04 June 2018 (has links)
Normal forms express semantic properties of logics by means of syntactic restrictions. They allow algorithms to benefit from restrictions of the expressive power of a logic. An example is the locality of first-order logic (FO), which implies that graph properties like reachability or connectivity cannot be defined in FO. Gaifman's local normal form expresses the satisfaction conditions of an FO-formula by a Boolean combination of local statements. Gaifman normal form serves as a first step in fixed-parameter model-checking algorithms, parameterised by the size of the formula, on sparse graph classes. However, it is known that in general there are non-elementary lower bounds for the cost of transforming a formula into Gaifman normal form. This leads to an enormous parameter dependency of the aforementioned algorithms. Similar non-elementary lower bounds also hold for Feferman-Vaught decompositions and for the preservation theorems of Lyndon, Łoś, and Tarski. This thesis investigates the complexity of these normal forms when attention is restricted to classes of structures of bounded degree, for which the non-elementary lower bounds are known to fail. Under this restriction, the thesis provides algorithms with elementary and even worst-case optimal running time for the construction of Gaifman normal forms and Feferman-Vaught decompositions. For the preservation theorems, algorithmic versions with elementary running time and non-matching lower bounds are provided. Crucial for these results is the notion of Hanf normal form. It is shown that an extension of FO by unary counting quantifiers admits Hanf normal forms if, and only if, all counting quantifiers are ultimately periodic, and, furthermore, how Hanf normal forms can be computed in elementary and worst-case optimal time in these cases. This leads to model-checking algorithms for such extensions of FO and also allows generalisations of the constructions for Feferman-Vaught decompositions and preservation theorems.
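For orientation, Gaifman's theorem (in its standard formulation, not specific to this thesis) states that every FO sentence is equivalent to a Boolean combination of basic local sentences of the form

  \[
    \exists x_1 \cdots \exists x_k \Big(
      \bigwedge_{1 \le i < j \le k} \mathrm{dist}(x_i, x_j) > 2r
      \;\wedge\; \bigwedge_{i=1}^{k} \psi^{(r)}(x_i) \Big),
  \]

where dist denotes the distance in the Gaifman graph of the structure and ψ^{(r)} is r-local, meaning its quantifiers range only over the ball of radius r around x_i. The non-elementary lower bounds discussed above concern the cost of computing such an equivalent Boolean combination from a given formula.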
