1. Regularitäten der deutschen Orthographie und ihre Deregulierung: eine computerbasierte diachrone Untersuchung zu ausgewählten Sonderbereichen der deutschen Rechtschreibung (Regularities of German orthography and their deregulation: a computer-based diachronic study of selected special areas of German spelling) / Noack, Christina. Unknown Date (has links) (PDF)
Universität Osnabrück, Diss., 2000. Contains: Vol. 1, Text; Vol. 2, Documentation.
2. Konstitution und Erosion der sowjetischen Planungsökonomie und weitere Perspektiven des postsowjetischen Wirtschaftsraums aus Sicht der Theorie der Eigentumswirtschaft (Constitution and erosion of the Soviet planned economy, and further prospects for the post-Soviet economic area from the perspective of the theory of the property-based economy) / Payandeh, Mehrdad. Unknown Date (has links) (PDF)
Universität Bremen, Diss., 2004.
3. Inconsistency- and Error-Tolerant Reasoning w.r.t. Optimal Repairs of EL⊥ Ontologies / Baader, Franz; Kriegel, Francesco; Nuradiansyah, Adrian. 12 February 2024 (has links)
Errors in knowledge bases (KBs) written in a Description Logic (DL) are usually detected when reasoning derives an inconsistency or a consequence that does not hold in the application domain modelled by the KB. Whereas classical repair approaches produce maximal subsets of the KB not implying the inconsistency or unwanted consequence, optimal repairs maximize the consequence sets. In this paper, we extend previous results on how to compute optimal repairs from the DL EL to its extension EL⊥, which in contrast to EL can express inconsistency. The problem of how to deal with inconsistency in the context of optimal repairs was addressed previously, but in a setting where the (fixed) terminological part of the KB must satisfy a restriction on cyclic dependencies. Here, we consider a setting where this restriction is not required. We also show how the notion of optimal repairs obtained this way can be used in inconsistency- and error-tolerant reasoning.
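To picture the difference between classical and optimal repairs, consider the following illustrative EL⊥ KB (our example, not one from the paper):

```latex
% Illustrative EL⊥ knowledge base (our example, not from the paper):
\mathcal{T} = \{\, A \sqsubseteq C,\;\; A \sqcap B \sqsubseteq \bot \,\}, \qquad
\mathcal{A} = \{\, A(a),\; B(a) \,\}
```

The KB is inconsistent because a is an instance of A ⊓ B. The classical repair that removes A(a) also loses the derived consequence C(a). An optimal repair may instead weaken A(a) to C(a), yielding the consistent ABox {B(a), C(a)}, which preserves strictly more consequences than the classical repair {B(a)}.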
4. Modeling biological systems with Answer Set Programming / Thiele, Sven. January 2011 (has links)
Biology has made great progress in identifying and measuring the building blocks of life. The availability of high-throughput methods in molecular biology has dramatically accelerated the growth of biological knowledge for various organisms. Advances in genomic, proteomic and metabolomic technologies allow for constructing complex models of biological systems. An increasing number of biological repositories are available on the web, incorporating thousands of biochemical reactions and genetic regulatory interactions.
Systems Biology is a recent research trend in life science that fosters a systemic view of biology. In Systems Biology one is interested in integrating the knowledge from all these different sources into models that capture the interactions of the involved entities. By studying these models one wants to understand emergent properties of the whole system, such as robustness.
However, both measurements and biological networks are prone to considerable incompleteness, heterogeneity and mutual inconsistency, which makes it highly non-trivial to draw biologically meaningful conclusions in an automated way. Therefore, we promote Answer Set Programming (ASP) as a tool for discrete modeling in Systems Biology. ASP is a declarative problem solving paradigm in which a problem is encoded as a logic program such that its answer sets represent solutions to the problem. ASP has intrinsic features to cope with incompleteness, and it offers a rich modeling language and highly efficient solving technology.
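As a minimal illustration of this paradigm (our sketch, not code from the thesis; clingo is a standard ASP solver with a Python API), the following program encodes graph 2-colouring; every answer set corresponds to one valid colouring:

```python
# Minimal ASP sketch (illustrative only): encode a problem as a logic
# program; each answer set of the program is one solution.
from clingo import Control

PROGRAM = """
node(a). node(b). node(c).
edge(a,b). edge(b,c).
colour(red). colour(green).
{ assign(N,C) : colour(C) } = 1 :- node(N).  % each node gets exactly one colour
:- edge(N,M), assign(N,C), assign(M,C).      % adjacent nodes must differ
"""

ctl = Control(["0"])                    # "0": enumerate all answer sets
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m))  # prints each valid colouring
```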
We present ASP solutions for the analysis of genetic regulatory networks: determining consistency with observed measurements and identifying minimal causes of inconsistency. We extend this approach to compute minimal repairs of model and data that restore consistency. This method allows unobserved data to be predicted even in the case of inconsistency.
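A hedged sketch of the flavour of such an encoding (heavily simplified, with invented predicate names and toy facts; the thesis' actual encodings differ in detail): gene labels must be supported by a signed influence, and a minimal set of observations is flagged as erroneous when no consistent labelling exists otherwise:

```python
from clingo import Control

ENCODING = """
% toy network: sign(X,Y,S) says X influences Y with sign S (1 or -1)
gene(g1;g2;g3).
sign(g1,g2,1). sign(g2,g3,-1).
obs(g1,1). obs(g2,1). obs(g3,1).         % observed up(1)/down(-1) labels

{ err(G) } :- obs(G,_).                  % an observation may be rejected
lab(G,V) :- obs(G,V), not err(G).        % accepted observations fix labels
1 { lab(G,1); lab(G,-1) } 1 :- gene(G).  % every gene gets exactly one label
supported(G,V) :- sign(X,G,S), lab(X,W), V = S*W.
:- lab(G,V), sign(_,G,_), not supported(G,V).  % non-input labels need support
#minimize { 1,G : err(G) }.              % minimal repair: reject fewest
"""

ctl = Control(["0"])
ctl.add("base", [], ENCODING)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("labelling/repair:", m))
```

On this toy instance the three observations are mutually inconsistent (g2 represses g3, yet both are observed up), so the optimal answer set rejects the observation on g3 and relabels it down.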
Further, we present an ASP approach to metabolic network expansion. It exploits the ease with which reachability can be characterized in ASP, together with ASP's various reasoning modes, to explore the biosynthetic capabilities of metabolic reaction networks and to generate hypotheses for extending a network.
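Again as a hedged sketch (toy facts and invented predicate names, not the thesis' encoding): reachability from seed metabolites takes a few recursive rules, and network expansion becomes a minimal choice of reactions to import from a reference database:

```python
from clingo import Control

EXPANSION = """
% draft network plus a reference database of candidate reactions (toy data)
draft(r1).  sub(r1,a). prd(r1,b).
refdb(r2).  sub(r2,b). prd(r2,c).
refdb(r3).  sub(r3,a). prd(r3,d).
seed(a). target(c).

{ add(R) : refdb(R) }.                  % guess reactions to import
active(R) :- draft(R).
active(R) :- add(R).
producible(M) :- seed(M).
producible(M) :- active(R); prd(R,M); producible(S) : sub(R,S).
:- target(T), not producible(T).        % every target must become reachable
#minimize { 1,R : add(R) }.             % smallest completion of the draft
"""

ctl = Control(["0"])
ctl.add("base", [], EXPANSION)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("hypothesis:", m))
```

Here the optimal answer set imports only r2, the single reference reaction needed to make the target c producible from the seed a.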
Finally, we present the BioASP library, a Python library which encapsulates our ASP solutions within the imperative programming paradigm. The library allows for easy integration of ASP solutions into the heterogeneous software environments that are common in Systems Biology.
5. Verification of Data-aware Business Processes in the Presence of Ontologies / Santoso, Ario. 14 November 2016 (has links) (PDF)
The interplay between data, processes and structural knowledge in modeling complex enterprise systems is a challenging task that has led to the study of combining formalisms from knowledge representation, database theory, and process management. Moreover, to ensure system correctness, formal verification also comes into play as a promising approach that offers well-established techniques. In line with this, significant results have been obtained within the research on data-aware business processes, which studies the marriage between static and dynamic aspects of a system within a unified framework. However, several limitations are still present. The formalisms for data-aware processes that have been studied typically use a simple mechanism for specifying the system dynamics. The majority of works also assume a rather simple treatment of inconsistency (i.e., rejecting inconsistent system states). Many studies in this area that consider structural domain knowledge also assume that such knowledge remains fixed along the system evolution (context-independent), which might be too restrictive. Moreover, the information model of data-aware processes sometimes relies on relatively simple structures. This can cause an abstraction gap between the high-level conceptual view that business stakeholders have and the low-level representation of information. When it comes to verification, taking all of the above aspects into account makes the problem even more challenging.
In this thesis, we investigate the verification of data-aware processes in the presence of ontologies while at the same time addressing all of the limitations above. Specifically, we provide the following contributions: (1) We propose a formal framework called Golog-KABs (GKABs), leveraging state-of-the-art formalisms for data-aware processes equipped with ontologies. GKABs enable us to specify semantically rich data-aware business processes, where the system dynamics are specified using a high-level action language inspired by the Golog programming language. (2) We propose a parametric execution semantics for GKABs that elegantly accommodates a plethora of inconsistency-aware semantics based on the well-known notion of repair, leading us to consider several variants of inconsistency-aware GKABs. (3) We enhance GKABs towards context-sensitive GKABs, which take contextual information into account during the system evolution. (4) We marry these two settings and introduce inconsistency-aware context-sensitive GKABs. (5) We introduce the so-called Alternating-GKABs, which allow for a more fine-grained analysis over the evolution of inconsistency-aware context-sensitive systems. (6) In addition to GKABs, we introduce a novel framework called Semantically-Enhanced Data-Aware Processes (SEDAPs) that, by utilizing ontologies, enables a high-level conceptual view over the evolution of the underlying system. We provide not only theoretical results but also an implementation of SEDAPs.
We also provide numerous reductions for the verification of sophisticated first-order temporal properties over all of the settings above, and show that verification can be addressed using existing techniques developed for Data-Centric Dynamic Systems (a well-established framework for data-aware processes), under suitable boundedness assumptions on the number of objects freshly introduced into the system as it evolves. Notably, all proposed GKAB extensions come with no negative impact on computational complexity.
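As a flavour of the properties that such reductions target, the following is an illustrative formula of our own devising (not one from the thesis) in a first-order variant of the μ-calculus, asserting that along every evolution of the system, each pending request is eventually processed:

```latex
% Illustrative first-order temporal property (our example):
\nu Z.\, \Bigl( \bigl( \forall x.\, \mathit{Pending}(x) \rightarrow
  \mu Y.\, ( \mathit{Processed}(x) \lor \langle - \rangle Y ) \bigr)
  \land [-] Z \Bigr)
```

Verifying such properties is delicate precisely because the quantified object x must be tracked across states, which is where the boundedness assumption on freshly introduced objects becomes essential.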
6. Creating and Maintaining Consistent Documents with Elucidative Development / Bartho, Andreas. 20 September 2016 (has links) (PDF)
Software systems usually consist of multiple artefacts, such as requirements, class diagrams, or source code. Documents, such as specifications and documentation, can also be viewed as artefacts. In practice, however, writing and updating documents is often neglected because it is expensive and brings no immediate benefit. Consequently, documents are often outdated and communicate wrong information about the software. The price is paid later when a software system must be maintained and much implicit knowledge that existed at the time of the original development has been lost.
A simple way to keep documents up to date is generation. However, not all documents can be fully generated. Usually, at least some content must be written by a human author. This handwritten content is lost if the documents must be regenerated.
In this thesis, Elucidative Development is introduced. It is an approach to create documents by partial generation. Partial generation means that some parts of the document are generated whereas others are handwritten. Elucidative Development retains manually written content when the document is regenerated. An integral part of Elucidative Development is a guidance system, which informs the author about changes in the generated content and helps him update the handwritten content.
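The mechanics of retaining handwritten content can be pictured with a generic protected-regions sketch (our simplification with an invented marker convention, not Bartho's actual tooling):

```python
import re

# Hypothetical marker convention for handwritten regions (illustrative only).
REGION = re.compile(
    r"<!-- BEGIN HANDWRITTEN (\w+) -->\n(.*?)\n<!-- END HANDWRITTEN -->",
    re.S,
)

def harvest(old_doc: str) -> dict:
    """Collect named handwritten regions from the previous document version."""
    return {name: body for name, body in REGION.findall(old_doc)}

def regenerate(template: str, generated: dict, old_doc: str) -> str:
    """Refill generated placeholders, then restore handwritten regions by name."""
    handwritten = harvest(old_doc)
    doc = template.format(**generated)          # regenerate the derived content
    def restore(match):
        name, default = match.group(1), match.group(2)
        body = handwritten.get(name, default)   # keep the old text if present
        return (f"<!-- BEGIN HANDWRITTEN {name} -->\n"
                f"{body}\n<!-- END HANDWRITTEN -->")
    return REGION.sub(restore, doc)
```

A guidance system in the sense above would additionally diff the freshly generated parts against the previous version and flag handwritten regions whose surrounding content has changed.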
Eine einfache Möglichkeit, Dokumente aktuell zu halten, ist Generierung. Allerdings können nicht alle Dokumente generiert werden. Meist muss wenigstens ein Teil von einem Menschen geschrieben werden. Dieser handgeschriebene Inhalt geht verloren, wenn das Dokument neu generiert werden muss.
In dieser Arbeit wird das Elucidative Development vorgestellt. Dabei handelt es sich um einen Ansatz zur Dokumenterzeugung mittels partieller Generierung. Das bedeutet, dass Teile eines Dokuments generiert werden und der Rest von Hand ergänzt wird. Beim Elucidative Development bleibt der handgeschriebene Inhalt bestehen, wenn das restliche Dokument neu generiert wird. Ein integraler Bestandteil von Elucidative Development ist darüber hinaus ein Hilfesystem, das den Autor über Änderungen an generiertem Inhalt informiert und ihm hilft, den handgeschriebenen Inhalt zu aktualisieren.
7. Verification of Data-aware Business Processes in the Presence of Ontologies / Santoso, Ario. 13 May 2016 (has links)
The interplay between data, processes and structural knowledge in modeling complex enterprise systems is a challenging task that has led to the study of combining formalisms from knowledge representation, database theory, and process management. Moreover, to ensure system correctness, formal verification also comes into play as a promising approach that offers well-established techniques. In line with this, significant results have been obtained within the research on data-aware business processes, which studies the marriage between static and dynamic aspects of a system within a unified framework. However, several limitations are still present. The formalisms for data-aware processes that have been studied typically use a simple mechanism for specifying the system dynamics. The majority of works also assume a rather simple treatment of inconsistency (i.e., rejecting inconsistent system states). Many studies in this area that consider structural domain knowledge also assume that such knowledge remains fixed along the system evolution (context-independent), which might be too restrictive. Moreover, the information model of data-aware processes sometimes relies on relatively simple structures. This can cause an abstraction gap between the high-level conceptual view that business stakeholders have and the low-level representation of information. When it comes to verification, taking all of the above aspects into account makes the problem even more challenging.
In this thesis, we investigate the verification of data-aware processes in the presence of ontologies while at the same time addressing all of the limitations above. Specifically, we provide the following contributions: (1) We propose a formal framework called Golog-KABs (GKABs), leveraging state-of-the-art formalisms for data-aware processes equipped with ontologies. GKABs enable us to specify semantically rich data-aware business processes, where the system dynamics are specified using a high-level action language inspired by the Golog programming language. (2) We propose a parametric execution semantics for GKABs that elegantly accommodates a plethora of inconsistency-aware semantics based on the well-known notion of repair, leading us to consider several variants of inconsistency-aware GKABs. (3) We enhance GKABs towards context-sensitive GKABs, which take contextual information into account during the system evolution. (4) We marry these two settings and introduce inconsistency-aware context-sensitive GKABs. (5) We introduce the so-called Alternating-GKABs, which allow for a more fine-grained analysis over the evolution of inconsistency-aware context-sensitive systems. (6) In addition to GKABs, we introduce a novel framework called Semantically-Enhanced Data-Aware Processes (SEDAPs) that, by utilizing ontologies, enables a high-level conceptual view over the evolution of the underlying system. We provide not only theoretical results but also an implementation of SEDAPs.
We also provide numerous reductions for the verification of sophisticated first-order temporal properties over all of the settings above, and show that verification can be addressed using existing techniques developed for Data-Centric Dynamic Systems (a well-established framework for data-aware processes), under suitable boundedness assumptions on the number of objects freshly introduced into the system as it evolves. Notably, all proposed GKAB extensions come with no negative impact on computational complexity.
8. Creating and Maintaining Consistent Documents with Elucidative Development / Bartho, Andreas. 27 May 2014 (has links)
Software systems usually consist of multiple artefacts, such as requirements, class diagrams, or source code. Documents, such as specifications and documentation, can also be viewed as artefacts. In practice, however, writing and updating documents is often neglected because it is expensive and brings no immediate benefit. Consequently, documents are often outdated and communicate wrong information about the software. The price is paid later when a software system must be maintained and much implicit knowledge that existed at the time of the original development has been lost.
A simple way to keep documents up to date is generation. However, not all documents can be fully generated. Usually, at least some content must be written by a human author. This handwritten content is lost if the documents must be regenerated.
In this thesis, Elucidative Development is introduced. It is an approach to create documents by partial generation. Partial generation means that some parts of the document are generated whereas others are handwritten. Elucidative Development retains manually written content when the document is regenerated. An integral part of Elucidative Development is a guidance system, which informs the author about changes in the generated content and helps him update the handwritten content.
Contents:
1 Introduction
1.1 Contributions
1.2 Scope of the Thesis
1.3 Organisation
2 Problem Analysis and Solution Outline
2.1 Redundancy and Inconsistency
2.2 Improving Consistency with Partial Generation
2.3 Conclusion
3 Background
3.1 Grammar-Based Modularisation
3.2 Model-Driven Software Development
3.3 Round-Trip Engineering
3.4 Conclusion
4 Elucidative Development
4.1 General Idea and Running Example
4.2 Requirements of Elucidative Development
4.3 Structure and Basic Concepts of Elucidative Documents
4.4 Presentation Layer
4.5 Guidance
4.6 Conclusion
5 Model-Driven Elucidative Development
5.1 General Idea and Running Example
5.2 Requirements of Model-Driven Elucidative Development
5.3 Structure and Basic Concepts of Elucidative Documents in Model-Driven Elucidative Development
5.4 Guidance
5.5 Conclusion
6 Extensions of Elucidative Development
6.1 Validating XML-based Elucidative Documents
6.2 Backpropagation-Based Round-Trip Engineering for Computed Text Document Fragments
6.3 Conclusion
7 Tool Support for an Elucidative Development Environment
7.1 Managing Active References
7.2 Inserting Computed Document Fragments
7.3 Caching the Computed Document Fragments
7.4 Elucidative Document Validation with Schemas
7.5 Conclusion
8 Related Work
8.1 Related Documentation Approaches
8.2 Consistency Approaches
8.3 Compound Documents
8.4 Conclusion
9 Evaluation
9.1 Creating and Maintaining the Cool Component Specification
9.2 Creating and Maintaining the UML Specification
9.3 Feasibility Studies
9.4 Conclusion
10 Conclusion