  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Interactive narrative generation using computational verb theory

Smit, Marius 24 August 2010 (has links)
Interactive narrative extends traditional story-telling techniques by enabling previously passive observers to become active participants in the narrative events that unfold. A variety of approaches have attempted to construct such interactive narrative spaces and reconcile the goals of interactivity and dramatic story-telling. With the advent of the linguistic variable in 1972, a means was established for modelling natural language words and phrases mathematically and computationally. Over the past decade, the computational verb, first introduced in 1997, has been developed as a mathematical means of modelling natural language verbs in terms of dynamic systems, and vice versa. Computational verb theory extends the initial concept of the linguistic variable beyond being able to model adjectives, nouns, and passive states, into the realm of actions as denoted by natural language verbs. This thesis presents the framework and implementation of a system that generates interactive narrative spaces from narrative text. The concept of interactive narrative is introduced and recent developments in the area of interactive narrative are discussed. Secondly, a brief history of the development of the linguistic variable and the computational verb are provided. With the context of the computational verb (interactive) narrative generation (CVTNG) system presented, the underlying theoretical principles of the system are established. The CVTNG system principles are described in terms of fuzzy set, computational verb, and constraint satisfaction theory. The fuzzy set, computational verb, and constraint satisfaction principles are organised according to a CVTNG architecture. The CVTNG architecture is then described in terms of its subsystems, structures, algorithms, and interfaces. Each CVTNG system component is related to the overall design considerations and goals. A prototype of the CVTNG system is implemented and tested against a suite of natural language sentences. 
The behaviour and performance of the CVTNG system prototype are discussed in relation to the CVTNG system’s design principles. Results are calculated and stored as variable values that are dynamically and generically associated with representational means, specifically computer graphics, to illustrate the generation of interactive narrative spaces. Plans for future work are discussed to show the immense development potential of this application. The thesis concludes that the CVTNG system provides a solid and extendable base for the intuitive generation of interactive narrative spaces from narrative text, computational verb models, and freely associated media. Copyright / Dissertation (MSc)--University of Pretoria, 2009. / Computer Science / Unrestricted
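The linguistic variable underlying this line of work can be illustrated with a small fuzzy-membership sketch. This is not code from the CVTNG system; the "speed" variable, its terms, and the triangular membership functions are hypothetical, chosen only to show how a crisp value is mapped to degrees of natural-language terms.

```python
def triangular(a, b, c):
    """Return a triangular membership function rising from a, peaking at b,
    and falling back to zero at c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Hypothetical linguistic variable "speed" with three natural-language terms.
speed_terms = {
    "slow":   triangular(-1.0, 0.0, 5.0),
    "medium": triangular(2.0, 5.0, 8.0),
    "fast":   triangular(5.0, 10.0, 16.0),
}

def fuzzify(x, terms):
    """Map a crisp value to a membership degree for each term."""
    return {name: mu(x) for name, mu in terms.items()}

degrees = fuzzify(4.0, speed_terms)
```

A crisp speed of 4.0 is thus simultaneously "slow" to degree 0.2 and "medium" to degree 0.67, which is the kind of graded state description the computational verb extends to actions.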
32

Querying semistructured data based on schema matching

Bergholz, André 24 January 2000 (has links)
Most data is still stored in files rather than in databases, a trend only reinforced by the Internet boom of the 1990s. The research field of semistructured data emerged from this observation. Semistructured data is data that is typically stored in documents and exhibits an implicit and irregular structure; HTML or BibTeX files, or genome data stored in ASCII files, are examples. Traditional database management requires design and guarantees declarativity. Neither is given in the setting of semistructured data, so a more flexible approach is needed. This thesis presents a new approach to querying semistructured data. We propose describing semistructured data by a set of partial schemata instead of attempting to define a global schema. A global schema is well suited to enabling efficient access to the data, but a global schema for semistructured data inevitably suffers from the irregularity of the data's structure: because of the many exceptions to the intended schema, a global schema quickly becomes very large and unrepresentative, giving the user a distorted picture of the data. Partial schemata, by contrast, can give a representative picture of a part of the data. Statistical methods can assess the quality of a partial schema and identify irrelevant parts of the database. A database system based on partial schemata is more flexible and reflects the degree of structure on many levels; its usability and performance increase with a higher degree of structure and with the duration of its use. Partial schemata can be obtained in two ways. First, they can be provided by a database designer. 
While it is all but impossible to model a semistructured database completely, modelling certain parts of it is feasible. Second, partial schemata can be extracted from user queries, provided the query language is designed and defined accordingly. We propose splitting a query into a ``What''-part and a ``How''-part. The ``What''-part is represented by partial schemata. Partial schemata contain rich semantic concepts, such as variable definitions and path descriptions, modelled on concepts from query languages. Variable definitions connect different parts of the database with one another; path descriptions, by permitting a certain fuzziness, help to hide the irregularity of the data's structure. Finding the places in the database that match a partial schema forms the basis of all kinds of queries. In the ``How''-part of the query, the matches found in the database are modified for the answer: parts of the found matches of the partial schema can be masked out, or the structure of the answer can be changed entirely. We examine the expressive power of our query language by mapping the operators of the relational algebra on the one hand and demonstrating the querying of XML documents on the other. We observe that finding the matches of a schema (in this thesis we simply call a partial schema a schema) is the most expensive part of query processing. We use a further level of abstraction, that of constraint satisfaction problems, to find the matches of a schema in a database. Constraint satisfaction problems form a general class of search problems for which numerous optimization algorithms and heuristics already exist. 
The basic idea is to introduce variables with associated domains and then to control, via constraints, the values that different variables can take simultaneously. In our approach, the schema is transformed into variables and the domains are formed from the database; constraints arise from the predicates, variable definitions, and path descriptions present in the schema, as well as from the schema's graph structure. Numerous optimization techniques for constraint satisfaction problems are presented in the thesis. We prove that the matches of a schema in a database can be found without search and in polynomial time if the schema is a tree, contains no variable definitions, and the requirement that an embedding be injective is dropped. Schema containment is used for optimization. Depending on the direction of the containment relationship, it can be exploited in two ways: either the search space for a new schema can be reduced, or the first matches of a new schema can be presented immediately. The entire query approach was implemented prototypically, first in a public-domain Prolog system and later in the constraint system ECLiPSe, and was tested with queries against XML documents, evaluating the effects of the various optimizations. A graphical user interface is also provided. / Most of today's data is still stored in files rather than in databases. This fact has become even more evident with the growth of the World Wide Web in the 1990s. Because of that observation, the research area of semistructured data has evolved. Semistructured data is typically stored in documents and has an irregular, partial, and implicit structure. The thesis presents a new framework for querying semistructured data. 
Traditional database management requires design and ensures declarativity. The possibilities for design are limited in the field of semistructured data; thus, a more flexible approach is needed. We argue that semistructured data should be represented by a set of partial schemata rather than by one complete schema. Because of irregularities in the data, a complete schema would be very large and not representative. Instead, partial schemata can serve as good representations of parts of the data. While finding a complete schema turns out to be difficult, a database designer may be able to provide partial schemata for the database. Also, partial schemata can be extracted from user queries if the query language is designed appropriately. We suggest splitting the notion of a query into a ``What''- and a ``How''-part. Partial schemata represent the ``What''-part. They cover semantically richer concepts than database schemata traditionally do. Among these concepts are predicates, variable definitions, and path descriptions. Schemata can be used for query optimization, but they also give users hints on the content of the database. Finding the occurrences (matches) of such a schema forms the most important part of query execution. All queries of our approach, such as the focus query or the transformation query, are based on this matching. Query execution can be optimized using knowledge about containment relationships between different schemata. Our approach and the optimization techniques are conceptually modeled and implemented as a prototype on the basis of Constraint Satisfaction Problems (CSPs). CSPs form a general class of search problems for which many techniques and heuristics exist. A CSP consists of variables that have a domain associated with them. Constraints restrict the values that variables can simultaneously take. We transform the problem of finding the matches of a schema in a database into a CSP. 
We prove that under certain conditions the matches of a schema can be found without any search and in polynomial time. For optimization purposes, the containment relationship between schemata is explored. We formulate a sufficient condition for schema containment and test it, again using CSP techniques. The containment relationship can be used in two ways, depending on the direction of the containment: it is either possible to reduce the search space when looking for matches of a schema, or it is possible to present the first few matches immediately without any search. Our approach has been implemented in the constraint system ECLiPSe and tested using XML documents.
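The matching step at the heart of this approach — finding the places in the database that correspond to a partial schema — can be sketched as a small CSP-style backtracking search: schema nodes become variables, data nodes form the domains, and schema edges become constraints. This is an illustrative toy, not the thesis prototype; the graph model, node names, and injectivity requirement are simplifying assumptions.

```python
def find_matches(schema_nodes, schema_edges, data_nodes, data_edges):
    """Enumerate injective mappings of schema nodes onto data nodes
    that preserve every schema edge (a tiny backtracking CSP search)."""
    data_edges = set(data_edges)
    matches = []

    def extend(assignment):
        if len(assignment) == len(schema_nodes):
            matches.append(dict(assignment))
            return
        var = schema_nodes[len(assignment)]
        for val in data_nodes:
            if val in assignment.values():  # injectivity of the embedding
                continue
            # Check every schema edge whose other endpoint is already assigned.
            ok = all((assignment[u], val) in data_edges
                     for (u, v) in schema_edges
                     if v == var and u in assignment)
            ok = ok and all((val, assignment[v]) in data_edges
                            for (u, v) in schema_edges
                            if u == var and v in assignment)
            if ok:
                assignment[var] = val
                extend(assignment)
                del assignment[var]

    extend({})
    return matches

# Hypothetical document graph with a "book -> title" shape.
data_nodes = ["db", "book1", "book2", "title1"]
data_edges = [("db", "book1"), ("db", "book2"), ("book1", "title1")]
matches = find_matches(["B", "T"], [("B", "T")], data_nodes, data_edges)
```

The schema edge B → T matches each of the three edges of the data graph, which is exactly the "many partial occurrences" behaviour that makes match-finding the expensive part of query processing.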
33

SAT Encodings of Finite CSPs

Nguyen, Van-Hau 27 February 2015 (has links)
Boolean satisfiability (SAT) is the problem of determining whether there exists an assignment of the Boolean variables to the truth values such that a given Boolean formula evaluates to true. SAT was the first example of an NP-complete problem. Only two decades ago, SAT was considered to be mainly of theoretical interest. Nowadays, the picture is very different: SAT solving has matured and is a successful approach for tackling a large number of applications, ranging from artificial intelligence to industrial hardware design and verification. SAT solving consists of encodings and solvers. In order to benefit from the tremendous advances in the development of solvers, one must first encode the original problems into SAT instances. These encodings should not only be easily generated, but should also be efficiently processed by SAT solvers. Furthermore, an increasing number of practical applications in computer science can be expressed as constraint satisfaction problems (CSPs). However, encoding a CSP into SAT is currently regarded as more of an art than a science, and choosing an appropriate encoding is considered as important as choosing an algorithm. Moreover, it is much easier and more efficient to benefit from highly optimized state-of-the-art SAT solvers than to develop specialized tools from scratch. Hence, finding appropriate SAT encodings of CSPs is one of the most fascinating challenges in solving problems by SAT. 
This thesis studies SAT encodings of CSPs and aims at: 1) conducting a comprehensive study of SAT encodings of CSPs by separately investigating encodings of CSP domains and constraints; 2) proposing new SAT encodings of CSP domains; 3) proposing a new SAT encoding of the at-most-one constraint, which is essential for encoding CSP variables; 4) introducing the redundant encoding and the hybrid encoding, which aim to benefit from two efficient and common SAT encodings (i.e., the sparse and order encodings) by using the channeling constraint (a term used in Constraint Programming) for SAT; and 5) revealing guidelines on how to choose an appropriate SAT encoding so that one can exploit the availability of many efficient SAT solvers to solve CSPs efficiently and effectively. Experiments show that the proposed encodings and guidelines improve on the state-of-the-art SAT encodings of CSPs.
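As a concrete illustration of the encodings under study, here is a minimal sketch of the direct (sparse) encoding of a CSP variable's domain, with the at-most-one part realized by the simple pairwise (binomial) encoding. The function names and the clause representation (lists of signed integers, as in the DIMACS convention) are our own choices for illustration, not code from the thesis.

```python
from itertools import combinations

def at_most_one_pairwise(literals):
    """Pairwise (binomial) at-most-one encoding: for every pair of
    literals, at least one must be false.  Produces n*(n-1)/2 binary
    clauses and no auxiliary variables; more compact encodings
    (sequential, commander, ...) trade clause count for new variables."""
    return [[-a, -b] for a, b in combinations(literals, 2)]

def direct_domain_encoding(var_index, domain_size):
    """Direct (sparse) encoding of one CSP variable: one Boolean per
    domain value, constrained to take exactly one value."""
    lits = [var_index * domain_size + v + 1 for v in range(domain_size)]
    clauses = [lits[:]]                    # at-least-one value
    clauses += at_most_one_pairwise(lits)  # at-most-one value
    return lits, clauses

# CSP variable x0 with a 3-value domain.
lits, clauses = direct_domain_encoding(0, 3)
```

For a 3-value domain this yields one ternary at-least-one clause and three binary at-most-one clauses, which is where the quadratic growth motivating better at-most-one encodings comes from.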
34

DATALOG WITH CONSTRAINTS: A NEW ANSWER-SET PROGRAMMING FORMALISM

East, Deborah J. 01 January 2001 (has links)
Knowledge representation and search are two fundamental areas of artificial intelligence. Knowledge representation is the area of artificial intelligence which deals with capturing, in a formal language, the properties of objects and the relationships between objects. Search is a systematic examination of all possible candidate solutions to a problem that is described as a theory in some knowledge representation formalism. We compare traditional declarative programming formalisms such as PROLOG and DATALOG with answer-set programming formalisms such as logic programming with stable model semantics. In this thesis we develop an answer-set formalism we call DC. The logic of DC is based on the logic of propositional schemata and a version of the Closed World Assumption. Two important features of the DC logic are that it supports modeling of the cardinalities of sets and Horn clauses. These two features facilitate modeling of search problems. The DC system includes an implementation of a grounder and a solver. The grounder for the DC system grounds instances of problems while retaining the cardinality-of-sets structure. The resulting theories are thus more concise. In addition, the solver for the DC system exploits the cardinality-of-sets structure to perform a more efficient search. The second feature, Horn clauses, is used when transitive closure would eliminate the need for additional variables. The semantics of the Horn clauses is retained in the grounded theories. This also results in more concise theories. Our goal in developing DC is to provide the computer science community with a system which facilitates modeling of problems, is easy to use, is efficient, and captures the class of NP-search problems. We show experimental results comparing DC to other systems. These results show that DC is always competitive with state-of-the-art answer-set programming systems and for many problems DC is more efficient.
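The role Horn clauses play for transitive closure can be illustrated with a naive least-fixpoint evaluation of the two Datalog rules defining reachability. This is a generic sketch of bottom-up Horn-rule evaluation, not the DC grounder or solver:

```python
def transitive_closure(edges):
    """Least fixpoint of the Horn rules
        path(X, Y) :- edge(X, Y).
        path(X, Z) :- edge(X, Y), path(Y, Z).
    evaluated naively: fire the rules until no new facts appear."""
    path = set(edges)
    changed = True
    while changed:
        changed = False
        derived = {(x, z)
                   for (x, y) in edges
                   for (y2, z) in path
                   if y == y2}
        if not derived <= path:
            path |= derived
            changed = True
    return path

paths = transitive_closure({(1, 2), (2, 3), (3, 4)})
```

The fixpoint adds (1, 3), (2, 4), and (1, 4) to the three base edges; no extra variables are needed beyond those in the two rules, which is the conciseness benefit the abstract refers to.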
35

Augmenting Local Search for Satisfiability

Southey, Finnegan January 2004 (has links)
This dissertation explores approaches to the satisfiability problem, focusing on local search methods. The research endeavours to better understand how and why some local search methods are effective. At the root of this understanding are a set of metrics that characterize the behaviour of local search methods. Based on this understanding, two new local search methods are proposed and tested, the first, SDF, demonstrating the value of the insights drawn from the metrics, and the second, ESG, achieving state-of-the-art performance and generalizing the approach to arbitrary 0-1 integer linear programming problems. This generality is demonstrated by applying ESG to combinatorial auction winner determination. Further augmentations to local search are proposed and examined, exploring hybrids that incorporate aspects of backtrack search methods.
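For readers unfamiliar with local search for satisfiability, a minimal WalkSAT-style loop conveys the general idea: start from a random assignment and repeatedly flip a variable from an unsatisfied clause. This is an illustrative baseline, not the SDF or ESG algorithm from the dissertation, and the parameters (noise probability p, flip budget) are arbitrary choices.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """Minimal WalkSAT-style local search over CNF clauses given as
    lists of signed integers.  With probability p flip a random
    variable of a random unsatisfied clause, otherwise flip greedily."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]  # slot 0 unused

    def satisfied(lit):
        return assign[abs(lit)] == (lit > 0)

    def unsat():
        return [c for c in clauses if not any(satisfied(l) for l in c)]

    for _ in range(max_flips):
        bad = unsat()
        if not bad:
            return assign
        clause = rng.choice(bad)
        if rng.random() < p:
            var = abs(rng.choice(clause))      # random-walk move
        else:
            def cost(v):                       # unsatisfied clauses after flipping v
                assign[v] = not assign[v]
                n = len(unsat())
                assign[v] = not assign[v]
                return n
            var = min((abs(l) for l in clause), key=cost)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3)
```

Methods like SDF and ESG refine exactly this loop, replacing the flip-selection heuristic with smoother scoring functions; the metrics studied in the dissertation characterize how such choices shape search behaviour.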
36

Exploiting parallelism in decomposition methods for constraint satisfaction

Akatov, Dmitri January 2010 (has links)
Constraint Satisfaction Problems (CSPs) are NP-complete in general; however, there are many tractable subclasses that rely on restricting the structure of their underlying hypergraphs. It is a well-known fact, for instance, that CSPs whose underlying hypergraph is acyclic are tractable. Trying to define “nearly acyclic” hypergraphs led to the definition of various hypergraph decomposition methods. An important member of this class is the hypertree decomposition method, introduced by Gottlob et al. It possesses the property that CSPs falling into this class can be solved efficiently, and that hypergraphs in this class can be recognized efficiently as well. Apart from polynomial tractability, complexity analysis has shown that both aforementioned problems lie in the low complexity class LOGCFL and are thus moreover efficiently parallelizable. A parallel algorithm has been proposed for the “evaluation problem”; however, all algorithms for the “recognition problem” presented to date are sequential. The main contribution of this dissertation is an object-oriented programming library including a task scheduler, which allows the parallelization of a whole range of computational problems fulfilling certain complexity-theoretic restrictions. This library merely requires the programmer to provide the implementation of several classes and methods representing a general alternating algorithm, while the mechanics of the task scheduler remain hidden. In particular, we use this library to create an efficient parallel algorithm which computes hypertree decompositions of a fixed width. Another result of a more theoretical nature is the definition of a new type of decomposition method, called Balanced Decompositions. Solving CSPs of bounded balanced width and recognizing such hypergraphs is only quasi-polynomial; however, both tasks are still parallelizable to a certain extent. 
A complexity-theoretic analysis leads to the definition of a new complexity class hierarchy, called the DC-hierarchy, with the first class in this hierarchy, DC1, precisely capturing the complexity of solving CSPs of bounded balanced width.
37

Transformations of representation in constraint satisfaction

Salamon, András Z. January 2013 (has links)
In this thesis I study constraint satisfaction problems or CSPs. These require determining whether values can be assigned to variables so that all constraints are satisfied. An important challenge is to identify tractable CSPs which can be solved efficiently. CSP instances have usually been grouped together by restricting either the allowed combinations of values, or the way the variables are allowed to interact. Such restrictions sometimes yield tractable CSPs. A weakness of this method is that it cannot explain why all-different constraints form a tractable CSP. In this common type of constraint, all variables must be assigned values that are different from each other. New techniques are therefore needed to explain why such CSPs can be solved efficiently. My main contribution is an investigation of such hybrid CSPs which cannot be defined with either one of these kinds of restrictions. The main technique I use is a transformation of a CSP instance to the microstructure representation. This represents an instance as a collection of sets, and a solution of the instance corresponds to an independent set in the clause structure. For the common case where all constraints involve only two variables, I show how the microstructure can be used to define CSPs that are tractable because their clause structures fall within classes of graphs for which an independent set of specified size can be found efficiently. Such tractable hereditary classes are defined by using the technique of excluded induced subgraphs, such as classes of graphs that contain neither odd cycles with five or more vertices, nor their complements. I also develop finer grained techniques, by allowing vertices of the microstructure representation to be assigned colours, and the variables to be ordered. I show that these techniques define a new tractable CSP that forbids an ordered vertex-coloured subgraph in the microstructure representation.
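The microstructure transformation for binary CSPs can be sketched by building the complement of the microstructure (the conflict graph), in which solutions correspond exactly to independent sets containing one vertex per variable. The instance below (X ≠ Y over two-value domains) is a made-up example, and the dictionary-of-predicates representation is our own simplification.

```python
from itertools import combinations

def conflict_graph(domains, constraints):
    """Complement of the microstructure of a binary CSP.
    Vertices are (variable, value) pairs; an edge joins two pairs
    that cannot occur together in any solution, either because they
    assign the same variable twice or because a constraint forbids
    the combination.  Solutions of the CSP correspond to independent
    sets with exactly one vertex per variable."""
    vertices = [(x, v) for x, dom in domains.items() for v in dom]
    edges = set()
    for (x, a), (y, b) in combinations(vertices, 2):
        if x == y:
            edges.add(((x, a), (y, b)))   # at most one value per variable
        elif (x, y) in constraints and not constraints[(x, y)](a, b):
            edges.add(((x, a), (y, b)))   # forbidden value pair
        elif (y, x) in constraints and not constraints[(y, x)](b, a):
            edges.add(((x, a), (y, b)))
    return vertices, edges

# Made-up instance: X != Y over domains {0, 1}.
verts, edges = conflict_graph(
    {"X": [0, 1], "Y": [0, 1]},
    {("X", "Y"): lambda a, b: a != b},
)
```

Here the only size-2 independent sets are {(X, 0), (Y, 1)} and {(X, 1), (Y, 0)} — precisely the two solutions of X ≠ Y — so tractable independent-set classes for the resulting graph yield tractable CSP classes.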
38

Matching events and activities by integrating behavioral aspects and label analysis

Baier, Thomas, Di Ciccio, Claudio, Mendling, Jan, Weske, Mathias 05 1900 (has links) (PDF)
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. These event data can be used to analyze the process using process mining techniques to discover the real process, measure conformance to a given process model, or to enhance existing models with performance information. Mapping the produced events to activities of a given process model is essential for conformance checking, annotation and understanding of process mining results. In order to accomplish this mapping with low manual effort, we developed a semi-automatic approach that maps events to activities using insights from behavioral analysis and label analysis. The approach extracts Declare constraints from both the log and the model to build matching constraints to efficiently reduce the number of possible mappings. These mappings are further reduced using techniques from natural language processing, which allow for a matching based on labels and external knowledge sources. The evaluation with synthetic and real-life data demonstrates the effectiveness of the approach and its robustness toward non-conforming execution logs.
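A flavor of the Declare constraints used in this matching can be given with a tiny checker for the "response" template: every occurrence of activity a must eventually be followed by an occurrence of b. The event log and activity names below are invented, and the paper's approach extracts such constraints from both the log and the model rather than checking a single template in isolation.

```python
def holds_response(log, a, b):
    """Declare 'response(a, b)' template over an event log (a list of
    traces, each a list of activity names): every occurrence of a is
    eventually followed by an occurrence of b in the same trace."""
    for trace in log:
        for i, event in enumerate(trace):
            if event == a and b not in trace[i + 1:]:
                return False
    return True

# Invented event log: three traces over made-up activity names.
log = [["register", "check", "pay"], ["register", "pay"], ["check"]]
```

On this log, response("register", "pay") holds while response("check", "pay") does not, because the third trace ends with an unanswered "check"; comparing which templates hold in the log against which hold in the model is what prunes the candidate event-to-activity mappings.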
39

Automated architecture synthesis applied to rotary-wing aircraft

Hartmann, Chris 30 January 2018 (has links)
The work presented in this thesis manuscript falls within the fields of Systems Engineering and computer-aided synthesis. A methodology supported by a software tool has been developed and is described in detail. The semi-automated synthesis process is organized into three main phases: extracting the need and transforming it into a specification of the system to be designed, a synthesis of the logical architectures, and an analysis of the physical architectures. Extracting and transforming the need is a manual step in the proposed methodology, drawing heavily on previous work in the Systems Engineering field. The objective of this sub-process is to obtain a representation of the system that is understandable to the user and interpretable by the software. The stakeholders, the life situations the system will encounter, the needs, the requirements, and the interfaces with the environment are modelled. The synthesis, or generation, of the logical architectures follows from the preceding system model: C++ code transforms the synthesis problem into mathematical expressions that are solved with an integer CSP solver. The result of this sub-process is a set of graphs, sorted by family; these graphs represent all logical architectures that are viable with respect to the connections between their sub-systems. The analysis of the physical architectures produces, for each logical architecture, a system of non-linear but non-differential physical equations for a first pre-sizing step. These systems, written as optimization problems, are then solved with a CSP solver over the reals. Finally, the architectures are ranked according to the value of a state variable common to all the alternatives. / The research work presented in this thesis is related to the Systems Engineering field and the computer-aided synthesis field. 
A methodology realized by a new software tool is developed and introduced. The synthesis process is semi-automated and is divided into three phases: the need extraction and its translation into system requirements, a logical architecture synthesis, and a physical architecture analysis. The need extraction and its translation into system requirements are highly inspired by previous work from the Systems Engineering field. Nevertheless, the objective at this step is to provide the software and the user with a unique model understandable to both. Stakeholders, life situations, needs, requirements, and interfaces with the environment are modelled. The logical architecture synthesis, or logical architecture generation, is in accordance with the models built previously; that is to say, all logical architectures are instantiations of the system requirements expressed before. A C++ code translates this model into mathematical equations solved by an integer CSP solver. The result of this part of the methodology is a set of graphs, ranked by family. These graphs are views of the logical architectures; they express all the possible links between the sub-systems of the architectures. The physical architecture analysis step is an automated equation writer. The equations are non-linear and non-differential, and they are written for each logical architecture generated at the previous step. They are used for a first pre-sizing step. These systems are then solved by a CSP solver over the real numbers through an optimization. At the end, all the feasible architectures are ranked according to a unique state variable that is common to all possible solutions.
40

Generalizing CSP-related results to infinite algebras

Olšák, Miroslav January 2019 (has links)
Recent research on constraint satisfaction problems (CSPs) on fixed finite templates has provided useful tools for computational complexity and universal algebra. However, the research mainly focused on finite relational structures, and consequently, finite algebras. We pursue a generalization of these tools and results into the domain of infinite algebras. In particular, we show that although the Maltsev condition s(r, a, r, e) = s(a, r, e, a) does not characterize Taylor algebras (i.e., algebras that satisfy a nontrivial idempotent Maltsev condition) in general, as it does in the finite case, there is another strong Maltsev condition characterizing Taylor algebras, and s(r, a, r, e) = s(a, r, e, a) characterizes another interesting broad class of algebras. We also provide a (weak) Maltsev condition for SD(∧) algebras (i.e., algebras that satisfy an idempotent Maltsev condition not satisfiable in a module). Beyond Maltsev conditions, we study loop lemmata and, in particular, reprove a well-known finite loop lemma by two different general (infinite) approaches.
