1 |
Structured Message Transport / Pooya, Shayan / Unknown Date
No description available.
|
2 |
Uncountable cofinalities of automorphism groups of linear and partial orders / Droste, Manfred; Truss, John K. / 14 December 2018
We demonstrate the uncountable cofinality of the automorphism groups of various linear and partial orders. We also relate this to the ‘Bergman’ property, and discuss cases where this may fail even though the cofinality is uncountable.
|
3 |
Efficient Pattern Search in Large, Partial-Order Data Sets / Nichols, Matthew / January 2008
The behaviour of a large, distributed system is inherently complex. One step towards making this behaviour more understandable to a user involves instrumenting the system and collecting data about its execution. We can model the data as traces (representing various sequential entities in the system such as single-threaded processes) that contain both events local to the trace and communication events involving another trace.
Visualizing this data provides a modest benefit to users as it makes basic interactions in the system clearer and, with some user effort, more complex interactions can be determined. Unfortunately, visualization by itself is not an adequate solution, especially for large numbers of events and complex interactions among traces. A search facility has the ability to make this event data more useful.
Previous work has produced frameworks and algorithms that could form the core of such a search facility; however, shortcomings in the completeness of the frameworks and in the efficiency of the algorithms left the resulting solutions inconsistent, incomplete, and inefficient.
This thesis takes steps to remedy this situation. We propose a provably complete framework for determining precedence between sets of events and propose additions to a previous pattern-specification language so it can specify a wider variety of search patterns. We improve the efficiency of the existing search algorithm, and provide a new, more efficient algorithm that processes a pattern in a fundamentally different way. Furthermore, the various proposed improvements have been implemented and are analysed empirically.
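As an illustration of precedence in partial-order trace data, here is a minimal sketch (not the thesis's own framework) of the standard vector-clock happened-before test, assuming each event already carries a vector timestamp; the event encoding and helper names are hypothetical.

```python
from typing import List, Tuple

# An event is (trace_id, vector_clock); the clock has one entry per trace,
# and entry i counts how many events of trace i causally precede this event
# (including the event itself for its own trace).
Event = Tuple[int, List[int]]

def happened_before(a: Event, b: Event) -> bool:
    """True iff event a precedes event b in the partial order.

    Standard vector-clock test: a -> b iff a's clock is component-wise
    <= b's clock and strictly smaller in at least one component.
    """
    _, ca = a
    _, cb = b
    return all(x <= y for x, y in zip(ca, cb)) and any(x < y for x, y in zip(ca, cb))

def concurrent(a: Event, b: Event) -> bool:
    """Events are concurrent when neither precedes the other."""
    return not happened_before(a, b) and not happened_before(b, a)

# Example: e1 on trace 0 precedes e2 on trace 1 (say, via a message),
# while e2 and e3 are concurrent.
e1 = (0, [1, 0])
e2 = (1, [1, 1])
e3 = (0, [2, 0])
assert happened_before(e1, e2)
assert concurrent(e2, e3)
```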
|
4 |
Slice-n-Dice Algorithm Implementation in JPF / Noonan, Eric S. / 01 July 2014
This work evaluates the effectiveness of a new verification algorithm called slice-n-dice. To provide a point of comparison, a vector-clock-based partial-order reduction (POR) was implemented. The first paper contained in this work was published in ACM SIGSOFT Software Engineering Notes and discusses the implementation of the vector clock POR. Its results show the vector clock POR performing better than the POR in Java PathFinder by at least a factor of two. The second paper discusses the implementation of slice-n-dice and compares it against other verification techniques. The results show that slice-n-dice performs better than the other verification methods, in terms of states explored and runtime, when the program contains no error or when little thread interaction is needed for the error to manifest.
|
5 |
Discovering Frequent Episodes With General Partial Orders / Achar, Avinash / 12 1900
Pattern discovery, a popular paradigm in data mining, refers to a class of techniques that try to extract unknown or interesting patterns from data. The work carried out in this thesis concerns frequent episode mining, a popular framework within pattern discovery, with applications in alarm management, fault analysis, network reconstruction, etc. The data here is in the form of a single long, time-ordered stream of events. The pattern of interest, namely the episode, is essentially a set of event-types with a partial order on it. The task is to unearth all patterns (episodes here) whose frequency exceeds a user-defined threshold, irrespective of pattern size. Most current discovery algorithms employ a level-wise, apriori-based method for mining, which adopts a breadth-first search strategy over the space of all episodes.
The episode literature has seen multiple ways of defining frequency, with each definition having its own set of merits and demerits. The main reason different frequency definitions have been proposed is that, in general, counting all occurrences of a set of episodes is computationally very expensive. The first part of the thesis gives a unified view of all the apriori-based discovery algorithms for serial episodes (associated with a total order) under these various frequencies. Specifically, the various existing counting algorithms can be viewed as minor modifications of each other. We also provide some novel proofs of correctness for some of the serial episode counting schemes, which in turn can be generalized to episodes with general partial orders. Our unified view helps us derive quantitative relationships between the different frequencies. We also discuss the anti-monotonicity properties satisfied by the various frequencies, crucial information for the candidate generation step.
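As a concrete example of one of the frequency definitions discussed, the sketch below counts non-overlapped occurrences of a serial episode with a single left-to-right scan; the stream encoding is an assumption for illustration, not the thesis's implementation.

```python
def count_nonoverlapped(stream, episode):
    """Count non-overlapped occurrences of a serial episode in an event stream.

    stream  : list of (event_type, time) pairs, assumed time-ordered
    episode : tuple of event-types, e.g. ('A', 'B', 'C'), read as A -> B -> C

    Occurrences are non-overlapped when each one starts only after the
    previous one has completed. A single left-to-right scan advances one
    automaton through the episode; each completion is counted and the
    automaton restarts.
    """
    count, pos = 0, 0
    for event_type, _ in stream:
        if event_type == episode[pos]:
            pos += 1
            if pos == len(episode):
                count += 1
                pos = 0
    return count

# Example: in A B A C B C there is one non-overlapped occurrence of
# A -> B -> C; no second occurrence can start after the first completes.
stream = [('A', 1), ('B', 2), ('A', 3), ('C', 4), ('B', 5), ('C', 6)]
assert count_nonoverlapped(stream, ('A', 'B', 'C')) == 1
```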
The second part of the thesis proposes discovery algorithms for episodes with general partial orders, for which no algorithms currently exist in the literature. The proposed discovery algorithm is apriori-based and generalizes the existing serial and parallel (associated with a trivial order) episode algorithms. It is a level-wise procedure involving the steps of candidate generation and counting at each level. In the context of general partial orders, a major problem in apriori-based discovery is to have an efficient candidate generation scheme. We present a novel candidate generation algorithm for mining episodes with general partial orders. The counting algorithm design for general partial order episodes draws ideas from the unified view of counting for serial episodes presented in the first part of the work. We formally show the correctness of the proposed candidate generation and counting steps for general partial orders. The proposed candidate generation algorithm is flexible enough to mine certain specialized classes of partial orders (satisfying what we call the maximal subepisode property), of which the serial and parallel classes of episodes are two specific instances. Our algorithm design initially restricts itself to the class of general partial order episodes called injective episodes, wherein repeated event-types are not allowed. We then generalize this to a larger class of episodes called chain episodes, where episodes can have some repeated event-types. The class of chain episodes contains all (including non-injective) serial and parallel episodes, and thus our method properly generalizes the existing methods for serial and parallel episode discovery. We also discuss some problems in extending our algorithms to episodes beyond the class of chain episodes. In addition, we demonstrate that frequency alone is not a sufficient interestingness measure for episodes with unrestricted partial orders. To address this issue, we propose an additional measure called bidirectional evidence, which, along with frequency, is found to be extremely effective in unearthing interesting patterns.
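The candidate generation scheme for general partial order episodes is the thesis's own contribution; as a flavour of level-wise, apriori-style generation only, the sketch below shows the classic join-and-prune step for parallel episodes (unordered sets of event-types), which relies on the same anti-monotonicity idea.

```python
from itertools import combinations

def generate_candidates(frequent_prev):
    """Apriori join-and-prune for parallel episodes (unordered event-type sets).

    frequent_prev : set of frozensets, the frequent episodes of size k-1
    Returns candidates of size k: unions of two frequent (k-1)-episodes that
    differ in exactly one element, kept only if every (k-1)-subset is itself
    frequent (the anti-monotonicity / downward-closure property).
    """
    candidates = set()
    for a, b in combinations(list(frequent_prev), 2):
        union = a | b
        if len(union) == len(a) + 1:  # a and b differ in exactly one element
            if all((union - {x}) in frequent_prev for x in union):
                candidates.add(union)
    return candidates

# Example: frequent 2-episodes {A,B}, {A,C}, {B,C} yield the candidate {A,B,C}.
f2 = {frozenset('AB'), frozenset('AC'), frozenset('BC')}
assert generate_candidates(f2) == {frozenset('ABC')}
```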
In the frequent episode framework, the choice of threshold is most often user-defined and arbitrary. To address this issue, the last part of the work deals with assessing the significance of partial order episodes in a statistical sense, based on ideas from classical hypothesis testing. We declare an episode to be significant if its observed frequency in the data stream is large enough to be very unlikely under a random i.i.d. model. The key step in the significance analysis involves computing the mean and variance of the time between successive occurrences of the pattern. This computation can be reformulated as solving for the mean and variance of the first visit time to a particular state in an associated Markov chain. We use a generating function approach to solve for this mean and variance. Using this and a Gaussian approximation to the frequency random variable, we can calculate a frequency threshold for any partial order episode, beyond which we infer it to be significant. Our significance analysis for general partial order episodes generalizes the existing significance analysis of serial episode patterns. We demonstrate the effectiveness of our significance thresholds on synthetic data.
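A hedged sketch of the final thresholding step: assuming the mean and variance of the inter-occurrence time have already been obtained (in the thesis, via the Markov chain and generating function analysis), a renewal central limit theorem gives an approximate Gaussian for the frequency, from which a threshold follows. The exact formula used in the thesis may differ; this only illustrates the general idea, and all numbers are made up.

```python
from math import sqrt
from statistics import NormalDist

def frequency_threshold(T, mu, sigma, alpha=0.01):
    """Frequency threshold above which an episode is declared significant.

    T         : total time span of the event stream
    mu, sigma : mean and standard deviation of the time between successive
                occurrences of the episode under the i.i.d. null model
                (in the thesis these come from the Markov chain analysis)
    By the renewal central limit theorem, the occurrence count over the
    stream is approximately Normal(T/mu, T*sigma**2/mu**3); frequencies
    above the (1 - alpha) quantile are declared significant.
    """
    mean = T / mu
    std = sqrt(T * sigma ** 2 / mu ** 3)
    return mean + NormalDist().inv_cdf(1 - alpha) * std

# Example: a stream spanning 100000 time units, with an expected gap of
# 500 units between occurrences and a standard deviation of 400 units.
print(frequency_threshold(100000, mu=500.0, sigma=400.0))  # roughly 226
```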
|
6 |
A study on shop scheduling problems (Um estudo sobre escalonamento de processos) / Zubaran, Tadeu Knewitz / January 2018
Shop scheduling is a combinatorial optimization type of problem in which we must allocate machines to jobs for specific periods of time. A set of constraints defines which schedules are valid, and we must select one that minimizes or maximizes an objective function. In this work we use the makespan, which is the time at which the last job finishes. The literature contains several studies proposing techniques to solve shop problems such as the job shop and open shop. These problems allow the steps of the production process to be either fully ordered or not ordered at all. With the increasing complexity and size of industrial applications we find, more recently, several works that propose more general shop problems to model production processes more accurately. The mixed shop, group shop and partial shop are examples of such problems. In this work we propose an iterated tabu search for the partial shop, which is a general problem that includes several other more restrictive shop problems. The most important novel components of the solver are the initial solution generator, the neighbourhood, and the lower bound for the neighbourhood. In computational experiments we show that the general partial shop solver is able to compete with, and sometimes surpass, the state-of-the-art solvers developed specifically for the partial, open, mixed and group shops. Sometimes a machine is a bottleneck in the production process and is replicated. In the literature the parallel-machine case has been included in several extensions of shop problems. In this thesis we also propose a technique to schedule the parallel machines heuristically, without including them explicitly in the representation of the problem.
We use general techniques for the non-parallel machine cases to produce a fast, state-of-the-art tabu search heuristic for the job shop with parallel machines.
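The thesis's iterated tabu search uses a specialized partial shop neighbourhood and lower bound; as a sketch of the general tabu search idea only, here is a minimal implementation on the simpler permutation flow shop, with illustrative (not tuned) parameters and a made-up instance.

```python
import random

def makespan(perm, proc):
    """Makespan of a permutation flow shop: proc[j][m] is the processing
    time of job j on machine m; jobs visit machines 0..M-1 in order."""
    M = len(proc[0])
    comp = [0.0] * M  # completion time of the most recent job on each machine
    for j in perm:
        prev = 0.0
        for m in range(M):
            prev = max(prev, comp[m]) + proc[j][m]
            comp[m] = prev
    return comp[-1]

def tabu_search(proc, iters=500, tenure=7, seed=0):
    """Minimal tabu search over job permutations using swap moves.

    Keeps a short-term memory of recently swapped job pairs (the tabu list);
    a tabu move is still accepted if it improves on the best solution found
    so far (aspiration criterion)."""
    rng = random.Random(seed)
    n = len(proc)
    current = list(range(n))
    rng.shuffle(current)
    best, best_cost = current[:], makespan(current, proc)
    tabu = {}  # (job_a, job_b) -> iteration until which this swap is tabu
    for it in range(iters):
        best_move, best_move_cost = None, float('inf')
        for i in range(n - 1):
            for j in range(i + 1, n):
                cand = current[:]
                cand[i], cand[j] = cand[j], cand[i]
                cost = makespan(cand, proc)
                is_tabu = tabu.get((current[i], current[j]), -1) > it
                if (not is_tabu or cost < best_cost) and cost < best_move_cost:
                    best_move, best_move_cost = (i, j), cost
        if best_move is None:
            break
        i, j = best_move
        tabu[(current[j], current[i])] = it + tenure  # forbid undoing the swap
        current[i], current[j] = current[j], current[i]
        if best_move_cost < best_cost:
            best, best_cost = current[:], best_move_cost
    return best, best_cost

# Tiny example: 4 jobs on 3 machines.
proc = [[3, 2, 2], [1, 4, 2], [3, 2, 3], [2, 3, 1]]
print(tabu_search(proc))
```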
|
7 |
Space-Efficient Data Structures in the Word-RAM and Bitprobe Models / Nicholson, Patrick / 06 August 2013
This thesis studies data structures in the word-RAM and bitprobe models, with an emphasis on space efficiency. In the word-RAM model of computation the space cost of a data structure is measured in terms of the number of w-bit words stored in memory, and the cost of answering a query is measured in terms of the number of read, write, and arithmetic operations that must be performed. In the bitprobe model, like the word-RAM model, the space cost is measured in terms of the number of bits stored in memory, but the query cost is measured solely in terms of the number of bit accesses, or probes, that are performed.
First, we examine the problem of succinctly representing a partially ordered set, or poset, in the word-RAM model with word size Theta(lg n) bits. A succinct representation of a combinatorial object is one that occupies space matching the information-theoretic lower bound to within lower-order terms. We show how to represent a poset on n vertices using a data structure that occupies n^2/4 + o(n^2) bits and can answer precedence (i.e., less-than) queries in constant time. Since the transitive closure of a directed acyclic graph is a poset, this implies that we can support reachability queries on an arbitrary directed graph in the same space bound. As far as we are aware, this is the first representation of an arbitrary directed graph that supports reachability queries in constant time and stores less than n choose 2 bits. We also consider several additional query operations.
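For intuition about the query interface (not the succinct n^2/4 + o(n^2)-bit encoding itself), here is a sketch that stores the full transitive closure as one bitmask per vertex, roughly n^2 bits, and answers precedence/reachability queries with a single bit test.

```python
def build_reachability(n, edges):
    """Store the transitive closure of a DAG as one bitmask per vertex:
    bit v of reach[u] is set iff u can reach v. This takes roughly n^2 bits;
    the thesis compresses the poset to n^2/4 + o(n^2) bits while keeping
    constant-time queries."""
    out = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        out[u].append(v)
        indeg[v] += 1
    # Kahn's algorithm for a topological order.
    stack = [u for u in range(n) if indeg[u] == 0]
    topo = []
    while stack:
        u = stack.pop()
        topo.append(u)
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # Fill rows in reverse topological order: each row absorbs its successors.
    reach = [0] * n
    for u in reversed(topo):
        row = 0
        for v in out[u]:
            row |= (1 << v) | reach[v]
        reach[u] = row
    return reach

def precedes(reach, u, v):
    """Precedence (reachability) query answered with a single bit test."""
    return (reach[u] >> v) & 1 == 1

# Example DAG: 0 -> 1 -> 3 and 0 -> 2.
reach = build_reachability(4, [(0, 1), (1, 3), (0, 2)])
assert precedes(reach, 0, 3) and not precedes(reach, 2, 3)
```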
Second, we examine the problem of supporting range queries on strings of n characters (or, equivalently, arrays of n elements) in the word-RAM model with word size Theta(lg n) bits. We focus on the specific problem of answering range majority queries: i.e., given a range, report the character that is the majority among those in the range, if one exists. We show that these queries can be supported in constant time using a linear-space (in words) data structure. We generalize this result in several directions, considering various frequency thresholds, geometric variants of the problem, and dynamism. These results are in stark contrast to recent work on the similar range mode problem, in which the query operation asks for the mode (i.e., most frequent) character in a given range. The current best linear-space data structures for the range mode problem take soft-Oh(n^(1/2)) time per query.
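For contrast with the constant-time structure described above, a hedged baseline: answering a range majority query by running Boyer-Moore voting over the query range, which costs time linear in the range length.

```python
def range_majority(s, i, j):
    """Return the majority character of s[i..j] (inclusive), or None.

    A character is a majority if it occurs more than (j - i + 1) / 2 times
    in the range. Boyer-Moore voting finds the only possible candidate in
    one pass and a second pass verifies it, so the cost is linear in the
    range length (unlike the constant-time structure in the thesis).
    """
    candidate, count = None, 0
    for k in range(i, j + 1):
        if count == 0:
            candidate, count = s[k], 1
        elif s[k] == candidate:
            count += 1
        else:
            count -= 1
    occurrences = sum(1 for k in range(i, j + 1) if s[k] == candidate)
    return candidate if 2 * occurrences > (j - i + 1) else None

assert range_majority("abacaba", 0, 6) == "a"   # 'a' occurs 4 times out of 7
assert range_majority("abacaba", 1, 2) is None  # "ba" has no majority
```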
Third, we examine the deterministic membership (or dictionary) problem in the bitprobe model. This problem asks us to store a set of n elements drawn from a universe [1,u] such that membership queries can always be answered in t bit probes. We present several new fully explicit results for this problem, in particular for the case when n = 2, answering an open problem posed by Radhakrishnan, Shah, and Shannigrahi [ESA 2010]. We also present a general strategy for the membership problem that can be used to solve many related fundamental problems, such as rank, counting, and emptiness queries.
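To make the bitprobe cost model concrete, here is the trivial scheme that the more refined results improve upon: the characteristic bit vector, which uses u bits of space and answers every membership query with a single probe.

```python
def store(universe_size, S):
    """Trivial bitprobe scheme: the characteristic bit vector of S.
    Uses s = u bits of space and answers membership with t = 1 probe;
    the schemes studied in the thesis use far fewer bits at the cost of
    more probes."""
    bits = [0] * universe_size
    for x in S:
        bits[x - 1] = 1  # the universe is [1, u]
    return bits

def member(bits, x):
    """A single bit probe decides membership."""
    return bits[x - 1] == 1

bits = store(10, {2, 7})
assert member(bits, 7) and not member(bits, 3)
```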
Finally, we conclude with a list of open problems and avenues for future work.
|
8 |
Ranked Retrieval in Uncertain and Probabilistic Databases / Soliman, Mohamed / January 2011
Ranking queries are widely used in data exploration, data analysis and decision making scenarios. While most of the currently proposed ranking techniques focus on deterministic data, several emerging applications involve data that are imprecise or uncertain. Ranking uncertain data raises new challenges in query semantics and processing, making conventional methods inapplicable. Furthermore, the interplay between ranking and uncertainty models introduces new dimensions for ordering query results that do not exist in the traditional settings.
This dissertation introduces new formulations and processing techniques for ranking queries on uncertain data. The formulations are based on a marriage of traditional ranking semantics with possible-worlds semantics under widely adopted uncertainty models. In particular, we focus on studying the impact of tuple-level and attribute-level uncertainty on the semantics and processing techniques of ranking queries.
Under the tuple-level uncertainty model, we introduce a processing framework leveraging the capabilities of relational database systems to recognize and handle data uncertainty in score-based ranking. The framework encapsulates a state space model and efficient search algorithms that compute query answers by lazily materializing the necessary parts of the space. Under the attribute-level uncertainty model, we give a new probabilistic ranking model, based on partial orders, to encapsulate the space of possible rankings originating from uncertainty in attribute values. We present a set of efficient query evaluation algorithms, including sampling-based techniques based on the theory of Markov chains and the Monte Carlo method, to compute query answers.
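As a rough illustration of the sampling-based direction (not the dissertation's exact algorithms), the sketch below estimates top-k probabilities under attribute-level uncertainty by plain Monte Carlo over possible worlds; the score-sampler interface is an assumption for the example.

```python
import random
from collections import Counter

def topk_probability(score_samplers, k, trials=10000, seed=0):
    """Estimate, for each tuple, the probability that it ranks in the top k
    when tuple scores are uncertain.

    score_samplers : dict mapping tuple_id -> function that, given a
                     random.Random instance, draws one score from that
                     tuple's score distribution.
    Each trial instantiates one possible world (one concrete score per
    tuple), ranks it by score, and records the top-k members; frequencies
    across trials estimate the top-k probabilities.
    """
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        world = {t: sampler(rng) for t, sampler in score_samplers.items()}
        topk = sorted(world, key=world.get, reverse=True)[:k]
        counts.update(topk)
    return {t: counts[t] / trials for t in score_samplers}

# Example: two tuples with overlapping uniform score intervals and one
# tuple with a certain score.
samplers = {
    'a': lambda rng: rng.uniform(0.6, 1.0),
    'b': lambda rng: rng.uniform(0.4, 0.8),
    'c': lambda rng: 0.5,
}
print(topk_probability(samplers, k=1))  # 'a' should win in most worlds
```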
We build on our techniques for ranking under attribute-level uncertainty to support rank join queries on uncertain data. We show how to extend current rank join methods to handle uncertainty in scoring attributes. We provide a pipelined query operator implementation of an uncertainty-aware rank join algorithm, integrated with sampling techniques to compute query answers.
|
9 |
Combinatorial Slice Theory / de Oliveira Oliveira, Mateus / January 2013
Slices are digraphs that can be composed together to form larger digraphs. In this thesis we introduce the foundations of a theory whose aim is to provide ways of defining and manipulating infinite families of combinatorial objects such as graphs, partial orders, logical equations, etc. We give special attention to objects that can be represented as sequences of slices. We have successfully applied our theory to obtain novel results in three fields: concurrency theory, combinatorics and logic. Some notable results are the following.
Concurrency theory: We prove that inclusion and emptiness of intersection of the causal behavior of bounded Petri nets are decidable. These problems had been open for almost two decades. We introduce an algorithm to transitively reduce infinite families of DAGs. This algorithm allows us to operate with partial order languages defined via distinct formalisms, such as Mazurkiewicz trace languages and message sequence chart languages.
Combinatorics: For each constant z ∈ N, we define the notion of z-topological order for digraphs, and use it as a point of connection between the monadic second order logic of graphs and directed width measures, such as directed path-width and cycle-rank. Through this connection we establish the polynomial time solvability of a large number of natural counting problems on digraphs admitting z-topological orderings.
Logic: We introduce an ordered version of equational logic. We show that the validity problem for this logic is fixed parameter tractable with respect to the depth of the proof DAG, and solvable in polynomial time with respect to several notions of width of the equations being proved. In this way we establish the polynomial time provability of equations that can be out of reach of techniques based on completion and heuristic search.
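The thesis's algorithm transitively reduces infinite families of DAGs represented by slice languages; as a finite-case illustration only, here is a sketch of transitive reduction for a single DAG.

```python
def transitive_reduction(n, edges):
    """Transitive reduction of a finite DAG on vertices 0..n-1.

    An edge (u, v) is kept only if there is no other path from u to v;
    for a DAG the result is the unique minimal digraph with the same
    reachability relation. (The thesis handles infinite families of DAGs
    via slice languages; this sketch handles one finite DAG.)
    """
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)

    def reachable(src, dst, skip_edge):
        # DFS from src to dst that may not use skip_edge directly.
        stack, seen = [src], {src}
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if (u, v) == skip_edge:
                    continue
                if v == dst:
                    return True
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False

    return [(u, v) for u, v in edges if not reachable(u, v, (u, v))]

# Example: the edge 0 -> 2 is implied by 0 -> 1 -> 2 and is removed.
assert transitive_reduction(3, [(0, 1), (1, 2), (0, 2)]) == [(0, 1), (1, 2)]
```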
|