11 |
Covering Arrays with Row Limit. Francetic, Nevena, 11 December 2012.
Covering arrays with row limit, CARLs, are a new family of combinatorial objects
which we introduce as a generalization of group divisible designs and covering arrays. In the same manner as their predecessors, CARLs have a natural application as combinatorial models for interaction test suites. A CARL(N;t,k,v:w) is an N×k array with some empty cells. A component, which is represented by a column, takes values from a v-set called the alphabet. In each row there are exactly w non-empty cells, that is, the corresponding components have an assigned value from the alphabet. The parameter w is called the row limit. Moreover, any N×t subarray contains each of the v^t distinct t-tuples of alphabet symbols at least once.
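For concreteness, the defining conditions can be checked mechanically. The Python sketch below is an illustrative helper (not part of the thesis): it represents a CARL as a list of length-k rows with None marking empty cells, and verifies both the row limit and the t-way coverage condition.

```python
from itertools import combinations, product

def is_carl(rows, t, k, v, w):
    """Check the CARL(N; t, k, v : w) conditions for a candidate array.

    Each row is a length-k tuple whose entries are symbols 0..v-1,
    with None marking an empty cell.
    """
    # Row limit: every row has exactly w non-empty cells.
    if any(sum(cell is not None for cell in row) != w for row in rows):
        return False
    # Coverage: every N x t subarray contains each of the v**t
    # t-tuples of alphabet symbols at least once.
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if any(tup not in seen for tup in product(range(v), repeat=t)):
            return False
    return True

# With w = k (no empty cells) a CARL is just an ordinary covering array;
# e.g. this 4-row binary array covers all pairs in every pair of columns.
ca = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
assert is_carl(ca, t=2, k=3, v=2, w=3)
```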
This thesis is concerned with bounds on the size of CARLs and with their construction when the row limit w(k) is a positive integer-valued function of the number of columns, k. We give a lower bound, as well as probabilistic and algorithmic upper bounds, for any CARL. Further, we find improvements on the upper bounds when w(k) ln(w(k)) = o(k) and when w(k) is a constant function. We also determine the asymptotic size of CARLs when w(k) = Θ(k) and when w(k) is constant.
Next, we study constructions of CARLs. We provide two combinatorial constructions
of CARLs, which we apply to construct families of CARLs with w(k)=ck, where c<1.
Also, we construct optimal CARLs when t=2 and w=4, and prove that there exists a constant δ such that, for any v and k≥4, an optimal CARL(2,k,v:4) differs from the lower bound by at most δ rows, with some possible exceptions.
Finally, we define a packing array with row limit, PARL(N;t,k,v:w), in the same
way as a CARL(N;t,k,v:w) with the difference that any t-tuple is contained at most
once in any N×t subarray. We find that when w(k) is a constant function, the results on
the asymptotic size of CARLs imply the results on the asymptotic size of PARLs. Also, when t=2, we consider a transformation of optimal CARLs with row limit w=3 to
optimal PARLs with w=3.
|
13 |
Covering Arrays: Algorithms and Asymptotics. January 2016.
Modern software and hardware systems are composed of a large number of components. Often different components of a system interact with each other in unforeseen and undesired ways to cause failures. Covering arrays are a useful mathematical tool for testing all possible t-way interactions among the components of a system.
The two major issues concerning covering arrays are explicit construction of a covering array, and exact or approximate determination of the covering array number---the minimum size of a covering array. Although these problems have been investigated extensively for the last couple of decades, in this thesis we present significant improvements on both of these questions using tools from the probabilistic method and randomized algorithms.
First, a series of improvements is developed on the previously known upper bounds on covering array numbers. An estimate for the discrete Stein-Lovász-Johnson bound is derived, and the Stein-Lovász-Johnson bound is improved upon using an alteration strategy. Then group actions on the set of symbols are explored to establish two asymptotic upper bounds on covering array numbers that are tighter than any of the presently known bounds.
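For orientation, the baseline for such bounds is a standard first-moment (Stein-Lovász-Johnson style) calculation, sketched here; the sharper estimates developed in the thesis refine it. If N rows are filled uniformly at random, the expected number of uncovered pairs of a t-set of columns and a t-tuple of symbols is
\[
\binom{k}{t}\, v^{t}\, \bigl(1 - v^{-t}\bigr)^{N},
\]
which falls below 1, so some array with N rows covers everything, as soon as
\[
N \;>\; \frac{\ln\binom{k}{t} + t \ln v}{\ln\!\bigl(v^{t}/(v^{t}-1)\bigr)}.
\]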
Second, an algorithmic paradigm, called the two-stage framework, is introduced for covering array construction. A number of concrete algorithms from this framework are analyzed, and it is shown that they outperform current methods in the range of parameter values that are of practical relevance. In some cases, a reduction in the number of tests by more than 50% is achieved.
Third, the Lovász local lemma is applied on covering perfect hash families to obtain an upper bound on covering array numbers that is tightest of all known bounds. This bound leads to a Moser-Tardos type algorithm that employs linear algebraic computation over finite fields to construct covering arrays. In some cases, this algorithm outperforms currently used methods by more than an 80% margin.
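For context, the symmetric form of the Lovász local lemma used in such arguments states (a standard fact, restated here for orientation): if each bad event occurs with probability at most p and is mutually independent of all but at most d of the other bad events, and if
\[
e\, p\,(d+1) \;\le\; 1,
\]
then with positive probability none of the bad events occurs. Moser-Tardos style algorithms turn this existence statement into an efficient randomized construction by repeatedly resampling the variables of violated events.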
Finally, partial covering arrays are introduced to investigate a few practically relevant relaxations of the covering requirement. Using probabilistic methods, bounds are obtained on the sizes of partial covering arrays that are significantly smaller than the corresponding bounds for covering arrays. Also, randomized algorithms are provided that construct such arrays in expected polynomial time.
|
14 |
Some Properties of Dini Derivatives. Pinkerton, Jane W., 08 1900.
The purpose of this paper is to derive certain of the fundamental properties of the Dini derivatives of an arbitrary real function. To this end it will be necessary to investigate the properties of the limits superior and inferior of real functions and to prove the Vitali Covering Theorem as well as a fundamental theorem on the metric density of arbitrary point sets.
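For reference (these are the standard definitions, not results of the paper), the four Dini derivatives of a real function f at a point x are the one-sided limits superior and inferior of its difference quotient:
\[
D^{+}f(x)=\limsup_{h\to 0^{+}}\frac{f(x+h)-f(x)}{h},\qquad
D_{+}f(x)=\liminf_{h\to 0^{+}}\frac{f(x+h)-f(x)}{h},
\]
\[
D^{-}f(x)=\limsup_{h\to 0^{-}}\frac{f(x+h)-f(x)}{h},\qquad
D_{-}f(x)=\liminf_{h\to 0^{-}}\frac{f(x+h)-f(x)}{h}.
\]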
|
15 |
Adding Limit Points to Bass-Serre Graphs of Groups. Shumway, Alexander Jin, 01 July 2018.
We give a brief overview of Bass-Serre theory and introduce a method of adding a limit point to graphs of groups. We explore a basic example of this method, and find that, while the fundamental theorem of Bass-Serre theory no longer applies in this case, we still recover a group action on a covering space of sorts, with a subgroup isomorphic to the fundamental group of our new base space with added limit point. We also quantify how much larger the fundamental group of a graph of groups becomes after this construction, and discuss the effects of adding and identifying together such limit points in more general graphs of groups. We conclude with a theorem stating that the cokernel of the map on fundamental groups induced by collapsing an arc between two limit points contains a certain fundamental group of a double cone of graphs of groups, and we conjecture that this cokernel is isomorphic to this double cone group.
|
16 |
Decomposing polygons into r-stars or alpha-bounded subpolygons. Worman, Chris, 09 August 2004.
To make computations on large data sets more efficient, algorithms will frequently divide information into smaller, more manageable pieces. This idea, for example, forms the basis of the common algorithmic approach known as Divide and Conquer.
If we wish to use this principle in planar geometric computations, however, we may require specialized techniques for decomposing our data. This is due to the fact that the data sets are typically points, lines, regions, or polygons. This motivates algorithms that can break up polygons into simpler pieces. Algorithms that perform such computations are said to compute polygon decompositions. There are many ways that we can decompose a polygon, and there are also many types of polygons that we could decompose. Both applications and theoretical interest demand algorithms for a wide variety of decomposition problems.
In this thesis we study two different polygon decomposition problems. The first problem that we study is equivalent to the Rectilinear Art Gallery problem. In this problem we seek a decomposition of a polygon into so-called r-stars. These r-stars model visibility in an orthogonal setting. We show that we can compute a certain type of decomposition, known as a Steiner cover, of a simple orthogonal polygon into r-stars in polynomial time. In the second problem, we explore the complexity of decomposing polygons into components that have an upper bound on their size. In this problem, the size of a polygon refers to the size of its bounding box. This problem is motivated by a polygon collision detection heuristic that approximates a polygon by its bounding box to determine whether an exact collision detection computation should take place. We show that it is NP-complete to decide whether a polygon that contains holes can be decomposed into a specified number of size-constrained components.
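To illustrate the collision-detection heuristic that motivates the second problem, here is a minimal Python sketch (hypothetical helper names, not code from the thesis) of the bounding-box prefilter: an exact, and more expensive, intersection test is only attempted when the boxes overlap.

```python
def bounding_box(polygon):
    """Axis-aligned bounding box of a polygon given as a list of (x, y) vertices."""
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_overlap(a, b):
    """True if two boxes (xmin, ymin, xmax, ymax) intersect."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def may_collide(poly_a, poly_b):
    """Cheap prefilter: run the exact polygon intersection test only
    when the bounding boxes overlap."""
    return boxes_overlap(bounding_box(poly_a), bounding_box(poly_b))
```

Decomposing a polygon into components whose bounding boxes are bounded in size tightens this approximation, which is why the size-constrained decomposition problem above is of practical interest.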
|
18 |
Topological Galois theory of Riemann surfaces. Zhang, Dejun, January 2020.
There is a deep analogy between the theory of covering spaces and the theory of field extensions. Indeed, for many theorems about the Galois groups of field extensions there are analogous statements for the fundamental groups of covering spaces. The purpose of this thesis is to present an expository account of the connections between these two useful concepts of algebra and geometry.
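The analogy in question is the classical Galois correspondence for covering spaces, recalled here for orientation (for a suitably nice base space X, i.e. path-connected, locally path-connected and semilocally simply connected): isomorphism classes of connected pointed coverings p : (Y, y0) → (X, x0) correspond to the subgroups
\[
H \;=\; p_{*}\pi_{1}(Y, y_{0}) \;\le\; \pi_{1}(X, x_{0}),
\]
with regular (normal) coverings corresponding to normal subgroups, and with the deck transformation group playing the role of the Galois group:
\[
\operatorname{Deck}(Y/X) \;\cong\; N_{\pi_{1}(X,x_{0})}(H)\,/\,H .
\]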
|
19 |
Neural computing for minimum set covering and gate-packing problems. Chang, Engder, January 1993.
No description available.
|
20 |
Notes on Intersective Polynomials. Gegner, Ethan, 27 August 2018.
No description available.
|