1 |
Binary Consecutive Covering Arrays
Godbole, Anant P., Koutras, M. V., Milienos, F. S. 01 June 2011
A k × n array with entries from a q-letter alphabet is called a t-covering array if each t × n submatrix contains, among its columns, each of the q^t different words of length t that can be produced by the q letters. In the present article we use a probabilistic approach, based on an appropriate Markov chain embedding technique, to study a t-covering problem where, instead of looking at all possible t × n submatrices, we consider only the t × n submatrices whose rows are consecutive rows of the original k × n array. Moreover, an exact formula is established for the probability distribution function of the random variable that enumerates the number of deficient submatrices (i.e., submatrices missing at least one word among their columns) in the case of a k × n binary matrix (q = 2) obtained by realizing kn Bernoulli variables.
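The consecutive-rows variant above can be made concrete with a short check. This is a sketch only (the function and array names are illustrative); the article's exact formula concerns the distribution of this count over random arrays, which is not computed here.

```python
from itertools import product

def deficient_consecutive_submatrices(arr, t, q=2):
    """Count windows of t consecutive rows whose t x n submatrix misses
    at least one of the q**t possible length-t words among its columns."""
    k, n = len(arr), len(arr[0])
    words = set(product(range(q), repeat=t))
    deficient = 0
    for top in range(k - t + 1):
        cols = {tuple(arr[top + i][j] for i in range(t)) for j in range(n)}
        if cols != words:  # some word never appears among these columns
            deficient += 1
    return deficient

# Every pair of consecutive rows of this 3 x 4 binary array shows all
# four words 00, 01, 10, 11 among its columns, so no window is deficient.
arr = [[0, 0, 1, 1],
       [0, 1, 0, 1],
       [0, 1, 1, 0]]
print(deficient_consecutive_submatrices(arr, t=2))  # 0
```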
|
2 |
Generating Mixed-Level Covering Arrays of Lambda = 2 and Test Prioritization
January 2015
abstract: In software testing, components are tested individually to make sure each performs as expected. The next step is to confirm that two or more components are able to work together. This stage of testing is often difficult because there can be numerous configurations between just two components.
Covering arrays are one way to ensure a set of tests will cover every possible configuration at least once. However, on systems with many settings, it is computationally intensive to run every possible test. Test prioritization methods can identify tests of greater importance. This concept of test prioritization can help determine which tests can be removed with minimal impact to the overall testing of the system.
This thesis presents three algorithms that generate covering arrays testing the interaction of every two components at least twice. These algorithms extend the functionality of an established greedy test prioritization method to ensure important components are selected in earlier tests. The algorithms are tested on various inputs, and the results reveal that on average the resulting covering arrays are two-fifths to one-half smaller than a covering array generated through brute force. / Dissertation/Thesis / Masters Thesis Computer Science 2015
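The lambda = 2 requirement can be sketched with a plain greedy construction. This is an assumption-laden illustration, not the thesis's algorithms: it only enforces that every value pair of every two components appears in at least two rows, and it does not fold in the prioritization step described above (rows may even repeat in this naive version).

```python
from itertools import combinations, product

def greedy_pairwise(levels, lam=2):
    """Greedily build a test suite in which every value pair of every
    two components appears in at least `lam` rows (lambda = lam)."""
    k = len(levels)
    need = {}  # (col_i, col_j, val_i, val_j) -> occurrences still required
    for i, j in combinations(range(k), 2):
        for vi, vj in product(range(levels[i]), range(levels[j])):
            need[(i, j, vi, vj)] = lam
    candidates = list(product(*(range(v) for v in levels)))
    tests = []
    while any(c > 0 for c in need.values()):
        # pick the candidate row covering the most still-needed pairs
        def gain(row):
            return sum(min(1, need[(i, j, row[i], row[j])])
                       for i, j in combinations(range(k), 2))
        best = max(candidates, key=gain)
        tests.append(best)
        for i, j in combinations(range(k), 2):
            key = (i, j, best[i], best[j])
            if need[key] > 0:
                need[key] -= 1
    return tests

suite = greedy_pairwise([2, 2, 2])  # three binary components
print(len(suite), "tests")
```

For three binary components a counting argument already forces at least 8 rows at lambda = 2; the savings over brute force appear for systems with more components.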
|
3 |
Covering Arrays: Generation and Post-optimization
January 2015
abstract: Exhaustive testing is generally infeasible except in the smallest of systems. Research has shown that testing the interactions among small groups of components (up to 6) is generally sufficient, while retaining the capability to detect up to 99% of defects. This leads to a substantial decrease in the number of tests. Covering arrays are combinatorial objects that guarantee that every interaction is tested at least once.
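The guarantee that every interaction is tested at least once can be verified directly. A minimal sketch (names are illustrative): a strength-t coverage check over an array with v symbols per column.

```python
from itertools import combinations

def covers_all_interactions(array, t, v):
    """True iff every assignment of values to every choice of t columns
    appears in at least one row, i.e. the array is a covering array of
    strength t over a v-symbol alphabet."""
    k = len(array[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        if len(seen) < v ** t:
            return False
    return True

# The classical CA(4; 2, 3, 2): four rows suffice for three binary
# factors at strength 2, versus 2**3 = 8 exhaustive tests.
ca = [[0, 0, 0],
      [0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
print(covers_all_interactions(ca, t=2, v=2))  # True
```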
In the absence of direct constructions, forming small covering arrays is generally an expensive computational task. Algorithms to generate covering arrays have been extensively studied, yet no single algorithm provides the smallest solution. More recently, research has been directed towards a new technique called post-optimization. These algorithms take an existing covering array and attempt to reduce its size.
This thesis presents a new idea for post-optimization by representing covering arrays as graphs. Some properties of these graphs are established and the results are contrasted with existing post-optimization algorithms. The idea is then generalized to close variants of covering arrays, with surprising results which in some cases reduce the size by 30%. Applications of the method to generation and test prioritization are studied and some interesting results are reported. / Dissertation/Thesis / Masters Thesis Computer Science 2015
|
4 |
Covering Arrays: Algorithms and Asymptotics
January 2016
abstract: Modern software and hardware systems are composed of a large number of components. Often different components of a system interact with each other in unforeseen and undesired ways to cause failures. Covering arrays are a useful mathematical tool for testing all possible t-way interactions among the components of a system.
The two major issues concerning covering arrays are explicit construction of a covering array, and exact or approximate determination of the covering array number---the minimum size of a covering array. Although these problems have been investigated extensively for the last couple of decades, in this thesis we present significant improvements on both of these questions using tools from the probabilistic method and randomized algorithms.
First, a series of improvements is developed on the previously known upper bounds on covering array numbers. An estimate for the discrete Stein-Lovász-Johnson bound is derived, and the Stein-Lovász-Johnson bound is improved upon using an alteration strategy. Then group actions on the set of symbols are explored to establish two asymptotic upper bounds on covering array numbers that are tighter than any of the presently known bounds.
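The baseline these improvements start from can be evaluated numerically. The sketch below computes the standard union-bound form of the Stein-Lovász-Johnson-type bound (a well-known estimate, not the thesis's improved bounds); the function name is illustrative.

```python
from math import ceil, comb, log

def slj_bound(t, k, v):
    """Standard Stein-Lovász-Johnson-type upper bound on CAN(t, k, v):
    a random N x k array over v symbols covers every t-way interaction
    with positive probability once C(k, t) * v**t * (1 - v**-t)**N < 1,
    giving N = ceil(log(C(k, t) * v**t) / log(v**t / (v**t - 1)))."""
    q = v ** t
    return ceil(log(comb(k, t) * q) / log(q / (q - 1)))

print(slj_bound(2, 10, 2))  # far above the true CAN(2, 10, 2) = 6
print(slj_bound(3, 20, 3))
```

The looseness of this bound on small instances (19 rows versus the optimal 6 for ten binary factors at strength 2) is what leaves room for the alteration and group-action improvements described above.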
Second, an algorithmic paradigm, called the two-stage framework, is introduced for covering array construction. A number of concrete algorithms from this framework are analyzed, and it is shown that they outperform current methods in the range of parameter values that are of practical relevance. In some cases, a reduction in the number of tests by more than 50% is achieved.
Third, the Lovász local lemma is applied to covering perfect hash families to obtain an upper bound on covering array numbers that is the tightest of all known bounds. This bound leads to a Moser-Tardos type algorithm that employs linear algebraic computation over finite fields to construct covering arrays. In some cases, this algorithm outperforms currently used methods by more than an 80% margin.
Finally, partial covering arrays are introduced to investigate a few practically relevant relaxations of the covering requirement. Using probabilistic methods, bounds are obtained on partial covering arrays that are significantly smaller than for covering arrays. Also, randomized algorithms are provided that construct such arrays in expected polynomial time. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2016
|
5 |
Consecutive Covering Arrays and a New Randomness Test
Godbole, A. P., Koutras, M. V., Milienos, F. S. 01 May 2010
A k × n array with entries from an "alphabet" A = {0, 1, ..., q - 1} of size q is said to form a t-covering array (resp. orthogonal array) if each t × n submatrix of the array contains, among its columns, at least one (resp. exactly one) occurrence of each t-letter word from A (we must thus have n = q^t for an orthogonal array to exist and n ≥ q^t for a t-covering array). In this paper, we continue the agenda laid down in Godbole et al. (2009), in which the notion of consecutive covering arrays was defined and motivated; a detailed study of these arrays for the special case q = 2 has also been carried out by the same authors. In the present article we first use a Markov chain embedding method to exhibit, for general values of q, the probability distribution function of the random variable W = W_{k,n,t} defined as the number of sets of t consecutive rows for which the submatrix in question is missing at least one word. We then use the Chen-Stein method (Arratia et al., 1989, 1990) to provide upper bounds on the total variation error incurred while approximating L(W) by a Poisson distribution Po(λ) with the same mean as W. Last but not least, the Poisson approximation is used as the basis of a new statistical test to detect run-based discrepancies in an array of q-ary data.
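The mean of W is straightforward to compute for iid uniform entries: each of the k - t + 1 windows of t consecutive rows is deficient with a probability given by inclusion-exclusion. The sketch below (illustrative names; the Chen-Stein error bound itself is not computed) compares that analytic mean with a Monte Carlo estimate.

```python
import random
from math import comb

def miss_prob(n, t, q=2):
    """P(a random t x n q-ary matrix misses at least one of the q**t
    length-t words among its columns), by inclusion-exclusion."""
    m = q ** t
    return sum((-1) ** (j + 1) * comb(m, j) * (1 - j / m) ** n
               for j in range(1, m + 1))

def poisson_mean(k, n, t, q=2):
    """Mean of W: each of the k - t + 1 windows of t consecutive rows
    is deficient with probability miss_prob(n, t, q)."""
    return (k - t + 1) * miss_prob(n, t, q)

# Monte Carlo check of the mean. The windows overlap and are dependent,
# which is exactly why the Chen-Stein method is needed to bound the
# Poisson approximation error for the full distribution of W.
k, n, t, q = 20, 10, 2, 2
random.seed(1)
trials = 2000
total = 0
for _ in range(trials):
    a = [[random.randrange(q) for _ in range(n)] for _ in range(k)]
    for top in range(k - t + 1):
        cols = {tuple(a[top + i][j] for i in range(t)) for j in range(n)}
        total += len(cols) < q ** t  # count deficient windows
print(round(poisson_mean(k, n, t, q), 3), round(total / trials, 3))
```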
|
6 |
Partial Covering Arrays and a Generalized Erdős-Ko-Rado Property
Carey, Patricia A., Godbole, Anant P. 01 January 2010
The classical Erdős-Ko-Rado theorem states that if k ≤ ⌊n/2⌋ then the largest family of pairwise intersecting k-subsets of [n] = {1, ..., n} has size C(n-1, k-1). A family of k-subsets satisfying this pairwise intersecting property is called an EKR family. We generalize the EKR property and provide asymptotic lower bounds on the size of the largest family A of k-subsets of [n] that satisfies the following property: for each A, B, C in A, each of the four sets A∩B∩C, A∩B∩C^c, A∩B^c∩C, and A^c∩B∩C is non-empty. This generalized EKR (GEKR) property is motivated, generalizations are suggested, and a comparison is made with fixed-weight 3-covering arrays. Our techniques are probabilistic, and reminiscent of those used in (A. Godbole, D. Skipper, and R. Sunley, Comb Prob Computing 5 (1996), 105-118) and in the work of Roux, as cited in (N. J. A. Sloane, J Comb Designs 1 (1993), 51-63).
|
7 |
Covering Arrays for Some Equivalence Classes of Words
Cassels, Joshua, Godbole, Anant 01 August 2019
Covering arrays for words of length t over a q-letter alphabet are N × k arrays with entries from the alphabet such that, for each choice of t columns, each of the q^t words of length t appears at least once among the rows of the selected columns. We study two schemes in which not all words are considered to be different. In the first case, known as partitioning hash families, words are equivalent if they induce the same partition of a t-element set. In the second case, words of the same weight are equivalent. In both cases, we produce logarithmic upper bounds on the minimum size N of a covering array. Definitive results for small values of t, as well as general results, are provided.
|
8 |
Constructing Covering Arrays using Parallel Computing and Grid Computing
Avila George, Himer 10 September 2012
A good strategy for testing a software component involves generating the whole set of cases that participate in its operation. While testing only individual values may not be enough, exhaustive testing of all possible combinations is not always feasible. An alternative technique to accomplish this goal is called combinatorial testing, a method that can reduce cost and increase the effectiveness of software testing for many applications. It is based on constructing functional test suites of economical size that provide coverage of the most prevalent configurations. Covering arrays are combinatorial objects that have been applied to the functional testing of software components. The use of covering arrays makes it possible to test all the interactions, of a given size, among the input parameters using the minimum number of test cases.
For software testing, the fundamental problem is finding a covering array with the minimum possible number of rows, thus reducing the number of tests, the cost, and the time expended on the software testing process. Because of the importance of constructing (near-)optimal covering arrays, much research has been carried out on developing effective methods for constructing them. Several methods have been reported for constructing these combinatorial models, among them: (1) algebraic methods, (2) recursive methods, (3) greedy methods, and (4) metaheuristic methods.
Among metaheuristic methods, the application of simulated annealing in particular has provided the most accurate results in several instances to date. Simulated annealing is a general-purpose stochastic optimization method that has proved to be an effective tool for approximating globally optimal solutions to many optimization problems. However, one of its major drawbacks is the time it requires to obtain good solutions.
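A minimal simulated-annealing sketch for covering array search is shown below, assuming the usual setup: the state is an N × k array, the cost is the number of uncovered t-way interactions, and a move perturbs one cell. This is an illustration of the general technique only, not the tuned parallel algorithm developed in the thesis.

```python
import random
from itertools import combinations
from math import exp

def cost(arr, t, v):
    """Number of t-way interactions not covered by the array."""
    k = len(arr[0])
    missing = 0
    for cols in combinations(range(k), t):
        seen = {tuple(r[c] for c in cols) for r in arr}
        missing += v ** t - len(seen)
    return missing

def anneal_ca(N, t, k, v, temp=1.0, cooling=0.999, steps=20000, seed=0):
    """Search for an N x k, v-symbol covering array of strength t by
    perturbing one cell at a time; worsening moves are accepted with
    probability exp(-delta / temp), and temp decays geometrically."""
    rng = random.Random(seed)
    arr = [[rng.randrange(v) for _ in range(k)] for _ in range(N)]
    c = cost(arr, t, v)
    for _ in range(steps):
        if c == 0:
            break  # a covering array has been found
        i, j = rng.randrange(N), rng.randrange(k)
        old = arr[i][j]
        arr[i][j] = rng.randrange(v)
        new_c = cost(arr, t, v)
        if new_c <= c or rng.random() < exp((c - new_c) / temp):
            c = new_c          # accept the move
        else:
            arr[i][j] = old    # reject and restore
        temp *= cooling
    return arr, c

arr, c = anneal_ca(N=6, t=2, k=4, v=2)
print("uncovered interactions:", c)
```

Recomputing the full cost after every move, as done here for clarity, is exactly the kind of expense that motivates the incremental evaluation and parallelization studied in the thesis.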
In this thesis, we propose the development of an improved simulated annealing algorithm. / Avila George, H. (2012). Constructing Covering Arrays using Parallel Computing and Grid Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17027
|
9 |
Covering Arrays with Row Limit
Francetic, Nevena 11 December 2012
Covering arrays with row limit, CARLs, are a new family of combinatorial objects which we introduce as a generalization of group divisible designs and covering arrays. In the same manner as their predecessors, CARLs have a natural application as combinatorial models for interaction test suites. A CARL(N; t, k, v : w) is an N×k array with some empty cells. A component, which is represented by a column, takes values from a v-set called the alphabet. In each row there are exactly w non-empty cells, that is, the corresponding components have an assigned value from the alphabet. The parameter w is called the row limit. Moreover, any N×t subarray contains each of the v^t distinct t-tuples of alphabet symbols at least once.
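The definition can be checked mechanically. The sketch below encodes one reading of it, with empty cells as None and a t-tuple counted as covered only by rows filled in all t chosen columns; the thesis's formal definition governs, and the names are illustrative.

```python
from itertools import combinations, product

def is_carl(arr, t, v, w):
    """Check the CARL(N; t, k, v : w) property: every row has exactly w
    non-empty (non-None) cells, and for every choice of t columns each
    of the v**t t-tuples appears in some row that is filled in all t of
    those columns."""
    k = len(arr[0])
    if any(sum(cell is not None for cell in row) != w for row in arr):
        return False
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in arr
                if all(row[c] is not None for c in cols)}
        if len(seen) < v ** t:
            return False
    return True

# A small CARL with t = 2, k = 3, v = 2, w = 2: each row fills exactly
# one pair of columns, and each pair of columns receives all 4 tuples.
rows = []
for a, b in product(range(2), repeat=2):
    rows.append([a, b, None])
    rows.append([a, None, b])
    rows.append([None, a, b])
print(is_carl(rows, t=2, v=2, w=2))  # True
```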
This thesis is concerned with the bounds on the size and with the construction of
CARLs when the row limit w(k) is a positive integer valued function of the number
of columns, k. Here we give a lower bound, and probabilistic and algorithmic upper
bounds for any CARL. Further, we find improvements on the upper bounds when
w(k)ln(w(k)) = o(k) and when w(k) is a constant function. We also determine the
asymptotic size of CARLs when w(k) = Θ(k) and when w(k) is constant.
Next, we study constructions of CARLs. We provide two combinatorial constructions
of CARLs, which we apply to construct families of CARLs with w(k)=ck, where c<1.
Also, we construct optimal CARLs when t=2 and w=4, and prove that there exists a
constant δ, such that for any v and k≥4, an optimal CARL(2,k,v:4) differs from the lower bound by at most δ rows, with some possible exceptions.
Finally, we define a packing array with row limit, PARL(N;t,k,v:w), in the same
way as a CARL(N;t,k,v:w) with the difference that any t-tuple is contained at most
once in any N×t subarray. We find that when w(k) is a constant function, the results on
the asymptotic size of CARLs imply the results on the asymptotic size of PARLs. Also, when t=2, we consider a transformation of optimal CARLs with row limit w=3 to
optimal PARLs with w=3.
|