131 |
Efficient Reasoning Techniques for Large Scale Feature Models
Mendonca, Marcilio. January 2009 (has links)
In Software Product Lines (SPLs), a feature model can be used to represent the
similarities and differences within a family of software systems. This allows
describing the systems derived from the product line as a unique combination of
the features in the model. What makes feature models particularly appealing is
the fact that the constraints in the model prevent incompatible features from
being part of the same product.
Despite the benefits of feature models, constructing and maintaining these models
can be a laborious task especially in product lines with a large number of
features and constraints. As a result, the study of automated techniques to
reason on feature models has become an important research topic in the SPL
community in recent years. Two techniques, in particular, have significant
appeal for researchers: SAT solvers and Binary Decision Diagrams (BDDs). Each
technique has been applied successfully for over four decades now to tackle
many practical combinatorial problems in various domains. Currently, several
approaches have proposed the compilation of feature models to specific logic
representations to enable the use of SAT solvers and BDDs.
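To make the compilation step concrete, here is a minimal illustrative sketch (not the tooling developed in the thesis) of how a tiny, hypothetical feature model can be expressed as propositional clauses and its valid products enumerated by brute force. All feature names and helper functions below are invented for illustration; in practice the clauses would be handed to a SAT solver or compiled into a BDD, which is exactly the compilation step described above.

```python
from itertools import product

# Hypothetical feature model: root 'car' with a mandatory 'engine', an optional
# 'gps', an optional 'basic_radio', and a constraint that gps excludes basic_radio.
FEATURES = ["car", "engine", "gps", "basic_radio"]

# A common propositional encoding of such constraints, one clause per line
# (a literal is a feature name; a leading '-' means the feature is not selected).
CLAUSES = [
    ["car"],                                  # the root is always selected
    ["-engine", "car"], ["-car", "engine"],   # mandatory child: engine <-> car
    ["-gps", "car"],                          # optional child:  gps -> car
    ["-basic_radio", "car"],                  # optional child:  basic_radio -> car
    ["-gps", "-basic_radio"],                 # excludes: not (gps and basic_radio)
]

def holds(assignment, clause):
    """True if the assignment (feature -> bool) satisfies a single clause."""
    return any(assignment[lit.lstrip("-")] != lit.startswith("-") for lit in clause)

# Brute-force enumeration stands in for a SAT solver or a BDD here: every
# satisfying assignment corresponds to one valid product of the product line.
for values in product([False, True], repeat=len(FEATURES)):
    assignment = dict(zip(FEATURES, values))
    if all(holds(assignment, clause) for clause in CLAUSES):
        print(sorted(f for f, selected in assignment.items() if selected))
```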
In this thesis, we argue that several critical issues related to the use of SAT
solvers and BDDs have been consistently neglected. For instance, satisfiability
is a well-known NP-complete problem which means that, in theory, a SAT solver
might be unable to check the satisfiability of a feature model in a feasible
amount of time. Similarly, it is widely known that the size of BDDs can become
intractable for large models. At the same time, we currently do not know
precisely whether these are real issues when feature models, especially large
ones, are compiled to SAT and BDD representations.
Therefore, in our research we provide a significant step forward in the
state-of-the-art by examining deeply many relevant properties of the feature
modeling domain and the mechanics of SAT solvers and BDDs and the sensitive
issues related to these techniques when applied in that domain. Specifically, we
provide more accurate explanations for the space and/or time (in)tractability of
these techniques in the feature modeling domain, and enhance the algorithmic
performance of these techniques for reasoning on feature models. The
contributions of our work include the proposal of novel heuristics to reduce the
size of BDDs compiled from feature models, several insights on the construction
of efficient domain-specific reasoning algorithms for feature models, and
empirical studies to evaluate the efficiency of SAT solvers in handling very
large feature models.
|
133 |
Exploiting Structure in Backtracking Algorithms for Propositional and Probabilistic Reasoning
Li, Wei. January 2010 (has links)
Boolean propositional satisfiability (SAT) and probabilistic reasoning represent two core problems in AI. Backtracking-based algorithms have been applied to both. In this thesis, I investigate structure-based techniques for solving real-world SAT and Bayesian network instances, such as those arising in software testing and medical diagnosis.
When solving a SAT instance using backtracking search, a sequence of decisions must be made as to which variable to branch on or instantiate next. Real-world problems are often amenable to a divide-and-conquer strategy in which the original instance is decomposed into independent sub-problems. Existing decomposition techniques are based on pre-processing the static structure of the original problem. I propose a dynamic decomposition method based on hypergraph separators. Integrating this dynamic separator decomposition into the variable ordering of a modern SAT solver leads to speedups on large real-world SAT problems.
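As a simplified illustration of why decomposition pays off (using plain connected components of the variable-interaction structure rather than the hypergraph separators proposed in the thesis), the following sketch splits a residual, hypothetical clause set into variable-disjoint groups that could be solved independently.

```python
def components(clauses):
    """Group clauses (lists of DIMACS-style integer literals) into
    variable-disjoint components using a small union-find over variables."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for clause in clauses:
        variables = [abs(lit) for lit in clause]
        for v in variables[1:]:
            parent[find(v)] = find(variables[0])

    groups = {}
    for clause in clauses:
        groups.setdefault(find(abs(clause[0])), []).append(clause)
    return list(groups.values())

# Hypothetical residual formula after branching: the clauses over {1, 2} no longer
# interact with those over {3, 4}, so each group can be solved on its own.
print(components([[1, 2], [-1, 2], [3, -4], [-3, 4]]))
```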
Encoding a Bayesian network into a CNF formula and then performing weighted model counting is an effective method for exact probabilistic inference. I present two encodings that improve this approach for noisy-OR and noisy-MAX relations. In our experiments, the new encodings are more space-efficient and speed up the previous best approaches by more than two orders of magnitude.
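The counting step itself can be illustrated at toy scale. The sketch below is not the noisy-OR/noisy-MAX encoding presented in the thesis; it only shows how a weighted model count over a CNF recovers a probability, here for two independent events. Full Bayesian-network encodings additionally introduce indicator and parameter variables for the conditional probability tables.

```python
from itertools import product

def weighted_model_count(n_vars, clauses, weight):
    """Sum, over every satisfying assignment of the CNF, of the product of the
    weights of the literals made true by that assignment."""
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        assignment = {v: bits[v - 1] for v in range(1, n_vars + 1)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            w = 1.0
            for v, value in assignment.items():
                w *= weight[v] if value else weight[-v]
            total += w
    return total

# Toy query over two independent events A (variable 1, P = 0.3) and B (variable 2,
# P = 0.2): the weighted count of the single clause (A or B) is 1 - 0.7*0.8 = 0.44.
weights = {1: 0.3, -1: 0.7, 2: 0.2, -2: 0.8}
print(weighted_model_count(2, [[1, 2]], weights))   # 0.44 (up to float rounding)
```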
The ability to solve similar problems incrementally is critical for many probabilistic
reasoning problems. My aim is to exploit the similarity of these instances by
forwarding structural knowledge learned during the analysis of one instance to the
next instance in the sequence. I propose dynamic model counting and extend the dynamic
decomposition and caching technique to multiple runs on a series of problems
with similar structure. This allows us to perform Bayesian inference incrementally as
the evidence, parameter, and structure of the network change. Experimental results
show that my approach yields significant improvements over previous model counting
approaches on multiple challenging Bayesian network instances.
|
134 |
Systematic testing using test summaries: effective and efficient testing of relational applications
Abdul Khalek, Shadi. 31 January 2012 (has links)
This dissertation presents a novel methodology based on test
summaries, which characterize desired tests as constraints written in
a mixed imperative and declarative notation, for automated
systematic testing of relational applications, such as relational
database engines. The methodology has at its basis two novel
techniques for effective and efficient testing: (1) mixed-constraint solving, which provides systematic generation of
inputs characterized by mixed constraints using translations among
different data domains; and (2) clustered test execution, which
optimizes execution of test suites by leveraging similarities in
execution traces of different tests using abstract-level undo
operations, which allow common segments of partial traces to be
executed only once and the execution results to be shared across those
tests.
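A rough sketch of the trace-sharing idea behind clustered test execution is given below; the tests, the dictionary-based "database" state, and the snapshot/restore step standing in for abstract-level undo operations are all hypothetical simplifications.

```python
import copy
from collections import defaultdict

def run_clustered(tests):
    """tests: list of (shared_prefix, suffix), where each step is a callable that
    mutates a dictionary standing in for database state. The shared prefix of a
    cluster is executed once; each suffix then runs from a restored snapshot,
    which plays the role of an abstract-level undo back to the common trace."""
    clusters = defaultdict(list)
    for prefix, suffix in tests:
        clusters[tuple(prefix)].append(suffix)

    results = []
    for prefix, suffixes in clusters.items():
        state = {}
        for step in prefix:                   # common segment executed only once
            step(state)
        snapshot = copy.deepcopy(state)
        for suffix in suffixes:
            state = copy.deepcopy(snapshot)   # undo to the shared partial trace
            for step in suffix:
                step(state)
            results.append(state)
    return results

# Two hypothetical tests that share the same table-creation step.
create_table = lambda s: s.update(table=[])
test1 = ([create_table], [lambda s: s["table"].append(1)])
test2 = ([create_table], [lambda s: s["table"].append(2)])
print(run_clustered([test1, test2]))   # [{'table': [1]}, {'table': [2]}]
```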
A prototype embodiment of the methodology enables a novel approach for systematic testing of commonly used database engines, where test summaries describe (1) input SQL queries, (2) input database tables, and (3) expected output of query execution.
An experimental evaluation using the prototype demonstrates its efficacy in systematic testing of relational applications, including Oracle 11g, and finding bugs in
them.
|
135 |
An investigation of ethnic and gender intercept bias in the SAT's prediction of college freshman academic performance
Wynne, Wesley David. 28 August 2008 (links)
Not available.
|
137 |
ON SIMPLE BUT HARD RANDOM INSTANCES OF PROPOSITIONAL THEORIES AND LOGIC PROGRAMS
Namasivayam, Gayathri. 01 January 2011 (links)
In the last decade, Answer Set Programming (ASP) and Satisfiability (SAT) have been used to solve combinatorial search problems and practical applications in which they arise. In each of these formalisms, a tool called a solver is used to solve problems. A solver takes as input a specification of the problem – a logic program in the case of ASP, and a CNF theory for SAT – and produces as output a solution to the problem. Designing fast solvers is important for the success of this general-purpose approach to solving search problems. Classes of instances that pose challenges to solvers can help in this task.
In this dissertation we create challenging yet simple benchmarks for existing solvers in ASP and SAT. We do so by providing models of simple logic programs as well as models of simple CNF theories. We then randomly generate logic programs and CNF theories from these models. Our experimental results show that computing answer sets of random logic programs, as well as models of random CNF theories, with carefully chosen parameters is hard for existing solvers.
We generate random logic programs with 2-literal rules, and our experiments show that it is hard for ASP solvers to obtain answer sets of purely negative and constraint-free programs, indicating the importance of these programs in the development of ASP solvers. An easy-hard-easy pattern emerges as we compute the average number of choice points generated by ASP solvers on randomly generated 2-literal programs with an increasing number of rules, and we provide an explanation for the emergence of this pattern. We also theoretically study the probability of existence of an answer set for sparse and dense 2-literal programs.
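For purely negative, constraint-free 2-literal programs (rules of the form a :- not b), answer sets can be checked directly from the reduct definition: a candidate set is an answer set exactly when it equals the heads of the rules whose negated body atoms it does not contain. A small illustrative checker over hypothetical programs might look as follows.

```python
from itertools import chain, combinations

def is_answer_set(candidate, rules):
    """rules are (head, body_atom) pairs read as 'head :- not body_atom'.
    For a purely negative program the reduct w.r.t. the candidate is a set of
    facts, so the candidate is an answer set iff it equals exactly those facts."""
    reduct_facts = {head for head, body in rules if body not in candidate}
    return candidate == reduct_facts

def answer_sets(atoms, rules):
    subsets = chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_answer_set(set(s), rules)]

# Hypothetical purely negative 2-literal programs:
print(answer_sets("ab", [("a", "b"), ("b", "a")]))   # [{'a'}, {'b'}] -- two answer sets
print(answer_sets("c", [("c", "c")]))                # []             -- no answer set
```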
We consider simple classes of mixed Horn formulas with purely positive 2-literal clauses and purely negated Horn clauses. First, we consider a class of mixed Horn formulas in which each formula has m 2-literal clauses and k-literal negated Horn clauses. We show that formulas generated from the phase-transition region of this class are hard for complete SAT solvers. The second class of mixed Horn formulas we consider is obtained from the completion of a certain class of random logic programs. We show the appearance of an easy-hard-easy pattern as we generate formulas from this class with increasing numbers of clauses, and that the formulas generated in the hard region can be used as benchmarks for testing incomplete SAT solvers.
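A possible generator for the first class, assuming "purely negated Horn clauses" are read as clauses of only negative literals and using DIMACS-style integer literals, could be sketched as below; the parameter values are placeholders, not the phase-transition settings studied in the dissertation.

```python
import random

def random_mixed_horn(n_vars, m_positive, n_negative, k):
    """Assumed reading: m_positive clauses with two positive literals plus
    n_negative clauses with k negative literals, over variables 1..n_vars."""
    clauses = []
    for _ in range(m_positive):
        x, y = random.sample(range(1, n_vars + 1), 2)
        clauses.append([x, y])
    for _ in range(n_negative):
        clauses.append([-v for v in random.sample(range(1, n_vars + 1), k)])
    return clauses

random.seed(0)
print(random_mixed_horn(n_vars=6, m_positive=4, n_negative=2, k=3))
```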
|
138 |
Implementation methodology for using concurrent and collaborative approaches for theorem provers, with case studies of SAT and LCF style provers
G, Sriipriya. January 2013 (links)
Theorem provers are faced with the challenges of size and complexity, fueled by the increasing range of applications. The use of concurrent/distributed programming paradigms to engineer better theorem provers merits serious investigation, as it provides more processing power and opportunities for implementing novel approaches to address theorem proving tasks hitherto infeasible in a sequential setting. Investigation of these opportunities for two diverse theorem prover settings, with an emphasis on desirable implementation criteria, is the core focus of this thesis. Concurrent programming is notoriously error prone and hard to debug and evaluate. Thus, implementation approaches which promote easy prototyping, portability, incremental development and effective isolation of design and implementation can greatly aid the enterprise of experimentation with the application of concurrent techniques to specific theorem proving tasks.
In this thesis, we have explored one such approach by using Alice ML, a functional programming language with support for concurrency and distribution, to implement the prototypes, and have used programming abstractions to encapsulate the implementations of the concurrent techniques used. The utility of this approach is illustrated via proof-of-concept prototypes of concurrent systems for two diverse case studies of theorem proving, the propositional satisfiability problem (SAT) and LCF style (first-order) theorem proving, addressing some previously unexplored parallelisation opportunities for each, as follows.
SAT: We have developed a novel hybrid approach for SAT and implemented a prototype for it: DPLL-Stalmarck. It uses two complementary algorithms for SAT, DPLL and Stalmarck's. The two solvers run asynchronously, and dynamic information exchange is used for co-operative solving. Interaction of the solvers has been encapsulated as a programming abstraction. Compared to the standalone DPLL solver, DPLL-Stalmarck shows significant performance gains for two of the three problem classes considered and comparable behaviour otherwise. As an exploratory research effort, we have developed a novel algorithm, Concurrent Stalmarck, by applying concurrent techniques to the Stalmarck algorithm, and have implemented a proof-of-concept prototype. Implementation of the saturation technique of the Stalmarck algorithm in a parallel setting, as done in Concurrent Stalmarck, has been encapsulated as a programming abstraction.
LCF: Provision of programmable concurrent primitives enables customisation of concurrent techniques to specific theorem proving scenarios. In this case study, we have developed a multilayered approach to support programmable, sound extensions for an LCF prover: use programming abstractions to implement the concurrent techniques; use these to develop novel tacticals (control structures to apply tactics) incorporating concurrent techniques; and use these to develop novel proof search procedures. This approach has been implemented in a prototypical LCF style first-order prover, using Alice ML. The new tacticals developed are: fastest-first; distributed composition; crossTalk, a novel tactic which uses dynamic, collaborative information exchange to handle unification across multiple sub-goals with shared meta-variables; and a new tactic performing simultaneous proof-refutation attempts on propositional (sub-)goals by invoking an external SAT solver (from the SAT case study) as a counter-example finder.
Examples of concrete theorem proving scenarios are provided, demonstrating the utility of these extensions. Synthesis of a variety of automatic proof search procedures has been demonstrated, illustrating the scope of programmability and customisation, enabled by our multilayered approach.
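The cooperative-solving pattern, one worker deriving information and forwarding it to another through a channel, can be sketched in a few lines of Python. This illustrates only the information-exchange idea, not the Alice ML abstractions, the DPLL-Stalmarck prototype, or Stalmarck's saturation procedure; the formula, thread structure, and sentinel protocol are invented for the example.

```python
import threading
import queue
from itertools import product

# Hypothetical CNF in DIMACS-style integer literals over variables 1..5.
CLAUSES = [[1], [-1, 2], [-2, 3], [-3, -4, 5]]
N_VARS = 5

shared_units = queue.Queue()   # the co-operation channel between the two workers

def propagator():
    """Worker 1: derive forced literals by unit propagation and publish them."""
    forced = set()
    changed = True
    while changed:
        changed = False
        for clause in CLAUSES:
            if any(lit in forced for lit in clause):
                continue                              # clause already satisfied
            open_lits = [lit for lit in clause if -lit not in forced]
            if len(open_lits) == 1:
                forced.add(open_lits[0])
                shared_units.put(open_lits[0])        # share the derived unit
                changed = True
    shared_units.put(None)                            # sentinel: nothing more to share

def searcher(result):
    """Worker 2: exhaustive search that respects every literal it receives."""
    fixed = set()
    while (lit := shared_units.get()) is not None:
        fixed.add(lit)
    for bits in product([False, True], repeat=N_VARS):
        assignment = {v: bits[v - 1] for v in range(1, N_VARS + 1)}
        if any(assignment[abs(l)] != (l > 0) for l in fixed):
            continue                                  # prune: contradicts shared units
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in CLAUSES):
            result.put(assignment)
            return
    result.put(None)

result = queue.Queue()
workers = [threading.Thread(target=propagator),
           threading.Thread(target=searcher, args=(result,))]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(result.get())
```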
|
139 |
The effects of cost, income, and socio-economic variables on student scholastic aptitude scores
Adams, Edward R. January 1994 (links)
The purpose of the study was to determine, at the school district level, what relationships exist, if any, between Indiana school corporation SAT mean scores (a limited output measure of student achievement and aptitude) and six intervening input variables: (1) operating expenditures per pupil, (2) instructional expenditures per pupil, (3) per capita income, (4) corporation enrollment size, (5) degree of population density, and (6) at-risk index characteristics.
The study provided a review of the research and related literature on relationships between high school SAT scores, public school expenditures, and other intervening input variables, and addressed questions about the relationships and effects of expenditures and other input variables upon SAT scores. The need to examine individual district variation in SAT performance was motivated by the influence that comparisons of SAT scores have on public perception of education and the resultant impact on state and local education policy. A principal goal of the study was to add to the understanding of the relationships between public expenditures directed to education, specific demographic and compositional student characteristics, and education performance as measured in SAT mean scores.
The study incorporated Pearson product-moment correlations and stepwise multiple regression procedures to determine the extent to which variation in outputs is accounted for by variation in the specific inputs. Initially, a Pearson correlation coefficient was calculated to test each of the six null hypotheses, with statistical significance sought in each instance at the .01 level. Stepwise multiple regressions were then used to examine the SAT output relationships with compounded variables.
The following conclusions were drawn from the findings and the summary tables reported in the study:
1. Low per capita income is associated with lower SAT scores, and higher per capita income is associated with higher SAT scores.
2. Increased performance on the SAT is not dependent upon the amount spent in total General Fund expenditures per pupil; however, an increased amount spent on instruction tends to raise SAT scores.
3. A high at-risk index is associated with lower SAT scores, whereas a low at-risk index tends to be associated with higher SAT scores.
4. Urban density does not affect SAT scores in a statistically significant manner.
5. The size of the school corporation has no relationship to SAT scores.
Overall, total General Fund expenditures were not shown to significantly affect SAT scores, although such costs were not shown to be detrimental in the multiple regression analysis. More importantly, instructional expenditures per student were demonstrated to be one of three significant factors affecting higher SAT scores. The other significant variables were poverty and high at-risk factors, which were shown to be associated with lower SAT score levels. The data and the study strongly suggest that, if school authorities, legislatures, private business, and parents continue to use SAT scores as a prime barometer of and target for educational success, we should immediately begin to compensate dramatically for the at-risk and per capita income deficits in individual students and impacted schools, and maximize financial resources directed to proven classroom instructional strategies.
If the public wishes to narrow the gap in SAT scores, then policy makers need to examine the educational-environmental liabilities of low income and single-parent homes, and the appropriate level of instructional cost which will generate acceptable SAT results. / Department of Educational Leadership
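The statistical procedure described above (correlations followed by stepwise selection of predictors) can be illustrated with a small sketch on entirely made-up data; the variable names echo three of the study's six inputs, but the numbers and coefficients are hypothetical and carry no empirical meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely hypothetical district-level data standing in for the study's variables.
n = 60
inputs = {
    "instructional_spend": rng.normal(size=n),
    "per_capita_income":   rng.normal(size=n),
    "at_risk_index":       rng.normal(size=n),
}
sat_mean = (2.0 * inputs["instructional_spend"] + 3.0 * inputs["per_capita_income"]
            - 2.5 * inputs["at_risk_index"] + rng.normal(size=n))

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    A = np.column_stack([np.ones(len(y))] + predictors)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coef
    return 1.0 - residuals.var() / y.var()

# Forward stepwise selection: add the predictor giving the largest R^2 gain each round.
selected = []
remaining = set(inputs)
while remaining:
    best = max(remaining,
               key=lambda name: r_squared([inputs[v] for v in selected + [name]], sat_mean))
    selected.append(best)
    remaining.remove(best)
    print(selected, round(r_squared([inputs[v] for v in selected], sat_mean), 3))
```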
|
140 |
A design tool for Sudoku puzzles based on interactive embedding (対話型埋込みによる数独問題の設計ツール)
KUSAKARI, Keiichirou; SAKABE, Toshiki; NISHIDA, Naoki; SAKAI, Masahiko; UMANO, Yohei (草刈 圭一郎, 坂部 俊樹, 西田 直樹, 酒井 正彦, 馬野 洋平). 12 1900 (links)
No description available.
|