1 |
Diversification and Intensification in Hybrid Metaheuristics for Constraint Satisfaction Problems. Lynden, John M., 01 January 2019.
Metaheuristics are used to find feasible solutions to hard Combinatorial Optimization Problems (COPs). Constraint Satisfaction Problems (CSPs) may be formulated as COPs, where the objective is to reduce the number of violated constraints to zero. The popular puzzle Sudoku is an NP-complete problem that has been used to study the effectiveness of metaheuristics in solving CSPs. Applying the Simulated Annealing (SA) metaheuristic to Sudoku has been shown to be a successful way to solve CSPs. However, the ‘easy-hard-easy’ phase-transition behavior frequently attributed to a certain class of CSPs makes finding a solution extremely difficult in the hard phase because of the vast search space, the small number of solutions, and a fitness landscape marked by many plateaus and local minima. Metaheuristics employ two key search mechanisms: diversification and intensification. Diversification identifies diverse promising regions of the search space and is achieved through heating/reheating; intensification finds a solution in one of these promising regions and is achieved through cooling. The terrain of the hard phase makes traversal without becoming trapped very challenging. Running the best available method, a Constraint Propagation/Depth-First Search algorithm, against 30,000 benchmark problem instances leaves 20,240 unsolved after ten runs at one minute per run; these we classify as very hard. This dissertation studies the delicate balance between diversification and intensification in the search process and offers a hybrid SA algorithm to solve very hard instances. The algorithm presents (a) a heating/reheating strategy that incorporates the lowest solution cost for diversification; (b) a more complex two-stage cooling schedule for faster intensification; (c) Constraint Programming (CP) hybridization to reduce the search space and to escape local minima; (d) a three-way-swap secondary neighborhood operator as a low-expense method of diversification. These techniques are tested individually and in hybrid combinations for a total of 11 strategies, and the effectiveness of each is evaluated by percentage solved and average best run-time to solution. In the final analysis, all strategies improve on current methods, but the most remarkable results come from the application of the “Quick Reset” technique between cooling stages.
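A minimal sketch of the diversification/intensification mechanism described above: simulated annealing with geometric cooling and a stall-triggered reheat, applied to the n-queens CSP with cost equal to the number of violated constraints. The problem choice, parameters, and reheating rule are illustrative assumptions, not the dissertation's hybrid algorithm.

```python
import math
import random

def conflicts(cols):
    """Cost of a CSP state: number of attacking queen pairs (0 = solved)."""
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def anneal(n=8, t0=2.0, alpha=0.995, stall_limit=500, max_iters=200_000, seed=1):
    rng = random.Random(seed)
    state = [rng.randrange(n) for _ in range(n)]
    cost = best = conflicts(state)
    t, stall = t0, 0
    for _ in range(max_iters):
        if cost == 0:
            return state                      # all constraints satisfied
        cand = state[:]
        cand[rng.randrange(n)] = rng.randrange(n)
        c = conflicts(cand)
        # Metropolis rule: take improvements, sometimes accept worse moves.
        if c <= cost or rng.random() < math.exp((cost - c) / t):
            state, cost = cand, c
        t *= alpha                            # cooling drives intensification
        if cost < best:
            best, stall = cost, 0
        else:
            stall += 1
        if stall > stall_limit:               # trapped on a plateau/local minimum
            t, stall = t0, 0                  # reheating drives diversification
    return None                               # budget exhausted

print(anneal())
```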
|
2 |
Interactive narrative generation using computational verb theory. Smit, Marius, 24 August 2010.
Interactive narrative extends traditional story-telling techniques by enabling previously passive observers to become active participants in the narrative events that unfold. A variety of approaches have attempted to construct such interactive narrative spaces and to reconcile the goals of interactivity and dramatic story-telling. With the advent of the linguistic variable in 1972, a means was established for modelling natural language words and phrases mathematically and computationally. Over the past decade, the computational verb, first introduced in 1997, has been developed as a mathematical means of modelling natural language verbs in terms of dynamic systems, and vice versa. Computational verb theory extends the initial concept of the linguistic variable beyond modelling adjectives, nouns, and passive states into the realm of actions as denoted by natural language verbs. This thesis presents the framework and implementation of a system that generates interactive narrative spaces from narrative text. The concept of interactive narrative is introduced and recent developments in the area are discussed, followed by a brief history of the development of the linguistic variable and the computational verb. With the context of the computational verb (interactive) narrative generation (CVTNG) system established, the underlying theoretical principles of the system are described in terms of fuzzy set, computational verb, and constraint satisfaction theory, and organised according to a CVTNG architecture. The CVTNG architecture is then described in terms of its subsystems, structures, algorithms, and interfaces, and each component is related to the overall design considerations and goals. A prototype of the CVTNG system is implemented and tested against a suite of natural language sentences, and its behaviour and performance are discussed in relation to the system's design principles. Results are calculated and stored as variable values that are dynamically and generically associated with representational means, specifically computer graphics, to illustrate the generation of interactive narrative spaces. Plans for future work show the development potential of this application. The thesis concludes that the CVTNG system provides a solid and extensible base for the intuitive generation of interactive narrative spaces from narrative text, computational verb models, and freely associated media. Dissertation (MSc), University of Pretoria, 2009. Computer Science. Unrestricted.
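As a toy illustration (not the CVTNG system's actual representation), a hypothetical computational verb 'approach' can be modelled as a discretised dynamical system, with a triangular fuzzy membership function in the spirit of the linguistic variable scoring the resulting state; all names and parameters here are invented.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function for a linguistic value."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def approach(x0, k=0.5, dt=0.1, steps=60):
    """Hypothetical verb 'approach' as a dynamical system x' = -k*x,
    discretised with Euler steps; returns the state trajectory."""
    xs, x = [x0], x0
    for _ in range(steps):
        x += dt * (-k * x)
        xs.append(x)
    return xs

trajectory = approach(10.0)
# Degree to which the final state is 'near' the target value 0:
print(round(tri(trajectory[-1], -1.0, 0.0, 1.0), 3))
```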
|
3 |
Robustness and stability in dynamic constraint satisfaction problems. Climent Aunés, Laura Isabel, 07 January 2014.
Constraint programming is a paradigm wherein relations between variables are stated in the form of constraints. It is well known that many real-life problems can be modeled as Constraint Satisfaction Problems (CSPs). Much effort has been spent on increasing the efficiency of algorithms for solving CSPs. However, many of these techniques assume that the set of variables, domains, and constraints involved in the CSP are known and fixed when the problem is modeled. This is a strong limitation because many problems come from uncertain and dynamic environments, where the original problem may evolve because of the environment, the user, or other agents. In such situations, a solution that holds for the original problem can become invalid after changes.
There are two main approaches for dealing with these situations: reactive and proactive. Reactive approaches entail re-solving the CSP after each solution loss, which is time-consuming. That is a clear disadvantage, especially when we deal with short-term changes, where solution loss is frequent. In addition, in many applications, such as on-line planning and scheduling, the delivery time of a new solution may be too long for actions to be taken on time, so a solution loss can produce several negative effects in the modeled problem. For a task-assignment production system with several machines, it could cause the shutdown of the production system, the breakage of machines, the loss of the material/object in production, etc. In a transport timetabling problem, a solution loss due to some disruption at one point may produce a delay that propagates through the entire schedule. All the negative effects stated above will probably also entail an economic loss.
In this thesis we develop several proactive approaches. Proactive approaches use knowledge about possible future changes in order to avoid or minimize their effects, and they are applied before the changes occur. Thus, our approaches search for robust solutions, which have a high probability of remaining valid after changes. Furthermore, some of our approaches also consider that solutions should be easily adaptable when they do not survive changes to the original problem. These approaches search for stable solutions, which have an alternative solution that is similar to the previous one and can therefore be used in case a value breaks.
In this context, knowledge about the uncertain and dynamic environment sometimes exists, but in many cases this information is unknown or hard to obtain. For this reason, for the majority of our approaches (specifically 3 of the 4 developed approaches), the only assumptions made about changes are those inherent in the structure of problems with ordered domains. Given this framework, and therefore the existence of a significant order over domain values, it is reasonable to assume that the original bounds of the solution space may undergo restrictive or relaxing modifications. Note that the possibility of solution loss exists only when changes to the original bounds of the solution space are restrictive. Therefore, the main objective when searching for robust solutions in this framework is to find solutions located as far away as possible from the bounds of the solution space. To meet this criterion, we propose several approaches that can be divided into enumeration-based techniques and a search algorithm. / Climent Aunés, LI. (2013). Robustness and stability in dynamic constraint satisfaction problems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34785
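The robustness criterion above, preferring solutions far from the bounds of the solution space, can be illustrated with a brute-force sketch on an invented toy problem; the thesis's enumeration-based techniques and search algorithm are more sophisticated.

```python
from itertools import product

def robust_solution(domains, constraints):
    """Among all feasible assignments over ordered domains, pick one that
    maximises the minimum distance of any value to its domain bounds, so
    restrictive boundary changes are least likely to invalidate it."""
    def margin(assign):
        return min(min(v - lo, hi - v) for v, (lo, hi) in zip(assign, domains))
    feasible = (a for a in product(*(range(lo, hi + 1) for lo, hi in domains))
                if all(c(a) for c in constraints))
    return max(feasible, key=margin, default=None)

# Invented toy problem: x0 + x1 >= 8 with x0, x1 in [0, 10]; the most robust
# solution sits well inside the bounds rather than on them.
domains = [(0, 10), (0, 10)]
constraints = [lambda a: a[0] + a[1] >= 8]
print(robust_solution(domains, constraints))   # -> (5, 5), margin 5
```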
|
4 |
Automated architecture synthesis applied to rotary-wing aircraft. Hartmann, Chris, 30 January 2018.
The research work presented in this thesis is related to the fields of Systems Engineering and computer-aided synthesis. A methodology, implemented in a new software tool, is developed and introduced. The semi-automated synthesis process is divided into three phases: need extraction and its translation into system requirements, logical architecture synthesis, and physical architecture analysis. The need extraction and its translation into system requirements are strongly inspired by previous work in the Systems Engineering field. The objective at this step is to provide the software and the user with a single model understandable to both. Stakeholders, life situations, needs, requirements, and interfaces with the environment are modeled. The logical architecture synthesis, or generation, builds on the models constructed previously: every logical architecture is an instantiation of the system requirements expressed before. A C++ code translates this model into mathematical equations solved by an integer CSP solver. The result of this step is a set of graphs, grouped by family, which represent all logical architectures viable with respect to the connections between their sub-systems. The physical architecture analysis step is an automated equation writer: for each logical architecture generated at the previous step, it writes a system of non-linear, non-differential physical equations used for a first pre-sizing step. These systems, written as optimization problems, are then solved by a CSP solver over the reals. Finally, the feasible architectures are ranked according to a state variable common to all the candidate solutions.
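A toy sketch of the logical-architecture synthesis step, with exhaustive enumeration of boolean link variables standing in for the integer CSP solver; the components and the single connection constraint are invented for illustration.

```python
from itertools import product

# Sub-systems and candidate links between them (invented for illustration).
components = ["engine", "pump", "tank"]
links = [("engine", "pump"), ("pump", "tank"), ("engine", "tank")]

def architectures():
    """Enumerate every set of links satisfying the connection constraint:
    each component must take part in at least one link."""
    for choice in product([0, 1], repeat=len(links)):
        chosen = [link for link, keep in zip(links, choice) if keep]
        connected = {c for link in chosen for c in link}
        if connected == set(components):
            yield chosen

for arch in architectures():
    print(arch)
```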
|
5 |
Parameterized algorithms on digraph and constraint satisfaction problems. Kim, Eun Jung, January 2010.
While polynomial-time approximation algorithms remain a dominant notion in tackling computationally hard problems, the framework of parameterized complexity has been emerging rapidly in recent years. Roughly speaking, the analytic framework of parameterized complexity attempts to grasp the difference between problems which admit O(c^k · poly(n))-time algorithms, such as Vertex Cover, and problems like Dominating Set, for which essentially brute-force O(n^k)-time algorithms are the best possible so far. Problems of the former type are said to be fixed-parameter tractable (FPT), and those of the latter type are regarded as intractable. In this thesis, we investigate some problems on directed graphs and a number of constraint satisfaction problems (CSPs) from the parameterized perspective. We develop fixed-parameter algorithms for some digraph problems. In particular, we focus on the basic problem of finding a tree with a certain property embedded in a given digraph. New or improved fpt-algorithms are presented for finding an out-branching with many or few leaves (the Directed Maximum Leaf and Directed Minimum Leaf problems). For acyclic digraphs, Directed Maximum Leaf is shown to admit a kernel with a linear number of vertices, and we suggest a kernel for Directed Minimum Leaf with a quadratic number of vertices. An improved fpt-algorithm for finding a k-Out-Tree is presented, and this algorithm is incorporated as a subroutine to obtain a better algorithm for Directed Minimum Leaf. In the second part of this thesis, we concentrate on several CSPs in which we want to maximize the number of satisfied constraints, and consider parameterization "above tight lower bound" for these problems. To deal with this type of parameterization, we present a new method called SABEM, using a probabilistic approach and applying harmonic analysis on pseudo-boolean functions. Using SABEM we show that a number of CSPs admit polynomial kernels and are thus fixed-parameter tractable. Moreover, we suggest problem-specific combinatorial approaches to Max-2-Sat and a wide special class of Max-Lin2, which lead to kernels of smaller size than those obtainable using SABEM for the respective problems.
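For the flavor of an O(c^k · poly(n)) algorithm, here is the standard 2^k branching procedure for Vertex Cover, the textbook FPT example named at the start of this abstract (not one of the thesis's algorithms):

```python
def vertex_cover(edges, k):
    """Decide whether a vertex cover of size <= k exists by branching on an
    uncovered edge: one of its two endpoints must join the cover.
    Runs in O(2^k * |E|) time -- an O(c^k * poly(n)) FPT algorithm."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    return (vertex_cover([e for e in edges if u not in e], k - 1) or
            vertex_cover([e for e in edges if v not in e], k - 1))

# 4-cycle plus a chord: {1, 3} is a cover of size 2; no single vertex works.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(vertex_cover(edges, 2), vertex_cover(edges, 1))   # True False
```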
|
6 |
CHORD: constraint handling object-oriented rules with disjunctions. SILVA, Marcos Aurélio Almeida da, 31 January 2009.
Constraint Handling Object-oriented Rules with Disjunctions (CHORD) is an object-oriented (OO) extension of CHRv, a relational rule-based language that was initially designed for the white-box specification of constraint solvers but turned out to be a remarkably flexible language. This flexibility has been demonstrated over recent years by the large number of reasoning services and algorithms that have been concisely described in it.
To define the syntax of our extension, we build on the approach followed by Frame Logic in its extension of Prolog, which is similar to ours: a frame syntax was introduced to represent OO constructs on top of the original relational ones. Moreover, instead of forcing the user (programmer) to conform to a predefined semantics for these new constructs, we chose an innovative strategy: decoupling the language's syntax from its semantics, so that the set of semantic assumptions is fully configurable.
These advances contribute directly to the domain of Automated Reasoning and Knowledge Representation (AR/KR), within which CHRv is the most versatile knowledge representation language. This work also enables the reasoning services already provided by CHRv to be integrated with state-of-the-art object-oriented languages, reducing the paradigm gap between them.
There are also contributions to other domains. In declarative programming, it delivers the first language to integrate the most powerful forms of object-oriented, rule-based, and constraint-based programming. In the Semantic Web domain, we show that our language generalizes the semantics of three of the most important knowledge representation language standards, providing a solution to the recurring integration problem in this area. In the domain of model-driven engineering (MDE), this work provides the first semantic mapping for UML/OCL and MOF/OCL models and model transformations that is explicitly configurable and has a declarative and operational axiomatic foundation in CFOL.
In this work we present the abstract and concrete syntaxes of CHORD, its operational and declarative semantics in CFOL, and an ontology of semantic assumptions for inheritance. To validate it, we present three case studies showing that CHORD generalizes UML/OCL, Frame Logic, and OWL.
|
7 |
Valued Constraint Satisfaction Problems over Infinite Domains. Viola, Caterina, 16 July 2020.
The object of the thesis is the computational complexity of certain combinatorial optimisation problems called valued constraint satisfaction problems, or VCSPs for short. The requirements and optimisation criteria of these problems are expressed by sums of (valued) constraints, also called cost functions. More precisely, the input of a VCSP consists of a finite set of variables, a finite set of cost functions that depend on these variables, and a cost u; the task is to find values for the variables such that the sum of the cost functions is at most u.
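A brute-force sketch of the VCSP task just defined, over a small finite domain with invented cost functions; the thesis targets infinite domains and polynomial-time algorithms, so this only serves to fix the definition.

```python
from itertools import product

def vcsp(domain, n_vars, cost_functions, u):
    """Brute-force version of the VCSP task defined above: find an assignment
    whose summed cost is at most u (exponential; illustration only)."""
    best = None
    for assign in product(domain, repeat=n_vars):
        total = sum(f(assign) for f in cost_functions)
        if total <= u and (best is None or total < best[0]):
            best = (total, assign)
    return best

# Invented instance over domain {0, 1, 2}: soft constraints as cost functions.
costs = [
    lambda a: abs(a[0] - a[1]),          # prefer x0 == x1
    lambda a: 0 if a[2] > a[0] else 3,   # violating x2 > x0 costs 3
]
print(vcsp([0, 1, 2], 3, costs, u=1))    # -> (0, (0, 0, 1))
```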
By restricting the set of possible cost functions in the input, a great variety of computational optimisation problems can be modelled as VCSPs. Recently, the computational complexity of all VCSPs for finite sets of cost functions over a finite domain has been classified. Many natural optimisation problems, however, cannot be formulated as VCSPs over a finite domain.
We initiate the systematic investigation of infinite-domain VCSPs by studying the complexity of VCSPs for piecewise linear (PL) and piecewise linear homogeneous (PLH) cost functions.
The VCSP for a finite set of PLH cost functions can be solved in polynomial time if the cost functions are improved by fully symmetric fractional operations of all arities. We show this by (polynomial-time many-one) reducing the problem to a finite-domain VCSP which can be solved using a linear programming relaxation. We apply this result to show the polynomial-time tractability of VCSPs for submodular PLH cost functions, for convex PLH cost functions, and for componentwise increasing PLH cost functions; in fact, we show that submodular PLH functions and componentwise increasing PLH functions form maximally tractable classes of PLH cost functions.
We define the notion of expressive power for sets of cost functions over arbitrary domains, and discuss the relation between the expressive power and the set of fractional operations improving the same set of cost functions over an arbitrary countable domain.
Finally, we provide a polynomial-time algorithm solving the restriction of the VCSP for all PL cost functions to a fixed number of variables.
|
8 |
Discrepancy-based search for constraint satisfaction and combinatorial optimisation problems. Karoui, Wafa, 09 October 2010.
The CSP (Constraint Satisfaction Problem) formalism can be considered a formal representation language covering all problems whose modelling involves constraints. The advantage of this formalism is that it supports both powerful general-purpose algorithms and efficient problem-specific algorithms. In this PhD thesis, we study tree search methods for solving CSPs and focus on those based on the discrepancy concept (a discrepancy is a deviation from the first choice of a reference heuristic). In this context, we propose improving mechanisms for general search methods that take advantage of the conflicts encountered during solving and guide the search by weighting the variables and the values. We also propose mechanisms specific to discrepancy-based methods that condition how the search tree is explored, namely discrepancy restriction, discrepancy counting modes, and discrepancy positions. All proposals are validated by experiments on real and random CSPs; we compare variants of discrepancy-based methods integrating several combinations of improvements against other methods known for their efficiency. In a second part, we extend our proposals to an optimisation context by considering scheduling problems with time lags. For this purpose, we adapt a discrepancy-based method, Climbing Discrepancy Search, to solve these problems. The efficiency of several improved variants of this method is tested on known benchmarks from the literature.
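A minimal sketch of discrepancy-based tree search: limited discrepancy search allowing at most k deviations from a reference heuristic's first choice, on an invented all-different toy CSP with a deliberately poor heuristic. The thesis's weighting, counting-mode, and positioning mechanisms are not shown.

```python
def lds(assign, domains, order, consistent, k):
    """Tree search allowing at most k discrepancies, where a discrepancy is
    taking any value other than the reference heuristic's first choice."""
    depth = len(assign)
    if depth == len(domains):
        return assign
    for rank, value in enumerate(order(depth, domains[depth])):
        cost = 0 if rank == 0 else 1        # one counting mode among several
        if cost <= k and consistent(assign + [value]):
            found = lds(assign + [value], domains, order, consistent, k - cost)
            if found is not None:
                return found
    return None

# Invented toy CSP: 4 all-different variables over {0..3}; the heuristic
# always proposes value 0 first, so discrepancies are needed to solve it.
domains = [list(range(4))] * 4
order = lambda depth, dom: dom
consistent = lambda partial: len(set(partial)) == len(partial)
for k in range(4):
    print(k, lds([], domains, order, consistent, k))
```

Budgets k = 0, 1, 2 fail here; k = 3 overrides the misleading heuristic three times and returns [0, 1, 2, 3].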
|
9 |
Set-membership state estimation and application on fault detection. Xiong, Jun, 12 September 2013.
In this thesis, a new approach to estimation problems in the presence of bounded uncertain parameters and statistical noise is presented. The objective is to use the uncertainty model that is most appropriate for each kind of uncertainty. This leads to the need to consider uncertain stochastic systems and to study how the two types of uncertainty combine: statistical noise is modeled as a centered Gaussian variable, and the unknown-but-bounded parameters are approximated by intervals. The result is an estimation problem that demands the development of mixed filters and a set-theoretic strategy. Attention is drawn to set inversion problems and constraint satisfaction problems: the former is the foundation of a method for solving interval equations, and the latter can significantly improve the speed of interval-based arithmetic and algorithms. An important contribution of this work is an interval matrix inversion method that couples the algorithm SIVIA (Set Inversion Via Interval Analysis) with the construction of a list of constraint propagation problems. The system model is formalized as an uncertain stochastic system.
Starting from the interval Kalman filtering algorithm proposed in [Chen 1997], which we call the IKF, an improved interval Kalman filtering algorithm (iIKF) is proposed. This algorithm is based on interval conditional expectation for interval linear systems. The iIKF has the same structure as the conventional Kalman filter while achieving guaranteed statistical optimality, and its recursive computational scheme is developed in the set-membership context. Our improvements achieve guaranteed interval inversion, whereas the original IKF [Chen 1997] uses an instance (the upper bound) of the interval matrix to avoid possible singularity problems, a point of view that leads to a sub-optimal solution which does not preserve guaranteed results, some solutions being lost. On the contrary, in the presence of unknown-but-bounded parameters and statistical measurement errors, our estimation approach in the form of the iIKF provides guaranteed estimates, while maintaining a computational burden comparable to that of classic statistical approaches. Several constraint-based techniques have also been implemented to limit the overestimation effect due to interval propagation within the interval Kalman filter's recursive structure. The results show that the iIKF outputs bounded estimates that enclose all the solutions consistent with the bounded errors and achieves good overestimation control. The iIKF is used to propose a fault detection algorithm that makes use of a Semi-Closed Loop strategy, which stops correcting the state estimate with the measurement as soon as a fault is detected. Two methods for generating fault indicators are proposed: they use the a priori state estimate and a threshold based on the a posteriori and a priori covariance matrices, respectively, and check consistency against the measured output. Through different examples, the advantages of the iIKF with respect to previous versions are exhibited, and the efficiency of the iIKF-based Semi-Closed Loop fault detection algorithm is clearly demonstrated.
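A minimal sketch of the interval bookkeeping underlying such filters: naive interval arithmetic applied to one scalar prediction step, showing the interval inflation (overestimation) that the iIKF's constraint-propagation mechanisms are designed to control. The model and bounds are invented for illustration.

```python
class Interval:
    """Minimal closed-interval arithmetic; results are guaranteed to enclose
    every real outcome (float rounding ignored for brevity)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo:.4f}, {self.hi:.4f}]"

# One prediction step x+ = a*x + w with a bounded-uncertain gain a and
# bounded noise w: the enclosure widens at every step, which is the
# overestimation effect that constraint propagation helps control.
a, w = Interval(0.95, 1.05), Interval(-0.1, 0.1)
x = Interval(1.0, 1.0)
for step in range(3):
    x = a * x + w
    print(step, x)
```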
|
10 |
Complexity issues in counting, polynomial evaluation and zero finding. Briquel, Irénée, 29 November 2011.
In the present thesis, we compare classical boolean complexity with algebraic complexity by studying problems related to polynomials. We consider the algebraic models of Valiant and of Blum, Shub and Smale (BSS). To study the algebraic complexity classes, one can start from results and open questions in the boolean case and look at their translation into the algebraic context. The comparison of the results obtained in the two settings then deepens our understanding of both complexity theories. The first part follows this framework. By considering a polynomial canonically associated with a boolean formula, we get a link between boolean complexity issues for the formula and algebraic complexity problems for the polynomial. We study the complexity of computing the polynomial in Valiant's model as a function of the complexity of the boolean formula, and find algebraic counterparts to some boolean results. Along the way, we also use algebraic methods to improve boolean results, in particular by obtaining better counting reductions. Another motivation for algebraic models of computation is to offer an elegant framework for the study of numerical algorithms. The second part of this thesis follows this approach. We start from new algorithms for the search of approximate zeros of complex systems of n polynomials in n variables. Up to now, those were BSS machine algorithms; we study the implementation of these algorithms on digital computers and propose an algorithm using floating-point arithmetic for this problem.
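The idea of a polynomial canonically associated with a boolean formula can be illustrated by the textbook arithmetisation (the precise construction studied in the thesis, in Valiant's model, may differ): AND becomes a product, NOT becomes 1 - x, and summing the polynomial over {0,1} assignments counts satisfying assignments, which is the counting/evaluation link.

```python
from itertools import product

def p_not(x):    return 1 - x
def p_and(x, y): return x * y
def p_or(x, y):  return x + y - x * y

def poly(x1, x2, x3):
    """Polynomial associated with the formula (x1 OR x2) AND NOT x3."""
    return p_and(p_or(x1, x2), p_not(x3))

# On {0,1} inputs the polynomial agrees with the boolean formula, and its
# sum over all assignments counts the satisfying ones (#SAT as evaluation).
count = sum(poly(*a) for a in product([0, 1], repeat=3))
print(count)   # 3: (1,0,0), (0,1,0), (1,1,0)
```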
|