111. Decision and Inhibitory Rule Optimization for Decision Tables with Many-valued Decisions
Alsolami, Fawaz (25 April 2016)
‘If-then’ rule sets are among the most expressive and human-readable knowledge representations. This thesis deals with the optimization and analysis of decision and inhibitory rules for decision tables with many-valued decisions. The most important areas of application are knowledge extraction and representation.
The benefit of considering inhibitory rules is that in some situations they can describe more knowledge than decision rules. Decision tables with many-valued decisions arise in combinatorial optimization, computational geometry, fault diagnosis, and especially in the processing of data sets.
In this thesis, various examples of real-life problems are considered to motivate the investigation. We extend relatively simple results obtained earlier for decision rules over decision tables with many-valued decisions to the case of inhibitory rules. The behavior of Shannon functions (which characterize the complexity of rule systems) is studied for finite and infinite information systems, for global and local approaches, and for decision and inhibitory rules.
The extensions of dynamic programming for the study of decision rules over decision tables with single-valued decisions are generalized to the case of decision tables with many-valued decisions. These results are also extended to the case of inhibitory rules. As a result, we have algorithms (i) for multi-stage optimization of rules relative to such criteria as length or coverage, (ii) for counting the number of optimal rules, (iii) for construction of Pareto optimal points for bi-criteria optimization problems, (iv) for construction of graphs describing relationships between two cost functions, and (v) for construction of graphs describing relationships between cost and accuracy of rules.
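As a toy illustration of item (iii), the Pareto-optimal points of a bi-criteria problem such as length versus coverage can be extracted by a simple sort-and-sweep. This is only a hedged sketch of the notion of Pareto optimality, not the dissertation's dynamic-programming construction, and the rule data below are invented:

```python
def pareto_optimal(points):
    """Return the Pareto-optimal (length, coverage) pairs, where smaller
    length and larger coverage are both preferred."""
    # Sort by length ascending, then coverage descending, and keep a point
    # only if it strictly improves on the best coverage seen so far.
    front, best_cov = [], float("-inf")
    for length, cov in sorted(set(points), key=lambda p: (p[0], -p[1])):
        if cov > best_cov:
            front.append((length, cov))
            best_cov = cov
    return front

# Hypothetical (length, coverage) pairs for six candidate rules:
rules = [(2, 10), (3, 12), (2, 7), (4, 12), (1, 4), (3, 9)]
front = pareto_optimal(rules)
# front == [(1, 4), (2, 10), (3, 12)]
```

Every pair not on the front is dominated by one that is, e.g. (2, 7) is dominated by (2, 10).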
The applications of created tools include comparison (based on information about Pareto optimal points) of greedy heuristics for bi-criteria optimization of rules, and construction (based on multi-stage optimization of rules) of relatively short systems of rules that can be used for knowledge representation.

112. The Systems of Post and Post Algebras: A Demonstration of an Obvious Fact
Leyva, Daviel (21 March 2019)
In 1942, Paul C. Rosenbloom published a definition of a Post algebra after Emil L. Post had published a collection of systems of many-valued logic. Post algebras became easier to handle following George Epstein's alternative definition. As conceived by Rosenbloom, Post algebras were meant to capture the algebraic properties of Post's systems; this fact was verified by neither Rosenbloom nor Epstein and has simply been assumed by others in the field. In this thesis, the long-awaited demonstration of this oft-asserted claim is given.
After a brief history of many-valued logic and a review of basic Classical Propositional Logic, the systems given by Post are introduced. Rosenbloom's definition of a Post algebra is given, together with an examination of the meaning of its notation in the context of Post's systems. Epstein's definition of a Post algebra follows, after the necessary concepts from lattice theory, making it possible to prove that Post's systems of many-valued logic do in fact form a Post algebra.
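As background, one common modern presentation of Post's m-valued connectives uses truth values {0, ..., m-1}, a cyclic negation, and disjunction as the maximum. The encoding below is such an assumed presentation for illustration, not Post's original notation:

```python
def cyclic_neg(x, m):
    # Post-style cyclic negation: shift each truth value by one, modulo m.
    return (x + 1) % m

def disj(x, y):
    # Disjunction taken as the maximum of the two truth values.
    return max(x, y)

m = 3
neg_table = [(x, cyclic_neg(x, m)) for x in range(m)]
# neg_table == [(0, 1), (1, 2), (2, 0)]
```

Iterating the cyclic negation m times returns every value to itself, which is one of the algebraic properties a Post algebra must reflect.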

113. A 3-valued approach to disbelief
Nittka, Alexander (20 October 2017)
A linguistic extension of propositional logic is proposed, a kind of weak negation ('disbelief'). A corresponding logic is developed and characterized semantically. Difficulties that arise in its axiomatization are also pointed out.

114. Models for Quantitative Distributed Systems and Multi-Valued Logics
Huschenbett, Martin (26 February 2018)
We investigate weighted asynchronous cellular automata with weights in valuation monoids. These automata form a distributed extension of weighted finite automata and allow us to model concurrency. Valuation monoids are abstract weight structures that include semirings and (non-distributive) bounded lattices but also offer the possibility to model average behaviors. We prove that weighted asynchronous cellular automata and weighted finite automata which satisfy an I-diamond property are equally expressive. Depending on the properties of the valuation monoid, we characterize this expressiveness by certain syntactically restricted fragments of weighted MSO logics. Finally, we define the quantitative model-checking problem for distributed systems and show how it can be reduced to the corresponding problem for sequential systems.
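A minimal sketch of the valuation-monoid idea: instead of multiplying weights along a run as in a semiring, a valuation function such as the average maps the whole weight sequence to a single value. The automaton below is a sequential toy (not an asynchronous cellular automaton), and all states and weights are invented:

```python
def run_value(transitions, initial, word, valuate):
    """Evaluate one run of a deterministic weighted automaton.

    transitions: dict mapping (state, letter) -> (next_state, weight)
    valuate: valuation function mapping the full weight sequence to one
             value, e.g. the average, which is not expressible in a semiring.
    """
    state, weights = initial, []
    for letter in word:
        state, w = transitions[(state, letter)]
        weights.append(w)
    return state, valuate(weights)

delta = {
    ("q0", "a"): ("q0", 2.0),
    ("q0", "b"): ("q1", 4.0),
    ("q1", "a"): ("q0", 0.0),
}
avg = lambda ws: sum(ws) / len(ws) if ws else 0.0
state, value = run_value(delta, "q0", "aab", avg)
# state == "q1", value == (2.0 + 2.0 + 4.0) / 3
```

Swapping `avg` for `sum` or `min` changes the quantitative semantics without touching the automaton, which is the flexibility valuation monoids provide.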

115. Investigating Normality in Lattice Valued Topological Spaces
Hetzel, Luke (09 May 2022)
No description available.

116. Evolutionary Optimization Algorithms for Nonlinear Systems
Raj, Ashish (01 May 2013)
Many real-world problems in science and engineering can be treated as optimization problems with multiple objectives or criteria. The demand for fast and robust stochastic algorithms to cater to these optimization needs is very high. When the cost function for the problem is nonlinear and non-differentiable, direct search approaches are the methods of choice. Many such approaches use the greedy criterion, which accepts a new parameter vector only if it reduces the value of the cost function. This can result in fast convergence, but also in misconvergence, where vectors become trapped in local minima. Inherently parallel search techniques have more exploratory power: they discourage premature convergence, and consequently some candidate solution vectors never settle on the global minimum but constantly explore the whole search space for other possible solutions. In this thesis, we concentrate on benchmarking three popular algorithms: the Real-valued Genetic Algorithm (RGA), Particle Swarm Optimization (PSO), and Differential Evolution (DE). The DE algorithm is found to outperform the other algorithms in fast convergence and in attaining low cost-function values. The DE algorithm is then selected and used to build a model for forecasting auroral oval boundaries during a solar storm event, which is compared against an established model by Feldstein and Starkov. As an extended study, the ability of DE is further put to the test in another nonlinear system study, by using it to analyze and design phase-locked loop circuits. In particular, the algorithm is used to obtain circuit parameters when frequency steps are applied at the input at particular instances.
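A minimal sketch of the classic DE/rand/1/bin scheme with the greedy selection criterion described above. The parameter values and the sphere test function are illustrative defaults, not those used in the thesis:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """DE/rand/1/bin sketch: mutate with a scaled difference of two random
    vectors, crossover with the target, and apply the greedy criterion
    (the trial replaces its target only if it does not increase the cost)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            trial_cost = cost(trial)
            if trial_cost <= costs[i]:  # greedy criterion
                pop[i], costs[i] = trial, trial_cost
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_c = differential_evolution(sphere, [(-5, 5)] * 3)
```

On this smooth unimodal function DE converges quickly; the misconvergence risk discussed above appears on multimodal landscapes, where the population-based search pays off.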

117. The spatial structure of genetic diversity under natural selection and in heterogeneous environments
Forien, Raphael (24 November 2017)
This thesis deals with the spatial structure of genetic diversity. We first study a measure-valued process describing the evolution of the genetic composition of a population subject to natural selection. We show that this process satisfies a central limit theorem and that its fluctuations are given by the solution to a stochastic partial differential equation. We then use this result to obtain an estimate of the drift load in spatially structured populations.
Next we investigate the genetic composition of a population whose individuals move more freely in one part of space than in the other (a situation called dispersal heterogeneity). We show in this case the convergence of allele frequencies via the convergence of ancestral lineages to a system of skew Brownian motions. We then detail the effect of a barrier to gene flow dividing the habitat of a population. We show that ancestral lineages follow partially reflected Brownian motions, of which we give several constructions.
To apply these results, we adapt a method for demographic inference to the setting of dispersal heterogeneity. This method makes use of long blocks of genome along which pairs of individuals share a common ancestry, and allows one to estimate several demographic parameters when they vary across space. To conclude, we demonstrate the accuracy of our method on simulated datasets.
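The appearance of skew Brownian motions can be illustrated by a classical random-walk approximation (due to Harrison and Shepp): a walk that steps symmetrically away from the origin but is biased at the origin converges, after rescaling, to a skew Brownian motion. A hedged simulation sketch, with illustrative parameters only:

```python
import random

def skew_walk(p, n_steps, seed=0):
    """Random-walk approximation of skew Brownian motion: symmetric steps
    away from 0, but from 0 the walk steps to +1 with probability p.
    Rescaled, it converges to skew BM with skewness parameter 2p - 1."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        if x == 0:
            x += 1 if rng.random() < p else -1
        else:
            x += 1 if rng.random() < 0.5 else -1
        path.append(x)
    return path

# With p = 0.8 excursions to the positive side are favored, loosely
# mimicking lineages that cross more easily into one half of the habitat.
path = skew_walk(0.8, 10_000)
frac_positive = sum(1 for x in path if x > 0) / len(path)
```

Averaged over many paths, the fraction of time spent on the positive side approaches p, even though each individual path can linger on either side for long stretches.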

118. Weak Measure-Valued Solutions to a Nonlinear Conservation Law Modeling a Highly Re-entrant Manufacturing System
(January 2019)
The main part of this work establishes existence, uniqueness and regularity properties of measure-valued solutions of a nonlinear hyperbolic conservation law with non-local velocities. Major challenges stem from in- and out-fluxes containing nonzero pure-point parts, which cause discontinuities of the velocities. This part is preceded, and motivated, by an extended study which proves that an associated optimal control problem has no optimal $L^1$-solutions that are supported on short time intervals.
The hyperbolic conservation law considered here is a well-established model for a highly re-entrant semiconductor manufacturing system. Prior work established well-posedness for $L^1$-controls and states, and existence of optimal solutions for $L^2$-controls, states, and control objectives. The results on measure-valued solutions presented here reduce to the existing literature in the case of initial state and in-flux being absolutely continuous measures. The surprising well-posedness (in the face of measures containing nonzero pure-point part and discontinuous velocities) is directly related to characteristic features of the model that capture the highly re-entrant nature of the semiconductor manufacturing system.
More specifically, the optimal control problem is to minimize an $L^1$-functional that measures the mismatch between actual and desired accumulated out-flux. The focus is on the transition between equilibria with eventually zero backlog. In the case of a step up to a larger equilibrium, the in-flux not only needs to increase to match the higher desired out-flux, but also needs to increase the mass in the factory and to make up for the backlog caused by an inverse response of the system. The optimality results obtained confirm the heuristic inference that the optimal solution should be an impulsive in-flux, but this is no longer in the space of $L^1$-controls.
The need for impulsive controls motivates the change of the setting from $L^1$-controls and states to controls and states that are Borel measures. The key strategy is to temporarily abandon the Eulerian point of view and first construct Lagrangian solutions. The final section proposes a notion of weak measure-valued solutions and proves existence and uniqueness of such.
In the case of the in-flux containing a nonzero pure-point part, the weak solution cannot depend continuously on time with respect to any norm. However, using semi-norms related to the flat norm, a weaker form of continuity of solutions with respect to time is proven. It is conjectured that a similar weak continuous dependence on initial data also holds with respect to a variant of the flat norm.
Doctoral Dissertation, Applied Mathematics, 2019.
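The role of the flat norm can be illustrated on the simplest case of two unit point masses, where the supremum over test functions bounded by 1 with Lipschitz constant at most 1 has a closed form. This toy formula is the standard bounded-Lipschitz distance, offered only as intuition for the semi-norms above:

```python
def flat_distance_diracs(x, y):
    """Flat (bounded-Lipschitz) distance between unit point masses
    delta_x and delta_y: the supremum of f(x) - f(y) over functions with
    |f| <= 1 and Lipschitz constant <= 1, which equals min(|x - y|, 2)."""
    return min(abs(x - y), 2.0)

# Unlike the total-variation distance, which jumps to 2 as soon as x != y,
# the flat distance is small when the two masses sit close together:
d_near = flat_distance_diracs(0.0, 0.1)    # 0.1
d_far = flat_distance_diracs(0.0, 100.0)   # 2.0
```

This is why a point mass transported continuously in time can be continuous in the flat-norm sense while being wildly discontinuous in total variation.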

119. Methods for Efficient Synthesis of Large Reversible Binary and Ternary Quantum Circuits and Applications of Linear Nearest Neighbor Model
Hawash, Maher Mofeid (30 May 2013)
This dissertation describes the development of automated synthesis algorithms that construct reversible quantum circuits for reversible functions with large number of variables. Specifically, the research area is focused on reversible, permutative and fully specified binary and ternary specifications and the applicability of the resulting circuit to the physical limitations of existing quantum technologies.
Automated synthesis of arbitrary reversible specifications is an NP-hard, multiobjective optimization problem, in which 1) the amount of time and computational resources required to synthesize the specification, 2) the number of primitive quantum gates in the resulting circuit (quantum cost), and 3) the number of ancillary qubits (variables added to hold intermediate calculations) are all minimized, while 4) the number of variables is maximized. Some of the existing algorithms in the literature ignored objective 2 by focusing on the synthesis of a single solution without the addition of any ancillary qubits, while others attempted to explore every possible solution in the search space in an effort to discover the optimal solution (i.e., sacrificed objectives 1 and 4).
Other algorithms resorted to adding a huge number of ancillary qubits (counter to objective 3) in an effort to minimize the number of primitive gates (objective 2). In this dissertation, I first introduce the MMDSN algorithm, which is capable of synthesizing binary specifications of up to 30 variables, adds no ancillary variables, and produces better quantum cost (8-50% improvement) than algorithms which limit their search to a single solution, within a minimal amount of time compared to algorithms which perform exhaustive search (seconds vs. hours). The MMDSN algorithm introduces an innovative method of using the Hasse diagram to construct candidate solutions that are guaranteed to be valid, and then selects the solution with the minimal quantum cost out of this subset.
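The Hasse diagram used by MMDSN is specific to reversible specifications, but the underlying idea of enumerating the covering relations of a partial order can be sketched on the familiar Boolean lattice of subsets. This is a generic illustration, not the MMDSN construction:

```python
from itertools import combinations

def hasse_edges(n):
    """Covering relations of the Boolean lattice on {0, ..., n-1}:
    an edge (A, B) whenever B is A plus exactly one extra element."""
    elements = range(n)
    edges = []
    for k in range(n):  # subsets of size k are covered by size k+1
        for A in combinations(elements, k):
            for e in elements:
                if e not in A:
                    B = tuple(sorted(A + (e,)))
                    edges.append((A, B))
    return edges

edges = hasse_edges(3)
# each subset of size k is covered by (n - k) supersets; for n = 3 the
# total number of covering edges is 1*3 + 3*2 + 3*1 = 12
```

Walking such a diagram level by level is what makes it possible to generate only candidates that respect the order structure, rather than all subsets at once.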
I then introduce the Covered Set Partitions (CSP) algorithm, which expands the search space of valid candidate solutions and allows for exploring solutions outside the range of MMDSN. I show a method of subdividing the expansive search landscape into smaller partitions and demonstrate the benefit of focusing on partition sizes that are around half of the number of variables (15% to 25% improvements over MMDSN for functions with fewer than 12 variables, and more than 1000% improvement for functions with 12 and 13 variables). For a function of n variables, the CSP algorithm theoretically requires substantially more time to synthesize; however, by focusing on the middle partition sizes, it discovers solutions missed by MMDSN, typically with lower quantum cost. I also show that using a Tabu search for selecting the next set of candidates from the CSP subset results in discovering solutions with even lower quantum costs (up to 10% improvement over CSP with random selection).
In Chapters 9 and 10 I question the predominant methods of measuring quantum cost and their applicability to physical implementations of quantum gates and circuits. I counter the prevailing literature by introducing a new standard for measuring the performance of quantum synthesis algorithms that enforces the Linear Nearest Neighbor Model (LNNM) constraint, which is imposed by today's leading implementations of quantum technology. In addition to enforcing physical constraints, the new LNNM quantum cost (LNNQC) allows for a level comparison among all methods of synthesis; specifically, of methods which add a large number of ancillary variables against ones that add no additional variables. I show that, when LNNM is enforced, the quantum cost for methods that add a large number of ancillary qubits increases significantly (up to 1200%).
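A back-of-the-envelope sketch of why the LNNM constraint inflates quantum cost: a two-qubit gate acting on distant positions of a qubit line must be preceded and followed by chains of adjacent SWAPs. The cost convention below (3 CNOTs per SWAP, restore order afterwards) is a common assumption for illustration, not the dissertation's LNNQC metric:

```python
def lnn_overhead(gates, swap_cost=3):
    """Rough upper bound on the extra cost a circuit incurs under the
    linear nearest neighbor model: a two-qubit gate on line positions
    (i, j) needs |i - j| - 1 adjacent SWAPs to bring the qubits together
    and the same number to restore the original order.

    gates: list of (control, target) line positions, i != j.
    """
    total = 0
    for i, j in gates:
        distance = abs(i - j)
        total += 2 * (distance - 1) * swap_cost
    return total

# A CNOT between qubits 0 and 4 on a 5-qubit line:
extra = lnn_overhead([(0, 4)])
# extra == 2 * 3 * 3 == 18 extra CNOTs before counting the gate itself
```

Circuits that use many ancillary qubits spread interacting pairs further apart on the line, which is the intuition behind the large (up to 1200%) increases reported above.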
I also extend the Hasse-based method to the ternary case and demonstrate synthesis of specifications of up to 9 ternary variables (compared to 3 ternary variables in the existing literature). I introduce the concept of ternary precedence order and its implications for the construction of the Hasse diagram and of valid candidate solutions. I also provide a case study comparing the performance of ternary logic synthesis of large functions using both a CUDA graphics processor with 1024 cores and an Intel i7 processor with 8 cores. In the process of exploring large ternary functions I introduce, to the literature, eight families of ternary benchmark functions along with a multiple-valued file specification (the Extended Quantum Specification, XQS). I also introduce a new composite quantum gate, the multiple-valued Swivel gate, which swaps the information of qubits around a centrally located pivot point.
In summary, my research objectives are as follows:
* Explore and create automated synthesis algorithms for reversible circuits both in binary and ternary logic for large number of variables.
* Study the impact of enforcing Linear Nearest Neighbor Model (LNNM) constraint for every interaction between qubits for reversible binary specifications.
* Advocate for a revised metric for measuring the cost of a quantum circuit in concordance with LNNM, where, on one hand, such a metric would provide a way for balanced comparison between the various flavors of algorithms, and on the other hand, represents a realistic cost of a quantum circuit with respect to an ion trap implementation.
* Establish an open source repository for sharing the results, software code and publications with the scientific community.

With the dwindling expectations for a new lifeline for silicon-based technologies, quantum computation has the potential of becoming the future workhorse of computation. Similar to the automated CAD tools of classical logic, my work lays the foundation for creating automated tools for constructing quantum circuits from reversible specifications.

120. Scalable Map-Reduce Algorithms for Mining Formal Concepts and Graph Substructures
Kumar, Lalit (January 2018)
No description available.