  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

MODELING, LEARNING AND REASONING ABOUT PREFERENCE TREES OVER COMBINATORIAL DOMAINS

Liu, Xudong 01 January 2016 (has links)
In my Ph.D. dissertation, I have studied problems arising in various aspects of preferences: preference modeling, preference learning, and preference reasoning, when preferences concern outcomes ranging over combinatorial domains. Preferences are a major research area in artificial intelligence (AI) and decision theory, and are closely related to social choice theory as studied by economists and political scientists. In my dissertation, I have exploited emerging connections between preferences in AI and social choice theory. Most of my research is on qualitative preference representations that extend and combine existing formalisms such as conditional preference networks, lexicographic preference trees, answer-set optimization programs, and possibilistic logic; on learning problems that aim at discovering qualitative preference models and predictive preference information from practical data; and on preference reasoning problems centered around qualitative preference optimization and aggregation methods. Applications of my research include recommender systems, decision support tools, multi-agent systems, and Internet trading and marketing platforms.
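As a hedged illustration of the kind of qualitative preference model discussed above, the sketch below encodes an unconditional lexicographic preference ordering over a small combinatorial domain, a simple special case of lexicographic preference trees; the attribute names, their value rankings, and the two outcomes are hypothetical, not taken from the dissertation.

    # A minimal sketch of lexicographic preference over a combinatorial domain.
    # The attributes, their preferred value rankings, and the outcomes are
    # hypothetical; full lexicographic preference trees also allow conditional
    # importance and conditional value orders.
    lp_model = [                      # attributes in decreasing importance
        ("price",    ["low", "medium", "high"]),
        ("cuisine",  ["italian", "thai", "burger"]),
        ("distance", ["near", "far"]),
    ]

    def prefers(model, a, b):
        """Return True if outcome `a` is strictly preferred to outcome `b`."""
        for attr, ranking in model:
            ra, rb = ranking.index(a[attr]), ranking.index(b[attr])
            if ra != rb:              # most important attribute where they differ decides
                return ra < rb
        return False                  # identical outcomes: no strict preference

    a = {"price": "low", "cuisine": "thai",    "distance": "far"}
    b = {"price": "low", "cuisine": "italian", "distance": "near"}
    print(prefers(lp_model, a, b))    # False: prices tie, and `b` wins on cuisine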
172

A Dynamic and Thermodynamic Approach to Complexity.

Yang, Jin 08 1900 (has links)
The problem of establishing the correct approach to complexity is a lively and crucial issue to which this dissertation makes some contributions. This dissertation considers two main possibilities: one, advocated by Tsallis and co-workers, sets the foundation of complexity on a generalized, non-extensive form of thermodynamics; another, proposed by the UNT Center for Nonlinear Science, regards complexity as a new condition that, for physical systems, would be equivalent to a state of matter intermediate between dynamics and thermodynamics. In the first part of this dissertation, the concept of Kolmogorov-Sinai entropy is introduced. The Pesin theorem is generalized in the formalism of Tsallis non-extensive thermodynamics. This generalized form of the Pesin theorem is used in the study of two major classes of problems, whose prototypes are given by the Manneville and the logistic map, respectively. The results of these studies convince us that the approach to complexity must be made along lines different from those of non-extensive thermodynamics. We have become convinced that the Lévy walk can be used as a prototype model of complexity, as a condition of balance between order and randomness that yields new phenomena such as aging and multifractality. We reach the conclusion that these properties must be studied within a dynamic rather than thermodynamic perspective. The second part focuses on the study of the heart-beating problem using a dynamic model, the so-called memory beyond memory, based on the Lévy walker model. It is shown that the memory beyond memory effect is more pronounced in healthy heart-beating sequences. The concepts of fractals, multifractals, the wavelet transformation, and the wavelet transform maximum modulus (WTMM) method are introduced. Artificial time sequences are generated by the memory beyond memory model to mimic the heart-beating sequence. Using the WTMM method, the multifractal singularity spectra of the sequences are calculated. It is clear that sequences with a strong memory beyond memory effect have a broader singularity spectrum.
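As a small, hedged illustration of the non-extensive entropy underlying the Tsallis approach mentioned above, the sketch below computes the Tsallis q-entropy of a probability distribution and shows its reduction to the ordinary (Boltzmann-Gibbs/Shannon) entropy as q approaches 1; the distribution and the chosen q values are arbitrary and purely illustrative.

    # Tsallis (non-extensive) entropy S_q = (1 - sum_i p_i**q) / (q - 1);
    # it reduces to the Shannon entropy in the limit q -> 1.
    import math

    def tsallis_entropy(p, q):
        if abs(q - 1.0) < 1e-12:
            return -sum(pi * math.log(pi) for pi in p if pi > 0)
        return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

    p = [0.5, 0.25, 0.125, 0.125]
    print(tsallis_entropy(p, 1.0))     # Shannon limit (about 1.213 nats)
    print(tsallis_entropy(p, 1.001))   # close to the Shannon value
    print(tsallis_entropy(p, 2.0))     # q = 2: 1 - sum p_i**2 = 0.65625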
173

Adaptive constraint solving for information flow analysis

Dash, Santanu Kumar January 2015 (has links)
In program analysis, unknown properties of terms are typically represented symbolically as variables. Bound constraints on these variables can then specify multiple optimisation goals for computer programs and find application in areas such as type theory, security, alias analysis and resource reasoning. Resolution of bound constraints is a problem steeped in graph theory; interdependencies between the variables are represented as a constraint graph. Additionally, constants are introduced into the system as concrete bounds over these variables, and the constants themselves are ordered over a lattice which is, once again, represented as a graph. Despite graph algorithms being central to bound constraint solving, most approaches to program optimisation that use bound constraint solving have treated their graph-theoretic foundations as a black box. Little has been done to investigate the computational costs or to design efficient graph algorithms for constraint resolution. Emerging examples of these lattices and bound constraint graphs, particularly from the domain of language-based security, show that these graphs and lattices are structurally diverse and can be arbitrarily large. Therefore, there is a pressing need to investigate the graph-theoretic foundations of bound constraint solving. In this thesis, we investigate the computational costs of bound constraint solving from a graph-theoretic perspective for Information Flow Analysis (IFA); IFA is a sub-field of language-based security which verifies whether confidentiality and integrity of classified information are preserved as it is manipulated by a program. We present a novel framework based on graph decomposition for solving the (atomic) bound constraint problem for IFA. Our approach enables us to abstract away from connections between individual vertices to those between sets of vertices, in both the constraint graph and an accompanying security lattice which defines an ordering over constants. Thereby, we are able to achieve significant speedups compared to state-of-the-art graph algorithms applied to bound constraint solving. More importantly, our algorithms are highly adaptive in nature and seamlessly adapt to the structure of the constraint graph and the lattice. The computational cost of our approach is a function of the latent scope of decomposition in the constraint graph and the lattice; therefore, we enjoy the fastest runtime for every point in the structure spectrum of these graphs and lattices. While the techniques in this dissertation are developed with IFA in mind, they can be extended to other applications of the bound constraint problem, such as type inference and program analysis frameworks which use annotated type systems, where constants are ordered over a lattice.
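The sketch below is a hedged, minimal illustration of atomic bound constraint solving for information flow: a naive worklist-style fixed point over a two-point Low < High lattice, with constraints read off a small constraint graph. The variable names, lattice, and constraints are hypothetical, and this baseline deliberately ignores the graph decomposition that the thesis exploits for speedups.

    # Each constraint (src, dst) requires level(dst) >= level(src) in the
    # security lattice; lower_bounds supplies the concrete constants.
    LOW, HIGH = 0, 1                   # two-point lattice, join = max

    constraints = [
        ("password", "hash"),
        ("hash", "log"),
        ("username", "log"),
    ]
    lower_bounds = {"password": HIGH, "username": LOW}

    level = {v: LOW for edge in constraints for v in edge}
    level.update(lower_bounds)

    changed = True
    while changed:                     # terminates: the lattice and graph are finite
        changed = False
        for src, dst in constraints:
            joined = max(level[dst], level[src])
            if joined != level[dst]:
                level[dst] = joined
                changed = True

    print(level)   # {'password': 1, 'hash': 1, 'log': 1, 'username': 0}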
174

Space in Proof Complexity

Vinyals, Marc January 2017 (has links)
Propositional proof complexity is the study of the resources that are needed to prove formulas in propositional logic. In this thesis we are concerned with the size and space of proofs, and in particular with the latter. Different approaches to reasoning are captured by corresponding proof systems. The simplest and most well-studied proof system is resolution, and we try to bring our understanding of other proof systems closer to that of resolution. In resolution we can prove a space lower bound just by showing that any proof must have a large clause. We prove a similar relation between resolution width and polynomial calculus space that lets us derive space lower bounds, and we use it to separate degree and space. For cutting planes we show length-space trade-offs: that is, there are formulas that have a proof in small space and a proof in small length, but no proof can optimize both measures at the same time. We introduce a new measure of space, cumulative space, that accounts for the space used throughout a proof rather than only its maximum. This is exploratory work, but we can also prove new results for the usual space measure. We define a new proof system that aims to capture the power of current SAT solvers, and we show a landscape of length-space trade-offs comparable to those in resolution. To prove these results we build and use tools from other areas of computational complexity. One such area is pebble games, very simple computational models that are useful for modelling space. In addition to results with applications to proof complexity, we show that pebble game cost is PSPACE-hard to approximate. Another area is communication complexity, the study of the amount of communication that is needed to solve a problem when its description is shared by multiple parties. We prove a simulation theorem that relates the query complexity of a function to the communication complexity of a composed function.
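As a hedged aside on the resolution proof system and the clause-width measure mentioned above, the sketch below applies the resolution rule to refute a tiny unsatisfiable CNF and reports the largest clause appearing in the refutation; the formula and the encoding of literals as signed integers are illustrative choices, not material from the thesis.

    # Refuting (x1 or x2)(x1 or -x2)(-x1 or x2)(-x1 or -x2) by resolution and
    # measuring the width (size of the largest clause) of the derivation.
    def resolve(c1, c2, var):
        """Resolve two clauses on `var`, assuming +var is in c1 and -var in c2."""
        assert var in c1 and -var in c2
        return (c1 - {var}) | (c2 - {-var})

    cnf = [frozenset(s) for s in ([1, 2], [1, -2], [-1, 2], [-1, -2])]
    d1 = resolve(cnf[0], cnf[1], 2)        # {x1}
    d2 = resolve(cnf[2], cnf[3], 2)        # {-x1}
    empty = resolve(d1, d2, 1)             # the empty clause: refutation complete
    width = max(len(c) for c in [*cnf, d1, d2, empty])
    print(empty, width)                    # frozenset() 2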
175

Graph colourings and games

Meeks, Kitty M. F. T. January 2012 (has links)
Graph colourings and combinatorial games are two very widely studied topics in discrete mathematics. This thesis addresses the computational complexity of a range of problems falling within one or both of these subjects. Much of the thesis is concerned with the computational complexity of problems related to the combinatorial game (Free-)Flood-It, in which players aim to make a coloured graph monochromatic ("flood" the graph) with the minimum possible number of flooding operations; such problems are known to be computationally hard in many cases. We begin by proving some general structural results about the behaviour of the game, including a powerful characterisation of the number of moves required to flood a graph in terms of the number of moves required to flood its spanning trees; these structural results are then applied to prove tractability results about a number of flood-filling problems. We also consider the computational complexity of flood-filling problems when the game is played on a rectangular grid of fixed height (focussing in particular on 3×n and 2×n grids), answering an open question of Clifford, Jalsenius, Montanaro and Sach. The final chapter concerns the parameterised complexity of list problems on graphs of bounded treewidth. We prove structural results determining the list edge chromatic number and list total chromatic number of graphs with bounded treewidth and large maximum degree, which are special cases of the List (Edge) Colouring Conjecture and Total Colouring Conjecture respectively. Using these results, we show that the problem of determining either of these quantities is fixed-parameter tractable, parameterised by the treewidth of the input graph. Finally, we analyse a list version of the Hamilton Path problem, and prove it to be W[1]-hard when parameterised by the pathwidth of the input graph. These results answer two open questions of Fellows, Fomin, Lokshtanov, Rosamond, Saurabh, Szeider and Thomassen.
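To make the flooding operation concrete, the hedged sketch below plays single Flood-It moves on a small vertex-coloured graph: a move recolours the monochromatic component containing a fixed pivot vertex, and neighbours that already carry the new colour join that component on subsequent moves. The graph, colouring, and move sequence are hypothetical.

    from collections import deque

    def flood_move(adj, colour, pivot, new_colour):
        """Return a new colouring after playing `new_colour` from the pivot."""
        colour = dict(colour)
        seen, queue, comp = {pivot}, deque([pivot]), set()
        while queue:                        # BFS over the pivot's monochromatic component
            v = queue.popleft()
            comp.add(v)
            for w in adj[v]:
                if w not in seen and colour[w] == colour[pivot]:
                    seen.add(w)
                    queue.append(w)
        for v in comp:
            colour[v] = new_colour
        return colour

    # Path a - b - c - d coloured R, G, R, B; flooding from `a` takes 3 moves.
    adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    col = {"a": "R", "b": "G", "c": "R", "d": "B"}
    col = flood_move(adj, col, "a", "G")    # component {a} -> G
    col = flood_move(adj, col, "a", "R")    # component {a, b} -> R
    col = flood_move(adj, col, "a", "B")    # component {a, b, c} -> B
    print(col)                              # every vertex is now B: the graph is flooded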
176

Temporal Complexity and Stochastic Central Limit Theorem

Pramukkul, Pensri 08 1900 (has links)
Complex processes whose evolution in time rests on the occurrence of a large and random number of intermittent events are the systems under study. The mean time distance between two consecutive events is infinite, thereby violating the ergodic condition and activating at the same time a stochastic central limit theorem that explains why the Mittag-Leffler function is a universal property of nature. The time evolution of these complex systems is properly generated by means of fractional differential equations, thus leading to the interpretation of fractional trajectories as the average over many random trajectories, each of which fits the stochastic central limit theorem and the condition for the Mittag-Leffler universality. Additionally, the effect of noise on the generation of the Mittag-Leffler function is discussed. Fluctuations of relatively weak intensity can conceal the asymptotic inverse power law behavior of the Mittag-Leffler function, providing a reason why stretched exponentials are frequently found in nature. These results afford a more unified picture of complexity resting on the Mittag-Leffler function and encompassing the standard inverse power law definition.
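As a hedged numerical aside, the sketch below evaluates the Mittag-Leffler function E_a(-t^a) from its power series and compares it with a stretched exponential and with its inverse-power-law tail, illustrating the interpolation described above; the order a, the truncation length, and the sample times are arbitrary choices.

    import math

    def mittag_leffler(z, a, terms=100):
        """E_a(z) = sum_{k>=0} z**k / Gamma(a*k + 1), truncated power series."""
        return sum(z ** k / math.gamma(a * k + 1) for k in range(terms))

    a = 0.7
    for t in (0.1, 1.0, 5.0):
        ml = mittag_leffler(-(t ** a), a)
        stretched = math.exp(-(t ** a) / math.gamma(1 + a))   # short-time behaviour
        tail = t ** (-a) / math.gamma(1 - a)                  # long-time inverse power law
        print(f"t={t:4}: ML={ml:.4f}  stretched={stretched:.4f}  tail={tail:.4f}")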
177

Topics in monitoring and planning for embedded real-time systems

Ho, Hsi-Ming January 2015 (has links)
The verification of real-time systems has gained much interest in the formal verification community during the past two decades. In this thesis, we investigate two real-time verification problems that benefit from techniques normally used in untimed verification. The first part of this thesis is concerned with the monitoring of real-time specifications. We study the expressiveness of metric temporal logics over timed words, a problem that dates back to the early 1990s. We show that the logic obtained by extending Metric Temporal Logic (MTL) with two families of new modalities is expressively complete for the Monadic First-Order Logic of Order and Metric (FO[<,+1]) in time-bounded settings. Furthermore, by allowing rational constants, expressive completeness also holds in the general (time-unbounded) setting. Finally, we incorporate several notions and techniques from LTL monitoring to obtain the first trace-length independent monitoring procedure for this logic. The second part of this thesis concerns a decision problem regarding UAVs: given a set of targets (each ascribed a relative deadline) and flight times between each pair of targets, is there a way to coordinate a flock of k identical UAVs so that all targets are visited infinitely often and no target is ever left unvisited for longer than its relative deadline? We show that the problem is PSPACE-complete even in the single-UAV case, thereby correcting an erroneous claim in the literature. We then complement this result by proposing an efficient antichain-based approach in which a delayed simulation is used to prune the state space. Experimental results clearly demonstrate the effectiveness of our approach.
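As a hedged illustration of trace-length independent monitoring, the sketch below checks one very simple bounded-response property over a timed word, namely that every `req` is followed by a `grant` within DEADLINE time units, while storing only a single timestamp rather than the whole trace. The property, event names, and deadline are hypothetical and far simpler than the MTL extensions treated in the thesis; violations that would only become apparent after the observed prefix are not reported.

    DEADLINE = 5.0

    def monitor(timed_word):
        """timed_word: (timestamp, event) pairs with non-decreasing timestamps."""
        oldest = None                           # oldest request still awaiting a grant
        for t, event in timed_word:
            if oldest is not None and t - oldest > DEADLINE:
                return f"violation: request at {oldest} not granted within {DEADLINE}"
            if event == "req" and oldest is None:
                oldest = t
            elif event == "grant":
                oldest = None                   # one grant discharges every pending request
        return "no violation observed on this prefix"

    trace = [(0.0, "req"), (1.2, "grant"), (2.0, "req"), (8.5, "grant")]
    print(monitor(trace))                       # the request at 2.0 is granted too late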
178

Algoritmické problémy související s průnikovými grafy / Algorithmic Problems Related to Intersection Graphs

Ivánek, Jindřich January 2013 (has links)
In this thesis we study two clique-cover problems which have interesting applications regarding the k-bend intersection graph representation: the edge-clique-cover-degree problem and the edge-clique-layered-cover problem. We focus on the complexity of these problems and on polynomial-time algorithms for restricted classes of graphs. The main results of the thesis are the NP-completeness of the edge-clique-layered-cover problem and a polynomial-time 2-approximation algorithm for the same problem on the subclass of diamond-free graphs, as well as some upper bounds for particular graph classes.
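To make the notion of an edge clique cover concrete, the sketch below runs a simple greedy baseline: repeatedly pick an uncovered edge, grow it into a clique, and add that clique to the cover. This is only an illustrative heuristic on a hypothetical graph, not the 2-approximation for diamond-free graphs developed in the thesis.

    from itertools import combinations

    # Two triangles sharing the single vertex 3 (a diamond-free graph).
    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
    edges = {frozenset((v, w)) for v in adj for w in adj[v]}

    cover, uncovered = [], set(edges)
    while uncovered:
        u, v = sorted(next(iter(uncovered)))     # pick any uncovered edge
        clique = {u, v}
        for w in adj[u] & adj[v]:                # greedily extend it to a clique
            if all(w in adj[x] for x in clique):
                clique.add(w)
        cover.append(clique)
        uncovered -= {frozenset(e) for e in combinations(clique, 2)}

    print(cover)   # e.g. [{1, 2, 3}, {3, 4, 5}]: two cliques cover all six edges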
179

PROBLÈMES COMBINATOIRES EN CONFIGURATION DES LIGNES DE FABRICATION : ANALYSE DE COMPLEXITÉ ET OPTIMISATION / COMBINATORIAL PROBLEMS IN PRODUCTION LINES CONFIGURATION : COMPUTATIONAL ANALYSIS AND OPTIMIZATION

Kovalev, Sergey 23 November 2012 (has links)
The objective of this thesis is to create and develop new effective solution methods for production line configuration problems. Two problems were studied in particular: the equipment selection and balancing problem for dedicated lines, and the setup cost minimization problem for multi-product lines. A solution to the first problem consists in a feasible assignment of the resources to an unknown number of stations so that the total cost is minimized. In order to solve this problem, we reduced it to the set partitioning problem and solved it with greedy heuristics and an exact constraint generation method. Computational experiments on different problem instances showed that the new solution approach outperforms the previous methods from the literature both in terms of solution quality and computation time. For the second problem, two criteria were considered lexicographically: the minimization of the number of stations and the minimization of the total setup cost. We examined successively the cases of parallel and sequential execution of operations. Approximate solutions were found by greedy heuristics. We then proposed two integer linear programming (ILP) models to obtain first the minimal number of stations and then the minimal setup cost. The experimental results for this new problem proved promising both in terms of solution quality and computation time.
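As a hedged, toy-scale illustration of the set partitioning view mentioned above, the sketch below enumerates all partitions of a handful of operations into stations, keeps the partitions whose stations respect a takt-time capacity, and minimises a fixed cost per opened station. All numbers are invented, and exhaustive enumeration stands in for the greedy heuristics and exact constraint generation method actually used in the thesis.

    from itertools import combinations

    times = {"op1": 3, "op2": 4, "op3": 2, "op4": 5}    # processing times
    TAKT, STATION_COST = 7, 10

    def partitions(items):
        """Yield every set partition of `items` as a list of tuples (stations)."""
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for k in range(len(rest) + 1):
            for mates in combinations(rest, k):          # who shares a station with `first`
                remaining = [x for x in rest if x not in mates]
                for p in partitions(remaining):
                    yield [(first, *mates)] + p

    best = None
    for p in partitions(list(times)):
        if all(sum(times[o] for o in station) <= TAKT for station in p):
            cost = STATION_COST * len(p)
            if best is None or cost < best[0]:
                best = (cost, p)

    print(best)   # (20, [('op1', 'op2'), ('op3', 'op4')]): the only feasible 2-station split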
180

Algorithmes de prise de décision pour la "cognitive radio" et optimisation du "mapping" de reconfigurabilité de l'architecture de l'implémentation numérique. / Decision making algorithms for cognitive radio and optimization of the reconfigurability mapping for the numerical architecture of implementation

Bourbia, Salma 27 November 2013 (has links)
In this thesis we focus on the development of a decision-making method for a cognitive radio receiver that dynamically adapts to its environment. The approach we adopt is based on statistical modeling of the radio environment. By statistically characterizing the observations provided by the environment sensors, we set up statistical decision rules that take into account the observation errors on the radio metrics, which helps to minimize the rate of erroneous decisions. Through this thesis we also aim to use these intelligent decision-making capabilities to help reduce the computational complexity of the receiver equipment. Indeed, we identify reconfiguration decision scenarios that limit the presence of certain components or functions of the reception chain. In particular, we address two decision scenarios that respectively adapt the presence of the equalization and beamforming functions at the receiver. Limiting these two operations helps reduce the computational complexity of the reception chain without degrading its performance. Finally, we integrate our statistical-modeling decision method and the two decision scenarios into a management architecture for a cognitive radio, in order to highlight the control of intelligence and reconfiguration in radio equipment.
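As a hedged sketch of the kind of statistical decision rule described above, the snippet below decides between two hypothetical environment states from a noisy sensed metric, modelling the observation error as Gaussian and picking the state with the highest posterior probability; the states, priors, means, and noise level are illustrative values, not figures from the thesis.

    import math

    def gaussian_pdf(x, mean, std):
        return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

    states = {                      # state -> (prior, expected value of the sensed metric)
        "channel_free":     (0.6, 2.0),
        "channel_occupied": (0.4, 8.0),
    }
    NOISE_STD = 2.5                 # models the sensor's observation error

    def decide(observation):
        """Maximum a posteriori decision, accounting for observation noise."""
        posteriors = {s: prior * gaussian_pdf(observation, mean, NOISE_STD)
                      for s, (prior, mean) in states.items()}
        return max(posteriors, key=posteriors.get)

    print(decide(3.1))   # channel_free
    print(decide(6.8))   # channel_occupied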
