201.
The complexity of constraint satisfaction problems and symmetric Datalog / Egri, László. January 2007
Constraint satisfaction problems (CSPs) provide a unified framework for studying a wide variety of computational problems naturally arising in combinatorics, artificial intelligence and database theory. To any finite domain D and any constraint language Γ (a finite set of relations over D), we associate the constraint satisfaction problem CSP(Γ): an instance of CSP(Γ) consists of a list of variables x1, x2, ..., xn and a list of constraints of the form "(x7, x2, ..., x5) ∈ R" for some relation R in Γ. The goal is to determine whether the variables can be assigned values in D such that all constraints are simultaneously satisfied. The computational complexity of CSP(Γ) is entirely determined by the structure of the constraint language Γ and, thus, one wishes to identify classes of Γ such that CSP(Γ) belongs to a particular complexity class.

In recent years, logical and algebraic perspectives have been particularly successful in classifying CSPs. A major weapon in the arsenal of the logical perspective is the database-theory-inspired logic programming language called Datalog. A Datalog program can be used to solve a restricted class of CSPs by either accepting or rejecting a (suitably encoded) set of input constraints. Inspired by Dalmau's work on linear Datalog and Reingold's breakthrough that undirected graph connectivity is in logarithmic space, we use a new restriction of Datalog called symmetric Datalog to identify a class of CSPs solvable in logarithmic space. We establish that expressibility in symmetric Datalog is equivalent to expressibility in a specific restriction of second-order logic called Symmetric Restricted Krom Monotone SNP, which has already received attention for its close relationship with logarithmic space.

We also give a combinatorial description of a large class of CSPs lying in L by showing that they are definable in symmetric Datalog. The main result of this thesis is that directed st-connectivity and a closely related CSP cannot be defined in symmetric Datalog. Because undirected st-connectivity can be defined in symmetric Datalog, this result also sheds new light on the computational differences between the undirected and directed st-connectivity problems.
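A minimal illustration of the CSP(Γ) decision problem defined above, as a brute-force search in Python (our own sketch, not from the thesis; the instance and variable names are invented, and real algorithms such as the Datalog-based ones studied here avoid this exponential enumeration):

```python
# Brute-force decision procedure for CSP(Gamma): try every assignment of
# domain values to variables and test all constraints "(scope) in R".
# Purely illustrative -- exponential in the number of variables.
from itertools import product

def csp_solvable(domain, variables, constraints):
    """constraints: list of (scope, relation) pairs; scope is a tuple of
    variable names, relation a set of allowed value tuples over D."""
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(tuple(assignment[v] for v in scope) in rel
               for scope, rel in constraints):
            return True
    return False

# Hypothetical instance: 2-colouring a triangle with the "not equal"
# relation over D = {0, 1} -- unsatisfiable, so this prints False.
neq = {(0, 1), (1, 0)}
print(csp_solvable([0, 1], ["x1", "x2", "x3"],
                   [(("x1", "x2"), neq), (("x2", "x3"), neq), (("x1", "x3"), neq)]))
```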
202.
A Bilevel Optimization Algorithm to Identify Enzymatic Capacity Constraints in Metabolic Networks - Development and Application / Yang, Laurence. 25 July 2008
Constraint-based models of metabolism seldom incorporate capacity constraints on intracellular fluxes due to the lack of experimental data. This can sometimes lead to inaccurate growth phenotype predictions. Meanwhile, other forms of data, such as fitness profiling data from growth competition experiments, have been demonstrated to contain valuable information for elucidating key aspects of the underlying metabolic network. Hence, the optimal capacity constraint identification (OCCI) algorithm is developed to reconcile constraint-based models of metabolism with fitness profiling data by identifying a set of flux capacity constraints that optimally fits a wide array of strains. OCCI is able to identify capacity constraints with considerable accuracy by matching 1,155 in silico-generated growth rates using a simplified model of Escherichia coli central carbon metabolism. Capacity constraints identified using experimental fitness profiles with OCCI generated novel hypotheses, while integrating thermodynamics-based metabolic flux analysis allowed prediction of metabolite concentrations.
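As background, the sketch below shows the kind of constraint-based growth prediction that flux capacity constraints enter into: a linear program maximizing growth flux subject to steady-state stoichiometry and flux bounds. The three-reaction network, bounds and values are hypothetical stand-ins; OCCI's actual bilevel formulation fitting capacities to fitness profiles is not reproduced here.

```python
# Minimal flux balance analysis sketch: a capacity constraint on an
# internal flux changes the predicted growth rate -- the quantity that
# OCCI fits against fitness data. Toy network: uptake -> A -> B -> growth.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1,  0],    # metabolite A: made by uptake r1, used by r2
              [0,  1, -1]])   # metabolite B: made by r2, used by growth r3
c = [0, 0, -1]                # linprog minimises, so negate to maximise r3

def predicted_growth(r2_capacity):
    bounds = [(0, 10), (0, r2_capacity), (0, None)]  # capacity bound on r2
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun

print(predicted_growth(10.0))  # unconstrained internal flux: growth 10.0
print(predicted_growth(3.0))   # capacity constraint caps growth at 3.0
```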
203.
Revision Programming: A Knowledge Representation Formalism / Pivkina, Inna Valentinovna. 01 January 2001
The topic of this dissertation is revision programming, a knowledge representation formalism for describing constraints on databases, knowledge bases, and belief sets, and for providing a computational mechanism to enforce them. Constraints are represented by sets of revision rules. Revision rules can be quite complex and usually take the form of conditions (for instance: if these elements are present and those elements are absent, then this element must be absent). In addition to expressing a logical constraint, a revision rule specifies a preferred way to satisfy the constraint. The justified revisions semantics assigns to any database a set (possibly empty) of revisions. Each revision satisfies the constraints, and all deletions and additions of elements in the transition from the initial database to the revision are derived from revision rules.

Revision programming and logic programming are closely related. We established an elegant embedding of revision programs into logic programs which does not increase the size of a program. The initial database is used in the transformation of a revision program into the corresponding logic program, but it is not represented in the logic program. This connection naturally led to extensions of the revision programming formalism that correspond to existing extensions of logic programming; specifically, disjunctive and nested versions of revision programming were introduced.

We also studied annotated revision programs, which allow annotations such as confidence factors, multiple experts, etc. Annotations were assumed to be elements of a complete infinitely distributive lattice. We proposed a justified revisions semantics for annotated revision programs which agrees with intuitions.

Next, we introduced a well-founded semantics for revision programming. It assigns to a revision problem a single "intended" model which is computable in polynomial time. Finally, we extended the syntax of revision problems by allowing variables, and implemented translators of revision programs into logic programs as well as a grounder for revision programs. The implementation allows us to compute justified revisions using existing implementations of the stable model semantics for logic programs.
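A hypothetical Python sketch of the flavour of revision rules and of the requirement that every insertion or deletion be derived from a rule. It checks only a necessary condition on a candidate revision; the full justified-revisions semantics adds a groundedness (fixpoint) requirement omitted here.

```python
# Revision rules read: "if all of `present` are in the database and all of
# `absent` are out, then apply the head ('in', e) or ('out', e)".
from dataclasses import dataclass

@dataclass
class Rule:
    head: tuple          # ('in', elem) or ('out', elem)
    present: frozenset   # elements required to be in the database
    absent: frozenset    # elements required to be out of the database

def fires(rule, db):
    return rule.present <= db and not (rule.absent & db)

def satisfies(rules, db):
    for r in rules:
        if fires(r, db):
            op, e = r.head
            if (op == 'in') != (e in db):
                return False
    return True

def changes_supported(rules, initial, revision):
    """Every insertion/deletion must be the head of a rule firing in the revision."""
    heads = {r.head for r in rules if fires(r, revision)}
    return (all(('in', e) in heads for e in revision - initial) and
            all(('out', e) in heads for e in initial - revision))

rules = [Rule(('out', 'b'), frozenset({'a'}), frozenset())]  # if a is in, b must be out
initial, revision = {'a', 'b'}, {'a'}
print(satisfies(rules, revision) and changes_supported(rules, initial, revision))  # True
```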
204.
Constraint Programming for Wireless Sensor Networks / Hassani Bijarbooneh, Farshid. January 2015
In recent years, wireless sensor networks (WSNs) have grown rapidly and have had a substantial impact in many applications. A WSN is a network of interconnected autonomous nodes that monitor physical and environmental conditions, such as temperature, humidity, and pollution. If required, nodes in a WSN can perform actions to affect the environment. WSNs present an interesting and challenging field of research due to the distributed nature of the network and the limited resources of the nodes. A node in a WSN must be small, to enable easy deployment in an environment, and must consume as little energy as possible, to prolong its battery lifetime. There are many challenges in WSNs, such as programming a large number of nodes, designing communication protocols, achieving energy efficiency, respecting limited bandwidth, and operating with limited memory. WSNs are further constrained by the deployment of the nodes in indoor and outdoor environments and by obstacles in those environments. In this dissertation, we study some of the fundamental optimisation problems related to the programming, coverage, mobility, data collection, and data loss of WSNs, modelled as standalone optimisation problems or as optimisation problems integrated with protocol design. Our proposed solution methods come from various fields of research, including constraint programming, integer linear programming, heuristic-based algorithms, and data inference techniques. / ProFuN
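As a toy example of the coverage problems mentioned above, the sketch below selects a minimum set of active nodes that still monitors every target, by brute-force search over node subsets. The instance is invented; the dissertation's constraint programming and integer linear programming methods are what make realistic network sizes tractable.

```python
# Minimum-cardinality coverage: activate the fewest sensor nodes so that
# every target remains monitored. Exponential search, illustration only.
from itertools import combinations

def min_active_nodes(coverage, targets):
    """coverage: dict mapping a node to the set of targets it can sense."""
    nodes = list(coverage)
    for k in range(1, len(nodes) + 1):
        for subset in combinations(nodes, k):
            if set().union(*(coverage[n] for n in subset)) >= targets:
                return subset
    return None  # targets cannot all be covered

coverage = {'n1': {'t1', 't2'}, 'n2': {'t2', 't3'}, 'n3': {'t1', 't3'}}
print(min_active_nodes(coverage, {'t1', 't2', 't3'}))  # e.g. ('n1', 'n2')
```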
205.
Widening the Knowledge Acquisition Bottleneck for Intelligent Tutoring Systems / Suraweera, Pramuditha. January 2007
Empirical studies have shown that Intelligent Tutoring Systems (ITS) are effective tools for education. However, developing an ITS is a labour-intensive and time-consuming process, and a major share of the development effort is devoted to acquiring the domain knowledge that accounts for the intelligence of the system. The goal of this research is to reduce the knowledge acquisition bottleneck and enable domain experts to build the domain model required for an ITS. In pursuit of this goal, an authoring system capable of producing a domain model with the assistance of a domain expert was developed. Unlike previous authoring systems, this system (named CAS) can acquire knowledge for non-procedural as well as procedural tasks. CAS was developed to generate the knowledge required for constraint-based tutoring systems, reducing the effort as well as the knowledge-engineering and programming expertise required. Constraint-based modelling is a student modelling technique whose abstract representation somewhat eases the knowledge acquisition bottleneck. CAS expects the domain expert to provide an ontology of the domain together with example problems and their solutions, and uses machine learning techniques to reason with this information to generate a domain model. A series of evaluation studies produced promising results. The initial evaluation revealed that the task of composing an ontology of the domain assisted the manual composition of a domain model. The second study showed that CAS was effective in generating constraints for three vastly different domains: database modelling, data normalisation and fraction addition. The final study demonstrated that CAS was also effective in generating constraints when assisted by novice ITS authors, producing constraint sets that were over 90% complete.
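A minimal sketch of the constraint-based modelling idea: each constraint pairs a relevance condition with a satisfaction condition, and feedback is produced when a relevant constraint is violated. The fraction-addition constraint, field names and feedback text are hypothetical, not taken from CAS.

```python
# One constraint of a constraint-based student model: relevant when the
# two denominators differ; satisfied if the student introduced a common
# denominator before adding numerators.
constraints = [
    {
        "relevance":    lambda s: s["d1"] != s["d2"],
        "satisfaction": lambda s: s["common_denominator"] is not None,
        "feedback":     "Convert both fractions to a common denominator first.",
    },
]

def diagnose(solution):
    """Collect feedback for every relevant but violated constraint."""
    return [c["feedback"] for c in constraints
            if c["relevance"](solution) and not c["satisfaction"](solution)]

student = {"d1": 2, "d2": 3, "common_denominator": None}
print(diagnose(student))  # ['Convert both fractions to a common denominator first.']
```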
206.
Essays on Foreign Aid, Government Spending and Tax Effort / Brown, Leanora A. 07 August 2012
This dissertation comprises two essays that attempt to determine, empirically, the fiscal response of governments to international assistance. The first essay examines whether an increasingly popular recommendation in international aid policy, to switch from tied foreign assistance to untied foreign assistance, affects investment in critical development expenditure sectors by developing countries. In the past, most international aid has been in the form of tied assistance, as donors believed that tying aid would improve its effectiveness. It has been argued that if tied aid is well designed and effectively managed, then its overall effectiveness can be improved. On the contrary, it is also believed that tied aid acts as an impediment to donor cooperation and to the building of partnerships with developing countries, and that it removes the 'feeling' of ownership and responsibility for projects from partner countries in aid-supported development. Two other, more popular arguments challenging the effectiveness of tied aid are that almost 30 percent of its value is eliminated when aid is tied to the goods and services of the donor countries, and that tying does not allow recipient countries to act on their own priorities for public spending. These problems bring into question whether tied aid is truly the most effective way to help poor countries, and the international community has recommended a switch to untied aid. With untied aid, the recipient country is not obligated to buy the goods of the donor country, nor is it compelled to pursue the public expenditure priorities of donors. Instead, it has greater flexibility over spending decisions and can more easily pursue its own priorities as it sees fit. Hence, one could expect a one-dollar increase in untied aid to increase spending in the critical priority sectors by more than a one-dollar increase in tied assistance. The question, therefore, is whether national domestic priorities coincide with what the international community has traditionally deemed the priorities should be. Empirically, we test this prediction using country-by-country data for 57 countries for the period 1973 to 2006. The results suggest that, on average, untied aid has a greater impact on pro-poor spending than does tied aid. The results also suggest that fungibility is still an issue even after accounting for the effects of untied aid; however, one could argue that fungibility may not be as bad as it appears, since the switch to untied aid improves spending in the sectors that are essential for growth and development.
The second essay explores the hypothesis that the expectation of debt forgiveness can discourage developing countries from attaining fiscal independence through an improvement of their tax effort. On the one hand, the international financial community typically advises poor countries to improve revenue mobilization; on the other hand, the same international community routinely continues to bail out poor countries that fail to meet their loan repayment obligations. Bailing out these countries creates an expectation on the part of developing-country governments that they will receive debt forgiveness time and again in the future. The expectation of future bailouts therefore creates a moral hazard that leads to endemically low tax effort. The key prediction of our simple theoretical model is that in the presence of debt forgiveness, tax ratios will decline, and that this decline will be stronger the higher the frequency and intensity of the bailouts. Empirically, we test this prediction using country-level data for 66 countries for the period 1989 to 2006. The results strongly suggest that debt forgiveness plays a significant role in the low tax effort observed in developing countries. Our empirical model allows for the endogeneity of tax effort and debt forgiveness. Interestingly, we find that more debt forgiveness is actually provided to countries with lower tax effort. The results are robust to various specifications.
207.
Towards inclusive design through constraint modelling and computer aided ergonomics / Goonetilleke, Thanuja Shiromie. January 2003
Inclusive Design is a concept that aims to design mainstream products, workplaces, services and facilities that can accommodate or 'include' a maximum percentage of the user population, regardless of their age and/or disabilities. The main idea behind Inclusive Design is to design products or workplaces that can be used by all, including older, disabled and able-bodied people, rather than having two streams of products. There are many social and economic benefits to achieving inclusivity in design, such as improving the lives of elderly and disabled people and reaping profits from the market that expands because of the increased number of consumers. The origins of Inclusive Design go back several decades and lie mainly in the demographic, legislative, social and economic changes that occurred during this period.

This research was conducted to study methods of implementing Inclusive Design. It has shown that although there are many advantages to designing for the whole population, designers are reluctant to do so, mainly because of the enormity of the task, which can consume a huge amount of time and manpower. One solution can be found in design tools that provide designers with a means to achieve inclusivity relatively quickly and with less effort. Therefore, this research has developed a new methodology and a computer tool to assist designers to implement Inclusive Design with ease.

The methodology discussed in this thesis incorporates the physical characteristics of the users of products and workplaces into the design process in order to search for better configurations for designs. It is shown that by considering the physical aspects of individual users, such as their anthropometry, joint constraints and capabilities, in a design optimisation process, the percentage user accommodation of a product can be maximised. To achieve this, ergonomics analysis methods and mathematical methods were used to interpret user characteristics in terms of design variables, and constraint modelling was then used to model the whole design problem and search for better solutions within the constraints of the design.

To implement this method, a software tool called SHIELDS was created. This tool utilises the capabilities of four other pieces of software to accomplish the design synthesis: HADRIAN and SAMMIE for ergonomics evaluation, MATHEMATICA for mathematical function fitting, and the SWORDS constraint modeller to find the best solutions. Two case studies were performed to test the functionality of the software and the validity of the methodology developed.
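A minimal sketch of the optimisation idea described above, with invented reach data standing in for the anthropometric models evaluated by HADRIAN/SAMMIE and the constraint solving performed by SWORDS: search a design variable (here, a hypothetical shelf height) for the value that maximises the percentage of users accommodated.

```python
# Each user contributes a reach constraint derived from anthropometry; a
# design is scored by the share of users whose constraints it satisfies.
def accommodated(shelf_height_mm, user):
    return user["min_reach"] <= shelf_height_mm <= user["max_reach"]

users = [{"min_reach": 600, "max_reach": 1500},   # e.g. seated wheelchair user
         {"min_reach": 400, "max_reach": 1900},
         {"min_reach": 700, "max_reach": 1700}]

# Exhaustive search over candidate heights (mm) for maximum accommodation.
best = max(range(400, 2000, 50),
           key=lambda h: sum(accommodated(h, u) for u in users))
pct = 100 * sum(accommodated(best, u) for u in users) / len(users)
print(best, pct)   # the height accommodating the largest share of users
```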
208.
Physical Planning and Uncore Power Management for Multi-Core Processors / Chen, Xi. 02 October 2013
For the microprocessor technology of today and the foreseeable future, multi-core is a key engine that drives performance growth under very tight power dissipation constraints. While previous research has been mostly focused on individual processor cores, there is a compelling need for studying how to efficiently manage shared resources among cores, including physical space, on-chip communication and on-chip storage.
In managing physical space, floorplanning is the first and most critical step, and it largely determines communication efficiency and the cost-effectiveness of chip designs. We consider floorplanning with regularity constraints, which require identical processing/memory cores to form an array. Such regularity can greatly facilitate design modularity and therefore shorten design turn-around time. Very little attention has been paid to regularity constraints in automatic floorplanning, yet manual floorplanning has difficulty handling the complexity as chip core count increases. In this dissertation work, we investigate regularity constraints in a simulated-annealing-based floorplanner for multi/many-core processor designs. A simple and effective technique is proposed to encode the regularity constraints in the sequence pair, a classic data representation in automatic floorplanning. To the best of our knowledge, this is the first work on regularity-constrained floorplanning in the context of multi/many-core processor designs.
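For concreteness, the sketch below evaluates a plain sequence pair, without the regularity constraints that are the contribution of this work: block a is left of b if a precedes b in both sequences, and below b if a follows b in the first sequence but precedes it in the second; positions and chip dimensions then follow from longest-path computations.

```python
# Sequence-pair evaluation: derive block coordinates and the bounding-box
# width/height from the two sequences by longest-path accumulation.
def pack(s_plus, s_minus, w, h):
    pp = {b: i for i, b in enumerate(s_plus)}
    x, y = {}, {}
    for b in s_minus:  # s_minus order is topological for both relations
        # a is left of b: a precedes b in both sequences
        x[b] = max([x[a] + w[a] for a in x if pp[a] < pp[b]], default=0)
        # a is below b: a follows b in s_plus, precedes it in s_minus
        y[b] = max([y[a] + h[a] for a in y if pp[a] > pp[b]], default=0)
    width = max(x[b] + w[b] for b in x)
    height = max(y[b] + h[b] for b in y)
    return x, y, width, height

w = {'a': 2, 'b': 1, 'c': 3}
h = {'a': 1, 'b': 2, 'c': 1}
print(pack(['a', 'b', 'c'], ['a', 'b', 'c'], w, h))  # all in a row: width 6
```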
On-chip communication and the shared last-level cache (LLC) play a role at least as important as the processor cores in determining chip performance and power. This dissertation research studies dynamic voltage and frequency scaling for the on-chip network and LLC, which together form a single uncore voltage/frequency domain. This is in contrast to most previous works, where the network and LLC are partitioned and associated with processor cores based on physical proximity. The single shared domain largely avoids the interfacing overhead across domain boundaries and is practical and very useful for industrial products. Our goal is to minimize uncore energy dissipation with little performance degradation, e.g., 5% or less. The first part of this study identifies a metric that reflects the chip performance determined by the uncore voltage/frequency. The second part addresses how to monitor this metric with low overhead and high fidelity. The last part is the control policy that decides the uncore voltage/frequency based on the monitoring results. Our approach is validated through full-system simulations on public architecture benchmarks.
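A schematic sketch of the monitor-then-control loop just described. The latency metric, thresholds and frequency ladder are hypothetical placeholders; the dissertation derives its own performance metric, monitoring scheme and policy and validates them in full-system simulation.

```python
# Feedback policy: step the uncore frequency up when the monitored metric
# exceeds its target band, step it down when there is latency headroom.
FREQS_GHZ = [1.0, 1.5, 2.0, 2.5, 3.0]

def next_freq(level, observed_latency, target_latency, slack=0.05):
    if observed_latency > target_latency * (1 + slack) and level < len(FREQS_GHZ) - 1:
        return level + 1   # uncore too slow: raise voltage/frequency
    if observed_latency < target_latency * (1 - slack) and level > 0:
        return level - 1   # headroom available: lower frequency, save energy
    return level

level = 2
for lat in [120, 130, 95, 90, 100]:   # per-interval monitored latencies (ns)
    level = next_freq(level, lat, target_latency=100)
    print(FREQS_GHZ[level])           # 2.5, 3.0, 3.0, 2.5, 2.5
```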
209.
Capacity-Achieving Distributions of Gaussian Multiple Access Channel with Peak Constraints / Mamandipoor, Babak. January 2013
Characterizing the probability distribution of the input of a communication channel that achieves the maximum possible data rate is one of the most fundamental problems in information theory. In his ground-breaking paper, Shannon showed that the capacity of a point-to-point additive white Gaussian noise (AWGN) channel under an average power constraint at the input is achieved by a Gaussian distribution. Although imposing a limitation on the peak of the channel input is also very important for modelling the communication system more accurately, it has received much less attention in the past few decades. A rather unexpected result of Smith showed that the capacity-achieving distribution for an AWGN channel under a peak constraint at the input is unique and discrete, possessing a finite number of mass points.
In this thesis, we study the multiple access channel under peak constraints at the inputs of the channel. By extending Smith's argument to our multi-terminal problem, we show that any point on the boundary of the capacity region of the channel is achieved only by discrete distributions with a finite number of mass points. We do not claim uniqueness of the capacity-achieving distributions; we show only that no distribution other than a discrete one with finitely many mass points can achieve a point on the boundary of the capacity region.
First, we deal with the problem of maximizing the sum-rate of a two-user Gaussian MAC with peak constraints. It is shown that generating the codebooks of both users according to discrete distributions with a finite number of mass points achieves the largest sum-rate in the network. We then generalize our proof to maximizing the weighted sum-rate of the channel and show that the same properties hold for the optimal input distributions. This completes the proof that the capacity region of a two-user Gaussian MAC is achieved by discrete input distributions with a finite number of mass points.
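A numerical sketch of the single-user fact behind Smith's result that motivates this thesis: the mutual information achieved over a unit-noise AWGN channel by a discrete input with two equiprobable mass points at the peak limits ±A. The computation is illustrative only and does not reproduce the multi-terminal argument; the two-point input is capacity-achieving only for small enough A.

```python
# I(X;Y) = h(Y) - h(N) for Y = X + N, N ~ N(0, sigma^2), with X taking
# values +/-A with probability 1/2 each (satisfying the peak constraint).
import numpy as np

def mi_two_point(A, sigma=1.0):
    y = np.linspace(-A - 8 * sigma, A + 8 * sigma, 20001)
    dy = y[1] - y[0]
    gauss = lambda m: np.exp(-((y - m) ** 2) / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    py = 0.5 * (gauss(A) + gauss(-A))              # output density of Y
    h_y = -np.sum(py * np.log(py + 1e-300)) * dy   # differential entropy h(Y)
    h_n = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
    return (h_y - h_n) / np.log(2)                 # bits per channel use

print(round(mi_two_point(1.0), 3))   # ~0.486 bits at A/sigma = 1
```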
210.
Combinatorial Problems in Compiler Optimization / Beg, Mirza Omer. 08 April 2013
Several important compiler optimizations, such as instruction scheduling and register allocation, are fundamentally hard and are usually solved using heuristics or approximate solutions. In contrast, this thesis examines optimal solutions to three combinatorial problems in compiler optimization.

The first problem addresses instruction scheduling for clustered architectures, popular in embedded systems. Given a set of instructions, the optimal solution gives the best possible schedule for a given clustered architectural model. The problem is solved using a decomposition technique applied to constraint programming, which determines the spatial and temporal schedule in an integrated approach. The experiments show that our solver can trade off some compile-time efficiency to solve most instances in standard benchmarks, giving significant performance improvements.

The second problem addresses instruction selection in the compiler code generation phase. Given the intermediate representation of the code, the optimal solution determines a sequence of equivalent machine instructions that minimizes code size. This thesis shows that a large number of benchmark instances can be solved optimally using constraint programming techniques.

The third problem addressed is the placement of data in memory for efficient cache utilization. Using the data access patterns of a given program, our algorithm determines a placement that reorganizes data in memory so as to produce fewer cache misses. By focusing on graph-theoretic placement techniques, it is shown that there exist, in special cases, efficient and optimal algorithms for data placement that significantly improve cache utilization. We also propose heuristic solutions for larger instances for which provably optimal solutions cannot be determined using polynomial-time algorithms. We demonstrate that cache hit rates can be significantly improved by using profiling techniques over a wide range of benchmarks and cache configurations.
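A toy sketch of the data-placement idea in the third problem: build an affinity graph from a data access trace and greedily co-locate high-affinity objects in the same cache line. The trace, window size and greedy pairing are illustrative, not the optimal or heuristic algorithms of the thesis.

```python
# Objects accessed close together in time get affinity-graph edge weight;
# pairing heavy edges co-locates them so they share cache lines.
from collections import Counter
from itertools import combinations

def affinity_graph(trace, window=2):
    aff = Counter()
    for i, obj in enumerate(trace):
        for other in trace[max(0, i - window):i]:
            if other != obj:
                aff[frozenset((obj, other))] += 1
    return aff

def greedy_pairing(aff):
    placed, lines = set(), []
    for pair, _ in aff.most_common():   # heaviest affinity edges first
        a, b = tuple(pair)
        if a not in placed and b not in placed:
            lines.append((a, b))        # co-locate in one cache line
            placed.update(pair)
    return lines

trace = ['x', 'y', 'x', 'y', 'z', 'w', 'z', 'w', 'x', 'y']
print(greedy_pairing(affinity_graph(trace)))  # [('x','y'), ('z','w')], order may vary
```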