31 |
Secure gossiping techniques and components. Jesi, Gian Paolo <1975>, 16 April 2007
Gossip protocols have proved to be a viable solution for setting up and managing large-scale P2P services and applications in a fully decentralised scenario.
The gossip, or epidemic, communication scheme is heavily based on stochastic behaviour and is the fundamental idea behind many large-scale P2P protocols.
It provides many remarkable features, such as scalability, robustness to failures,
emergent load balancing capabilities, fast spreading, and redundancy of information.
In some sense, these services or protocols mimic natural system behaviors
in order to achieve their goals.
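To make the exchange pattern concrete, the following sketch shows one synchronous push-pull round of a view-exchange protocol in the spirit of peer sampling; the names (`Peer`, `gossip_round`, `view_size`) are illustrative assumptions and do not reproduce any specific protocol from the thesis.

```python
import random

class Peer:
    """Minimal peer for a push-pull view exchange (illustrative only)."""
    def __init__(self, peer_id, view_size=8):
        self.peer_id = peer_id
        self.view = []               # partial view: ids of known neighbours
        self.view_size = view_size

    def select_partner(self):
        # Pick a random neighbour from the partial view (None if isolated).
        return random.choice(self.view) if self.view else None

    def merge(self, received):
        # Union of the two views (excluding ourselves), truncated at random.
        merged = [x for x in set(self.view) | set(received) if x != self.peer_id]
        random.shuffle(merged)
        self.view = merged[:self.view_size]

def gossip_round(peers):
    """One synchronous round: every peer swaps its view with a random partner."""
    for p in peers.values():
        partner_id = p.select_partner()
        if partner_id is None:
            continue
        q = peers[partner_id]
        p_view, q_view = list(p.view), list(q.view)
        p.merge(q_view + [q.peer_id])     # push-pull: both sides update
        q.merge(p_view + [p.peer_id])

# Starting from a star around node 0, views quickly become well mixed.
peers = {i: Peer(i) for i in range(4)}
for p in peers.values():
    p.view = [0] if p.peer_id != 0 else [1]
for _ in range(5):
    gossip_round(peers)
print({i: sorted(p.view) for i, p in peers.items()})
```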
The key observation of this work is that the remarkable properties of gossip hold only as long as all participants follow the rules dictated by the protocol. If one or more malicious nodes join the network and start cheating according to some strategy, the results can be catastrophic.
In order to study how serious the threat posed by malicious nodes can be, and what can be done to prevent attackers from cheating, we focused on a general attack model aimed at defeating a key service in gossip overlay networks, the Peer Sampling Service [JGKvS04]. We also addressed the problem of protecting against forged information exchanged in gossip services.
We propose a solution technique for each problem; both techniques are general enough to be applied to distinct service implementations. Like gossip protocols themselves, our solutions are based on stochastic behaviour and are fully decentralized. In addition, each technique's behaviour is abstracted into a general primitive function extending the basic gossip scheme; this approach allows our solutions to be adopted in different scenarios with minimal changes.
We provide an extensive experimental evaluation to support the effectiveness of our techniques. These techniques are intended as building blocks and architectural guidelines for building more resilient and more secure P2P services.
|
32 |
User interaction widgets for interactive theorem proving. Zacchiroli, Stefano <1979>, 16 April 2007
Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects.
The underlying calculus of the system, namely the Calculus of (Co)Inductive
Constructions (CIC for short), is well-known and is used as the basis of another
mainstream proof assistant—Coq—with which Matita is to some extent compatible.
In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store the textual commands sent to the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script-management technique behind the popularity of the Proof General generic interface for interactive theorem provers: while editing a script the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands.
Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of which the author of this thesis is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers and to which the author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below.
Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset the way mathematicians like to write them down on paper is a challenging task; a challenge neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in the familiar mathematical notation.
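As an illustration of the underlying idea (not of Matita's actual engine), the sketch below enumerates all combinations of candidate meanings for ambiguous symbols and keeps those accepted by a type checker; every name in it (`disambiguate`, `build_term`, `typechecks`) is a made-up placeholder, and the real engine prunes this exponential search far more cleverly.

```python
from itertools import product

def disambiguate(symbols, candidates, build_term, typechecks):
    """Brute-force disambiguation sketch (hypothetical API).

    candidates[s] lists the possible meanings of ambiguous symbol s;
    build_term turns one complete assignment into a candidate term;
    typechecks(term) stands for a call to the underlying type checker.
    """
    assignments = product(*(candidates[s] for s in symbols))
    terms = (build_term(dict(zip(symbols, a))) for a in assignments)
    return [t for t in terms if typechecks(t)]

# Toy usage: "x + y" where + may denote natural or integer addition.
valid = disambiguate(
    symbols=["+"],
    candidates={"+": ["plus_nat", "plus_int"]},
    build_term=lambda a: (a["+"], "x", "y"),
    typechecks=lambda t: t[0] == "plus_nat",   # toy checker: only naturals
)
print(valid)   # [('plus_nat', 'x', 'y')]
```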
Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics together. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management: such interfaces do not allow the execution point to be positioned inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behaviour during script replaying. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner.
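The following toy sketch conveys the fine-grained evaluation idea: atomic tactics are kept in an explicit queue so the execution point can stop between them instead of running a whole structured tactical atomically. It is only an illustration under simplifying assumptions, not the actual tinycals semantics; all names are invented.

```python
class SmallStepEngine:
    """Toy fine-grained evaluator: one atomic tactic per step."""
    def __init__(self, goals, apply_tactic):
        self.goals = list(goals)        # open goals
        self.todo = []                  # pending atomic tactics
        self.apply_tactic = apply_tactic

    def load(self, tactics):
        self.todo.extend(tactics)       # e.g. the flattened content of "t1; t2"

    def step(self):
        """Run exactly one atomic tactic on the first open goal."""
        if not self.todo or not self.goals:
            return False
        tac, goal = self.todo.pop(0), self.goals.pop(0)
        self.goals = self.apply_tactic(tac, goal) + self.goals
        return True

# Toy usage: a "tactic" rewrites a goal; an empty tactic closes it.
engine = SmallStepEngine(["G"], lambda tac, g: [f"{g}.{tac}"] if tac else [])
engine.load(["a", ""])      # two atomic steps loaded from the script
engine.step()               # goals: ["G.a"]
engine.step()               # goals: []
print(engine.goals)
```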
Extensible yet meaningful notation. Proof assistant users often face the need to create new mathematical notation in order to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality two-dimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. With our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered form is possible too.
Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which independently try to complete the open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can apply them to the current proof, either interactively or automatically.
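A minimal in-process analogue of the broker/tutors idea is sketched below; the real system is built on web services, so the `HintBroker` class, its methods, and the thread-based tutors are purely illustrative assumptions.

```python
import queue
import threading

class HintBroker:
    """Toy in-process analogue of the broker/tutors architecture."""
    def __init__(self):
        self.tutors = []              # callables: goal -> hint or None
        self.hints = queue.Queue()    # hints flow back asynchronously

    def register(self, tutor):
        self.tutors.append(tutor)

    def submit_goal(self, goal):
        # Each tutor works independently on the submitted sub-goal.
        for tutor in self.tutors:
            threading.Thread(target=self._run, args=(tutor, goal),
                             daemon=True).start()

    def _run(self, tutor, goal):
        hint = tutor(goal)
        if hint is not None:
            self.hints.put((goal, hint))

broker = HintBroker()
broker.register(lambda g: "apply plus_comm" if g == "x + y = y + x" else None)
broker.submit_goal("x + y = y + x")
print(broker.hints.get(timeout=1))   # ('x + y = y + x', 'apply plus_comm')
```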
Another innovative aspect of Matita, only marginally touched upon in this thesis, is the embedded content-based search engine Whelp, which is exploited for various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis, and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
|
33 |
Interactions between normative systems and software cognitive agents. A formalization in temporal modal defeasible logic and its implementation. Riveret, Régis <1979>, 03 June 2008
Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environments. Though autonomy allows agents to decide which behaviour to adopt, a disadvantage is a lack of control and, as a side effect, even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy?
A solution is to regulate agents' behaviour by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and increasing system compliance. It can also facilitate the design of the system, for example by regulating coordination among agents. However, an autonomous agent may follow norms or violate them depending on the circumstances. Under which conditions is a norm binding upon an agent?
While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance.
Since agents are embedded in a dynamic environment, events may not all occur at the same instant. Accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. Notably, our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition.
As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms makes it easier to update their representation in the system, thus facilitating system maintenance, and improves system transparency, thus easing system governance.
Since agents are bounded and embedded in unpredictable environments, and since conflicts may arise among mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
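A tiny propositional sketch of defeasible derivation (leaving out the temporal and modal dimensions of the thesis) shows how a new piece of information can invalidate a previously derivable conclusion; the rule names and the traffic-light example are invented for illustration.

```python
def defeasibly_derivable(literal, facts, rules, superiority):
    """Tiny propositional sketch of defeasible derivation.

    rules: list of (name, antecedents, consequent); a rule is applicable when
    all its antecedents are facts. A conclusion is defeasibly derivable when
    some applicable rule supports it and every applicable attacking rule
    (one concluding the negation) is beaten via the superiority relation.
    """
    neg = literal[1:] if literal.startswith("~") else "~" + literal
    supporting = [r for r in rules if r[2] == literal and set(r[1]) <= facts]
    attacking = [r for r in rules if r[2] == neg and set(r[1]) <= facts]
    return any(all((s[0], a[0]) in superiority for a in attacking)
               for s in supporting)

# New information can invalidate a previously derivable conclusion:
rules = [("r1", ["red_light"], "obliged_stop"),
         ("r2", ["emergency"], "~obliged_stop")]
sup = {("r2", "r1")}    # the emergency rule prevails over the traffic rule
print(defeasibly_derivable("obliged_stop", {"red_light"}, rules, sup))               # True
print(defeasibly_derivable("obliged_stop", {"red_light", "emergency"}, rules, sup))  # False
```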
|
34 |
Constraint handling rules. Compositional semantics and program transformation. Tacchella, Paolo <1976>, 28 April 2008
This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent, committed-choice constraint logic programming language consisting of guarded rules, which transform multisets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers but has recently also proven to be a general-purpose language, being Turing equivalent [SSD05a].
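As an illustration of committed-choice multiset rewriting until exhaustion, the sketch below mimics the classic CHR gcd solver with a naive Python loop; it is not a CHR engine, and the encoding of rules and guards is a simplifying assumption.

```python
def chr_gcd(constraints):
    r"""Naive committed-choice rewriting of a multiset of gcd/1 constraints,
    mimicking the classic CHR gcd solver:
        gcd(0) <=> true.
        gcd(N) \ gcd(M) <=> 0 < N <= M | gcd(M - N).
    """
    store = list(constraints)          # the constraint store, as a multiset
    changed = True
    while changed:                     # rewrite until exhaustion
        changed = False
        if 0 in store:                 # first rule: drop gcd(0)
            store.remove(0)
            changed = True
            continue
        for i in range(len(store)):
            for j in range(len(store)):
                n, m = store[i], store[j]
                if i != j and 0 < n <= m:   # guard of the second rule
                    store[j] = m - n        # committed choice: apply and move on
                    changed = True
                    break
            if changed:
                break
    return store

print(chr_gcd([12, 8]))   # [4], i.e. the gcd of 12 and 8
```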
Compositionality is the first CHR aspect we consider. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics of CHR which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which allows the application of propagation rules to be traced and consequently allows trivial non-termination to be avoided [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one also uses the history to avoid trivial non-termination.
Program transformation is the second CHR aspect we consider, with particular regard to the unfolding technique. Program transformation is an appealing approach which allows us to optimize a given program, in particular to improve run-time efficiency or space consumption. Essentially it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answer [Frü98], between the original program and the transformed ones. Unfolding is one of the basic operations used by most program transformation systems: it consists in the replacement of a procedure call by its definition. In CHR every conjunction of constraints can be considered as a procedure call, every CHR rule can be considered as a procedure, and the body of that rule represents the definition of the call. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages.
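The sketch below shows a single unfolding step in a heavily simplified setting where rule heads are plain constraint names and there are no logical variables or built-in constraints; it is meant only to illustrate the idea of replacing a call by its definition, not the unfolding rule defined in the thesis.

```python
def unfold(rule, pos, other):
    """One unfolding step over simplified rules (head, guard, body).

    `rule` and `other` are (head, guard, body) triples with body a list of
    constraint names; the body constraint at position `pos` plays the role of
    a procedure call and is replaced by `other`'s body, accumulating guards.
    """
    head, guard, body = rule
    o_head, o_guard, o_body = other
    assert body[pos] == o_head, "the call must match the unfolded rule's head"
    new_body = body[:pos] + list(o_body) + body[pos + 1:]
    new_guard = guard if not o_guard else f"{guard} & {o_guard}"
    return (head, new_guard, new_body)

# p <=> G1 | q, r     and     q <=> G2 | s, t
r1 = ("p", "G1", ["q", "r"])
r2 = ("q", "G2", ["s", "t"])
print(unfold(r1, 0, r2))   # ('p', 'G1 & G2', ['s', 't', 'r'])
```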
We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, we show that confluence and termination are maintained between the original and transformed programs.
This thesis is organized as follows. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages with particular attention to CHR and related languages. Section 1.2 then introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which propagation rules are considered: Section 2.1 contains the definition of the semantics, Section 2.2 presents the compositionality results, and Section 2.3 expounds the correctness results.
Chapter 3 presents a particular program transformation known as unfolding. This transformation requires a particular annotated syntax, which is introduced in Section 3.1, and its related modified operational semantics, presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness. Then, in Section 3.4, the problems related to the replacement of a rule by its unfolded version are discussed; this in turn gives a correctness condition which holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced.
Finally, Chapter 4 concludes by discussing related work and directions for future research.
|
35 |
Automatic code generation: from process algebraic architectural descriptions to multithreaded Java programs. Bontà, Edoardo <1971>, 28 April 2008
Process algebraic architectural description languages provide a formal means for modeling software systems and assessing their properties. In order to bridge the gap between system modeling and system implementation, this thesis proposes an approach for automatically generating multithreaded object-oriented code from process algebraic architectural descriptions, in a way that preserves (under certain assumptions) the properties proved at the architectural level.
The approach is divided into three phases, which are illustrated
by means of a running example based on an audio processing
system. First, we develop an architecture-driven technique for
thread coordination management, which is completely automated
through a suitable package. Second, we address the translation
of the algebraically-specified behavior of the individual software
units into thread templates, which will have to be filled in by
the software developer according to certain guidelines. Third, we
discuss performance issues related to the suitability of synthesizing
monitors rather than threads from software unit descriptions that
satisfy specific constraints.
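The shape of such a generated thread template can be illustrated as follows; the actual PADL2Java output is Java, so this Python sketch, its class name, and its `process_item` hook are invented for illustration only.

```python
import queue
import threading

class GeneratedThreadTemplate(threading.Thread):
    """Sketch of a generated thread template (names invented for illustration).

    The coordination logic (waiting on input actions, emitting output actions)
    would be produced automatically from the architectural description; the
    developer only fills in the process_item hook with the business logic.
    """
    def __init__(self, inbox, outbox):
        super().__init__(daemon=True)
        self.inbox = inbox            # channel from the coordination layer
        self.outbox = outbox          # channel to the coordination layer

    def run(self):                    # generated behaviour: a simple cycle
        while True:
            item = self.inbox.get()   # synchronisation: wait for an input action
            if item is None:          # conventional termination token
                break
            self.outbox.put(self.process_item(item))   # output action

    def process_item(self, item):
        raise NotImplementedError("to be filled in by the software developer")

class Amplifier(GeneratedThreadTemplate):     # developer-provided filling
    def process_item(self, item):
        return item * 2

inbox, outbox = queue.Queue(), queue.Queue()
amp = Amplifier(inbox, outbox)
amp.start()
inbox.put(21)
print(outbox.get(timeout=1))   # 42
inbox.put(None)                # stop the generated cycle
```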
In addition to the running example, we present two case studies about
a video animation repainting system and the implementation of a
leader election algorithm, in order to summarize the whole approach.
The outcome of this thesis is the implementation of the proposed
approach in a translator called PADL2Java and its integration in the
architecture-centric verification tool TwoTowers.
|
36 |
A tuple space implementation for large-scale infrastructures. Capizzi, Sirio <1980>, 28 April 2008
Coordinating activities in a distributed system is an open research topic. Several models have been proposed for this purpose, such as message passing, publish/subscribe, workflows, and tuple spaces. We have focused on the latter model, trying to overcome some of its disadvantages. In particular, we have applied spatial database techniques to tuple spaces in order to increase their performance when handling a large number of tuples. Moreover, we have studied how structured peer-to-peer approaches can be applied to better distribute tuples over large networks. Using some of these results, we have developed a tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service. The development of such a service has been quite challenging due to the limitations imposed by XML serialization, which have heavily influenced its design. Nevertheless, we were able to complete its implementation and use it to implement two different types of test applications: a completely parallelizable one and a plasma simulation that is not completely parallelizable. Using this last application, we have compared the performance of our service against MPI. Finally, we have developed and tested a simple workflow in order to show the versatility of our service.
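A minimal in-memory tuple space with the usual `out`/`rd`/`in` operations and wildcard matching is sketched below; it uses a plain linear scan, whereas the implementation described here indexes tuples with spatial database techniques to keep these operations fast for large stores.

```python
class TupleSpace:
    """Minimal in-memory tuple space sketch (out / rd / in with wildcards)."""
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        # Insert a tuple into the space.
        self.tuples.append(tuple(tup))

    def _match(self, template, tup):
        # None in the template acts as a wildcard field.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        # Non-destructive read of the first matching tuple.
        return next((t for t in self.tuples if self._match(template, t)), None)

    def in_(self, template):
        # Destructive read: the matching tuple is removed from the space.
        t = self.rd(template)
        if t is not None:
            self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("temperature", "room1", 21.5))
print(ts.rd(("temperature", None, None)))   # ('temperature', 'room1', 21.5)
```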
|
37 |
Cross-layer optimizations in multi-hop ad hoc networks. Di Felice, Marco <1980>, 28 April 2008
Unlike traditional wireless networks, which are characterized by the presence of last-mile, static and reliable infrastructures, Mobile Ad Hoc Networks (MANETs) are dynamically formed by collections of mobile and static terminals that exchange data by enabling each other's communication. Supporting multi-hop communication in a MANET is a challenging research area because it requires cooperation between different protocol layers (MAC, routing, transport). In particular, MAC and routing protocols can be considered mutually cooperative protocol layers. When a route is established, the exposed and hidden terminal problems at the MAC layer may decrease the end-to-end performance proportionally to the length of each route. Conversely, contention at the MAC layer may cause a routing protocol to respond by initiating new route queries and routing table updates.
Multi-hop communication may also benefit from the presence of pseudo-centralized virtual infrastructures obtained by grouping nodes into clusters. Clustering structures may facilitate the spatial reuse of resources by increasing the system capacity; at the same time, the clustering hierarchy may be used to coordinate transmission events inside the network and to support intra-cluster routing schemes. Again, MAC and clustering protocols can be considered mutually cooperative protocol layers: the clustering scheme can support MAC-layer coordination among nodes by shifting the distributed MAC paradigm towards a pseudo-centralized one. On the other hand, the system benefits of the clustering scheme can be emphasized by the pseudo-centralized MAC layer, with support for differentiated access priorities and controlled contention.
In this thesis, we propose cross-layer solutions involving joint design of MAC, clustering
and routing protocols in MANETs.
As our main contribution, we study and analyze the integration of MAC and clustering schemes to support multi-hop communication in large-scale ad hoc networks. A novel clustering protocol, named Availability Clustering (AC), is defined under general node-heterogeneity assumptions in terms of connectivity, available energy, and relative mobility. On this basis, we design and analyze a distributed and adaptive MAC protocol, named Differentiated Distributed Coordination Function (DDCF), whose focus is to implement adaptive access differentiation based on the node roles assigned by the upper-layer clustering scheme. We extensively simulate the proposed clustering scheme, showing its effectiveness in dominating the network dynamics under several stressful mobility models and different mobility rates. Based on these results, we propose a possible application of the cross-layer MAC+clustering scheme to support the fast propagation of alert messages in a vehicular environment.
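To illustrate the kind of role assignment a clustering layer can produce, the sketch below elects cluster heads from a weighted "availability" score combining connectivity, residual energy, and mobility; the weights, the score, and the greedy election are assumptions for illustration and do not reproduce the actual AC protocol.

```python
def elect_cluster_heads(nodes, neighbours, weights=(0.5, 0.3, 0.2)):
    """Illustrative cluster-head election from an availability-like score.

    nodes: {node_id: (degree, energy, mobility)} with values normalised to [0, 1]
    neighbours: {node_id: set of neighbour ids}
    """
    w_deg, w_en, w_mob = weights
    score = {n: w_deg * d + w_en * e + w_mob * (1.0 - m)
             for n, (d, e, m) in nodes.items()}
    heads = set()
    for n in sorted(nodes, key=score.get, reverse=True):
        # A node becomes cluster head unless a better-scoring neighbour already is one.
        if not (neighbours.get(n, set()) & heads):
            heads.add(n)
    return heads

nodes = {"a": (0.9, 0.8, 0.1), "b": (0.5, 0.9, 0.4), "c": (0.4, 0.2, 0.9)}
neigh = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(elect_cluster_heads(nodes, neigh))   # {'a', 'c'}
```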
At the same time, we investigate the integration of MAC and routing protocols in large-scale multi-hop ad hoc networks. A novel multipath routing scheme is proposed by extending the AOMDV protocol with a load-balancing approach that concurrently distributes traffic among the multiple paths. We also study the composition effect of an IEEE 802.11-based enhanced MAC forwarding mechanism called Fast Forward (FF), used to reduce the effects of self-contention among frames at the MAC layer. The protocol framework is modelled and extensively simulated for a large set of metrics and scenarios. For both schemes, the simulation results reveal the benefits of the cross-layer MAC+routing and MAC+clustering approaches over single-layer solutions.
|
38 |
Knowledge management in intelligent tutoring systems. Riccucci, Simone <1978>, 28 April 2008
In recent years, Intelligent Tutoring Systems have been a very successful way of improving the learning experience. Many issues must still be addressed before this technology can be considered mature. One of the main problems within Intelligent Tutoring Systems is the process of content authoring: knowledge acquisition and manipulation are difficult tasks because they require specialised skills in computer programming and knowledge engineering. In this thesis we discuss a general framework for knowledge management in an Intelligent Tutoring System and propose a mechanism based on first-order data mining to partially automate the acquisition of the knowledge that the ITS uses during the tutoring process. Such a mechanism can be applied in Constraint-Based Tutors and in Pseudo-Cognitive Tutors. We design and implement part of the proposed architecture, mainly the module for knowledge acquisition from examples based on first-order data mining. We then show that the algorithm can be applied to at least two different domains: first-order algebra equations and some topics of the C programming language. Finally, we discuss the limitations of the current approach and possible improvements to the whole framework.
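The constraint-based side of the mechanism can be illustrated with a small sketch in which each constraint is a pair of relevance and satisfaction conditions; the toy domain and all names are invented, and in the thesis such constraints are mined automatically from examples rather than written by hand.

```python
def violated_constraints(solution, constraints):
    """Constraint-based tutoring check (sketch): each constraint is a pair of
    predicates (relevance, satisfaction). A constraint is violated when it is
    relevant to the student's solution but not satisfied by it.
    """
    return [name for name, (relevant, satisfied) in constraints.items()
            if relevant(solution) and not satisfied(solution)]

# Toy domain: solving a*x = b should end with x = b / a.
constraints = {
    "divide_both_sides": (
        lambda s: s["op"] == "solve_linear",          # relevance condition
        lambda s: s["result"] == s["b"] / s["a"],     # satisfaction condition
    ),
}
print(violated_constraints({"op": "solve_linear", "a": 2, "b": 6, "result": 4},
                           constraints))   # ['divide_both_sides']
```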
|
39 |
Interactive theorem provers: issues faced as a user and tackled as a developer. Tassi, Enrico <1980>, 28 April 2008
Interactive theorem provers (ITP for short) are tools whose final aim is to certify
proofs written by human beings. To reach that objective they have to fill the gap
between the high-level language used by humans for communicating and reasoning about mathematics and the lower-level language that a machine is able to “understand”
and process. The user perceives this gap in terms of missing features or
inefficiencies. The developer tries to accommodate the user requests without increasing
the already high complexity of these applications. We believe that satisfactory
solutions can only come from a strong synergy between users and developers.
We devoted most of our PhD to designing and developing the Matita interactive theorem prover. The software was born in the Computer Science Department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving access through the web to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are:
• to study the architecture of these tools, with the aim of understanding the source of their complexity;
• to exploit such knowledge to experiment with new solutions that, for backward-compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq.
Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive
Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the two systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different.
The thesis is composed of two parts, in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences:
• our internship with the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, giving an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the four colour theorem.
• our collaboration with the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem.
The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using "black box" automation in large formalisations; the impossibility for a user (especially a newcomer) to master the contents of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping, like CIC.
In the second part of the manuscript, many of these issues are analysed through the eyes of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve them: integrated search facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the search facilities, that generate proof commands (and not only proof objects, which are usually kept hidden from the user), one of which is specifically designed to be user-driven.
|
40 |
Evolutionary methods for self-organizing cooperation in peer-to-peer networks. Arteconi, Stefano <1979>, 28 April 2008
The Peer-to-Peer (P2P) network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization, where there is no central control and all nodes should be able not only to request services but to provide them to other peers as well. While on the one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with.
A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase their own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation among users. Given that P2P systems are based on simple local interactions of many peers having only partial knowledge of the whole system, an interesting way to achieve the desired properties at system scale might consist in obtaining them as emergent properties of the many interactions occurring at the local node level.
Two methods are typically used to tackle the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying game-theoretic techniques, especially to find Nash equilibria in the game and to reach them, making the system stable against possible deviant behaviours. In this work we present an evolutionary framework for enforcing cooperative behaviour in P2P networks that is an alternative to both methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, which consists in having each peer periodically try to copy another peer that is performing better.
The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind them is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both in its topology and in the nodes' strategies. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions.
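A simplified sketch of one copy-the-better-peer round in the spirit of SLAC is given below; the data layout, the mutation scheme, and the parameter names are illustrative assumptions rather than the exact published algorithm.

```python
import random

def slac_step(nodes, utility, mutation_prob=0.01):
    """One round of a SLAC-like update (simplified sketch).

    nodes: {node_id: {"strategy": ..., "links": set of node ids}}
    utility(node_id) is assumed to return the node's current performance.
    Each node compares itself with a randomly chosen peer; if the peer performs
    better, it copies the peer's strategy and links (rewiring to the peer).
    Occasionally strategy and links are mutated to keep exploring.
    """
    for n in nodes:
        others = [x for x in nodes if x != n]
        if not others:
            return
        m = random.choice(others)
        if utility(m) > utility(n):
            nodes[n]["strategy"] = nodes[m]["strategy"]                 # copy strategy
            nodes[n]["links"] = (set(nodes[m]["links"]) | {m}) - {n}    # copy links, attach to m
        if random.random() < mutation_prob:                             # rare mutation
            nodes[n]["strategy"] = random.choice(["cooperate", "defect"])
            nodes[n]["links"] = {random.choice(others)}
```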
Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases, selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation.
The final step is to apply our results to more realistic scenarios. We put our effort into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but also because it has many points in common with the SLAC and SLACER algorithms, ranging from its game-theoretical inspiration (a tit-for-tat-like mechanism) to its swarm topology.
We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair, which has been evaluated through simulation and has shown its ability to enforce fairness and tackle free-riding and cheating nodes.
|