Sustainable computer systems require some flexibility to adapt to unpredictable
environmental changes. A solution lies in autonomous software agents, which can
adapt to their environments on their own. Though autonomy allows agents to decide
which behavior to adopt, it comes at the cost of control and, as a side effect,
even of trustworthiness: we want to keep some control over such autonomous agents.
How can autonomous agents be controlled while respecting their autonomy?
A solution is to regulate agents’ behavior by norms. The normative paradigm
makes it possible to control autonomous agents while respecting their autonomy,
limiting untrustworthiness and increasing system compliance. It can also ease
the design of the system, for example by regulating the coordination among agents.
However, an autonomous agent may follow norms or violate them depending on the
circumstances. Under what conditions is a norm binding upon an agent?
While autonomy is regarded as the driving force behind the normative paradigm,
cognitive agents provide a basis for modeling the bindingness of norms. To cope
with the complexity of modeling cognitive agents and normative bindingness,
we adopt an intentional stance.
Since agents are embedded in a dynamic environment, events may not all occur at
the same instant. Accordingly, our cognitive model is extended to account for some
temporal aspects. Special attention is given to the temporal peculiarities of the legal
domain, such as, among others, the time in force and the time in efficacy of provisions.
Some types of normative modifications are also discussed in the framework.
It is noteworthy that our temporal account of legal reasoning is integrated with our
commonsense temporal account of cognition.
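To make the distinction between time in force and time in efficacy concrete, here is a minimal sketch (our own illustration, not the dissertation's formalization): a provision carries separate temporal intervals, so it can belong to the legal system while not yet producing legal effects. All names and dates are invented.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Provision:
    """A legal provision with separate temporal dimensions (illustrative only)."""
    text: str
    in_force_from: date                       # when the provision enters the legal system
    in_force_to: Optional[date] = None        # None: still in force
    efficacious_from: Optional[date] = None   # when it starts producing legal effects

    def in_force(self, t: date) -> bool:
        return self.in_force_from <= t and (self.in_force_to is None or t <= self.in_force_to)

    def efficacious(self, t: date) -> bool:
        start = self.efficacious_from or self.in_force_from
        return start <= t

# A provision may be in force without yet being efficacious (vacatio legis).
p = Provision("hypothetical provision", in_force_from=date(2008, 1, 1),
              efficacious_from=date(2008, 7, 1))
assert p.in_force(date(2008, 3, 1)) and not p.efficacious(date(2008, 3, 1))
```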
As our intention is to build sustainable reasoning systems running in unpredictable
environments, we adopt a declarative representation of knowledge. A declarative
representation of norms makes it easier to update their representation in the system,
thus facilitating system maintenance, and improves system transparency, thus easing
system governance.
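The maintenance benefit can be illustrated with a small sketch, assuming norms are stored as data and evaluated by a generic rule engine; the norm identifiers and conditions below are invented for illustration and are not drawn from the dissertation.

```python
# Norms stored as data: updating the normative system means editing entries,
# not rewriting control flow. All identifiers here are invented examples.
norms = [
    {"id": "n1", "if": ["driving", "school_zone"], "then": "obligation(slow_down)"},
    {"id": "n2", "if": ["contract_signed"], "then": "obligation(deliver_goods)"},
]

def applicable(norm, facts):
    """A norm applies when all of its conditions hold among the current facts."""
    return all(condition in facts for condition in norm["if"])

facts = {"driving", "school_zone"}
active = [n["then"] for n in norms if applicable(n, facts)]
print(active)  # ['obligation(slow_down)']
```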
Since agents are bounded and embedded in unpredictable environments,
and since conflicts may arise among mental states and norms, agent reasoning
has to be defeasible, i.e. new pieces of information can invalidate formerly
derivable conclusions. In this dissertation, our model is formalized in a
non-monotonic logic, namely a temporal modal defeasible logic, in order to
account for the interactions between normative systems and cognitive software agents.
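As a minimal sketch of defeasible inference in the spirit of defeasible logic (not the temporal modal variant developed in the dissertation): a rule's conclusion stands only if every applicable conflicting rule is inferior to it, so adding a fact can retract an earlier conclusion. The rules and superiority relation below are the textbook bird/penguin example, not material from the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    body: frozenset   # antecedent literals
    head: str         # conclusion; prefix "~" marks negation

def conclusions(rules, facts, superiority):
    """Derive defeasible conclusions: a rule's head is derived only when
    every applicable conflicting rule is inferior to that rule."""
    out = set()
    for r in rules:
        if not r.body <= facts:
            continue
        neg = r.head[1:] if r.head.startswith("~") else "~" + r.head
        rivals = [s for s in rules if s.body <= facts and s.head == neg]
        if all((r.name, s.name) in superiority for s in rivals):
            out.add(r.head)
    return out

rules = [Rule("r1", frozenset({"bird"}), "flies"),
         Rule("r2", frozenset({"penguin"}), "~flies")]
sup = {("r2", "r1")}  # the penguin rule overrides the bird rule

print(conclusions(rules, {"bird"}, sup))             # {'flies'}
print(conclusions(rules, {"bird", "penguin"}, sup))  # {'~flies'}: new information defeats the earlier conclusion
```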
Identifier | oai:union.ndltd.org:unibo.it/oai:amsdottorato.cib.unibo.it:911
Date | 03 June 2008 |
Creators | Riveret, Régis <1979> |
Contributors | Palmirani, Monica, Rotolo, Antonino |
Publisher | Alma Mater Studiorum - Università di Bologna |
Source Sets | Università di Bologna |
Language | English |
Detected Language | English |
Type | Doctoral Thesis, PeerReviewed |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |