Beyond self-assembly: Mergeable nervous systems, spatially targeted communication, and supervised morphogenesis for autonomous robots

The study of self-assembling robots represents a promising strand within the emerging field of modular robotics research. Self-assembling robots have the potential to autonomously adapt their bodies to new tasks and changing environments long after their initial deployment, by forming new physical connections to peer robots or by reorganizing existing ones. In previous research, many approaches have been presented to enable self-assembling robots to form composite morphologies. Recent technological advances have also increased the number of robots able to form such morphologies by at least two orders of magnitude. However, to date, composite robot morphologies have neither been able to solve real-world tasks nor to adapt to changing conditions entirely without human assistance or prior knowledge.

In this thesis, we identify three reasons why self-assembling robots may not have been able to fully unleash their potential, and we propose appropriate solutions. First, composite morphologies have not been able to show sensorimotor coordination comparable to that of their monolithic counterparts. We propose "mergeable nervous systems" -- a novel methodology that unifies independent robotic units into a single holistic entity at the control level. Our experiments show that mergeable nervous systems can enable self-assembling robots to demonstrate feats that go beyond those seen in any engineered or biological system. Second, no proposal has been tabled to enable a robot in a decentralized multirobot system to select its communication partners based on their location. We propose a new, highly scalable mechanism to enable "spatially targeted communication" in such systems. Third, the question of when and how to trigger a self-assembly process has been largely ignored by researchers.
We propose "supervised morphogenesis" -- a control methodology that is based on spatially targeted communication and enables cooperation between aerial and ground-based self-assembling robots. We show that allocating self-assembly related decision-making to a robot with an aerial perspective of the environment can allow robots on the ground to operate in entirely unknown environments and to solve tasks that arise during mission time. For each of the three propositions put forward in this thesis, we present results of extensive experiments carried out on real robotic hardware. Our results confirm that we were able to substantially advance the state of the art in self-assembling robots by unleashing their potential for morphological adaptation through enhanced sensorimotor coordination and by improving their overall autonomy through cooperation with aerial robots.

Degree: Doctorat en Sciences de l'ingénieur et technologie
Status: info:eu-repo/semantics/nonPublished

Identifier: oai:union.ndltd.org:ulb.ac.be/oai:dipot.ulb.ac.be:2013/267717
Date: 26 February 2018
Creators: Mathews, Nithin
Contributors: Dorigo, Marco, Bersini, Hugues, Birattari, Mauro, Allwright, Michael, Christensen, Anders Lyhne, O'Grady, Rehan, Støy, Kasper
Publisher: Université libre de Bruxelles, Ecole polytechnique de Bruxelles – Informatique, Bruxelles
Source Sets: Université libre de Bruxelles
Language: English
Detected Language: English
Type: info:eu-repo/semantics/doctoralThesis, info:ulb-repo/semantics/doctoralThesis, info:ulb-repo/semantics/openurl/vlink-dissertation
Format: No full-text files