Cuttlefish are renowned for their ability to rapidly alter the colour and texture of their skin for camouflage and communication. This ability depends on thousands of pigment-filled sacs, known as chromatophores, distributed across the skin. The chromatophores are innervated by motoneurons, which dilate the chromatophores to create the spots, stripes, and other markings known as chromatic components. There are 34 recognized chromatic components, and how cuttlefish coordinate the expression of these components to camouflage and communicate remains an open question. The digital age has introduced powerful new algorithms and methods to tease out subtle features of coloration patterns, through image registration, segmentation, and identification, as well as methods for modelling the underlying control systems. These tools offer major new insights into the mechanisms of visual perception. In addition, powerful techniques have recently been developed that have yet to be applied to this complex visuomotor control system. These methods have great potential for discovering which features shared between the pattern and the environment are necessary to prevent detection. Here I present four laboratory experiments that, for the first time, use machine learning models to investigate cuttlefish pattern formation, implementation, and information. The first two experimental chapters investigate how cuttlefish orchestrate their chromatic components into camouflage patterns, and what strategies they employ on diverse backgrounds. I demonstrate that components are expressed more independently than previously believed, and that the range of patterns expressed lies on a continuum, allowing me to suggest a revised classification scheme for cuttlefish body patterns. The diversity of patterns might suggest that a cuttlefish uses its repertoire flexibly to display the maximally cryptic pattern for a given background; however, I show that cuttlefish do not in fact select a single (possibly optimal) camouflage pattern, but continually alter their appearance on a given background, and that the frequency of change increases with the size of the objects in the environment. My third chapter investigates the language-like properties of cuttlefish communication using human speech recognition models. From a subset of cuttlefish patterns, I find that cuttlefish use a lexicon of 10 patterns with language-like properties: the patterns obeyed Zipf's law, carried around 1.6 bits per display, and, interestingly, two visually similar patterns were displayed in separate contexts. By fitting a regression to the patterns, I introduce a basic dictionary of cuttlefish terms and their meanings. From my investigations into cuttlefish intraspecific signalling, I discovered two previously undocumented patterns used in agonistic encounters between cuttlefish. My final chapter describes these patterns and the contexts in which they are displayed.
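The abstract's claims about Zipf's law and roughly 1.6 bits per display rest on standard information-theoretic measures. The minimal sketch below is not taken from the thesis; it uses invented pattern labels (P1–P10) and made-up counts purely to illustrate how Shannon entropy per display and a Zipf-style rank-frequency slope could be estimated from observed pattern frequencies.

```python
# Illustrative sketch only: placeholder data, not the thesis dataset or method.
import math
from collections import Counter

# Hypothetical sequence of observed pattern labels.
observations = ["P1", "P2", "P1", "P3", "P1", "P2", "P4", "P1", "P5", "P2",
                "P6", "P1", "P7", "P2", "P3", "P8", "P1", "P9", "P10", "P1"]

counts = Counter(observations)
total = sum(counts.values())
probs = [c / total for c in counts.values()]

# Shannon entropy in bits per display: H = -sum(p * log2(p)).
entropy_bits = -sum(p * math.log2(p) for p in probs)
print(f"Entropy: {entropy_bits:.2f} bits per display")

# Zipf's law predicts frequency ~ 1/rank; on log-log axes the slope is near -1.
ranked = sorted(counts.values(), reverse=True)
log_rank = [math.log(r + 1) for r in range(len(ranked))]
log_freq = [math.log(f) for f in ranked]

# Least-squares slope of log(frequency) against log(rank).
n = len(ranked)
mean_x = sum(log_rank) / n
mean_y = sum(log_freq) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(log_rank, log_freq))
         / sum((x - mean_x) ** 2 for x in log_rank))
print(f"Rank-frequency slope: {slope:.2f} (Zipf-like if close to -1)")
```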
Identifier | oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:731213
Date | January 2017 |
Creators | Culligan, Jay |
Publisher | University of Sussex |
Source Sets | Ethos UK |
Detected Language | English |
Type | Electronic Thesis or Dissertation |
Source | http://sro.sussex.ac.uk/id/eprint/71107/ |