In this thesis we have investigated what concepts are and how they may be represented. We have seen that conceptual representations can be achieved by employing distributed representations in a hidden layer of a neural network. A pattern of activity is in this respect a conceptualization, while the concepts it belongs to are regions of space treated alike by similarity-based generalization. That is, the conceptualization may still have individual properties attributed only to itself, but the properties relevant to the concept are shared among the representations in that region of space. These regions of space are allocated as dictated by coherently covarying properties of the domain, and thus constitute a hierarchical representation of it. In this hierarchical representation, the most general concepts occupy the largest amount of space, with their subordinate concepts distributed in clusters allocated inside this space. The hierarchical representation is discovered in a coarse-to-fine manner, mirroring the conceptual development of a child. Properties that are highly typical of a concept are, however, easier to learn and may thus be acquired before the properties of superordinate concepts, mirroring basic-level advantages in lexical acquisition. These typical properties show a higher level of activation throughout training. Frequency of presentation also influences how easily a concept or pattern is acquired: frequent presentation creates greater pressure to differentiate the instance, allocating a larger amount of space to it. This in turn facilitates the learning of its individual properties, attenuating the basic-level advantages. The properties that covary coherently in the domain become more salient than other properties. This allows concepts to be acquired on the basis of especially informative properties, possibly overlooking perceptual similarity. When noise was introduced into the system, the hierarchy broke down in a fine-to-coarse manner. These effects are all due to similarity-based generalization and the coarse-to-fine differentiation of conceptual distinctions, and they support many findings in semantic cognition. PDP thus serves as a good starting point for achieving conceptual representations.

Viewed as simulators (Barsalou, 1999, 2003a, 2003b), concepts are a skill for producing context-specific representations. This is also true of the hidden-layer conceptual representations, although it depends on whether the context is predictive. A simulator comprises a set of modality-specific perceptual symbols extracted from perceptual states. Barsalou (1999) also offered valuable insights into how simulators can support productivity and abstract thought.

We have also seen how categorization can influence perceptual discrimination (Goldstone, 1994). When categories are acquired, the category-relevant dimensions acquire distinctiveness, with emphasis on the category boundary. For separable dimensions, the irrelevant dimension may undergo acquired similarity, although one null effect was also found in Goldstone (1994). For integral dimensions, the irrelevant dimension also acquired distinctiveness. When two dimensions were relevant for categorization, separable dimensions competed with each other, while integral dimensions did not. Based on results from Gluck & Myers (1993), I have proposed that a predictive auto-encoder can account for the results found for separable dimensions. During categorization learning, the stimulus along with the assigned category is processed by a predictive auto-encoder. The result is that predictive dimensions acquire distinctiveness while redundant ones acquire similarity. Whether or not the irrelevant dimension acquires similarity will thus depend on whether it has previously been predictive.
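Since the thesis' own simulation details are not reproduced here, the following is only a minimal sketch of such a predictive auto-encoder: the network learns to reconstruct a stimulus and to predict its assigned category at the same time. The network sizes, learning rate, and data are illustrative assumptions, not the actual settings.

```python
# Minimal sketch of a predictive auto-encoder, in the spirit of
# Gluck & Myers (1993). Sizes, learning rate, and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stimuli: two separable dimensions; only dimension 0 predicts the category.
X = rng.uniform(0.0, 1.0, size=(200, 2))
category = (X[:, 0] > 0.5).astype(float)

# Targets: reconstruct the stimulus AND predict the assigned category.
T = np.column_stack([X, category])

W1 = rng.normal(0.0, 0.5, (2, 2))   # input -> hidden
W2 = rng.normal(0.0, 0.5, (2, 3))   # hidden -> reconstruction + category

eta = 0.5
for _ in range(5000):
    H = sigmoid(X @ W1)              # hidden (conceptual) representation
    Y = sigmoid(H @ W2)              # reconstruction + category prediction
    dY = (T - Y) * Y * (1.0 - Y)     # output delta, squared-error loss
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 += eta * H.T @ dY / len(X)
    W1 += eta * X.T @ dH / len(X)

# The predictive dimension should spread hidden representations apart
# (acquired distinctiveness), while the redundant one is compressed
# (acquired similarity).
H = sigmoid(X @ W1)
for d in range(2):
    lo = H[X[:, d] < 0.5].mean(axis=0)
    hi = H[X[:, d] >= 0.5].mean(axis=0)
    print(f"dimension {d}: hidden separation = {np.linalg.norm(hi - lo):.3f}")
```

Under these assumptions, the hidden-layer separation along the predictive dimension should exceed that along the redundant one, qualitatively reproducing acquired distinctiveness and acquired similarity.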
Language is another factor influencing perceptual discrimination. When language was introduced into the system, it had a profound influence on the conceptual representations (Cangelosi & Parisi, 2001). The representations acquired within-category similarity and between-category distinctiveness. The effect was largest for verbs, but was also present during non-linguistic processing. Language thus helped the network perfect its conceptual skills with respect to non-linguistic behavior. In Cangelosi & Riga (2006), language was used to implement grounding transfer, a process where new behavior is acquired, under the guidance of language, by grounding it in previously learned behavior. This could also be seen as an implementation of Barsalou's (1999) productivity mechanism. The involvement of language in simulating abstract thought has also been discussed. With reference to Cangelosi & Parisi (2001) and Cangelosi & Riga (2006), it seems that language has a profound effect on conceptual processing.

The dimensionality of the representation is another important factor in conceptual representations. As the dimensionality increases, the number of examples necessary to reach a given level of performance increases exponentially (Edelman & Intrator, 1997). Auto-encoders are a common method for unsupervised dimensionality reduction that also preserves the topology of the original domain. The dimensionality is reduced by compressing redundant information, allowing conception to focus on the relevant aspects of the representation.

We have also reviewed a theory of prefrontal cortex function suggesting its implication in guiding computation along processing-specific pathways and in acquiring categories and rules (Miller, Freedman, & Wallis, 2002; Miller & Cohen, 2001; Braver & Cohen, 2000). The PFC thus seems essential to conception. However, as the rules learned in the PFC are executed frequently, they are pushed down to more autonomous areas of the brain, and their execution becomes increasingly automatic. The PFC will thus be most involved in behavior requiring attention, to which acquiring concepts certainly belongs.

A framework for higher-level cognitive behavior from Veflingstad & Yildirim (2007) was introduced, organized around three levels of cognition: the stimulus-response level, the conceptual level, and the language level. Within this framework it is proposed that algorithms exist in the brain and that they are represented non-symbolically at the conceptual level. They operate on non-symbolic concepts and make decisions using feed-forward networks modeling if-then rules. By employing distributed representations, these algorithms exhibit the properties we have discussed thus far and will therefore exhibit semantic task performance. These algorithms support more complex thought and are engaged in higher-level cognitive tasks such as planning. A simulation of a non-symbolic summation algorithm was presented, showing the feasibility of the approach. It was proposed that the PFC is in charge of learning these algorithms, but as they are frequently executed, they are pushed down to more autonomous areas of the brain and no longer require as much attention to be executed.
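The summation simulation itself is not reproduced here, but the core mechanism, a feed-forward network modeling an if-then rule over distributed concept representations, can be sketched minimally. The concept patterns, the rule, and all parameters below are invented purely for illustration.

```python
# Minimal sketch of an if-then decision step over distributed
# (non-symbolic) concept representations; all details are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Distributed representations of two hypothetical control concepts.
concepts = {
    "more": rng.uniform(0, 1, 8),
    "done": rng.uniform(0, 1, 8),
}

# Train the rule "if the pattern means 'more', continue counting".
X = np.array([concepts["more"], concepts["done"]])
T = np.array([[1.0], [0.0]])          # 1 = continue, 0 = stop

W1 = rng.normal(0, 0.5, (8, 4))
W2 = rng.normal(0, 0.5, (4, 1))
for _ in range(3000):
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)
    dY = (T - Y) * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 += 0.5 * H.T @ dY
    W1 += 0.5 * X.T @ dH

def if_then(pattern):
    """Feed-forward evaluation of the learned if-then rule."""
    return sigmoid(sigmoid(pattern @ W1) @ W2)[0] > 0.5

# A trivial counting loop driven by the non-symbolic decision.
count, cue = 0, concepts["more"]
while if_then(cue) and count < 3:
    count += 1
    cue = concepts["more"] if count < 3 else concepts["done"]
print("counted:", count)
```

Because the decision is taken over distributed patterns, noisy or similar cues would generalize by similarity, which is the property the framework relies on.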
Novelty was proposed as a means of autonomous exploration and as a continuous type-checking parameter. Novelty is an informative and important signal, as it allows one to assess knowledge of a perceived instance without any explicit reference to memory. It was implemented in a simulation as the sum of differences between the input pattern and the output pattern of an auto-encoder; a minimal sketch of this measure closes the chapter. The simulation showed that novelty could be reliably assessed within and between modalities as long as the environment was noise free. When noise was introduced, performance dropped. The simulation was, however, very constrained, as the link between the modalities only supported one-to-one relationships. It was therefore suggested that the novelty of associations is better assessed as the amount of selective attention the PFC must exert for a pattern of activity in a massively recurrent system to settle into a new attractor. It should be mentioned that novelty is interpreted very broadly here. A specific association may have been observed many times, yet some other association may override it in the system. Such an association would be novel not in the sense that it has not been experienced, but in the sense that it has not been learned to a sufficient degree. Novelty is here also used to assess which of two associations is least familiar. Novelty in this respect is a measure of the amount of stress a current line of processing introduces in the system.

From the material presented in this thesis I will, in line with Barsalou (2003b), conclude that the concept arises from a skill for producing context-specific representations. This skill arises from interacting with the world and observing meaningful relationships and properties within it. As this skill improves, perception is affected in a way that further facilitates the skill. Once the skill has reached a certain level, language can be acquired, improving it even more. This, in turn, probably facilitates further acquisition of language. With reference to the three levels proposed, there seems to be a circular dependency between the levels, with the concept arising from this interaction. However, since conception can arise simply by similarity-based generalization, language would not seem necessary for conception. It does seem important to the complex conceptual abilities of humans, though. Even though it is concluded here that the concept emerges from the skill of the system, this does not mean that it cannot be investigated as patterns of activation. As we have seen, much can be learned from these patterns. They can also be employed in algorithms achieving more complex thought.
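As the closing sketch referred to above, the reconstruction-based novelty measure can be illustrated in a few lines. A linear auto-encoder, fitted by least squares, stands in for the network actually used in the simulation, and the data and dimensions are invented.

```python
# Minimal sketch of novelty as the summed difference between an
# auto-encoder's input and its reconstruction; a linear auto-encoder
# stands in for the simulation's actual network.
import numpy as np

rng = np.random.default_rng(2)

# "Familiar" environment: 100 patterns from a 3-dimensional subspace
# of a 10-dimensional input space.
basis = rng.normal(0, 1, (3, 10))
familiar = rng.normal(0, 1, (100, 3)) @ basis

# Fit a linear auto-encoder x_hat = x @ W by least squares; the
# min-norm solution projects inputs onto the familiar subspace.
W, *_ = np.linalg.lstsq(familiar, familiar, rcond=None)

def novelty(x):
    """Sum of absolute differences between input and reconstruction."""
    return np.abs(x - x @ W).sum()

known = familiar[0]
novel = rng.normal(0, 1, 10)   # a pattern off the familiar subspace
print(f"novelty(known) = {novelty(known):.3f}")   # near zero
print(f"novelty(novel) = {novelty(novel):.3f}")   # clearly larger
```

Familiar patterns reconstruct almost perfectly and yield near-zero novelty, while patterns outside the learned structure leave a large residual, which is exactly the signal the simulation used.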