641

Computational Intelligence and Complexity Measures for Chaotic Information Processing

Arasteh, Davoud 16 May 2008 (has links)
This dissertation investigates the application of computational intelligence methods to the analysis of nonlinear chaotic systems in the framework of many known and newly designed complex systems. Parallel comparisons are made between these methods, providing insight into the difficult challenges facing nonlinear systems characterization and aiding in the development of a generalized algorithm for computing algorithmic complexity measures, Lyapunov exponents, information dimension and topological entropy. These metrics are implemented to characterize the dynamic patterns of discrete and continuous systems and make it possible to distinguish order from disorder in them. Steps required for computing Lyapunov exponents with a reorthonormalization method and a group theory approach are formalized. Procedures for implementing computational algorithms are designed and numerical results for each system are presented. The advance-time sampling technique is designed to overcome the scarcity of phase-space samples and the buffer overflow problem in algorithmic complexity measure estimation in slow-dynamics feedback-controlled systems. It is proved analytically and tested numerically that for a quasiperiodic system like a Fibonacci map, complexity grows logarithmically with the evolutionary length of the data block. It is concluded that a normalized algorithmic complexity measure can be used as a system classifier. This quantity turns out to be one for random sequences and a non-zero value less than one for chaotic sequences. For periodic and quasi-periodic responses, the normalized complexity approaches zero as the data strings grow, with a faster decreasing rate for periodic responses. Algorithmic complexity analysis is performed on a class of rate-1/n convolutional encoders. The degree of diffusion in random-like patterns is measured. Simulation evidence indicates that the algorithmic complexity associated with a particular class of rate-1/n codes increases with the encoder constraint length. This occurs in parallel with the increase in the error-correcting capacity of the decoder. Comparing groups of rate-1/n convolutional encoders, it is observed that as the encoder rate decreases from 1/2 to 1/7, the encoded data sequence manifests smaller algorithmic complexity with a larger free distance value.
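The abstract does not name a specific estimator, but one common concrete realisation of a normalized algorithmic complexity measure with the behaviour described above is Lempel-Ziv-style phrase counting on a binarized trajectory, normalized by the n / log2(n) growth expected for random data. The sketch below is a minimal Python illustration only; the median binarization, the simplified phrase-counting rule, and the logistic-map example are assumptions rather than the dissertation's actual procedure.

```python
import numpy as np

def lz_phrase_count(s):
    """Count phrases in a simplified LZ76-style parsing of a binary string.

    Each phrase is extended while it already occurs in the preceding prefix,
    then closed with one new symbol.
    """
    i, count, n = 0, 0, len(s)
    while i < n:
        j = i + 1
        while j <= n and s[i:j] in s[:i]:
            j += 1
        count += 1
        i = j
    return count

def normalized_complexity(x):
    """Normalized algorithmic complexity of a time series.

    The series is binarized around its median and the phrase count is divided
    by n / log2(n), the approximate growth rate for random bits: values near 1
    indicate random-like data, intermediate values chaotic dynamics, and values
    near 0 periodic or quasi-periodic behaviour.
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    bits = "".join("1" if v > med else "0" for v in x)
    n = len(bits)
    return lz_phrase_count(bits) * np.log2(n) / n

if __name__ == "__main__":
    n = 4096
    rng = np.random.default_rng(0)
    # Chaotic example: logistic map x -> r x (1 - x) with r = 3.8
    x = np.empty(n)
    x[0] = 0.4
    for t in range(1, n):
        x[t] = 3.8 * x[t - 1] * (1.0 - x[t - 1])
    print("random  :", normalized_complexity(rng.random(n)))                          # close to 1
    print("chaotic :", normalized_complexity(x))                                      # nonzero, below 1
    print("periodic:", normalized_complexity(np.sin(2 * np.pi * np.arange(n) / 32)))  # near 0
```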
642

Optimization in Graphs under Degree Constraints. Application to Telecommunication Networks

Sau, Ignasi 16 October 2009 (has links) (PDF)
The first part of this thesis deals with traffic grooming in telecommunication networks. Traffic grooming is the aggregation of low-bitrate flows into higher-capacity streams. However, each insertion or extraction of traffic on a wavelength requires placing an add-drop multiplexer (ADM) at the network node, and one ADM is needed for each wavelength used at the node, which represents a significant equipment cost. The objectives of traffic grooming are, on the one hand, efficient sharing of the bandwidth and, on the other hand, reduction of the cost of the routing equipment. We present inapproximability results, approximation algorithms, a new model that allows the network to route any request graph of bounded degree, and optimal solutions for two all-to-all traffic scenarios: the bidirectional ring and the unidirectional ring with a grooming factor that changes dynamically. The second part of the thesis deals with problems of finding subgraphs under degree constraints. This class of problems is more general than traffic grooming, which is a particular case. The goal is to find subgraphs of a given graph satisfying degree constraints while optimizing a graph parameter (most often, the number of vertices or edges). We present approximation algorithms, inapproximability results, parameterized complexity studies, exact algorithms for planar graphs, and a general methodology that efficiently solves this class of problems (and, more generally, the class of problems whose solutions can be encoded by a partition of a subset of the vertices) for graphs embedded in a surface. Finally, several appendices present results on related problems.
643

Effects of morphometric isolation and vegetation on the macroinvertebrate community in shallow Baltic Sea land-uplift bays

Hansen, Joakim January 2010 (has links)
Shallow sheltered Baltic Sea bays are ecologically important habitats that harbour a unique vegetation community and constitute vital reproduction areas for many coastal fish species. Knowledge about the invertebrate community in these bays is, however, limited. This thesis examines the macroinvertebrate community in shallow sheltered Baltic Sea bays and how it is affected by: (1) the natural morphometric isolation of bays from the sea due to post-glacial land uplift; and (2) differences in vegetation types. Invertebrate biomass and the number of taxa were found to decrease with increased bay isolation. The taxon composition changed from dominance by bivalves and gastropods in open bays to a community with a larger proportion of insects in isolated bays. Stable isotope analysis indicated epiphytes and periphyton as the major energy resources for most of the examined consumers, but their importance relative to larger plants decreased for some consumers with increased bay isolation. A comparison of invertebrate abundance between plants revealed a close relationship with the morphological complexity of the plants: more complexly structured plants had higher invertebrate abundance than plants with simpler morphology. The results suggest that management of these coastal habitats should be dynamic and take into consideration the natural change in the invertebrate community resulting from the slow bay-isolation process. In addition, the results imply that changes in the aquatic vegetation due to anthropogenic influences could induce changes in the invertebrate community as the plant habitat structure is altered. A changed invertebrate community may in turn affect higher trophic levels, since invertebrates are important food for many fish and waterfowl species. / At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 2: Submitted. Paper 4: In press.
644

A Complexity Theory for VLSI

Thompson, C. D. 01 August 1980 (has links)
The established methodologies for studying computational complexity can be applied to the new problems posed by very large-scale integrated (VLSI) circuits. This thesis develops a "VLSI model of computation" and derives upper and lower bounds on the silicon area and time required to solve the problems of sorting and discrete Fourier transformation. In particular, the area A and time T taken by any VLSI chip using any algorithm to perform an N-point Fourier transform must satisfy AT^2 ≥ cN^2 log^2 N, for some fixed c > 0. A more general result for both sorting and Fourier transformation is that AT^{2x} = Ω(N^{1+x} log^{2x} N) for any x in the range 0 < x < 1. Also, the energy dissipated by a VLSI chip during the solution of either of these problems is at least Ω(N^{3/2} log N). The tightness of these bounds is demonstrated by the existence of nearly optimal circuits for both sorting and Fourier transformation. The circuits based on the shuffle-exchange interconnection pattern are fast but large: T = O(log^2 N) for Fourier transformation, T = O(log^3 N) for sorting; both have area A of at most O(N^2 / log^{1/2} N). The circuits based on the mesh interconnection pattern are slow but small: T = O(N^{1/2} log log N), A = O(N log^2 N).
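As a quick sanity check on the stated near-optimality, one can compare each circuit family's area-time product against the AT^2 lower bound. The sketch below is illustrative only: the unspecified constant c is set to 1 and all constant factors in A and T are dropped, so only growth rates are meaningful.

```python
import math

def at2_lower_bound(n, c=1.0):
    """AT^2 >= c * N^2 * log^2 N for an N-point Fourier transform (c unknown; 1 here)."""
    return c * n ** 2 * math.log2(n) ** 2

def shuffle_exchange_at2(n):
    """Fast but large: T = log^2 N, A = N^2 / log^(1/2) N (constant factors dropped)."""
    t = math.log2(n) ** 2
    a = n ** 2 / math.sqrt(math.log2(n))
    return a * t ** 2

def mesh_at2(n):
    """Slow but small: T = N^(1/2) * loglog N, A = N * log^2 N (constant factors dropped)."""
    t = math.sqrt(n) * math.log2(math.log2(n))
    a = n * math.log2(n) ** 2
    return a * t ** 2

for n in (2 ** 10, 2 ** 16, 2 ** 22):
    lb = at2_lower_bound(n)
    print(n, shuffle_exchange_at2(n) / lb, mesh_at2(n) / lb)
# Both ratios grow only polylogarithmically (log^(3/2) N and (loglog N)^2, respectively),
# which is the sense in which the circuits are "nearly optimal".
```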
645

The Measurement of Task Complexity and Cognitive Ability: Relational Complexity in Adult Reasoning

Birney, Damian Patrick Unknown Date (has links)
The theory of relational complexity (RC) developed by Halford and his associates (Halford et al., 1998a) proposes that, in addition to the number of unique entities that can be processed in parallel, it is the structure (complexity) of the relations between these entities that most appropriately captures the essence of processing capacity limitations. Halford et al. propose that the relational complexity metric forms an ordinal scale along which both task complexity and an individual’s processing capacity can be ranked. However, the underlying quantitative structure of the RC metric is largely unknown. It is argued that an assessment of the measurement properties of the RC metric is necessary to first demonstrate that the scale is able to rank order task complexity and cognitive capacity in adults. If, in addition to ordinal ranking, it can be demonstrated that a continuous monotonic scale underlies the ranking of capacity (the natural extension of the complexity classification), then the potential to improve our understanding of adult cognition is further realised. Using a combination of cognitive psychology and individual differences methodologies, this thesis explores the psychometric properties of RC in three high-level reasoning tasks. The Knight-Knave Task and the Sentence Comprehension Task come from the psychological literature. The third task, the Latin Square Task, was developed especially for this project to test the RC theory. An extensive RC analysis of the Knight-Knave Task is conducted using the Method for Analysis of Relational Complexity (MARC). Processing in the Knight-Knave Task has previously been explored using deduction rules and mental models. We have taken this work as the basis for applying MARC and attempted to model the substantial demands these problems make on limited working memory resources in terms of their relational structure. The RC of the Sentence Comprehension Task has been reported in the literature, and we further review and extend the empirical evidence for this task. The primary criterion imposed for developing the Latin Square Task was to minimize confounds that might weaken the identification and interpretation of an RC effect. Factors such as storage load and prior experience were minimized by specifying that the task should be novel, have a small number of general rules that could be mastered quickly by people of differing ages and abilities, and have no rules that are specific to a complexity level. The strength of MARC lies in using RC to explicitly link the cognitive demand of a task with the capacity of the individual. The cognitive psychology approach predicts performance decrements with increased task complexity and primarily deals with data aggregated across task conditions (comparison of means). It is argued, however, that to minimise the subtle circularity created by validating a task’s complexity using the same information that is used to validate the individual’s processing capacity, an integration of the individual differences approach is necessary. The first major empirical study of the project evaluates the utility of the traditional dual-task approach to analyse the influence of the RC manipulation on the dual-task deficit. The Easy-to-Hard paradigm, a modification of the dual-task methodology, is used to explore the influence of individual differences in processing capacity as a function of RC. The second major empirical study explores the psychometric approach to cognitive complexity.
The basic premise is that if RC is a manipulation of cognitive complexity in the traditional psychometric sense, then it should display similar psychometric properties. That is, increasing RC should result in an increasing monotonic relationship between task performance and Fluid Intelligence (Gf) – the complexity-Gf effect. Results from the comparison of means approach indicate that, as expected, mean accuracy and response times differed reliably as a function of RC. An interaction between RC and Gf on task performance was also observed. The pattern of correlations was generally not consistent across RC tasks and is qualitatively different in important ways from the complexity-Gf effect. It is concluded that the Latin Square Task has sufficient measurement properties to allow us to discuss (i) how RC differs from complexity in tasks in which expected patterns of correlations are observed, (ii) what additional information needs to be considered to assist with the a priori identification of task characteristics that impose high cognitive demand, and (iii) the implications for understanding reasoning in dynamic and unconstrained environments outside the laboratory. We conclude that relational complexity theory provides a strong foundation from which to further explore the influence of individual differences in performance.
649

Application of Complexity Measures to Stratospheric Dynamics

Krützmann, Nikolai Christian January 2008 (has links)
This thesis examines the utility of mathematical complexity measures for the analysis of stratospheric dynamics. Through theoretical considerations and tests with artificial data sets, e.g., the iteration of the logistic map, suitable parameters are determined for the application of the statistical entropy measures sample entropy (SE) and Rényi entropy (RE) to methane (a long-lived stratospheric tracer) data from simulations of the SOCOL chemistry-climate model. The SE is shown to be useful for quantifying the variability of recurring patterns in a time series and is able to identify tropical patterns similar to those reported by previous studies of the "tropical pipe" region. However, the SE is found to be unsuitable for use in polar regions, due to the non-stationarity of the methane data at extra-tropical latitudes. It is concluded that the SE cannot be used to analyse climate complexity on a global scale. The focus is turned to the RE, which is a complexity measure of probability distribution functions (PDFs). Using the second-order RE and a normalisation factor, zonal PDFs of ten consecutive days of methane data are created with a Bayesian optimal binning technique. From these, the RE is calculated for every day (moving 10-day window). The results indicate that the RE is a promising tool for identifying stratospheric mixing barriers. In Southern Hemisphere winter and early spring, RE produces patterns similar to those found in other studies of stratospheric mixing. High values of RE are found to be indicative of the strong fluctuations in tracer distributions associated with relatively unmixed air in general, and with gradients in the vicinity of mixing barriers in particular. Lower values suggest more thoroughly mixed air masses. The analysis is extended to eleven years of model data. Realistic inter-annual variability of some of the RE structures is observed, particularly in the Southern Hemisphere. By calculating a climatological mean of the RE for this period, additional mixing patterns are identified in the Northern Hemisphere. The validity of the RE analysis and its interpretation is underlined by showing that qualitatively similar patterns can be seen when using observational satellite data of a different tracer. Compared to previous techniques, the RE has the advantage that it requires significantly less computational effort, as it can be used to derive dynamical information from model or measurement tracer data without relying on any additional input such as wind fields. The results presented in this thesis strongly suggest that the RE is a useful new metric for analysing stratospheric mixing and its variability from climate model data. Furthermore, it is shown that the RE measure is very robust with respect to data gaps, which makes it ideal for application to observations. Hence, using the RE for comparing observations of tracer distributions with those from model simulations potentially presents a novel approach for analysing mixing in the stratosphere.
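The second-order Rényi entropy described above is straightforward to reproduce in outline. The sketch below is a minimal Python illustration, not the thesis procedure: it uses a fixed number of equal-width histogram bins in place of the Bayesian optimal binning, one plausible normalisation (dividing by the maximum attainable entropy) in place of the unspecified normalisation factor, and a hypothetical moving_window_re helper with an assumed (days × longitudes) input layout.

```python
import numpy as np

def renyi_entropy_order2(values, bins=32, normalize=True):
    """Second-order Renyi entropy, H2 = -log(sum_i p_i^2), of a 1-D sample.

    Broad, strongly fluctuating tracer distributions (relatively unmixed air,
    gradients near mixing barriers) give high H2; narrow distributions from
    well-mixed air give low H2.
    """
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    h2 = -np.log(np.sum(p ** 2))
    if normalize and len(p) > 1:
        h2 /= np.log(len(p))  # divide by the maximum attainable value, log(occupied bins)
    return h2

def moving_window_re(tracer, window=10):
    """RE over a moving 10-day window of a (days x longitudes) array of zonal tracer values."""
    days = tracer.shape[0]
    return np.array([
        renyi_entropy_order2(tracer[d:d + window].ravel())
        for d in range(days - window + 1)
    ])
```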
