1 |
Etude des pratiques de ressources humaines des moyennes entreprises : une approche managériale / A study of human resources practices in medium-sized enterprises: a managerial approach. Sebti, Bouchra, 25 June 2014.
This study of HR practices in medium-sized enterprises highlights the significant influence of organizational and institutional factors. We examine these practices through an approach that integrates the mobilized theories, and we point out their specificities in terms of: 1/ differentiation and structural hybridity, 2/ institutional legitimacy, and 3/ relational complexity. Based on a qualitative research methodology, the empirical study comprises 40 interviews, 13 of which were conducted as part of two case studies. These interviews with a range of HR actors allow comparisons between medium-sized enterprises of different sizes. In light of the specificities of medium-sized enterprises, our results highlight a diversity of HR practices. Our research confirms the value of a "non-consensual" approach to HR practices, this being linked to the retreat of the hierarchy in the face of the arrival of managers. Our results also show that in medium-sized enterprises, HR practices are personal, conformist, pseudo-conformist, pseudo-innovative and innovative.
|
2 |
The Measurement of Task Complexity and Cognitive Ability: Relational Complexity in Adult Reasoning. Birney, Damian Patrick, Unknown Date.
The theory of relational complexity (RC) developed by Halford and his associates (Halford et al., 1998a) proposes that, in addition to the number of unique entities that can be processed in parallel, it is the structure (complexity) of the relations between these entities that most appropriately captures the essence of processing capacity limitations. Halford et al. propose that the relational complexity metric forms an ordinal scale along which both task complexity and an individual's processing capacity can be ranked. However, the underlying quantitative structure of the RC metric is largely unknown. It is argued that an assessment of the measurement properties of the RC metric is necessary to first demonstrate that the scale is able to rank order task complexity and cognitive capacity in adults. If, in addition to ordinal ranking, it can be demonstrated that a continuous monotonic scale underlies the ranking of capacity (the natural extension of the complexity classification), then the potential to improve our understanding of adult cognition is further realised. Using a combination of cognitive psychology and individual differences methodologies, this thesis explores the psychometric properties of RC in three high-level reasoning tasks. The Knight-Knave Task and the Sentence Comprehension Task come from the psychological literature. The third task, the Latin Square Task, was developed especially for this project to test the RC theory. An extensive RC analysis of the Knight-Knave Task is conducted using the Method for Analysis of Relational Complexity (MARC). Processing in the Knight-Knave Task has been previously explored using deduction rules and mental models. We have taken this work as the basis for applying MARC and attempted to model the substantial demands these problems make on limited working memory resources in terms of their relational structure. The RC of the Sentence Comprehension Task has been reported in the literature, and we further review and extend the empirical evidence for this task. The primary criterion imposed for developing the Latin Square Task was to minimize confounds that might weaken the identification and interpretation of an RC effect. Factors such as storage load and prior experience were minimized by specifying that the task should be novel, have a small number of general rules that could be mastered quickly by people of differing ages and abilities, and have no rules that are complexity-level specific. The strength of MARC lies in using RC to explicitly link the cognitive demand of a task with the capacity of the individual. The cognitive psychology approach predicts performance decrements with increased task complexity and primarily deals with aggregated data across task conditions (comparison of means). It is argued, however, that to minimise the subtle circularity created by validating a task's complexity using the same information that is used to validate the individual's processing capacity, an integration of the individual differences approach is necessary. The first major empirical study of the project evaluates the utility of the traditional dual-task approach to analyse the influence of the RC manipulation on the dual-task deficit. The Easy-to-Hard paradigm, a modification of the dual-task methodology, is used to explore the influence of individual differences in processing capacity as a function of RC. The second major empirical study explores the psychometric approach to cognitive complexity.
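As a rough illustration of how a Latin-square item can vary in relational demand, consider determining which symbol can legally fill a target cell of a partially completed square, where harder items require integrating constraints from more rows and columns at once. The Python sketch below is a minimal illustration of that idea only; the 4x4 grids, the candidates helper, and the "binary"/"ternary" labels are assumptions made for this example, not the actual stimuli or complexity coding used in the thesis.

```python
# A minimal sketch, assuming a 4x4 grid with symbols 1-4. The grids and the
# "binary"/"ternary" labels are illustrative, not the thesis's actual items.

def candidates(grid, row, col, symbols=(1, 2, 3, 4)):
    """Symbols that could legally fill grid[row][col] under the Latin-square
    constraint that no symbol repeats within a row or a column."""
    used = {grid[row][c] for c in range(4) if grid[row][c] is not None}
    used |= {grid[r][col] for r in range(4) if grid[r][col] is not None}
    return [s for s in symbols if s not in used]

# "Binary"-style item: the target cell (0, 3) is fixed by its row alone.
binary_item = [
    [1, 2, 3, None],
    [None, None, None, None],
    [None, None, None, None],
    [None, None, None, None],
]
print(candidates(binary_item, 0, 3))   # [4]

# "Ternary"-style item: the target cell (3, 3) is fixed only once the
# constraints from its row and its column are considered together.
ternary_item = [
    [None, None, None, 4],
    [None, None, None, 1],
    [None, None, None, None],
    [2, None, None, None],
]
print(candidates(ternary_item, 3, 3))  # [3]
```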
The basic premise is that if RC is a manipulation of cognitive complexity in the traditional psychometric sense, then it should display similar psychometric properties. That is, increasing RC should result in a monotonically increasing relationship between task performance and Fluid Intelligence (Gf), the complexity-Gf effect. Results from the comparison of means approach indicate that, as expected, mean accuracy and response times differed reliably as a function of RC. An interaction between RC and Gf on task performance was also observed. The pattern of correlations was generally not consistent across RC tasks and is qualitatively different in important ways from the complexity-Gf effect. It is concluded that the Latin Square Task has sufficient measurement properties to allow us to discuss (i) how RC differs from complexity in tasks in which expected patterns of correlations are observed, (ii) what additional information needs to be considered to assist with the a priori identification of task characteristics that impose high cognitive demand, and (iii) the implications for understanding reasoning in dynamic and unconstrained environments outside the laboratory. We conclude that relational complexity theory provides a strong foundation from which to explore the influence of individual differences in performance further.
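To make the complexity-Gf prediction concrete, the following sketch simulates the kind of analysis described above: per-person performance at each assumed RC level is correlated with a fluid-intelligence score, and the correlations are checked for a monotonic rise with RC. The data are simulated and the level names, loadings, and numpy-based analysis are assumptions for illustration, not results or code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
gf = rng.normal(0.0, 1.0, n)            # standardised fluid-intelligence scores

# Assumed Gf loadings rise with RC level (binary < ternary < quaternary).
loadings = {"binary": 0.2, "ternary": 0.5, "quaternary": 0.8}

# Simulate a performance score per person at each RC level: a level-specific
# intercept, a Gf component whose weight grows with RC, and random noise.
performance = {
    level: 0.8 - 0.05 * i + w * gf + rng.normal(0.0, 1.0, n)
    for i, (level, w) in enumerate(loadings.items())
}

# Correlate performance with Gf at each level and test for a monotonic increase.
r_by_level = {level: np.corrcoef(gf, perf)[0, 1] for level, perf in performance.items()}
print(r_by_level)

rs = list(r_by_level.values())
print("correlation increases with RC:", all(a < b for a, b in zip(rs, rs[1:])))
```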
|
3 |
Rozšiřující vlastnosti struktur / Extension property of structures. Hartman, David, January 2014.
This work analyses properties of relational structures that imply a high degree of symmetry. A structure is called homogeneous if every mapping from any finite substructure can be extended to a mapping over the whole structure; the various types of these mappings determine corresponding types of homogeneity. A prominent position belongs to ultrahomogeneity, for which every local isomorphism can be extended to an automorphism. In contrast to graphs, the classification of ultrahomogeneous relational structures is still an open problem. The task of this work is to characterize "the distance" to homogeneity using two approaches. Firstly, the classification of homogeneous structures is studied when the "complexity" of a structure is increased by introducing more relations. This leads to various classifications of homomorphism-homogeneous L-colored graphs for different L, where L-colored graphs are graphs having sets of colors from a partially ordered set L assigned to vertices and edges. Moreover, a hierarchy of classes of homogeneous structures defined via types of homogeneity is studied from the viewpoint of class coincidence. The second approach analyses, for fixed classes of structures, the minimal way to extend their language so as to achieve homogeneity. We obtain results about relational complexity for finite...
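As a concrete illustration of the ultrahomogeneity condition mentioned above, the following brute-force Python sketch (an editorial example, not code from the thesis) checks, for a small finite graph, that every isomorphism between induced substructures is the restriction of some automorphism. Under this check the 5-cycle passes, while a path on four vertices does not.

```python
from itertools import combinations, permutations

def adjacent(edges, u, v):
    return frozenset((u, v)) in edges

def automorphisms(vertices, edges):
    """All vertex permutations preserving both adjacency and non-adjacency."""
    autos = []
    for perm in permutations(vertices):
        p = dict(zip(sorted(vertices), perm))
        if all(adjacent(edges, u, v) == adjacent(edges, p[u], p[v])
               for u, v in combinations(vertices, 2)):
            autos.append(p)
    return autos

def is_ultrahomogeneous(vertices, edges):
    """Brute force: every isomorphism between induced substructures
    must extend to an automorphism of the whole graph."""
    autos = automorphisms(vertices, edges)
    for k in range(1, len(vertices) + 1):
        for A in combinations(sorted(vertices), k):
            for B in combinations(sorted(vertices), k):
                for image in permutations(B):
                    f = dict(zip(A, image))
                    # Keep only maps that are isomorphisms of the induced subgraphs...
                    if not all(adjacent(edges, u, v) == adjacent(edges, f[u], f[v])
                               for u, v in combinations(A, 2)):
                        continue
                    # ...and require that some automorphism extends each of them.
                    if not any(all(a[x] == f[x] for x in A) for a in autos):
                        return False
    return True

C5_vertices, C5_edges = range(5), {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}
P4_vertices, P4_edges = range(4), {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}

print(is_ultrahomogeneous(C5_vertices, C5_edges))  # True: the 5-cycle is ultrahomogeneous
print(is_ultrahomogeneous(P4_vertices, P4_edges))  # False: no automorphism of the 4-vertex
                                                   # path maps an endpoint to an inner vertex
```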
|