641

A Complexity Theory for VLSI

Thompson, C. D. 01 August 1980
The established methodologies for studying computational complexity can be applied to the new problems posed by very large-scale integrated (VLSI) circuits. This thesis develops a "VLSI model of computation" and derives upper and lower bounds on the silicon area and time required to solve the problems of sorting and discrete Fourier transformation. In particular, the area A and time T taken by any VLSI chip using any algorithm to perform an N-point Fourier transform must satisfy AT^2 ≥ cN^2 log^2 N, for some fixed c > 0. A more general result for both sorting and Fourier transformation is that AT^{2x} = Ω(N^{1+x} log^{2x} N) for any x in the range 0 < x < 1. Also, the energy dissipated by a VLSI chip during the solution of either of these problems is at least Ω(N^{3/2} log N). The tightness of these bounds is demonstrated by the existence of nearly optimal circuits for both sorting and Fourier transformation. The circuits based on the shuffle-exchange interconnection pattern are fast but large: T = O(log^2 N) for Fourier transformation, T = O(log^3 N) for sorting; both have area A of at most O(N^2 / log^{1/2} N). The circuits based on the mesh interconnection pattern are slow but small: T = O(N^{1/2} log log N), A = O(N log^2 N).
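Set in display form, the headline bounds of this abstract read as follows (E is our shorthand for the dissipated energy; the bounds themselves are the abstract's):
```latex
% N-point Fourier transform, on any VLSI chip, using any algorithm:
\[ AT^{2} \;\ge\; c\,N^{2}\log^{2}N \qquad \text{for some fixed } c > 0 \]
% Both sorting and Fourier transformation, for any 0 < x < 1:
\[ AT^{2x} \;=\; \Omega\!\bigl(N^{1+x}\log^{2x}N\bigr) \]
% Energy dissipated in solving either problem:
\[ E \;=\; \Omega\!\bigl(N^{3/2}\log N\bigr) \]
```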
642

The Measurement of Task Complexity and Cognitive Ability: Relational Complexity in Adult Reasoning

Birney, Damian Patrick Unknown Date
The theory of relational complexity (RC) developed by Halford and his associates (Halford et al., 1998a) proposes that, in addition to the number of unique entities that can be processed in parallel, it is the structure (complexity) of the relations between these entities that most appropriately captures the essence of processing-capacity limitations. Halford et al. propose that the relational complexity metric forms an ordinal scale along which both task complexity and an individual’s processing capacity can be ranked. However, the underlying quantitative structure of the RC metric is largely unknown. It is argued that an assessment of the measurement properties of the RC metric is necessary to first demonstrate that the scale is able to rank-order task complexity and cognitive capacity in adults. If, in addition to ordinal ranking, it can be demonstrated that a continuous monotonic scale underlies the ranking of capacity (the natural extension of the complexity classification), then the potential to improve our understanding of adult cognition is further realised. Using a combination of cognitive psychology and individual-differences methodologies, this thesis explores the psychometric properties of RC in three high-level reasoning tasks. The Knight-Knave Task and the Sentence Comprehension Task come from the psychological literature. The third task, the Latin Square Task, was developed especially for this project to test the RC theory. An extensive RC analysis of the Knight-Knave Task is conducted using the Method for Analysis of Relational Complexity (MARC). Processing in the Knight-Knave Task has previously been explored using deduction rules and mental models. We have taken this work as the basis for applying MARC and attempted to model the substantial demands these problems make on limited working-memory resources in terms of their relational structure. The RC of the Sentence Comprehension Task has been reported in the literature, and we further review and extend the empirical evidence for this task. The primary criterion imposed in developing the Latin Square Task was to minimise confounds that might weaken the identification and interpretation of an RC effect. Factors such as storage load and prior experience were minimised by specifying that the task should be novel, have a small number of general rules that could be mastered quickly by people of differing ages and abilities, and have no rules specific to a particular complexity level. The strength of MARC lies in using RC to explicitly link the cognitive demand of a task with the capacity of the individual. The cognitive-psychology approach predicts performance decrements with increased task complexity and primarily deals with data aggregated across task conditions (comparison of means). It is argued, however, that to minimise the subtle circularity created by validating a task’s complexity using the same information that is used to validate the individual’s processing capacity, an integration of the individual-differences approach is necessary. The first major empirical study of the project evaluates the utility of the traditional dual-task approach to analyse the influence of the RC manipulation on the dual-task deficit. The Easy-to-Hard paradigm, a modification of the dual-task methodology, is used to explore the influence of individual differences in processing capacity as a function of RC. The second major empirical study explores the psychometric approach to cognitive complexity.
The basic premise is that if RC is a manipulation of cognitive complexity in the traditional psychometric sense, then it should display similar psychometric properties. That is, increasing RC should result in an increasing monotonic relationship between task performance and Fluid Intelligence (Gf) – the complexity-Gf effect. Results from the comparison-of-means approach indicate that, as expected, mean accuracy and response times differed reliably as a function of RC. An interaction between RC and Gf on task performance was also observed. The pattern of correlations was generally not consistent across RC tasks and differs in important ways from the complexity-Gf effect. It is concluded that the Latin Square Task has sufficient measurement properties to allow us to discuss (i) how RC differs from complexity in tasks in which the expected patterns of correlations are observed, (ii) what additional information needs to be considered to assist with the a priori identification of task characteristics that impose high cognitive demand, and (iii) the implications for understanding reasoning in dynamic and unconstrained environments outside the laboratory. We conclude that relational complexity theory provides a strong foundation from which to further explore the influence of individual differences in performance.
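To make the Latin Square Task concrete, the sketch below shows the core constraint a solver must apply; the specific item and the shapes-as-integers encoding are our illustration, not material from the thesis. Broadly speaking, RC is manipulated by how many rows and columns must be integrated at once; the item here is a low-complexity one, resolvable by simple row-and-column elimination.
```python
# Hypothetical 4x4 Latin-square item (0 = empty cell); the task's geometric
# shapes are encoded as the integers 1-4. Each shape must occur exactly once
# in every row and every column.
grid = [
    [1, 0, 3, 0],
    [0, 3, 0, 1],
    [3, 4, 0, 0],
    [0, 1, 0, 3],
]

def candidates(grid, row, col, symbols=(1, 2, 3, 4)):
    """Return the symbols still possible at (row, col) after eliminating
    everything already present in that row and that column."""
    used = set(grid[row]) | {grid[r][col] for r in range(len(grid))}
    return set(symbols) - used

# A low-RC item: the target cell is fully determined by one row and one
# column of eliminations, with no need to integrate further cells.
print(candidates(grid, 2, 3))   # -> {2}
```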
646

Application of Complexity Measures to Stratospheric Dynamics

Krützmann, Nikolai Christian January 2008
This thesis examines the utility of mathematical complexity measures for the analysis of stratospheric dynamics. Through theoretical considerations and tests with artificial data sets, e.g., the iteration of the logistic map, suitable parameters are determined for the application of the statistical entropy measures sample entropy (SE) and Rényi entropy (RE) to methane (a long-lived stratospheric tracer) data from simulations of the SOCOL chemistry-climate model. The SE is shown to be useful for quantifying the variability of recurring patterns in a time series and is able to identify tropical patterns similar to those reported by previous studies of the "tropical pipe" region. However, the SE is found to be unsuitable for use in polar regions, due to the non-stationarity of the methane data at extra-tropical latitudes. It is concluded that the SE cannot be used to analyse climate complexity on a global scale. The focus is turned to the RE, which is a complexity measure of probability distribution functions (PDFs). Using the second order RE and a normalisation factor, zonal PDFs of ten consecutive days of methane data are created with a Bayesian optimal binning technique. From these, the RE is calculated for every day (moving 10-day window). The results indicate that the RE is a promising tool for identifying stratospheric mixing barriers. In Southern Hemisphere winter and early spring, RE produces patterns similar to those found in other studies of stratospheric mixing. High values of RE are found to be indicative of the strong fluctuations in tracer distributions associated with relatively unmixed air in general, and with gradients in the vicinity of mixing barriers, in particular. Lower values suggest more thoroughly mixed air masses. The analysis is extended to eleven years of model data. Realistic inter-annual variability of some of the RE structures is observed, particularly in the Southern Hemisphere. By calculating a climatological mean of the RE for this period, additional mixing patterns are identified in the Northern Hemisphere. The validity of the RE analysis and its interpretation is underlined by showing that qualitatively similar patterns can be seen when using observational satellite data of a different tracer. Compared to previous techniques, the RE has the advantage that it requires significantly less computational effort, as it can be used to derive dynamical information from model or measurement tracer data without relying on any additional input such as wind fields. The results presented in this thesis strongly suggest that the RE is a useful new metric for analysing stratospheric mixing and its variability from climate model data. Furthermore, it is shown that the RE measure is very robust with respect to data gaps, which makes it ideal for application to observations. Hence, using the RE for comparing observations of tracer distributions with those from model simulations potentially presents a novel approach for analysing mixing in the stratosphere.
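As a rough illustration of the moving-window RE computation described above, the sketch below evaluates a normalised second-order Rényi entropy over a 10-day window. A fixed-width histogram stands in for the thesis's Bayesian optimal binning, and the array shapes and data are invented for the example.
```python
import numpy as np

def renyi2(samples, n_bins=32):
    """Normalised second-order Renyi entropy: H2 = -log(sum_i p_i^2) / log(n_bins).
    Equals 1 for a uniform histogram and approaches 0 for a delta-like one.
    (Fixed-width bins here; the thesis uses Bayesian optimal binning.)"""
    counts, _ = np.histogram(samples, bins=n_bins)
    p = counts / counts.sum()
    return -np.log(np.sum(p ** 2)) / np.log(n_bins)

# Moving 10-day window over daily zonal tracer samples.
# Shapes are illustrative: 365 days x 180 longitudes of a methane-like tracer.
rng = np.random.default_rng(0)
tracer = rng.normal(size=(365, 180))
window = 10
re_series = [renyi2(tracer[d:d + window].ravel())
             for d in range(tracer.shape[0] - window + 1)]
```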
648

Complexity theory as a model for the delivery of high value IT solutions

Wehmeyer, Baden 2007
Thesis (MPhil)--University of Stellenbosch, 2007. / ENGLISH ABSTRACT: Many variations of Systems Development Life Cycle models have evolved over the last fifty years of systems engineering and software science, yet too little knowledge is available to understand these as Complex Adaptive Systems through the study of chaos and complexity theories. The primary application domain of the thesis is the development of electronic hardware and software products. There is a great need for innovation to reach all corners of the development ecosystem; however, a large cognitive distance exists between the concept of systematic product development and that of value creation. Instruments are needed to aid process agility, to defuse imminent problems as they mount, and to support effective decisions that sustain maximum productivity. Many of these objectives are neglected in systems development practice. As with so many management fads, it appears that no single one of these models has lived up to all of the expectations, and in many cases their application ended in disaster. The statistics available on failed projects are concerning, but this has not stopped the scientific and engineering communities from trying, over and over again, to make progress. The goal of the thesis is therefore to identify the most viable model for sustaining the performance of systems development teams. The research draws insights from the extant literature by applying a knowledge-management-theory analysis to the various models, with specific attention given to complexity theory. The dominant metric discovered is the Value Velocity of a systems development team, determined by two independent variables: Value Created and Delivery Delay. Complex Adaptive Systems require only a guiding vision and a carefully selected set of generative rules to increase and sustain the Value Velocity.
649

Fragmentos de complexidade aplicados ao mercado financeiro

Kagi, Reinaldo Kenji 10 March 2014
The rise of complexity in financial markets has been reported by Rajan (2005), Gorton (2008), and Haldane and May (2011) as one of the main features behind the increase in systemic risk that culminated in the financial crisis of 2007/08. The Bank for International Settlements (2013) addresses complexity in the context of banking regulation and discusses the comparability of capital adequacy across banks and jurisdictions. Nonetheless, definitions of concepts such as complexity and complex adaptive systems are omitted from the major discussions. This paper elucidates some concepts related to complexity theory, how the phenomenon arises, and how these concepts may be applied to financial markets. We discuss two tools in the context of complex adaptive systems, agent-based models (ABMs) and entropy, and compare them with traditional tools. We conclude that although the complexity research agenda still leaves some gaps, it contributes to economic research by illuminating the mechanisms that trigger systemic risk, and by adding tools for modelling interacting heterogeneous agents, whose interaction gives rise to emergent phenomena in the system. Research hypotheses are suggested for later development.
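To give a flavour of the ABM tooling discussed in this abstract, here is a deliberately minimal agent-based market sketch; it is our illustration, not a model from the dissertation, and all parameters are invented. Two heterogeneous agent types interact through a single price, and the resulting dynamics belong to neither rule alone.
```python
import random

random.seed(1)
fundamental = 100.0              # fundamentalists' estimate of "true" value
price, prev_price = 100.0, 100.0
prices = []

for t in range(500):
    net_demand = 0.0
    for _ in range(50):          # fundamentalists: buy below value, sell above
        net_demand += 0.02 * (fundamental - price) + random.gauss(0, 0.05)
    for _ in range(50):          # chartists: extrapolate the latest price move
        net_demand += 0.04 * (price - prev_price) + random.gauss(0, 0.05)
    # Linear price impact of aggregate order flow.
    prev_price, price = price, price + 0.01 * net_demand
    prices.append(price)

# The emergent path combines slow mean reversion with short momentum runs;
# neither agent type's rule, taken alone, produces both behaviours.
```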
650

Relacionamento na cadeia produtiva da maçã sob a ótica da teoria da complexidade

Cruz, Marcia Rohr da 14 September 2009
The current reality of organizations demands that challenges be met by seeking the most effective solutions, giving rise to the adaptations required by constant change, so that obstacles are resolved with full consideration of their breadth. By its very nature, complexity theory can provide a view closer to reality, aiding the search for an understanding of processes related to the management of systems. This approach can make organizational actions more effective and can support a better understanding of the relationships and needs of the actors involved in organizations, both internal and external. This work arose from the need to understand how relationships unfold among the members of the apple production chain, and from that understanding to propose mechanisms that may help improve these interrelations. To this end, the tetralogical-ring approach was used to help identify the actions arising from the implementation of Integrated Apple Production (PIM) in the apple production chains of the states of Rio Grande do Sul and Santa Catarina. This is a qualitative study, operationalized through a case study. To meet the objectives, a theoretical review was conducted of the pillars supporting the study: production chains, relationships, and complexity theory. Based on this review, the methodological procedures for data collection and analysis were constructed. The case study was carried out through interviews with specialists who are members of the Brazilian apple production chain. The data analysis yielded results such as: identification of the tetralogical-ring concepts in the implementation of Integrated Apple Production; verification of the strategies the apple production chain uses to steer the system; the presence, within the relationships, of actions and attitudes that create a need for work oriented toward cooperation and teamwork; and the need for a reorganization aimed at compliance with certification processes on the part of the wholesale and retail buyer link, as well as raising final consumers' awareness of the need to demand quality control at the moment of purchase.
