261 |
Designing a knowledge management architecture to support self-organization in a hotel chain
Kaldis, Emmanuel January 2014 (has links)
Models are incredibly insidious; they slide undetected into discussions and then dominate the way people think. Since Information Systems (ISs) and particularly Knowledge Management Systems (KMSs) are socio-technical systems, they unconsciously embrace the characteristics of the dominant models of management thinking. Thus, their limitations can often be attributed to the deficiencies of the organizational models they aim to support. Through the case study of a hotel chain, this research suggests that contemporary KMSs in the hospitality sector are still grounded in the assumptions of the mechanistic organizational model, which conceives an organization as a rigid hierarchical entity governed from the top. Despite recent technological advances in supporting dialogue and participation between members, organizational knowledge is still transferred vertically: from the top to the bottom or from the bottom to the top. A number of limitations still exist in effectively supporting the horizontal transfer of knowledge between the geographically distributed units of an organization. Inspired by the key concepts of the more recent complex systems model, frequently referred to as complexity theories, a Knowledge Management Architecture (KMA) is proposed that aims to re-conceptualize existing KMSs towards conceiving an organization as a set of self-organizing communities of practice (CoPs). In every such CoP, order is created from the dynamic exchange of knowledge between the structurally similar community members. Thus, the focus of the KMA is placed on systematically capturing for reuse the architectural knowledge created with every initiative for change and on sharing such knowledge with the rest of the members of the CoP. A KMS was also developed to support the dynamic dimensions that the KMA proposes. The KMS was then applied in the case of the hotel chain, where it brought significant benefits that constitute evidence of an improved self-organizing ability. The previously isolated hotel units residing in distant regions could now trace, and easily reapply, changes undertaken by other community members. Top management's intervention to promote change was reduced, while the pace of change increased. Moreover, the organizational cohesion, the integration of new members and the level of management alertness were enhanced. The case of the hotel chain is indicative; it is believed that the proposed KMA can also be applied to geographically distributed organizations operating in other sectors. At the same time, this research contributes to the recent discourse between the fields of IS and complexity by demonstrating how fundamental concepts from complexity, such as self-organization, emergence and the edge of chaos, can be embraced by contemporary KMSs.
|
262 |
Développement de concepts et outils d’aide à la décision pour l’optimisation via simulation : intégration des métaheuristiques au formalisme DEVS / Concept development and decision support tools for optimization via simulation : integration of metaheuristics to DEVS formalism
Poggi, Bastien 12 December 2014 (has links)
Nous vivons dans un monde où le besoin d’efficacité s’impose de plus en plus. Ce besoin s’exprime dans différents domaines, allant de l’industrie à la médecine en passant par la surveillance environnementale. Engendrées par cette demande, de nombreuses méthodes d’optimisation « modernes » également appelées « métaheuristiques » sont apparues ces quarante dernières années. Ces méthodes se basent sur des raisonnements probabilistes et aléatoires et permettent la résolution de problèmes pour lesquels les méthodes d’optimisation « classiques » également appelées « méthodes déterministes » ne permettent pas l’obtention de résultats dans des temps raisonnables. Victimes du succès de ces méthodes, leurs concepteurs doivent aujourd’hui plus que jamais répondre à de nombreuses problématiques qui restent en suspens : « Comment évaluer de manière fiable et rapide les solutions proposées ? », « Quelle(s) méthode(s) choisir pour le problème étudié ? », « Comment paramétrer la méthode utilisée ? », « Comment utiliser une même méthode sur différents problèmes sans avoir à la modifier ? ». Pour répondre à ces différentes questions, nous avons développé un ensemble de concepts et outils. Ceux-ci ont été réalisés dans le cadre de la modélisation et la simulation de systèmes à évènements discrets avec le formalisme DEVS. Ce choix a été motivé par deux objectifs : permettre l’optimisation temporelle et spatiale de modèles DEVS existants et améliorer les performances du processus d’optimisation (qualité des solutions proposées, temps de calcul). La modélisation et la simulation de l’optimisation permettent de générer directement des propositions de paramètres sur les entrées du modèle à optimiser. Ce modèle, quant à lui, génère des résultats utiles à la progression de l’optimisation. Pour réaliser ce couplage entre optimisation et simulation, nous proposons l’intégration des méthodes d’optimisation sous la forme de modèles simulables et facilement interconnectables. Notre intégration se concentre donc sur la cohérence des échanges entre les modèles dédiés à l’optimisation et les modèles dédiés à la représentation du problème. Elle permet également l’arrêt anticipé de certaines simulations inutiles afin de réduire au maximum la durée de l’optimisation. La représentation des méthodes d’optimisation sous formes de modèles simulables apporte également un élément de réponse dans le choix et le paramétrage des algorithmes. Grâce à l’usage de la simulation, différents algorithmes et paramètres peuvent être utilisés pour un même processus d’optimisation. Ces changements sont également influencés par les résultats observés et permettent une adaptation automatique de l’optimisation aux spécificités connues et/ou cachées du problème étudié ainsi qu’à ses différentes étapes de résolution. L’architecture de modèle que nous proposons a été validée sur trois problèmes distincts : l’optimisation de paramètres pour des fonctions mathématiques, l’optimisation spatialisée d’un déploiement de réseau de capteurs sans fil, l’optimisation temporisée de traitements médicaux. La généricité de nos concepts et la modularité de nos modèles ont permis de mettre en avant la facilité d’utilisation de notre outil. Au niveau des performances, l’interruption de certaines simulations ainsi que le dynamisme de l’optimisation ont permis l’obtention de solutions de qualité supérieure dans des temps inférieurs. / In the world in which we live, the need for efficiency is increasing in various fields such as industry, medicine and environmental monitoring. To meet this need, many optimization methods known as "metaheuristics" have been created over the last forty years. They are based on probabilistic and random reasoning and allow users to solve problems for which conventional methods cannot produce results in acceptable computing times. Victims of their methods' success, their developers now have to answer several open questions: "How can the fitness of solutions be assessed reliably and quickly?", "How can the same method be used on several problems without changing its code?", "Which method should be chosen for a specific problem?", "How should the algorithms be parametrized?". To address these questions, we have developed a set of concepts and tools in the context of modeling and simulation of discrete-event systems with the DEVS formalism. The aims pursued are to allow temporal and spatial optimization of existing DEVS models and to improve the efficiency of the optimization process (quality of solutions, computing time). Modeling and simulation are used to propose parameter values on the inputs of the model to optimize; this model, in turn, generates results used to improve the next proposed solutions. In order to couple optimization and simulation, we propose to represent the optimization methods as models that can easily be interconnected and simulated. We focus on the consistency of exchanges between the optimization models and the problem models. Our approach also allows the early stopping of useless simulations, which reduces the computing time. Modeling optimization methods in the DEVS formalism further allows the optimization algorithm and its parameters to be chosen automatically: various algorithms and parameter settings can be used for the same problem at different steps of the optimization process. These changes are driven by the results collected from the problem simulations and lead to a self-adaptation to the visible and/or hidden features of the studied problem. Our model architecture has been tested on three different problems: parametric optimization of mathematical functions, spatial optimization of a wireless sensor network deployment, and temporal optimization of a medical treatment. The genericity of our concepts and the modularity of our models underline the usability of the proposed tool. Concerning performance, the interruption of simulations and the dynamic adaptation of the optimization yielded higher-quality solutions in shorter times.
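The coupling described in this abstract (an optimization model proposing parameters, a simulated problem model returning a fitness, and early interruption of unpromising runs) can be illustrated with a minimal sketch. This is not the author's DEVS implementation; the toy simulator, the simulated-annealing loop and the abort threshold are assumptions for illustration only.

```python
import math
import random

def simulate(params, abort_above=float("inf")):
    """Toy stand-in for a simulated problem model: accumulates a cost over
    discrete steps and aborts as soon as the running cost exceeds a bound."""
    cost = 0.0
    for _ in range(100):
        cost += (params["x"] - 3.0) ** 2 / 100 + abs(random.gauss(0, 0.01))
        if cost > abort_above:            # early stop of a useless simulation
            return float("inf")
    return cost

def simulated_annealing(n_iter=500, temp=1.0, cooling=0.99):
    """Minimal metaheuristic driving the simulator."""
    current = {"x": random.uniform(-10, 10)}
    current_cost = simulate(current)
    best, best_cost = dict(current), current_cost
    for _ in range(n_iter):
        candidate = {"x": current["x"] + random.gauss(0, 1)}
        # abort the run early if it cannot realistically beat the best solution
        cand_cost = simulate(candidate, abort_above=2.0 * best_cost)
        if cand_cost < current_cost or random.random() < math.exp(-(cand_cost - current_cost) / temp):
            current, current_cost = candidate, cand_cost
        if cand_cost < best_cost:
            best, best_cost = dict(candidate), cand_cost
        temp *= cooling
    return best, best_cost

print(simulated_annealing())
```

The `abort_above` bound plays the role of the early stopping of useless simulations mentioned above: a candidate whose partial cost already exceeds the bound is discarded without finishing the run.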
|
263 |
Gender Equality as an Idea and Practice - A Case Study of an Office at the United Nations Headquarters
Ketonen, Ida E. January 2018 (has links)
Achieving gender equality and empowering all women and girls is one of the United Nations' (UN) core objectives. However, the UN has been struggling to achieve gender balance in its own organisation, despite numerous attempts. Men have been in numerical dominance at the UN since its inception, especially in senior positions. This case study takes place just months after the System-wide strategy for gender parity was launched by Secretary-General Guterres. It captures the initial reactions through in-depth, semi-structured interviews with five women working in one UN body at the UN Headquarters in Geneva, Switzerland. Through these stories and experiences, this thesis aims to analyse the UN as a gendered organisation, focusing on organisational structure and culture. I argue that gendered processes in the organisational structure and culture preserve male dominance by having inclusionary effects on men and exclusionary effects on women. In this thesis I use gendered processes (Acker 1992), combined with post-structural policy analysis (Bacchi 2009) and complex systems theory (Ramalingam 2013), as analytical tools to show how equality is constructed and understood as an idea and in practice.
|
264 |
Criticality in neural networks = Criticalidade em redes neurais
Reis, Elohim Fonseca dos, 1984- 12 September 2015 (has links)
Advisors: José Antônio Brum, Marcus Aloizio Martinez de Aguiar / Master's dissertation - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Resumo: Este trabalho é dividido em duas partes. Na primeira parte, uma rede de correlação é construída baseada em um modelo de Ising em diferentes temperaturas, crítica, subcrítica e supercrítica, usando um algoritmo de Metropolis Monte-Carlo com dinâmica de single-spin-flip. Este modelo teórico é comparado com uma rede do cérebro construída a partir de correlações das séries temporais do sinal BOLD de fMRI de regiões do cérebro. Medidas de rede, como coeficiente de aglomeração, mínimo caminho médio e distribuição de grau são analisadas. As mesmas medidas de rede são calculadas para a rede obtida pelas correlações das séries temporais dos spins no modelo de Ising. Os resultados da rede cerebral são melhor explicados pelo modelo teórico na temperatura crítica, sugerindo aspectos de criticalidade na dinâmica cerebral. Na segunda parte, é estudada a dinâmica temporal da atividade de uma população neural, ou seja, a atividade de células ganglionares da retina gravadas em uma matriz de multi-eletrodos. Vários estudos têm focado em descrever a atividade de redes neurais usando modelos de Ising com desordem, não dando atenção à estrutura dinâmica. Tratando o tempo como uma dimensão extra do sistema, a dinâmica temporal da atividade da população neural é modelada. O princípio de máxima entropia é usado para construir um modelo de Ising com interação entre pares das atividades de diferentes neurônios em tempos diferentes. O ajuste do modelo é feito com uma combinação de amostragem de Monte-Carlo e método do gradiente descendente. O sistema é caracterizado pelos parâmetros aprendidos, questões como balanço detalhado e reversibilidade temporal são analisadas e variáveis termodinâmicas, como o calor específico, podem ser calculadas para estudar aspectos de criticalidade / Abstract: This work is divided into two parts. In the first part, a correlation network is built based on an Ising model at different temperatures (critical, subcritical and supercritical), using a Metropolis Monte-Carlo algorithm with single-spin-flip dynamics. This theoretical model is compared with a brain network built from the correlations of BOLD fMRI time series of brain region activity. Network measures, such as the clustering coefficient, the average shortest path length and the degree distribution, are analysed. The same network measures are calculated for the network obtained from the time-series correlations of the spins in the Ising model. The results from the brain network are better explained by the theoretical model at the critical temperature, suggesting critical aspects in the brain dynamics. In the second part, the temporal dynamics of the activity of a neuron population, that is, the activity of retinal ganglion cells recorded in a multi-electrode array, was studied. Many studies have focused on describing the activity of neural networks using disordered Ising models, with no regard to their dynamic nature. Treating time as an extra dimension of the system, the temporal dynamics of the activity of the neuron population is modeled. The maximum entropy principle is used to build an Ising model with pairwise interactions between the activities of different neurons at different times. Model fitting is performed by a combination of Metropolis Monte Carlo sampling with gradient descent methods. The system is characterized by the learned parameters; questions such as detailed balance and time reversibility are analysed, and thermodynamic variables, such as the specific heat, can be calculated to study critical aspects / Mestrado / Física / Mestre em Física / 2013/25361-6 / FAPESP
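For readers unfamiliar with the Metropolis single-spin-flip dynamics mentioned in the abstract, a minimal 2D Ising sampler is sketched below. It is a generic textbook implementation, not the code used in the dissertation; the lattice size, temperature and step count are assumptions.

```python
import numpy as np

def metropolis_ising(L=32, T=2.27, steps=200_000, J=1.0, seed=0):
    """Single-spin-flip Metropolis sampling of a 2D Ising model on an
    L x L lattice with periodic boundaries at temperature T (k_B = 1)."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)
        # sum over the four nearest neighbours
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nb      # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

# sample near the critical temperature T_c ~ 2.269 of the 2D Ising model
lattice = metropolis_ising()
print("absolute mean magnetisation:", abs(lattice.mean()))
```

Correlating the time series of individual spins sampled this way, at subcritical, critical and supercritical temperatures, gives the kind of correlation networks that the first part of the work compares against the fMRI-based brain networks.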
|
265 |
Modélisation du système complexe de la publication scientifique / Modeling the complex system of scientific publication
Kovanis, Michail 02 October 2017 (has links)
Le système d’évaluation par les pairs est le gold standard de la publication scientifique. Ce système a deux objectifs: d’une part filtrer les articles scientifiques erronés ou non pertinents et d’autre part améliorer la qualité de ceux jugés dignes de publication. Le rôle des revues scientifiques et des rédacteurs en chef est de veiller à ce que des connaissances scientifiques valides soient diffusées auprès des scientifiques concernés et du public. Cependant, le système d’évaluation par les pairs a récemment été critiqué comme étant intenable sur le long terme, inefficace et cause de délais de publication des résultats scientifiques. Dans ce projet de doctorat, j’ai utilisé une modélisation par systèmes complexes pour étudier le comportement macroscopique des systèmes de publication et d’évaluation par les pairs. Dans un premier projet, j’ai modélisé des données empiriques provenant de diverses sources comme Pubmed et Publons pour évaluer la viabilité du système. Je montre que l’offre dépasse de 15% à 249% la demande d’évaluation par les pairs et, par conséquent, le système est durable en termes de volume. Cependant, 20% des chercheurs effectuent 69% à 94% des revues d’articles, ce qui souligne un déséquilibre significatif en termes d’efforts de la communauté scientifique. Les résultats ont permis de réfuter la croyance largement répandue selon laquelle la demande d’évaluation par les pairs dépasse largement l’offre mais ont montré que la majorité des chercheurs ne contribue pas réellement au processus. Dans mon deuxième projet, j’ai développé un modèle par agents à grande échelle qui imite le comportement du système classique d’évaluation par les pairs, et que j’ai calibré avec des données empiriques du domaine biomédical. En utilisant ce modèle comme base pour mon troisième projet, j’ai modélisé cinq systèmes alternatifs d’évaluation par les pairs et évalué leurs performances par rapport au système conventionnel en termes d’efficacité de la revue, de temps passé à évaluer des manuscrits et de diffusion de l’information scientifique. Dans mes simulations, les deux systèmes alternatifs dans lesquels les scientifiques partagent les commentaires sur leurs manuscrits rejetés avec les éditeurs du prochain journal auquel ils les soumettent ont des performances similaires au système classique en termes d’efficacité de la revue. Le temps total consacré par la communauté scientifique à l’évaluation des articles est cependant réduit d’environ 63%. En ce qui concerne la dissémination scientifique, le temps total de la première soumission jusqu’à la publication est diminué d’environ 47% et ces systèmes permettent de diffuser entre 10% et 36% plus d’informations scientifiques que le système conventionnel. Enfin, le modèle par agents développé peut être utilisé pour simuler d’autres systèmes d’évaluation par les pairs ou des interventions, pour ainsi déterminer les interventions ou modifications les plus prometteuses qui pourraient être ensuite testées par des études expérimentales en vie réelle. / The peer-review system is undoubtedly the gold standard of scientific publication. Peer review serves a two-fold purpose; to screen out of publication articles containing incorrect or irrelevant science and to improve the quality of the ones deemed suitable for publication. Moreover, the role of the scientific journals and editors is to ensure that valid scientific knowledge is disseminated to the appropriate target group of scientists and to the public. 
However, the peer-review system has recently been criticized as unsustainable, inefficient and slow to bring results to publication. In this PhD thesis, I used complex-systems modeling to study the macroscopic behavior of the scientific publication and peer-review systems. In my first project, I modeled empirical data from various sources, such as Pubmed and Publons, to assess the sustainability of the system. I showed that the potential supply has been exceeding the demand for peer review by 15% to 249%, and thus the system is sustainable in terms of volume. However, 20% of researchers have been performing 69% to 94% of the annual reviews, which emphasizes a significant imbalance in terms of effort by the scientific community. The results provided evidence contrary to the widely adopted but untested belief that the demand for peer review far exceeds the supply, and they indicated that the majority of researchers do not contribute to the process. In my second project, I developed a large-scale agent-based model, which mimicked the behavior of the conventional peer-review system. This model was calibrated with empirical data from the biomedical domain. Using this model as a base for my third project, I developed and assessed the performance of five alternative peer-review systems by measuring peer-review efficiency, reviewer effort and scientific dissemination as compared to the conventional system. In my simulations, two alternative systems, in which scientists shared past reviews of their rejected manuscripts with the editors of the next journal to which they submitted, performed equally well or sometimes better in terms of peer-review efficiency. They also each reduced the overall reviewer effort by ~63%. In terms of scientific dissemination, they decreased the median time from first submission until publication by ~47% and diffused on average 10% to 36% more scientific information (i.e., manuscript intrinsic quality x journal impact factor) than the conventional system. Finally, my agent-based model may serve as an approach to simulate alternative peer-review systems (or interventions), identify those that are the most promising, and aid decisions about which systems may be introduced into real-world trials.
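The supply-and-demand bookkeeping behind the first project can be illustrated with a toy agent-based sketch. This is not the calibrated model from the thesis; the acceptance probabilities and the 20/80 split of reviewer willingness are assumptions chosen only to reproduce a skewed distribution of effort.

```python
import random

class Researcher:
    """Toy agent: may accept review invitations with a fixed probability."""
    def __init__(self, accept_prob):
        self.accept_prob = accept_prob   # willingness to review (skewed across agents)
        self.reviews_done = 0

def simulate_peer_review(n_researchers=1000, n_submissions=2000,
                         reviews_per_paper=2, seed=1):
    rng = random.Random(seed)
    # a small core of very willing reviewers, a large majority that rarely reviews
    pool = [Researcher(0.9 if i < n_researchers // 5 else 0.05)
            for i in range(n_researchers)]
    for _ in range(n_submissions):
        needed = reviews_per_paper
        while needed:
            reviewer = rng.choice(pool)
            if rng.random() < reviewer.accept_prob:
                reviewer.reviews_done += 1
                needed -= 1
    top20 = sorted(pool, key=lambda r: r.reviews_done, reverse=True)[:n_researchers // 5]
    return sum(r.reviews_done for r in top20) / (n_submissions * reviews_per_paper)

print(f"share of reviews done by the top 20% of reviewers: {simulate_peer_review():.0%}")
```

With these assumed parameters, roughly four fifths of the reviews end up being written by the most willing fifth of the pool, the same kind of imbalance the abstract reports from empirical data.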
|
266 |
Sociologický simulátor / Sociological Simulator
Ludwig, Petr January 2011 (has links)
This thesis describes the paradigm of complex systems and discusses possibilities for their modeling and simulation. The work shows the suitability of multi-agent modeling for creating an abstraction of the social environment, which is one of the major complex systems. The thesis includes an analysis of the tools that are available for creating multi-agent simulators. The core of the thesis is a set of processed research documents and a demonstrative model of the social phenomenon known as procrastination.
|
267 |
Adaptative modeling of urban dynamics with mobile phone database / Modélisation adaptative de la dynamique urbaine avec une base de données de téléphonie mobile
Faisal Behadili, Suhad 29 November 2016 (has links)
Dans cette étude, on s’intéresse à l’étude de la mobilité urbaine à partir de traces de données de téléphonie mobile qui ont été fournies par l’opérateur Orange. Les données fournies concernent la région de la ville de Rouen, durant un événement éphémère qui est l’Armada de 2008. Dans une première étude, on gère une masse importante de données pour extraire des caractéristiques permettant de qualifier des usages de la ville lors d’évènements éphémères, en fonctions des jours d’activités ou de repos des individus. Des visualisations sont données et permettent de comprendre les mobilités engendrées de manière spécifique ou non par l’événement. Dans une seconde partie, on s’intéresse à la reconstitution de trajectoires avec des approches agrégées inspirées des techniques de physique statistique afin de dégager des comportements en fonction des périodes d’activités et d’un découpage spatial en grandes zones urbaines. On tente ainsi de dégager des lois en observant des distributions en loi de puissance caractéristiques de la complexité des systèmes étudiés. / In this study, we are interested in urban mobility as seen through traces of mobile phone data provided by the operator Orange. The data concern the region of the city of Rouen during an ephemeral event, the Armada of 2008. In a first study, a large amount of data is processed to extract characteristics that qualify the uses of the city during ephemeral events, depending on the individuals' days of activity or rest. Visualizations are provided that make it possible to understand which mobility patterns are specifically generated by the event and which are not. In a second part, we study the reconstruction of trajectories with aggregated approaches inspired by statistical physics techniques, in order to reveal behaviors according to periods of activity and a spatial division into large urban areas. The aim is to derive general mobility laws by observing the power-law distributions characteristic of the complexity of the studied systems.
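A standard way to quantify the power-law behaviour mentioned above is the continuous maximum-likelihood estimator of the exponent (Clauset, Shalizi and Newman, 2009). The sketch below applies it to synthetic trip displacements; the real analysis would use the Orange call-detail-record traces, which are not reproduced here, and the chosen x_min is an assumption.

```python
import numpy as np

def power_law_exponent(samples, x_min):
    """Continuous maximum-likelihood estimate of alpha for
    P(x) ~ x^(-alpha), x >= x_min."""
    x = np.asarray(samples, dtype=float)
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# Hypothetical trip displacements (km); real data would come from CDR traces.
rng = np.random.default_rng(42)
alpha_true, x_min = 1.75, 1.0
u = rng.random(10_000)
trips = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling
print("estimated exponent:", round(power_law_exponent(trips, x_min), 3))
```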
|
268 |
A Generalized Framework for Representing Complex Networks
Viplove Arora (8086250) 06 December 2019 (has links)
Complex systems are often characterized by a large collection of components interacting in nontrivial ways. Self-organization among these individual components often leads to emergence of a macroscopic structure that is neither completely regular nor completely random. In order to understand what we observe at a macroscopic scale, conceptual, mathematical, and computational tools are required for modeling and analyzing these interactions. A principled approach to understand these complex systems (and the processes that give rise to them) is to formulate generative models and infer their parameters from given data that is typically stored in the form of networks (or graphs). The increasing availability of network data from a wide variety of sources, such as the Internet, online social networks, collaboration networks, biological networks, etc., has fueled the rapid development of network science.

A variety of generative models have been designed to synthesize networks having specific properties (such as power law degree distributions, small-worldness, etc.), but the structural richness of real-world network data calls for researchers to posit new models that are capable of keeping pace with the empirical observations about the topological properties of real networks. The mechanistic approach to modeling networks aims to identify putative mechanisms that can explain the dependence, diversity, and heterogeneity in the interactions responsible for creating the topology of an observed network. A successful mechanistic model can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. While it is difficult to intuit appropriate mechanisms for network formation, machine learning and evolutionary algorithms can be used to automatically infer appropriate network generation mechanisms from the observed network structure.

Building on these philosophical foundations and a series of (not new) observations based on first principles, we extrapolate an action-based framework that creates a compact probabilistic model for synthesizing real-world networks. Our action-based perspective assumes that the generative process is composed of two main components: (1) a set of actions that expresses link formation potential using different strategies capturing the collective behavior of nodes, and (2) an algorithmic environment that provides opportunities for nodes to create links. Optimization and machine learning methods are used to learn an appropriate low-dimensional action-based representation for an observed network in the form of a row stochastic matrix, which can subsequently be used for simulating the system at various scales. We also show that in addition to being practically relevant, the proposed model is relatively exchangeable up to relabeling of the node-types.

Such a model can facilitate handling many of the challenges of understanding real data, including accounting for noise and missing values, and connecting theory with data by providing interpretable results. To demonstrate the practicality of the action-based model, we decided to utilize the model within domain-specific contexts. We used the model as a centralized approach for designing resilient supply chain networks while incorporating appropriate constraints, a rare feature of most network models. Similarly, a new variant of the action-based model was used for understanding the relationship between the structural organization of human brains and the cognitive ability of subjects. Finally, our analysis of the ability of state-of-the-art network models to replicate the expected topological variations in network populations highlighted the need for rethinking the way we evaluate the goodness-of-fit of new and existing network models, thus exposing significant gaps in the literature.
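As an illustration of the idea of a row-stochastic action matrix driving link formation, the following toy generator mixes two link-formation actions (uniform and degree-preferential attachment) according to per-type action probabilities. It is a simplified sketch of the general concept, not the author's framework; the two actions and all parameter values are assumptions.

```python
import numpy as np

def generate_network(n_nodes, node_types, action_matrix, n_links, seed=0):
    """Toy generator: each node has a type, and the row-stochastic
    action_matrix gives, for each type, the probability of choosing each
    link-formation action (0 = uniform random, 1 = degree-preferential)."""
    rng = np.random.default_rng(seed)
    edges = set()
    degree = np.zeros(n_nodes)
    while len(edges) < n_links:
        u = int(rng.integers(n_nodes))
        action = rng.choice(action_matrix.shape[1], p=action_matrix[node_types[u]])
        if action == 0:
            v = int(rng.integers(n_nodes))            # uniform random attachment
        else:
            probs = (degree + 1.0) / (degree + 1.0).sum()
            v = int(rng.choice(n_nodes, p=probs))     # preferential attachment
        edge = (min(u, v), max(u, v))
        if u != v and edge not in edges:
            edges.add(edge)
            degree[u] += 1
            degree[v] += 1
    return edges

# two node types: type 0 mostly attaches at random, type 1 mostly preferentially
A = np.array([[0.8, 0.2],
              [0.1, 0.9]])                            # rows sum to 1 (row stochastic)
types = np.array([0] * 50 + [1] * 50)
print(len(generate_network(100, types, A, n_links=300)), "edges generated")
```

In the framework described above, the matrix would be learned from an observed network rather than fixed by hand; fixing it here only serves to show how the representation can be used for simulation.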
|
269 |
Investigating the collective behaviour of the stock market using Agent-Based Modelling
Björklöf, Christoffer January 2022 (has links)
The stock market is a place in which numerous entities interact, operate, and change state based on the decisions they make. Further, the stock market itself evolves and changes its dynamics over time as a consequence of the individual actions taking place in it. In this sense, the stock market can be viewed and treated as a complex adaptive system. In this study, an agent-based model, simulating the trading of a single asset, has been constructed with the purpose of investigating how the collective behaviour affects the dynamics of the stock market. For this purpose, the agent-based modelling program NetLogo was used. Lastly, the conclusion of the study revealed that the dynamics of the stock market are clearly dependent on some specific factors of the collective behaviour, such as the information source of the investors.
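A minimal Python sketch (not the NetLogo model used in the thesis) can convey how the investors' information source shapes the market dynamics: traders act either on private noisy signals or by imitating the previous aggregate order flow, and the size of the resulting price moves is compared. All parameters are assumptions.

```python
import random

def simulate_market(n_agents=200, n_steps=500, imitation=0.5, seed=3):
    """Minimal single-asset market: each step, every trader either acts on a
    noisy private signal or imitates the aggregate order flow of the previous
    step. The return is proportional to the net demand (linear price impact)."""
    rng = random.Random(seed)
    prev_net, returns = 0, []
    for _ in range(n_steps):
        net = 0
        for _ in range(n_agents):
            if rng.random() < imitation and prev_net != 0:
                order = 1 if prev_net > 0 else -1        # follow the crowd
            else:
                order = rng.choice((1, -1))              # private (noisy) signal
            net += order
        returns.append(0.0005 * net)                     # linear price impact
        prev_net = net
    return returns

for level in (0.1, 0.9):
    r = simulate_market(imitation=level)
    avg_move = sum(abs(x) for x in r) / len(r)
    print(f"imitation={level}: mean absolute return per step = {avg_move:.4f}")
```

With heavy imitation the order flow becomes highly imbalanced and price moves are an order of magnitude larger, a simple demonstration that the information source of the investors changes the market dynamics.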
|
270 |
On the Effect of Heterogeneity on the Dynamics and Performance of Dynamical Networks
Goudarzi, Alireza 01 January 2012 (has links)
The high cost of processor fabrication plants and approaching physical limits have started a new wave of research in alternative computing paradigms. As an alternative to top-down manufactured silicon-based computers, research in computing directly with natural and physical systems has recently gained a great deal of interest. A branch of this research promotes the idea that any physical system with sufficiently complex dynamics is able to perform computation. The power of networks in representing complex interactions between many parts makes them a suitable choice for modeling physical systems. Many studies have used networks with a homogeneous structure to describe computational circuits. However, physical systems are inherently heterogeneous. We aim to study the effect of heterogeneity on the dynamics of physical systems as it pertains to information processing. Two particularly well-studied network models that represent information processing in a wide range of physical systems are Random Boolean Networks (RBN), which are used to model gene interactions, and Liquid State Machines (LSM), which are used to model brain-like networks. In this thesis, we study the effects of function heterogeneity, in-degree heterogeneity, and interconnect irregularity on the dynamics and the performance of RBN and LSM. First, we introduce model parameters to characterize the heterogeneity of components in RBN and LSM networks. We then quantify the effects of heterogeneity on the network dynamics. For the three heterogeneity aspects that we studied, we found that the effects of heterogeneity on RBN and LSM are very different. We find that in LSM, in-degree heterogeneity decreases the chaoticity in the network, whereas it increases chaoticity in RBN. For interconnect irregularity, heterogeneity decreases the chaoticity in LSM, while its effect on the RBN dynamics depends on the connectivity: for K < 2, heterogeneity in the interconnect increases the chaoticity in the dynamics, and for K > 2 it decreases the chaoticity. We find that function heterogeneity has virtually no effect on the LSM dynamics. In RBN, however, function heterogeneity actually makes the dynamics predictable as a function of connectivity and heterogeneity in the network structure. We hypothesize that node heterogeneity in RBN may help signal processing because of the variety of signal decompositions performed by different nodes.
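The order-versus-chaos behaviour of RBNs as a function of the connectivity K, which the abstract refers to, can be probed with a standard damage-spreading experiment. The sketch below is a generic homogeneous RBN, not the heterogeneous models studied in the thesis; the network size, K values and step counts are assumptions.

```python
import random

def random_boolean_network(n=100, k=2, seed=0):
    """Classical RBN: each node gets k randomly chosen inputs and a random
    Boolean function stored as a lookup table over the 2**k input patterns."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update of every node."""
    new_state = []
    for ins, table in zip(inputs, tables):
        index = 0
        for i in ins:                     # encode the input pattern as an integer
            index = (index << 1) | state[i]
        new_state.append(table[index])
    return new_state

def damage_spread(n=100, k=2, steps=50, seed=0):
    """Flip one bit of the initial state and measure the Hamming distance
    between the perturbed and unperturbed trajectories after `steps` updates."""
    rng = random.Random(seed)
    inputs, tables = random_boolean_network(n, k, seed)
    a = [rng.randint(0, 1) for _ in range(n)]
    b = list(a)
    b[0] ^= 1                             # single-bit perturbation
    for _ in range(steps):
        a, b = step(a, inputs, tables), step(b, inputs, tables)
    return sum(x != y for x, y in zip(a, b))

for k in (1, 2, 3):
    print(f"K={k}: Hamming distance after 50 steps =", damage_spread(k=k))
```

For K below 2 the perturbation typically dies out, while for larger K it tends to spread; this homogeneous baseline is the kind of behaviour against which the effects of in-degree, interconnect and function heterogeneity are measured.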
|