261

Outils d'aide à la décision pour la sélection des filières de valorisation des produits de la déconstruction des systèmes en fin de vie : application au domaine aéronautique / End-of-life option selection decision support tools

Godichaud, Matthieu 22 April 2009 (has links)
Dans un contexte de développement durable, les enjeux de la dernière phase du cycle de vie d'un système, la phase de retrait de service, se sont accrus ces dernières années. Les systèmes en fin de vie doivent être déconstruits afin d'être revalorisés pour répondre aux différentes exigences environnementales. Cette responsabilité incombe au concepteur qui doit définir le sous-système support de la phase de retrait de service : le système de déconstruction. Sa principale fonction est la réalisation de l'activité de déconstruction dans l'objectif de favoriser en aval le recyclage de la matière des constituants du système en fin de vie et/ou leur recyclage fonctionnel. Les stratégies de déconstruction doivent répondre à l'ensemble des problèmes de décision posés lors de la phase de retrait de service d'un système. Il s'agit notamment de sélectionner les constituants valorisables suivant des critères techniques, économiques et environnementaux puis de définir et optimiser le système de déconstruction permettant l'obtention de ces produits. La solution obtenue définit ce que nous avons appelé une trajectoire de déconstruction. Nos travaux portent sur la modélisation et l'optimisation de ces trajectoires. Nos développements s'articulent en quatre phases.

État de l'art et démarche de définition d'une trajectoire. Dans cette phase, une structure de démarche de définition de trajectoires de déconstruction est proposée puis instrumentée. Les modèles généralement utilisés dans ce cadre sont de type déterministe et ne permettent pas de prendre en compte et de gérer les incertitudes inhérentes au processus de déconstruction (état dégradé du système en fin de vie et de ses constituants, demandes en produits issus de la déconstruction, dates de fin de vie des systèmes, ...). Pour déterminer une solution robuste de déconstruction d'un système en fin de vie, l'aide à la décision proposée doit intégrer des incertitudes de nature diverse tout en facilitant leur gestion et leur mise à jour.

Incertitudes en déconstruction. Sur la base de ce constat, l'ensemble des incertitudes couramment mises en jeu dans l'optimisation des trajectoires est identifié et caractérisé. Les méthodes probabilistes apparaissent comme des approches privilégiées pour intégrer ces incertitudes dans une démarche d'aide à la décision. Les réseaux bayésiens et leur extension aux diagrammes d'influence sont proposés pour répondre à différents problèmes de décision posés lors de la définition d'une trajectoire de déconstruction. Ils servent de support au développement d'un outil d'aide à la décision.

Modélisation de trajectoires de déconstruction : principes et approche statique d'optimisation. Après avoir présenté ses principes de modélisation, l'outil est développé dans une approche de détermination d'une trajectoire de déconstruction d'un système en fin de vie donné. La trajectoire obtenue fixe la profondeur de déconstruction, les options de revalorisation, les séquences et les modes de déconstruction suivant des critères économiques et environnementaux tout en permettant de gérer différents types d'incertitude. L'utilisation de critères économiques est ici privilégiée. Un exemple d'application sur un système aéronautique est développé pour illustrer les principes de modélisation.

Approche dynamique pour l'optimisation d'une trajectoire de déconstruction. Le champ d'application de l'outil d'aide à la décision est étendu en intégrant une dimension temporelle à la modélisation du problème à l'aide des réseaux bayésiens dynamiques. Les trajectoires de déconstruction peuvent ainsi être établies sur des horizons couvrant les arrivées de plusieurs systèmes en fin de vie en présence d'incertitudes. Le modèle permet de déterminer des politiques de déconstruction pour chaque opération identifiée dans la trajectoire en fonction de différents paramètres liés à la gestion des demandes et des arrivées ou encore au processus d'obtention de ces produits. Le décideur peut ainsi adapter l'outil à différents contextes de détermination de trajectoire de déconstruction de systèmes en fin de vie.

/ In a sustainable development context, the stakes of the last stage of the system life cycle, the end-of-life stage, have increased in recent years. End-of-life systems have to be demanufactured in order to be recovered and to meet environmental requirements. The aim of disassembly strategies is to provide solutions to the full set of decision problems arising during the end-of-life stage of systems. In particular, decision makers have to select valuable products according to technical, economic and environmental criteria, and then design and optimise the disassembly support system that generates these products. The solution determines what we call a disassembly trajectory, and our work deals with the modelling and optimization of these trajectories. Definition steps of disassembly trajectories are proposed, structured and instrumented. Models generally used in this framework are deterministic and do not allow taking into account and managing the uncertainties inherent to the disassembly process (degradation of products, demand for valuable products, system end-of-life dates, ...). In order to determine a robust disassembly solution, the decision aid has to integrate uncertainties of various origins while facilitating their management and update. On the basis of this observation, all the uncertainties involved in disassembly trajectory optimization are identified and characterized. Based on Bayesian networks, the proposed tool is first developed through a "static" approach to disassembly trajectories: the obtained trajectory gives the disassembly level of the end-of-life system, the recycling options, the sequences and the disassembly modes according to economic criteria, while managing uncertainties. An application example on an aeronautical system illustrates the modelling method. The scope of the model is then extended to take the time dimension into account (dynamic approach) by using dynamic Bayesian networks. Trajectories can be defined over horizons that integrate several arrivals of end-of-life systems. Decision makers can thus adapt the model to various contexts.
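The core decision this abstract describes, choosing a recovery option for a component whose end-of-life condition is uncertain, can be illustrated with a tiny expected-value calculation. This is only a sketch of the kind of choice an influence diagram encodes, not the author's tool; the states, probabilities and values below are invented:

```python
# Hypothetical illustration: expected-value comparison of end-of-life options
# for one component under an uncertain degradation state.

# P(state) for the component recovered from the end-of-life system (assumed values)
state_probs = {"good": 0.5, "worn": 0.3, "damaged": 0.2}

# Net value (revenue minus disassembly cost) of each option in each state (assumed values)
option_values = {
    "functional_reuse": {"good": 120.0, "worn": 40.0, "damaged": -30.0},
    "material_recycling": {"good": 25.0, "worn": 20.0, "damaged": 10.0},
    "disposal": {"good": -5.0, "worn": -5.0, "damaged": -5.0},
}

def expected_value(option: str) -> float:
    """Expectation of the option's net value over the degradation states."""
    return sum(p * option_values[option][s] for s, p in state_probs.items())

best = max(option_values, key=expected_value)
for opt in option_values:
    print(f"{opt}: {expected_value(opt):+.1f}")
print("best option:", best)
```

Repeating such a calculation for each component, and chaining the choices along the disassembly sequence, is what the trajectory-optimization tool automates at scale.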
262

Designing a knowledge management architecture to support self-organization in a hotel chain

Kaldis, Emmanuel January 2014 (has links)
Models are incredibly insidious; they slide undetected into discussions and then dominate the way people think. Since Information Systems (ISs), and particularly Knowledge Management Systems (KMSs), are socio-technical systems, they unconsciously embrace the characteristics of the dominant models of management thinking. Thus, their limitations can often be attributed to the deficiencies of the organizational models they aim to support. Through the case study of a hotel chain, this research suggests that contemporary KMSs in the hospitality sector are still grounded in the assumptions of the mechanistic organizational model, which conceives an organization as a rigid hierarchical entity governed from the top. Despite recent technological advances in supporting dialogue and participation between members, organizational knowledge is still transferred vertically: from the top to the bottom or from the bottom to the top. A number of limitations still exist in effectively supporting the horizontal transfer of knowledge between the geographically distributed units of an organization. Inspired by the key concepts of the more recent complex-systems model, frequently referred to as complexity theories, a Knowledge Management Architecture (KMA) is proposed that aims to re-conceptualize existing KMSs towards conceiving an organization as a set of self-organizing communities of practice (CoPs). In every such CoP, order is created from the dynamic exchange of knowledge between the structurally similar community members. Thus, the focus of the KMA is placed on systematically capturing for reuse the architectural knowledge created by every initiative for change, and on sharing such knowledge with the rest of the members of the CoP. A KMS was also developed to support the dynamic dimensions that the KMA proposes. The KMS was then applied in the case of the hotel chain, where it brought significant benefits that constitute evidence of an improved self-organizing ability. The previously isolated hotel units residing in distant regions could now easily trace, and also reapply, changes undertaken by the other community members. Top management's intervention to promote change was reduced, while the pace of change increased. Moreover, organizational cohesion, the integration of new members, and the level of management alertness were enhanced. The case of the hotel chain is indicative: it is believed that the proposed KMA is applicable to geographically distributed organizations operating in other sectors too. At the same time, this research contributes to the recent discourse between the fields of IS and complexity by demonstrating how fundamental concepts from complexity, such as self-organization, emergence and edge-of-chaos, can be embraced by contemporary KMSs.
263

Développement de concepts et outils d’aide à la décision pour l’optimisation via simulation : intégration des métaheuristiques au formalisme DEVS / Concept development and decision support tools for optimization via simulation : integration of metaheuristics to DEVS formalism

Poggi, Bastien 12 December 2014 (has links)
Nous vivons dans un monde où le besoin d’efficacité s’impose de plus en plus. Ce besoin s’exprime dans différents domaines, allant de l’industrie à la médecine en passant par la surveillance environnementale. Engendrées par cette demande, de nombreuses méthodes d’optimisation « modernes » également appelées « métaheuristiques » sont apparues ces quarante dernières années. Ces méthodes se basent sur des raisonnements probabilistes et aléatoires et permettent la résolution de problèmes pour lesquels les méthodes d’optimisation « classiques » également appelées « méthodes déterministes » ne permettent pas l’obtention de résultats dans des temps raisonnables. Victimes du succès de ces méthodes, leurs concepteurs doivent aujourd’hui plus que jamais répondre à de nombreuses problématiques qui restent en suspens : « Comment évaluer de manière fiable et rapide les solutions proposées ? », « Quelle(s) méthode(s) choisir pour le problème étudié ? », « Comment paramétrer la méthode utilisée ? », « Comment utiliser une même méthode sur différents problèmes sans avoir à la modifier ? ». Pour répondre à ces différentes questions, nous avons développé un ensemble de concepts et outils. Ceux-ci ont été réalisés dans le cadre de la modélisation et la simulation de systèmes à évènements discrets avec le formalisme DEVS. Ce choix a été motivé par deux objectifs : permettre l’optimisation temporelle et spatiale de modèles DEVS existants et améliorer les performances du processus d’optimisation (qualité des solutions proposées, temps de calcul). La modélisation et la simulation de l’optimisation permettent de générer directement des propositions de paramètres sur les entrées du modèle à optimiser. Ce modèle, quant à lui, génère des résultats utiles à la progression de l’optimisation. Pour réaliser ce couplage entre optimisation et simulation, nous proposons l’intégration des méthodes d’optimisation sous la forme de modèles simulables et facilement interconnectables. Notre intégration se concentre donc sur la cohérence des échanges entre les modèles dédiés à l’optimisation et les modèles dédiés à la représentation du problème. Elle permet également l’arrêt anticipé de certaines simulations inutiles afin de réduire au maximum la durée de l’optimisation. La représentation des méthodes d’optimisation sous forme de modèles simulables apporte également un élément de réponse dans le choix et le paramétrage des algorithmes. Grâce à l’usage de la simulation, différents algorithmes et paramètres peuvent être utilisés pour un même processus d’optimisation. Ces changements sont également influencés par les résultats observés et permettent une adaptation automatique de l’optimisation aux spécificités connues et/ou cachées du problème étudié ainsi qu’à ses différentes étapes de résolution. L’architecture de modèle que nous proposons a été validée sur trois problèmes distincts : l’optimisation de paramètres pour des fonctions mathématiques, l’optimisation spatialisée d’un déploiement de réseau de capteurs sans fil, l’optimisation temporisée de traitements médicaux. La généricité de nos concepts et la modularité de nos modèles ont permis de mettre en avant la facilité d’utilisation de notre outil. Au niveau des performances, l’interruption de certaines simulations ainsi que le dynamisme de l’optimisation ont permis l’obtention de solutions de qualité supérieure dans des temps inférieurs. / In the world in which we live, the need for efficiency is increasing in various fields, from industry and medicine to environmental monitoring. To meet this need, many "modern" optimization methods, called metaheuristics, have appeared over the last forty years. They are based on probabilistic and random reasoning and allow users to solve problems for which conventional, deterministic methods cannot obtain results in acceptable computing times. Victims of their methods' success, their designers now more than ever have to answer several open questions: "How can the fitness of proposed solutions be assessed quickly and reliably?", "Which method(s) should be chosen for the problem at hand?", "How should the chosen method be parametrized?", "How can the same method be used on different problems without modifying it?". To address these questions, we have developed a set of concepts and tools in the context of the modeling and simulation of discrete-event systems with the DEVS formalism. The aims pursued are to allow temporal and spatial optimization of existing DEVS models and to improve the efficiency of the optimization process (quality of solutions, computing time). Modeling and simulation are used to propose parameter values on the inputs of the model to optimize; this model in turn generates results used to improve the next proposed solutions. In order to combine optimization and simulation, we represent the optimization methods as models that can be easily interconnected and simulated. We focus on the consistency of exchanges between optimization models and problem models. Our approach also allows early stopping of useless simulations, reducing the overall computing time. Modeling optimization methods in the DEVS formalism further helps with choosing the optimization algorithm and its parameters: various algorithms and parameter settings can be used within the same optimization process at its different stages. These changes are influenced by the results collected from the problem simulation and lead to self-adaptation of the optimization to the visible and/or hidden features of the studied problem. Our model architecture has been validated on three distinct problems: parametric optimization of mathematical functions, spatialized optimization of a wireless sensor network deployment, and temporized optimization of medical treatments. The genericity of our concepts and the modularity of our models underline the usability of the proposed tool. Concerning performance, early stopping of simulations and dynamic optimization yielded higher-quality solutions in less time.
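The optimization-via-simulation coupling and the early-stopping idea described above can be sketched in a few lines. This is a loose illustration under invented assumptions (random search over one parameter, a simple abort rule against the incumbent), not the thesis's DEVS implementation:

```python
import random

def simulate(x, best_so_far, steps=100):
    """Toy stochastic simulation accumulating a cost; aborts early when the
    partial cost already exceeds the best complete run seen so far."""
    cost = 0.0
    for _ in range(steps):
        cost += (x - 3.0) ** 2 / steps + random.uniform(0, 0.01)
        if cost > best_so_far:          # early stop: this run is already worse
            return None
    return cost

best_x, best_cost = None, float("inf")
for _ in range(200):                    # the optimizer proposes candidate inputs
    x = random.uniform(-10, 10)
    cost = simulate(x, best_cost)
    if cost is not None and cost < best_cost:
        best_x, best_cost = x, cost

print(f"best input ~ {best_x:.3f}, cost {best_cost:.3f}")
```

In the thesis's setting, both the optimizer and the simulated problem are DEVS models exchanging events; the loop above only mimics the flow of proposals and results between them.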
264

Gender Equality as an Idea and Practice - A Case Study of an Office at the United Nations Headquarters

Ketonen, Ida E. January 2018 (has links)
Achieving gender equality and empowering all women and girls is one of the United Nations' (UN) core objectives. However, the UN has been struggling to achieve gender balance in its own organisation, despite numerous attempts. Men have been in numerical dominance at the UN since its inception, especially in senior positions. This case study takes place just months after the System-wide Strategy on Gender Parity was launched by Secretary-General Guterres. It captures the initial reactions through in-depth, semi-structured interviews with five women working in one UN body at the UN Headquarters in Geneva, Switzerland. Through these stories and experiences, this thesis aims to analyse the UN as a gendered organisation, focusing on organisational structure and culture. I argue that the gendered processes of the organisational structure and culture preserve male dominance through effects that include men and exclude women. In this thesis I use gendered processes (Acker 1992), combined with post-structural policy analysis (Bacchi 2009) and complex systems theory (Ramalingam 2013), as analytical tools to show how equality is constructed and understood as an idea and in practice.
265

Criticality in neural networks / Criticalidade em redes neurais

Reis, Elohim Fonseca dos, 1984- 12 September 2015 (has links)
Orientadores: José Antônio Brum, Marcus Aloizio Martinez de Aguiar / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin / Resumo: Este trabalho é dividido em duas partes. Na primeira parte, uma rede de correlação é construída baseada em um modelo de Ising em diferentes temperaturas, crítica, subcrítica e supercrítica, usando um algoritmo de Metropolis Monte-Carlo com dinâmica de single-spin-flip. Este modelo teórico é comparado com uma rede do cérebro construída a partir de correlações das séries temporais do sinal BOLD de fMRI de regiões do cérebro. Medidas de rede, como coeficiente de aglomeração, mínimo caminho médio e distribuição de grau, são analisadas. As mesmas medidas de rede são calculadas para a rede obtida pelas correlações das séries temporais dos spins no modelo de Ising. Os resultados da rede cerebral são melhor explicados pelo modelo teórico na temperatura crítica, sugerindo aspectos de criticalidade na dinâmica cerebral. Na segunda parte, é estudada a dinâmica temporal da atividade de uma população neural, ou seja, a atividade de células ganglionares da retina gravadas em uma matriz de multi-eletrodos. Vários estudos têm focado em descrever a atividade de redes neurais usando modelos de Ising com desordem, não dando atenção à estrutura dinâmica. Tratando o tempo como uma dimensão extra do sistema, a dinâmica temporal da atividade da população neural é modelada. O princípio de máxima entropia é usado para construir um modelo de Ising com interação entre pares das atividades de diferentes neurônios em tempos diferentes. O ajuste do modelo é feito com uma combinação de amostragem de Monte-Carlo e método do gradiente descendente. O sistema é caracterizado pelos parâmetros aprendidos, questões como balanço detalhado e reversibilidade temporal são analisadas e variáveis termodinâmicas, como o calor específico, podem ser calculadas para estudar aspectos de criticalidade / Abstract: This work is divided in two parts. In the first part, a correlation network is built based on an Ising model at different temperatures (critical, subcritical and supercritical), using a Metropolis Monte-Carlo algorithm with single-spin-flip dynamics. This theoretical model is compared with a brain network built from the correlations of BOLD fMRI time series of brain-region activity. Network measures, such as the clustering coefficient, average shortest path length and degree distribution, are analysed. The same network measures are calculated for the network obtained from the time-series correlations of the spins in the Ising model. The results from the brain network are better explained by the theoretical model at the critical temperature, suggesting critical aspects in brain dynamics. In the second part, the temporal dynamics of the activity of a neuron population, namely the activity of retinal ganglion cells recorded in a multi-electrode array, is studied. Many studies have focused on describing the activity of neural networks using disordered Ising models, with no regard to the dynamical structure. Treating time as an extra dimension of the system, the temporal dynamics of the activity of the neuron population is modeled. The maximum entropy principle is used to build an Ising model with pairwise interactions between the activities of different neurons at different times. Model fitting is performed by a combination of Metropolis Monte Carlo sampling and gradient descent methods. The system is characterized by the learned parameters, questions such as detailed balance and time reversibility are analysed, and thermodynamic variables, such as the specific heat, can be calculated to study critical aspects / Mestrado / Física / Mestre em Física / 2013/25361-6 / FAPESP
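For readers unfamiliar with the sampling scheme named in the abstract, a minimal Metropolis single-spin-flip sweep on a 2D Ising lattice looks roughly like the following textbook-style sketch (not the author's code; lattice size, temperature and sweep count are arbitrary). The spin time series produced this way can then be correlated pairwise to build the kind of correlation network the abstract compares against fMRI data:

```python
import numpy as np

rng = np.random.default_rng(0)
L, T = 32, 2.27          # lattice size; T near the 2D critical temperature ~2.269
spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(spins, T):
    """One Monte Carlo sweep: L*L single-spin-flip proposals (J = kB = 1)."""
    for _ in range(spins.size):
        i, j = rng.integers(0, L, size=2)
        # sum of the four nearest neighbours (periodic boundaries)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb          # energy change if spin (i, j) flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for sweep in range(100):
    metropolis_sweep(spins, T)
print("magnetization per spin:", spins.mean())
```

Recording the state of each spin after every sweep yields one time series per site; thresholding the matrix of pairwise correlations between those series gives the Ising-based network whose measures are compared with the brain network.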
266

Modélisation du système complexe de la publication scientifique / Modeling the complex system of scientific publication

Kovanis, Michail 02 October 2017 (has links)
Le système d’évaluation par les pairs est le gold standard de la publication scientifique. Ce système a deux objectifs: d’une part filtrer les articles scientifiques erronés ou non pertinents et d’autre part améliorer la qualité de ceux jugés dignes de publication. Le rôle des revues scientifiques et des rédacteurs en chef est de veiller à ce que des connaissances scientifiques valides soient diffusées auprès des scientifiques concernés et du public. Cependant, le système d’évaluation par les pairs a récemment été critiqué comme étant intenable sur le long terme, inefficace et cause de délais de publication des résultats scientifiques. Dans ce projet de doctorat, j’ai utilisé une modélisation par systèmes complexes pour étudier le comportement macroscopique des systèmes de publication et d’évaluation par les pairs. Dans un premier projet, j’ai modélisé des données empiriques provenant de diverses sources comme Pubmed et Publons pour évaluer la viabilité du système. Je montre que l’offre dépasse de 15% à 249% la demande d’évaluation par les pairs et, par conséquent, le système est durable en termes de volume. Cependant, 20% des chercheurs effectuent 69% à 94% des revues d’articles, ce qui souligne un déséquilibre significatif en termes d’efforts de la communauté scientifique. Les résultats ont permis de réfuter la croyance largement répandue selon laquelle la demande d’évaluation par les pairs dépasse largement l’offre mais ont montré que la majorité des chercheurs ne contribue pas réellement au processus. Dans mon deuxième projet, j’ai développé un modèle par agents à grande échelle qui imite le comportement du système classique d’évaluation par les pairs, et que j’ai calibré avec des données empiriques du domaine biomédical. En utilisant ce modèle comme base pour mon troisième projet, j’ai modélisé cinq systèmes alternatifs d’évaluation par les pairs et évalué leurs performances par rapport au système conventionnel en termes d’efficacité de la revue, de temps passé à évaluer des manuscrits et de diffusion de l’information scientifique. Dans mes simulations, les deux systèmes alternatifs dans lesquels les scientifiques partagent les commentaires sur leurs manuscrits rejetés avec les éditeurs du prochain journal auquel ils les soumettent ont des performances similaires au système classique en termes d’efficacité de la revue. Le temps total consacré par la communauté scientifique à l’évaluation des articles est cependant réduit d’environ 63%. En ce qui concerne la dissémination scientifique, le temps total de la première soumission jusqu’à la publication est diminué d’environ 47% et ces systèmes permettent de diffuser entre 10% et 36% plus d’informations scientifiques que le système conventionnel. Enfin, le modèle par agents développé peut être utilisé pour simuler d’autres systèmes d’évaluation par les pairs ou des interventions, pour ainsi déterminer les interventions ou modifications les plus prometteuses qui pourraient être ensuite testées par des études expérimentales en vie réelle. / The peer-review system is undoubtedly the gold standard of scientific publication. Peer review serves a two-fold purpose: to screen out articles containing incorrect or irrelevant science and to improve the quality of those deemed suitable for publication. Moreover, the role of scientific journals and editors is to ensure that valid scientific knowledge is disseminated to the appropriate target group of scientists and to the public. However, the peer-review system has recently been criticized as unsustainable, inefficient and a source of publication delays. In this PhD thesis, I used complex-systems modeling to study the macroscopic behavior of the scientific publication and peer-review systems. In my first project, I modeled empirical data from various sources, such as Pubmed and Publons, to assess the sustainability of the system. I showed that the potential supply has been exceeding the demand for peer review by 15% to 249%, and thus the system is sustainable in terms of volume. However, 20% of researchers have been performing 69% to 94% of the annual reviews, which emphasizes a significant imbalance in terms of effort by the scientific community. The results provided evidence against the widely adopted but untested belief that the demand for peer review exceeds the supply, and indicated that the majority of researchers do not contribute to the process. In my second project, I developed a large-scale agent-based model that mimics the behavior of the conventional peer-review system, calibrated with empirical data from the biomedical domain. Using this model as a base for my third project, I developed and assessed the performance of five alternative peer-review systems, measuring peer-review efficiency, reviewer effort and scientific dissemination relative to the conventional system. In my simulations, two alternative systems, in which scientists shared past reviews of their rejected manuscripts with the editors of the next journal to which they submitted, performed as well as, and sometimes better than, the conventional system in terms of peer-review efficiency. They also each reduced the overall reviewer effort by ~63%. In terms of scientific dissemination, they decreased the median time from first submission until publication by ~47% and diffused on average 10% to 36% more scientific information (i.e., manuscript intrinsic quality × journal impact factor) than the conventional system. Finally, my agent-based model may be used to simulate other peer-review systems (or interventions), identify the most promising ones, and aid decisions about which systems should be introduced into real-world trials.
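The review-sharing mechanism evaluated here can be caricatured in a few lines. The sketch below is a drastic simplification with invented acceptance and effort parameters, not the calibrated agent-based model; it only shows why carrying past reviews to the next journal reduces total reviewer effort:

```python
import random

random.seed(1)
N_PAPERS, MAX_SUBMISSIONS, REVIEWS_PER_ROUND = 10_000, 4, 2

def total_reviews(share_past_reviews: bool) -> int:
    """Count reviews commissioned until each paper is accepted or gives up."""
    reviews = 0
    for _ in range(N_PAPERS):
        quality = random.random()          # acceptance probability per journal
        for attempt in range(MAX_SUBMISSIONS):
            if share_past_reviews and attempt > 0:
                reviews += 1               # editors reuse shared reports, top up with one fresh review
            else:
                reviews += REVIEWS_PER_ROUND
            if random.random() < quality:  # accepted at this journal
                break
    return reviews

conventional = total_reviews(False)
sharing = total_reviews(True)
print(f"conventional:   {conventional} reviews")
print(f"review-sharing: {sharing} reviews "
      f"({100 * (1 - sharing / conventional):.0f}% less effort)")
```

The thesis model adds what this sketch omits: journal hierarchies, reviewer availability, manuscript quality dynamics, and the dissemination metric used to compare the five alternative systems.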
267

Sociologický simulátor / Sociological Simulator

Ludwig, Petr January 2011 (has links)
This thesis describes the paradigm of complex systems and discusses possibilities for their modeling and simulation. The work shows the suitability of multi-agent modeling for creating an abstraction of the social environment, which is one of the major complex systems. The thesis includes an analysis of the tools available for creating multi-agent simulators. The core of the thesis is the processed research material and a demonstrative model of the social phenomenon known as procrastination.
268

Adaptive modeling of urban dynamics with a mobile phone database / Modélisation adaptative de la dynamique urbaine avec une base de données de téléphonie mobile

Faisal Behadili, Suhad 29 November 2016 (has links)
Dans cette étude, on s’intéresse à la mobilité urbaine à partir de traces de données de téléphonie mobile qui ont été fournies par l’opérateur Orange. Les données fournies concernent la région de la ville de Rouen, durant un événement éphémère qui est l’Armada de 2008. Dans une première étude, on gère une masse importante de données pour extraire des caractéristiques permettant de qualifier des usages de la ville lors d’évènements éphémères, en fonction des jours d’activité ou de repos des individus. Des visualisations sont données et permettent de comprendre les mobilités engendrées de manière spécifique ou non par l’événement. Dans une seconde partie, on s’intéresse à la reconstitution de trajectoires avec des approches agrégées inspirées des techniques de physique statistique afin de dégager des comportements en fonction des périodes d’activité et d’un découpage spatial en grandes zones urbaines. On tente ainsi de dégager des lois en observant des distributions en loi de puissance caractéristiques de la complexité des systèmes étudiés. / In this study, we are interested in urban mobility as captured by mobile phone data traces provided by the operator Orange. The data concern the region of the city of Rouen during an ephemeral event, the Armada of 2008. In a first study, a large mass of data is processed to extract characteristics that qualify uses of the city during ephemeral events, according to the individuals' days of activity or rest. Visualizations are provided and make it possible to understand which mobilities are, or are not, specifically generated by the event. In a second part, we study the reconstruction of trajectories with aggregated approaches inspired by statistical physics, in order to reveal behaviours according to periods of activity and a spatial division into large urban areas. We thus attempt to identify general laws by observing the power-law distributions characteristic of the complex systems under study.
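The power-law observation mentioned at the end can be illustrated on synthetic data. The sketch below is generic (a Hill-style tail estimate on simulated displacements), not the thesis pipeline; the distribution and its exponent are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic displacements drawn from a Pareto (power-law) distribution,
# standing in for trip lengths extracted from call-detail records.
alpha_true, x_min = 1.8, 1.0
d = x_min * (1 + rng.pareto(alpha_true, size=50_000))

# Hill estimator of the tail exponent from the largest observations
tail = np.sort(d)[-5000:]                       # 5000 largest displacements
alpha_hat = 1.0 / np.mean(np.log(tail / tail[0]))
print(f"estimated tail exponent ~ {alpha_hat:.2f} (true {alpha_true})")
```

On real mobility data the same estimate, together with a log-log plot of the survival function, is a quick first check that displacements follow the heavy-tailed laws typical of complex systems.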
269

A Generalized Framework for Representing Complex Networks

Viplove Arora (8086250) 06 December 2019 (has links)
Complex systems are often characterized by a large collection of components interacting in nontrivial ways. Self-organization among these individual components often leads to the emergence of a macroscopic structure that is neither completely regular nor completely random. In order to understand what we observe at a macroscopic scale, conceptual, mathematical, and computational tools are required for modeling and analyzing these interactions. A principled approach to understanding these complex systems (and the processes that give rise to them) is to formulate generative models and infer their parameters from given data, typically stored in the form of networks (or graphs). The increasing availability of network data from a wide variety of sources, such as the Internet, online social networks, collaboration networks, biological networks, etc., has fueled the rapid development of network science.

A variety of generative models have been designed to synthesize networks having specific properties (such as power-law degree distributions, small-worldness, etc.), but the structural richness of real-world network data calls for researchers to posit new models that are capable of keeping pace with the empirical observations about the topological properties of real networks. The mechanistic approach to modeling networks aims to identify putative mechanisms that can explain the dependence, diversity, and heterogeneity in the interactions responsible for creating the topology of an observed network. A successful mechanistic model can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. While it is difficult to intuit appropriate mechanisms for network formation, machine learning and evolutionary algorithms can be used to automatically infer appropriate network generation mechanisms from the observed network structure.

Building on these philosophical foundations and a series of (not new) observations based on first principles, we extrapolate an action-based framework that creates a compact probabilistic model for synthesizing real-world networks. Our action-based perspective assumes that the generative process is composed of two main components: (1) a set of actions that expresses link-formation potential using different strategies capturing the collective behavior of nodes, and (2) an algorithmic environment that provides opportunities for nodes to create links. Optimization and machine learning methods are used to learn an appropriate low-dimensional action-based representation for an observed network in the form of a row-stochastic matrix, which can subsequently be used for simulating the system at various scales. We also show that, in addition to being practically relevant, the proposed model is relatively exchangeable up to relabeling of the node types.

Such a model can facilitate handling many of the challenges of understanding real data, including accounting for noise and missing values, and connecting theory with data by providing interpretable results. To demonstrate the practicality of the action-based model, we utilized it within domain-specific contexts. We used the model as a centralized approach for designing resilient supply chain networks while incorporating appropriate constraints, a rare feature of most network models. Similarly, a new variant of the action-based model was used for understanding the relationship between the structural organization of human brains and the cognitive ability of subjects. Finally, our analysis of the ability of state-of-the-art network models to replicate the expected topological variations in network populations highlighted the need to rethink the way we evaluate the goodness-of-fit of new and existing network models, exposing significant gaps in the literature.
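A rough sketch of the action-based generative idea follows: each node type holds a row of a row-stochastic matrix giving its probabilities over link-formation strategies, and the environment offers link opportunities. The strategies, types and parameters below are invented for illustration and are not the thesis's learned representation:

```python
import random

random.seed(7)
N_NODES, N_OPPORTUNITIES = 200, 2000
ACTIONS = ["random", "preferential", "triadic"]   # illustrative strategies

# Row-stochastic matrix: one row per node type, one column per action.
action_probs = {"typeA": [0.2, 0.7, 0.1], "typeB": [0.6, 0.1, 0.3]}
node_type = {v: ("typeA" if v % 2 == 0 else "typeB") for v in range(N_NODES)}
adj = {v: set() for v in range(N_NODES)}
endpoints = []            # multiset of edge endpoints, for preferential attachment

for _ in range(N_OPPORTUNITIES):
    u = random.randrange(N_NODES)     # the environment gives u a link opportunity
    action = random.choices(ACTIONS, weights=action_probs[node_type[u]])[0]
    if action == "preferential" and endpoints:
        v = random.choice(endpoints)              # high-degree nodes more likely
    elif action == "triadic" and adj[u]:
        w = random.choice(list(adj[u]))           # friend-of-a-friend closure
        v = random.choice(list(adj[w])) if adj[w] else random.randrange(N_NODES)
    else:
        v = random.randrange(N_NODES)             # uniform random link
    if v != u and v not in adj[u]:
        adj[u].add(v); adj[v].add(u)
        endpoints += [u, v]

print("edges:", sum(len(s) for s in adj.values()) // 2)
```

In the actual framework the rows of the matrix are inferred from an observed network by optimization and machine learning, rather than fixed by hand as here.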
270

Investigating the collective behaviour of the stock market using Agent-Based Modelling

Björklöf, Christoffer January 2022 (has links)
The stock market is a place in which numerous entities interact, operate, and change state based on the decisions they make. Further, the stock market itself evolves and changes its dynamics over time as a consequence of the individual actions taking place in it. In this sense, the stock market can be viewed and treated as a complex adaptive system. In this study, an agent-based model simulating the trading of a single asset has been constructed with the purpose of investigating how the collective behaviour affects the dynamics of the stock market. For this purpose, the agent-based modelling program NetLogo was used. Lastly, the conclusion of the study revealed that the dynamics of the stock market are clearly dependent on some specific factors of the collective behaviour, such as the information source of the investors.
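A toy version of such a single-asset market already shows how the investors' information source shapes the dynamics. The sketch below uses invented rules (a fixed share of imitators who follow the last price move, plus fundamentalists with a noisy private value signal) and is not the thesis's NetLogo model:

```python
import random

random.seed(3)
N_AGENTS, STEPS, IMITATORS = 500, 200, 0.4   # 40% of agents imitate the crowd
price, last_move = 100.0, 1

for t in range(STEPS):
    buys = 0
    for _ in range(N_AGENTS):
        if random.random() < IMITATORS:
            # Imitators follow the last price move (herding behaviour)
            wants_to_buy = last_move > 0
        else:
            # Fundamentalists act on a noisy private signal around a fair value
            wants_to_buy = random.gauss(100.0, 5.0) > price
        buys += wants_to_buy
    excess_demand = (2 * buys - N_AGENTS) / N_AGENTS   # in [-1, 1]
    last_move = 1 if excess_demand > 0 else -1
    price *= 1 + 0.01 * excess_demand                  # price impact of demand
    if t % 40 == 0:
        print(f"t={t:3d}  price={price:7.2f}  excess demand={excess_demand:+.2f}")
```

Raising the imitator share strengthens the herding-driven price swings, which is the kind of dependence on the collective behaviour's information source that the study reports.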
