131

Using Social Network Analysis to Investigate Potential Bias in Editorial Peer Review in Core Journals of Comparative/International Education

Cheng, Biao 03 December 2006 (has links) (PDF)
This study explores potential bias in the editorial peer-review system within the field of comparative and international education. Acting as both "Guardian of Science" and "social status judge" (Zuckerman & Merton, 1971), peer review, the quality-control system of science, directly affects the growth of science, scientists' academic careers, and their institutions. The most basic tenet of the peer-review system is its assumed objectivity. Bias in the editorial peer-review process, however, is inevitable; the very existence of the blind peer-review mechanism is itself an acknowledgement of that fact. This study therefore investigated potential peer-review bias by examining the field's core peer-reviewed journal publications between 1994 and 2003 using the methods of social network analysis. In addition to descriptive analysis of the overall state of the field, focus was given, on the criterion of centrality, to two networks (the co-authorship network and the institutional network) and their structure, looking for patterns that might indicate bias in terms of author, gender, author-affiliated institution, country, number of articles published, and number of journals in which the author published. The findings revealed no discernible patterns and no network-wide centralization in either the co-authorship network or the institutional network. Thus, on the basis of centrality, there is no reason to suspect the objectivity of the peer-review process of the five core academic journals of comparative and international education for 1994–2003. Further descriptive analyses, however, did reveal patterns that may represent norms of the field and thus suggest potential sources of bias. The findings indicated that 1) scholars in the field tend to research independently and publish in relative isolation, with single-authored journal articles being the norm; 2) the field is dominated by scholars and institutions from Western countries, especially the U.K. and the U.S.; and 3) journals in the field tend to publish more authors from the journal's host country. The implications of these findings were also discussed.
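The network-wide centralization test described above can be illustrated with a short sketch. This is a minimal, hypothetical example (not the author's actual code or data) that computes Freeman-style degree centralization for a small co-authorship graph using NetworkX; a value near zero suggests no network-wide centralization.

```python
import networkx as nx

# Hypothetical co-authorship graph: nodes are authors, edges mean the two
# authors co-wrote at least one article (illustrative data only).
G = nx.Graph()
G.add_edges_from([
    ("Author A", "Author B"),
    ("Author B", "Author C"),
    ("Author D", "Author E"),
])
G.add_nodes_from(["Author F", "Author G"])  # isolates: single-authored work only

def degree_centralization(graph: nx.Graph) -> float:
    """Freeman degree centralization: 0 = perfectly even degrees, 1 = star graph."""
    n = graph.number_of_nodes()
    if n < 3:
        return 0.0
    degrees = [d for _, d in graph.degree()]
    d_max = max(degrees)
    return sum(d_max - d for d in degrees) / ((n - 1) * (n - 2))

print(f"Network-wide degree centralization: {degree_centralization(G):.3f}")
```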
132

Identifying Important Members in a Complex Online Social Network / Identifiering av inflytelserika medlemmar i ett komplext socialt nätverk

Hannesson, Kristófer January 2017 (has links)
The success of Online Social Networks (OSN) is influenced by the ability to understand who is important. An OSN can be viewed as a graph where users are vertices and their interactions are edges. Graph-based methods can identify people in these networks who, for example, exhibit the characteristics of leaders, influencers, or information brokers. A Massively Multiplayer Online game (MMO) is a type of OSN: a video game in which a large number of players interact with each other in a virtual world. Using behavioral data of players' interactions within the space-based MMO EVE Online, the aim of this thesis is to conduct an experimental study evaluating the effectiveness of a number of graph-based methods at finding important players within different behavioral contexts. For that purpose we extract behavioral data to construct four distinct graphs: Fleet, Aggression, Mail, and Market. We also create a ground-truth data set of important players based on heuristics from key gameplay categories. We experiment on these graphs with a selection of graph centrality, Influence Maximization, and heuristic methods. We explore how they perform in terms of ground-truth players found, both per graph and when combining results from all graphs, and in terms of execution time. Our results indicate that there is no single optimal method across graphs; rather, the method and graph should be chosen according to the business intention at any given time. To that end we provide recommendations as well as potential business use cases. We believe that this study serves as a starting point towards more graph-based analysis within the EVE Online virtual universe, where there are many unexplored research opportunities.
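As an illustration of scoring centrality rankings against a ground-truth set, the sketch below ranks nodes by several NetworkX centrality measures and reports how many known-important players appear in the top k. The graph and ground-truth set are invented; they are not the EVE Online data or the thesis's evaluation code.

```python
import networkx as nx

# Hypothetical interaction graph and ground-truth set of "important" players.
G = nx.barabasi_albert_graph(n=200, m=3, seed=42)
ground_truth = {0, 1, 2, 3, 4}

methods = {
    "degree": nx.degree_centrality,
    "betweenness": nx.betweenness_centrality,
    "closeness": nx.closeness_centrality,
    "pagerank": nx.pagerank,
}

k = 10
for name, method in methods.items():
    scores = method(G)
    top_k = {node for node, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]}
    hits = len(top_k & ground_truth)
    print(f"{name:12s}: {hits}/{len(ground_truth)} ground-truth players in the top {k}")
```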
133

Modelling Critical Points in Infrastructure

Jallow, Ted January 2022 (has links)
The aim of this paper was to investigate whether infrastructure can be modelled in order to find critical points using network science and graph theory. Since a lot of information about our infrastructures is publicly available, an attacker might exploit it to find vulnerabilities in our systems. With that in mind, the methods of this paper were implemented from the point of view of an attacker who wants to maximize harm with minimal effort. The Swedish railway network served as an example and starting point for optimizing the methods. The data for the network was obtained from the Swedish Transport Agency's website and was implemented in NetworkX using Python. Different centrality metrics were used to identify and remove critical nodes in the network. The centrality metrics were also used to rank the nodes and remove them in order of importance while the size of the largest component was recorded. This was done both without recalculation and with a recalculation after each removal. The results were compared with random removal of nodes: without recalculation the random removal performed better, but with recalculation all the centrality metrics performed significantly better. The Swedish railway network is sensitive in terms of how it is connected. Targeting a few nodes could completely cut off parts of the graph, creating two subcomponents with no way of traveling between them. Many different methods can identify these nodes, but only a few are suitable in terms of generic results that would work on other networks as well, the most prominent being the one dealing with degree and connectivity: nodes whose degree exceeds the connectivity between themselves and the source identified points whose removal would completely disrupt the network, leaving two subcomponents. Infrastructures can be seen as highly intertwined systems, where the physical, cyber, and human parts all affect and interact with each other.
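A minimal sketch of the removal experiment described above, on a synthetic graph rather than the Swedish railway data (which is not reproduced here): nodes are removed by betweenness centrality, with or without recalculation after each removal, and the size of the largest connected component is tracked and compared against random removal.

```python
import random
import networkx as nx

def targeted_attack(G, n_remove, recalculate=True):
    """Remove nodes by highest betweenness centrality, with or without
    recalculation, recording the size of the largest connected component."""
    H = G.copy()
    if not recalculate:
        bc0 = nx.betweenness_centrality(H)
        static_order = sorted(bc0, key=bc0.get, reverse=True)
    sizes = []
    for i in range(n_remove):
        if recalculate:
            bc = nx.betweenness_centrality(H)
            target = max(bc, key=bc.get)
        else:
            target = static_order[i]
        H.remove_node(target)
        sizes.append(max((len(c) for c in nx.connected_components(H)), default=0))
    return sizes

def random_attack(G, n_remove, seed=0):
    """Remove nodes uniformly at random as a baseline."""
    H = G.copy()
    rng = random.Random(seed)
    sizes = []
    for _ in range(n_remove):
        H.remove_node(rng.choice(list(H.nodes)))
        sizes.append(max((len(c) for c in nx.connected_components(H)), default=0))
    return sizes

# Hypothetical sparse, railway-like network.
G = nx.connected_watts_strogatz_graph(n=300, k=4, p=0.05, seed=1)
print("largest component after 20 targeted removals (recalc):", targeted_attack(G, 20)[-1])
print("largest component after 20 random removals:           ", random_attack(G, 20)[-1])
```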
134

Co-Authorship Network Analysis in Constraint Programming Research

Ali, Lana January 2023 (has links)
The aim of this thesis was to study co-authorship in the constraint programming research community. This was done by conducting social network analysis (SNA) based on published scientific papers from the proceedings of the International Conference on Principles and Practice of Constraint Programming. Bibliographic data of the scientific literature was collected for the years 2018–2022 of the annual conference. For the quantitative analysis, graph metrics were computed to study the properties and structure of the overall network, as well as the attributes and characteristics of individual authors, in order to identify central actors of the community. Furthermore, graph layout algorithms were used to visualise the network. The computed metrics and the graphical visualisations enabled the identification of collaboration patterns and behaviours within the studied field. The results of this study show that the most central actors of the community are mainly male and that the community is dominated by white organisations and countries. The results also show that the vast majority of authors in the community collaborate with others in writing papers. However, due to the low density of the network, there is room for new collaboration patterns to emerge within the research community.
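A minimal sketch of how such a co-authorship network can be built and summarized; the paper/author lists below are made up for illustration and are not the conference bibliography used in the thesis.

```python
import itertools
import networkx as nx

# Hypothetical author lists, one per paper.
papers = [
    ["Alice", "Bob"],
    ["Alice", "Carol", "Dave"],
    ["Eve"],
    ["Bob", "Carol"],
]

G = nx.Graph()
for authors in papers:
    G.add_nodes_from(authors)
    # Every pair of co-authors on a paper gets (or strengthens) an edge.
    for u, v in itertools.combinations(authors, 2):
        weight = G.get_edge_data(u, v, {"weight": 0})["weight"]
        G.add_edge(u, v, weight=weight + 1)

print("network density:", round(nx.density(G), 3))
centrality = nx.degree_centrality(G)
print("most central author:", max(centrality, key=centrality.get))
```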
135

ESSAYS ON UNDERSTANDING MACROECONOMIC FLUCTUATIONS: AN INPUT-OUTPUT NETWORK APPROACH

Hou, Shuoshuo 08 1900 (has links)
This dissertation includes three chapters. The first chapter studies the impact of financial shocks and financial frictions on business cycle dynamics in China's economy. The second and third chapters focus on the driving force of structural change and its impact on aggregate fluctuations using an input-output network approach. In the second chapter, I study two questions: (i) How has the U.S. production network structure changed from 1970 to 2017? (ii) What impact does that have on aggregate fluctuations? This chapter shows that a few industries, like Finance and Insurance and Professional Services, have become much more central input suppliers over time, while others, like Paper Manufacturing, have become far less important. The third chapter therefore considers the driving force behind such structural change. In particular, I study the question of what determines the size of an industry in a production network. China has been one of the world's fastest-growing economies over the past several decades and emerged quickly from the global financial crisis of 2008. Chapter 1, titled DO FINANCIAL SHOCKS DRIVE REAL BUSINESS FLUCTUATIONS IN CHINA, investigates to what extent financial shocks can shape business cycle fluctuations in China. Specifically, I document the cyclical properties of China's macroeconomy and financial market and show the procyclicality of dividend payout and the countercyclicality of debt repurchases with real GDP. To account for these features, I use the real business cycle model incorporating debt and equity financing developed by Jermann and Quadrini (2012) to study how the dynamics of macroeconomic and financial variables are affected by financial shocks in China. This chapter finds that financial shocks contribute significantly to business cycle fluctuations in China and can account for over 60% of the variations in the growth rate of output, investment, hours worked, and debt repurchases. Hulten's Theorem states that the impact of an industry-specific shock on the aggregate economy is entirely captured by the size of that industry, regardless of its position in the production network. Chapter 2, titled THE IMPORTANCE OF INPUT-OUTPUT NETWORK STRUCTURE IN THE US ECONOMY, proposes the idea that the network structure in isolation plays an essential role in shaping GDP growth and growth volatility. First, I introduce a new measure of network structure named centrality dispersion and document that the U.S. production network has become more sparsely connected from 1970 to 2017, with many industries relying on a few central input suppliers for production. Such changes are associated with slower GDP growth and higher volatility. To account for this evidence, I embed input-output linkages into a multisector real business cycle model and provide a nonlinear characterization of the impact of network structure, quantified using centrality dispersion, on the macroeconomy. Finally, I study model-implied relationships between production network structure, GDP growth, and growth volatility. The calibrated model accounts for approximately one-quarter of the variation in real GDP growth and 40% of GDP volatility, as observed in the data. Chapter 3, titled THE NETWORK ORIGIN OF INDUSTRY SIZE VARIATIONS, quantifies the origin of industry size variations using the features of a production network. In the analysis, I perform an exact variance decomposition of industry total sales into the supplier, buyer, and final demand components.
The findings suggest that matching with many buyers in the network, especially many large buyers, is essential to understanding industry size variations. More importantly, these buyer characteristics have become increasingly important in contributing to industry size variations over the 1967–2012 period. Finally, I provide new empirical evidence related to the decomposition results. The evidence reveals a strengthening negative correlation between industry size and the concentration of customer networks in the long run. / Economics
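To illustrate the kind of object Hulten's Theorem and a centrality-dispersion measure operate on, the sketch below computes an influence (Domar-weight) vector from an input-output expenditure-share matrix via the Leontief inverse and summarizes its dispersion. The 3-industry matrix, the labor share, and the use of a standard deviation as the dispersion summary are all illustrative assumptions, not the dissertation's data, calibration, or exact measure.

```python
import numpy as np

# Hypothetical 3-industry input-output matrix: Omega[i, j] is the share of
# industry i's intermediate-input spending that goes to industry j
# (rows sum to one). alpha is an assumed common labor (value-added) share.
Omega = np.array([
    [0.2, 0.5, 0.3],
    [0.4, 0.2, 0.4],
    [0.5, 0.3, 0.2],
])
alpha = 0.4
n = Omega.shape[0]

# Influence vector (sales shares / Domar weights) in a standard Cobb-Douglas
# network model: lambda' = (alpha / n) * 1' [I - (1 - alpha) * Omega]^{-1}.
leontief_inverse = np.linalg.inv(np.eye(n) - (1 - alpha) * Omega)
influence = (alpha / n) * np.ones(n) @ leontief_inverse

print("influence (Domar-weight) vector:", np.round(influence, 3))
# One simple summary of "centrality dispersion": the standard deviation of
# the influence vector across industries.
print("centrality dispersion (std):", round(float(np.std(influence)), 4))
```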
136

Centrality Routing and Blockchain Technologies in Distributed Networks

Ghiro, Lorenzo 19 May 2021 (has links)
This thesis contributes to the development of distributed networks by proposing:
• a technique to enhance the reliability of DV routing protocols;
• a critical analysis of the integration of blockchains in distributed networks.
First, a novel algorithm for the distributed computation of Load Centrality (LC), a graph centrality metric, is proposed and then applied to steer the optimization of the route-recovery process of Distance-Vector (DV) routing protocols; in this way the algorithm contributes to enhancing network reliability. The convergence of the algorithm is proved, and time-complexity bounds are identified that are later confirmed by computer simulations. The proposed algorithm is designed as an extension of the Bellman-Ford algorithm and can thus be integrated with any DV routing protocol. An implementation of the algorithm in Babel, a real-world DV protocol, is provided in support of this claim. An application of the algorithm is then presented: LC is used to find an optimal tuning for the generation frequency of Babel control messages. This tuning technique effectively reduces the impact of losses caused by random node failures in emulations of several real-world wireless mesh networks, without increasing the control overhead. A second version of the algorithm is designed to be incrementally deployable, so it can be rolled out gradually in production networks, even by uncoordinated administrators. When only a fraction of the nodes is upgraded to participate in the protocol, the upgraded nodes estimate their LC indices, approximating the theoretical ones. The approximation error is studied analytically, and it is shown that, even for low penetration ratios of upgraded nodes, the algorithm accurately ranks nodes according to their theoretical centrality. The second contribution of the thesis is a critical discussion of the integration of blockchain technologies in distributed networks. An initial analysis of the literature on blockchain-based applications reveals an ambiguity around the term "blockchain" itself: the term is used to identify a number of similar but different technologies proposed to empower a surprisingly broad range of applications. The thesis therefore argues for a restrictive definition of the term, necessary to clarify the role of blockchains in distributed networks. The proposed definition is grounded in a critical analysis of the blockchain from a distributed-systems perspective: blockchains are only those platforms that implement an open, verifiable and immutable shared ledger, independent of any trusted authority. Observing that blockchain security grows with the amount of resources consumed to generate blocks, the thesis concludes that a secure blockchain is necessarily resource-hungry, and its integration in the constrained domain of distributed networks is therefore not advised. The thesis draws recommendations for uses of the blockchain that are not in contrast with this definition. For example, it warns against applications that require data to be kept confidential or users to be registered, because the blockchain naturally supports openness and transparency of data together with anonymity of users.
Finally, a feasible role for the blockchain in the Internet of Things (IoT) is outlined: while most IoT transactions will be local and off-chain, a blockchain can still act as an external and decentralized platform supporting global transactions, offering an alternative to traditional banking services. The enhanced reliability of DV routing protocols encourages a wider adoption of distributed networks; moreover, the distributed algorithm for the computation of centrality enables, in distributed networks, applications previously restricted to centralized ones. The discussion about the blockchain, in turn, increases awareness of the limits and scope of this technology, inspiring engineers and practitioners to develop more secure applications for distributed networks. The discussion highlights, for instance, the important role of networking protocols and communication infrastructure in blockchain security, pointing out that large delays in the dissemination of blocks of transactions make the blockchain more vulnerable to attacks. Furthermore, it is observed that a strong ability to take control over communications in the network favors eclipse attacks and makes the so-called selfish-mining strategy more profitable, which is detrimental to the decentralization and security of blockchains. Blended together, the two main contributions of this thesis suggest exploiting centrality to optimize gossip protocols, minimizing block propagation delays and thus the blockchain's exposure to attacks. The notion of centrality could also be used by the community of miners to measure a node's influence over the communication of blocks, serving as a security index to warn against selfish mining and eclipse attacks.
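The thesis's distributed algorithm is not reproduced here, but the centralized version of the metric it approximates is available in NetworkX. As a minimal sketch on a made-up topology, load centrality can be computed and used to rank nodes as follows:

```python
import networkx as nx

# Hypothetical topology standing in for a wireless mesh network.
G = nx.random_geometric_graph(n=50, radius=0.25, seed=7)

# Load centrality: roughly, the fraction of shortest paths passing through
# each node (a betweenness-like metric).
lc = nx.load_centrality(G)

# Highly central nodes are the candidates that a tuning scheme might treat
# differently, e.g. when setting control-message generation frequencies.
ranked = sorted(lc, key=lc.get, reverse=True)
print("top-5 nodes by load centrality:", ranked[:5])
```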
137

Evaluating network generation algorithms for decentralized social media platforms

Obreykov, Nicky January 2021 (has links)
With the large amount of personal data being shared on social media platforms, there is an increased security risk involved. Individuals rely on companies keeping their promises to handle personal data securely. Despite this, previous incidents such as the Cambridge Analytica scandal have revealed issues with the model of trusting a single entity to handle personal data safely. Instead of relying on a single entity keeping its promise, a different type of social media platform has started to emerge that decentralizes control over data. This type of platform, which removes trust in a central entity, is called a decentralized social media platform. There is a plethora of decentralized social media platforms, each relying on different heuristics for creating the network. Depending on the network structure, which is the backbone of the platform, each network can have a different degree of centralization. If a decentralized social media platform's network becomes too centralized, some entities in the network can gain greater control over it, defeating its intended purpose. To prevent this, studying the network that underlies the platform can be fruitful, and network analysis can aid in finding the network structure that best fits a decentralized social media platform. This study has examined five different network generation algorithms, with a number of permutations, in search of the algorithm that best fits a decentralized social media platform. Each algorithm generated 1,000 networks, which were then used in one-way ANOVA tests to observe differences between the measurements. Four network centralization measures and a network efficiency measure were used to determine the algorithm that minimizes centralization while still being functional. The results indicate that, out of the five algorithms, the k-degree algorithm best fits a decentralized social media platform.
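The five algorithms and the k-degree variant studied in the thesis are not reproduced here. As a hedged illustration of the experimental setup, the sketch below generates batches of networks with a few standard NetworkX generators and compares one centralization measure (Freeman degree centralization) and one efficiency measure across them; the generators, parameters, and sample size are illustrative assumptions.

```python
import networkx as nx

def degree_centralization(G: nx.Graph) -> float:
    """Freeman degree centralization: 0 for regular graphs, 1 for a star."""
    n = G.number_of_nodes()
    degrees = [d for _, d in G.degree()]
    d_max = max(degrees)
    return sum(d_max - d for d in degrees) / ((n - 1) * (n - 2))

generators = {
    "erdos_renyi": lambda s: nx.erdos_renyi_graph(100, 0.05, seed=s),
    "barabasi_albert": lambda s: nx.barabasi_albert_graph(100, 2, seed=s),
    "watts_strogatz": lambda s: nx.watts_strogatz_graph(100, 4, 0.1, seed=s),
}

n_samples = 30  # the thesis used 1,000 networks per algorithm; fewer here for speed
for name, gen in generators.items():
    samples = [gen(s) for s in range(n_samples)]
    cent = sum(degree_centralization(G) for G in samples) / n_samples
    eff = sum(nx.global_efficiency(G) for G in samples) / n_samples
    print(f"{name:16s} centralization={cent:.3f} efficiency={eff:.3f}")
```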
138

VULNERABILITY ASSESSMENT AND RESILIENCE ENHANCEMENT OF CRITICAL INFRASTRUCTURE NETWORKS

Salama, Mohamed January 2022 (has links)
Modern societies are fully dependent on critical infrastructure networks to support the economy, security, and prosperity. The energy infrastructure network is of paramount importance: as a pillar of the economy, it must continue to operate safely and remain resilient in order to provide reliable power to other critical infrastructure networks. Nonetheless, frequent large-scale blackouts in recent years have highlighted the vulnerability of power grids, where disruptions can trigger cascading failures causing a catastrophic regional-level blackout. Such catastrophic blackouts call for a systemic risk assessment approach whereby the entire network/system is assessed against such failures, considering the dynamic power flow within it. However, the lack of detailed data combining both topological and functional information, and the computational resources typically required for large-scale modelling that also considers operational corrective actions, have impeded large-scale resilience studies. In this respect, the research in the present dissertation focuses on investigating, analyzing, and evaluating the vulnerability of power grid infrastructure networks in an effort to enhance their resilience. Through a Complex Network Theory (CNT) lens, power grid robustness has been evaluated against random and targeted attacks using a family of centrality measures. The results show that CNT models provide a quick, preliminary indication of key network components, which supports regulators and operators in making informed decisions to maintain and upgrade the network, constrained by the tolerable risk and the allocated financial resources. Furthermore, a dynamic Cascade Failure Model (CFM) has been employed to develop a Physical Flow-Based Model (PFBM). The CFM considers the operational corrective actions taken in case of failure to rebalance supply and demand (i.e., dispatch and load shedding). The CFM was subsequently utilized to construct a grid vulnerability map as a function of the Link Vulnerability Index (LVI), which can be used to rank line maintenance priorities. In addition, a Node Importance Index (NII) has been developed to rank power substations according to the resulting cascade failure size. The results from the CNT and CFM approaches were compared to assess the impact of considering the physical behavior of the power grid. The comparison indicates that relying solely on a CNT topology-based model could lead to erroneous conclusions about grid behavior. Moving forward, a systemic risk mitigation strategy based on the Intentional Controlled Islanding (ICI) approach has been introduced to suppress failure propagation. The proposed mitigation strategy, which integrates operation-guided with structure-guided strategies, has shown excellent capabilities in enhancing network robustness and minimizing the possibility of catastrophic large-scale blackouts. This research demonstrates the application of the model on a real large-scale network with data ranging from low to high voltage. In the future, the CFM can be integrated with other critical infrastructure network systems to establish a network-of-networks interaction model for assessing systemic risk throughout and between multiple network layers.
Understanding the interdependence between different networks will give stakeholders insight into enhancing resilience and support policymakers in making informed decisions about the tolerable systemic risk level, so that reliable actions can be taken under abnormal conditions. / Thesis / Doctor of Philosophy (PhD)
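The dissertation's CFM is a physics-based power-flow model and is not reproduced here. As a loosely related, purely illustrative sketch, the snippet below implements a simple topological capacity-overload cascade (in the spirit of models such as Motter–Lai) on a synthetic graph, measuring how large a failure the removal of a single node can trigger; the graph, tolerance parameter, and betweenness-as-load assumption are all illustrative.

```python
import networkx as nx

def cascade_size(G: nx.Graph, trigger, tolerance: float = 0.2) -> int:
    """Betweenness-based overload cascade: each node's capacity is
    (1 + tolerance) times its initial load; overloaded nodes fail in waves."""
    initial_load = nx.betweenness_centrality(G)
    capacity = {n: (1 + tolerance) * initial_load[n] for n in G}
    H = G.copy()
    H.remove_node(trigger)
    failed = {trigger}
    while True:
        load = nx.betweenness_centrality(H)
        overloaded = [n for n in H if load[n] > capacity[n]]
        if not overloaded:
            break
        H.remove_nodes_from(overloaded)
        failed.update(overloaded)
    return len(failed)

# Hypothetical network standing in for a transmission grid.
G = nx.barabasi_albert_graph(n=100, m=2, seed=3)
degrees = dict(G.degree())
hub = max(degrees, key=degrees.get)
print("nodes failed after removing the biggest hub:", cascade_size(G, hub))
```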
139

Complex network theoretical approach to investigate the interdependence between factors affecting subsurface radionuclide migration

Narayanan, Brinda Lakshmi January 2022 (has links)
Mining of uranium ore and its extraction using the milling process generate solid and liquid waste, commonly termed uranium mine tailings. Uranium mine tailings are radioactive, as they contain residual uranium, thorium, and radium, which account for 85% of the original ore's radioactivity. Due to the extremely long half-lives of uranium (4.5×10⁹ years), thorium (75,400 years), and radium (1,620 years), and their harmful radioactivity, it is imperative to isolate uranium mine tailings from the environment for a long period. Containment in dam-like structures, called uranium mine tailings dams (UMTDs), is the most widely used disposal and storage method. Like a conventional water-retention dam, a UMTD is susceptible to failure, mainly due to adverse weather conditions. Once a UMTD fails, a fraction of the radioactive tailings infiltrates and migrates through the vadose zone, contaminating the groundwater sources underlying it. Radionuclide behavior and migration in the subsurface are affected by several environmental factors. To minimize uncertainty and improve current radionuclide fate and transport models, it is vital to study these factors and any interdependence existing between them. This study aims to understand these environmental factors by i) identifying the factors affecting subsurface radionuclide migration through a scoping review of articles and reports, and ii) analyzing the interdependence existing between the factors using the complex network theory (CNT) approach and identifying the dominant factors among them. Factors such as the chemical and biological characteristics of the soil stratigraphy, groundwater, and radioactive tailings plume, as well as meteorological and hydrogeological factors, were found to influence radionuclide behavior and transport mechanisms in the vadose zone. Based on centrality measures and a sensitivity analysis of the network of factors (NoF), the CNT approach identified soil microorganisms, fraction of organic carbon, infiltration rate of the soil, transmissivity, clay fraction in the soil, particulates in groundwater, and infiltrating rainwater as dominant factors in the NoF. Any uncertainty associated with these factors will affect and propagate through the model. Hence, sufficient resources should be directed in the future to characterize these factors and minimize their uncertainty, which will lead to more reliable fate and transport models for radionuclides. / Thesis / Master of Applied Science (MASc) / Waste products from uranium mining and milling operations are called uranium mine tailings, which are radioactive. Generally, uranium mine tailings are disposed of and isolated in dam-like structures referred to as uranium mine tailings dams (UMTDs). One of the most common causes of UMTD failure is extreme weather conditions. When a UMTD fails, a part of the tailings, consisting of the radionuclides uranium, thorium, and radium, infiltrates into the subsurface through the vadose zone. Radionuclide behavior and transport in the subsurface is influenced by several environmental factors. The objective of the present study is to understand the factors affecting radionuclide migration by i) conducting a scoping review on radionuclide migration in the subsurface to describe the factors studied in the literature, and ii) understanding and analyzing the relations among the factors and deriving the most dominant factors based on those relations.
This study can be used further to develop accurate and reliable radionuclide fate and transport models with minimal uncertainty.
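A minimal, purely illustrative sketch of the network-of-factors idea: factors are nodes, hypothesized interdependencies are edges, and centrality measures flag candidate dominant factors. The factor names and edges below are invented for illustration and are not the study's actual NoF or its sensitivity analysis.

```python
import networkx as nx

# Hypothetical network of factors (NoF); edges denote assumed interdependence.
edges = [
    ("infiltration rate", "clay fraction"),
    ("infiltration rate", "infiltrating rainwater"),
    ("fraction of organic carbon", "soil microorganisms"),
    ("soil microorganisms", "radionuclide sorption"),
    ("clay fraction", "radionuclide sorption"),
    ("transmissivity", "particulates in groundwater"),
]
NoF = nx.Graph(edges)

# Rank factors by two centrality measures; factors that rank highly on both
# are candidate "dominant" factors.
degree = nx.degree_centrality(NoF)
betweenness = nx.betweenness_centrality(NoF)
for factor in sorted(NoF, key=lambda f: -(degree[f] + betweenness[f])):
    print(f"{factor:28s} degree={degree[factor]:.2f} betweenness={betweenness[factor]:.2f}")
```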
140

Bulk Synchronous Parallel Implementation of Percolation Centrality for Large Scale Graphs

Saad, Kristen M. 30 August 2017 (has links)
No description available.
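No abstract is available for this entry, so only the metric named in the title can be illustrated. As a hedged sketch (not the thesis's bulk synchronous parallel implementation), NetworkX provides a sequential percolation centrality routine; the graph and per-node percolation states below are made-up values.

```python
import networkx as nx

# Hypothetical contact-style graph with made-up per-node percolation states
# (e.g., the probability that a node is "infected" or otherwise percolated).
G = nx.karate_club_graph()
states = {n: (n % 5) / 4 for n in G}  # arbitrary states in [0, 1] for illustration

pc = nx.percolation_centrality(G, states=states)
top = sorted(pc, key=pc.get, reverse=True)[:5]
print("top-5 nodes by percolation centrality:", top)
```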
