11 |
Essays in Industrial Organization and Political Economy
Nandy, Abhinaba, 12 September 2022 (has links)
This dissertation comprises three problems in the areas of political economy and industrial organization. The first chapter concerns how ideologically opposed media firms report a particular event to maximize their payoffs from two sources: advocating their ideology and strengthening reader trust, which increases when the report is close to readers' beliefs. I use these facts to develop a Hotelling linear-city model of competition in which the two media firms choose their respective locations, representing the impression each wants to impart to its readers. I find that partisan media provide accurate information when covering topics favorable to their ideology. For unfavorable topics, however, a media firm never provides an indifferent report, but either defends its own ideology or delivers a partially accurate report; on such issues, imparting an indifferent impression yields the lowest equilibrium payoffs. I identify sufficient conditions under which readers assess the news of a media firm located farther from their ideology more favorably than that of a nearer one. Increasing competition through the entry of a third firm does not necessarily alleviate the level of bias in the news economy. The second paper studies the pricing schedule of a monopolist selling a non-durable product over two time periods. The consumer's experience with the product is correlated with two possible states: a good (bad) experience is more probable under the high (low) state. Given this, I study the monopolist's pricing scheme in the two periods when consumers are wishful, that is, overly optimistic about the high state even after a bad experience. I provide a comparative study of prices in each period when the monopolist announces prices with and without commitment and consumers are either naive or sophisticated. The final chapter examines the efficacy of two types of trade sanctions (import and export) using a directed network model. Sanctions are common punitive measures taken by a sender player to discipline a target player. Empirical evidence in international trade shows differences in effectiveness between import and export sanctions. This paper shows that such differences can be explained by one specific centrality feature of the underlying trading network: betweenness centrality. This measure gives insight into the trade spill-overs that follow sanctions and underscores why sanctions can be ineffective. I highlight when a higher value of this centrality is a sufficient condition for an effective sanction. Based on this analysis, one can conclude whether an import or an export sanction will be more effective for a given trade network. / Doctor of Philosophy / This dissertation studies three essays spanning topics in political economy and industrial organization. The first essay, 'Media bias in the best and worst of times', studies how ideology-motivated (partisan) media firms try to create impressions of a particular issue for their audience in order to increase their payoffs from one of two sources: reader trust and advocacy of their ideology. This trade-off depends on the type of issue at hand, which either aggravates or moderates a media firm's incentive to bias its news. I investigate not only the degree of bias for any given event but also how the media's profits are affected. The second chapter, 'Monopoly pricing under wishful thinking', investigates the pricing strategies of a seller of a non-durable product to a wishful buyer over two time periods. Under two possible states of the world, high and low, the buyer derives either a good or a bad experience; a good experience is assumed to be more likely under the high state. Would the buyer re-purchase the product after a bad experience in the first period? A wishful buyer is overly optimistic about a good experience in the future even after a bad experience in the current period. Such optimism paves the way for pricing strategies that favor the seller under certain conditions. My aim is to highlight these conditions and draw comparisons with a pricing model with non-wishful buyers. The third chapter investigates the effectiveness of trade sanctions. Such sanctions are imposed by a sender country against a target country when the latter has taken an action the sender disapproves of, such as initiating a domestic war or building a nuclear arsenal, and are kept in force until the target complies. However, only about 30% of sanctions are effective in disciplining the target. This paper studies whether any feature of the trade network can explain why sanctions fail and what type of trade sanction, import or export, will be optimal in a given trade network.
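To make the network notion concrete, the following sketch computes betweenness centrality on a small, entirely hypothetical directed trade network with the networkx library; the country labels and edge weights are invented for illustration and are not drawn from the dissertation.

```python
# A minimal sketch (not the dissertation's model): betweenness centrality
# on a hypothetical directed trade network, using networkx.
import networkx as nx

G = nx.DiGraph()
# Hypothetical exporter -> importer edges; weights are illustrative trade costs.
edges = [
    ("Sender", "Hub", 1.0),
    ("Hub", "Target", 1.0),
    ("Sender", "Target", 3.0),
    ("Hub", "Third", 1.5),
    ("Third", "Target", 1.0),
]
G.add_weighted_edges_from(edges)

# Betweenness counts, for every ordered pair (s, t), the fraction of
# least-cost s-t routes that pass through a given node.
bc = nx.betweenness_centrality(G, weight="weight", normalized=True)
for country, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {score:.3f}")
```

A node that sits on many least-cost trade routes, like the "Hub" in this toy example, scores highest, which is the kind of position the chapter links to trade spill-overs after a sanction.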
|
12 |
Vems landskap ska förändras för att öka den biologiska mångfalden? : En studie av skillnaderna i odlingslandskapets konnektivitet med avseende på två skyddsvärda arter med olika preferenser / Whose landscape should be changed to increase biodiversity? : A study of differences in the connectivity of the cultivated landscape with respect to two protected species with different preferences
Arnesén, Lisa, January 2014 (has links)
Organisms relevant for nature conservation don't follow administrative borders. Because of this there is a need for a landscape perspective within conservation and planning, and a need for the species of interest to have legal protection. Network analysis adapted for ecological purposes has grown to become a powerful tool for studying and communicating the relationships between species dispersal and access to habitat. In this study the following question is posed: How are the dispersal possibilities of Osmoderma eremita and Pernis apivorus in the small-scale cultivated landscape of Borås affected by exploitation with respect to a) dispersal ability, b) habitat quality, and c) the position of habitat patches in a network? The analyses were based on municipal and regional nature conservation data, which due to confidentiality are not presented in the report as maps, coordinates, etc. Several networks were established for both species to indicate how habitat patches are distributed today and how the species' dispersal changes depending on which patches are excluded; this was done to imitate how exploitation can affect the species' future survival and dispersal. The results showed that O. eremita is mainly inhibited by its poor dispersal abilities, followed by patch position, while P. apivorus is most affected by degrading habitat quality. The most important conclusions of the study were that O. eremita's natural dispersal may be restricted but can be improved by linking small network components together and by maintaining the largest components. As for P. apivorus, it was concluded that a different type of analysis, focusing on its behaviour and need for different patches for different purposes, would generate more interesting results. / Because organisms worthy of protection do not follow administrative borders, a landscape perspective is needed in conservation and planning work, and the species studied need legal grounds for protection. Network analyses adapted for ecology have emerged as a powerful tool for studying and communicating the relationships of species' dispersal across larger areas. This report therefore asks how the dispersal possibilities of the hermit beetle (Osmoderma eremita) and the European honey buzzard (Pernis apivorus) in the cultivated landscape of Borås municipality are affected by exploitation, with respect to a) dispersal ability, b) habitat quality, and c) the position of habitat patches in a network. The analyses were based on municipal and regional nature conservation data which, due to confidentiality, are not presented with maps, coordinates, or the like. Several networks were established for each species to indicate what the networks of patches look like today and how the species' dispersal changes depending on which patches are excluded, in order to imitate how exploitation can affect the species' continued survival and dispersal. The results showed that the hermit beetle's greatest limitation is its poor dispersal ability, closely followed by patch position, while the honey buzzard is affected more by habitat quality. The most important conclusions were that the hermit beetle's natural dispersal may be limited but can be improved by linking small network components together and by continuing to manage the largest ones. For the honey buzzard, a different type of analysis with more focus on the species' behaviour and its need for different patches for different activities would provide a better basis.
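A minimal sketch of the kind of patch-network reasoning described above, assuming hypothetical patch coordinates and an assumed maximum dispersal distance; neither is taken from the study, whose underlying data are confidential.

```python
# A minimal sketch, not the study's GIS workflow: habitat patches become
# nodes, and two patches are linked if they lie within an assumed maximum
# dispersal distance. Removing a patch imitates exploitation.
import math
import networkx as nx

# Hypothetical patch centroids (metres) and an assumed dispersal limit.
patches = {"A": (0, 0), "B": (150, 0), "C": (320, 40), "D": (900, 900)}
max_dispersal = 200.0

def build_network(patch_coords):
    G = nx.Graph()
    G.add_nodes_from(patch_coords)
    names = list(patch_coords)
    for i, p in enumerate(names):
        for q in names[i + 1:]:
            if math.dist(patch_coords[p], patch_coords[q]) <= max_dispersal:
                G.add_edge(p, q)
    return G

before = build_network(patches)
after = build_network({k: v for k, v in patches.items() if k != "B"})

print("components before exploitation:", list(nx.connected_components(before)))
print("components after removing patch B:", list(nx.connected_components(after)))
```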
|
13 |
ANÁLISE DA REDE SOCIAL TOCANTINS DIGITAL, UTILIZANDO O ALGORITMO k-MÉDIAS E CENTRALIDADE DE INTERMEDIAÇÃO. / ANALYSIS OF THE SOCIAL NETWORK TOCANTINS DIGITAL USING THE K-MEANS ALGORITHM AND BETWEENNESS CENTRALITY.
Furlan, Carolina Palma Pimenta, 04 September 2014 (has links)
The advent of the Internet has enabled various means of communication among people, and among them social networking platforms stand out, having become a new form of relationship. Exploring this new environment has become increasingly valued by researchers as well as by managers in general. This environment enables the formation of communities, and through them it is possible to identify the formation of groups around shared interests. Most social network visualization algorithms represent the networks as graphs. The setting of this research is the subgroup Tocantins Digital of the social network Facebook, which has more than 10,000 members. In this work, the k-means algorithm was applied to cluster the data, with five groups as the best solution found, along with the betweenness centrality measure, which revealed three members with the greatest influence through their posts and the four products or services most viewed within the subgroup. / The advent of the Internet has enabled various means of communication among people, and among them the social networking platforms stand out, becoming a new form of relationship. Exploring this new environment has become increasingly valued by researchers as well as by managers in general. This environment fosters the formation of communities, and through them it is possible to identify the formation of groups around their interests. Most social network visualization algorithms represent the networks as graphs. The setting of this research is the subgroup Tocantins Digital of the social network Facebook, which has more than 10,000 members. In this environment an API tool was used, with which it was possible to develop applications to collect information from the posts of the members of the studied group. In this work the k-means algorithm was applied to cluster the data, with five groups as the best solution found, along with the betweenness centrality measure, which revealed three members with the greatest influence through their posts and the four products or services most viewed within the subgroup.
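As a rough illustration of the two techniques named in the abstract, the sketch below runs k-means on invented member features and betweenness centrality on an invented interaction graph, using scikit-learn and networkx; it does not reproduce the thesis data or its API-based collection step.

```python
# A simplified sketch of the two techniques named in the abstract
# (k-means clustering and betweenness centrality); the member features
# and interaction graph below are invented, not the thesis data.
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-member features, e.g. (posts per week, comments received).
X = np.array([[1, 2], [2, 1], [8, 9], [9, 8], [4, 5], [5, 4], [15, 1], [14, 2]])
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
print("cluster labels:", kmeans.labels_)

# Hypothetical "who replies to whom" graph among members.
G = nx.DiGraph([(0, 2), (1, 2), (3, 2), (2, 4), (4, 5), (5, 6), (6, 7)])
bc = nx.betweenness_centrality(G)
top3 = sorted(bc, key=bc.get, reverse=True)[:3]
print("three most central members:", top3)
```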
|
14 |
Comparing consensus modules using S2B and MODifieR
McCoy, Daniel, January 2019 (has links)
It is currently understood that diseases are typically not caused by isolated genetic errors but have both molecular and environmental causes arising from myriad overlapping interactions within an interactome. Genetic errors, such as a single-nucleotide polymorphism, can lead to a dysfunctional cell, which in turn can lead to systemic disruptions that result in disease phenotypes. Perturbations within the interactome, as can be caused by many such errors, can be organized into a pathophenotype, or "disease module". Disease modules are sets of correlated variables that can represent many of a disease's activities with subgraphs of nodes and edges. Many methods for inferring disease modules are available today, but the results each one yields vary not only between methods but also across datasets and trial attempts. In this study, several such inference methods for deriving disease modules are evaluated by combining them to create "consensus" modules. The method of focus is Double-Specific Betweenness (S2B), which ordinarily uses betweenness centrality across two separate diseases to derive new modules; this study, however, uses S2B to combine the results of independent inference methods rather than separate diseases. Pre-processed asthma and arthritis data are compared using various combinations of inference methods. The performance of each result is validated using the Pathway Scoring Algorithm. The results of this study suggest that combining methods of inference using MODifieR or S2B may be beneficial for deriving meaningful disease modules.
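The core idea behind S2B, scoring interactome nodes by how often they lie on shortest paths between two seed sets, can be sketched as follows. Here the two seed sets stand in for candidate modules produced by two inference methods (the study's variant of the approach), the interactome is a toy graph, and the code illustrates the principle rather than the published S2B implementation.

```python
# A toy sketch of the S2B idea: score interactome nodes by how often they
# lie on shortest paths between two seed sets (here, candidate modules from
# two different inference methods). Illustration only, not the S2B package.
from collections import Counter
import networkx as nx

# Hypothetical interactome and two candidate module gene sets.
interactome = nx.Graph([
    ("g1", "g2"), ("g2", "g3"), ("g3", "g4"), ("g2", "g5"),
    ("g5", "g6"), ("g4", "g6"), ("g6", "g7"),
])
module_a = {"g1", "g5"}
module_b = {"g4", "g7"}

scores = Counter()
for a in module_a:
    for b in module_b:
        for path in nx.all_shortest_paths(interactome, a, b):
            # Count only intermediate nodes, not the seeds themselves.
            scores.update(n for n in path[1:-1] if n not in module_a | module_b)

print("bridging candidates:", scores.most_common())
```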
|
15 |
Efficient Betweenness Centrality Computations on Hybrid CPU-GPU Systems
Mishra, Ashirbad, January 2016 (has links) (PDF)
Network analysis is of broad interest because networks can be interpreted for many purposes, and different features require different metrics to measure and interpret them. Measuring the relative importance of each vertex in a network is one of the most fundamental building blocks in network analysis. Betweenness Centrality (BC) is one such metric that plays a key role in many real-world applications. BC is an important graph analytics application for large-scale graphs. However, it is one of the most computationally intensive kernels to execute, and measuring centrality in billion-scale graphs is quite challenging.
While there are several existing efforts towards parallelizing BC algorithms on multi-core CPUs and many-core GPUs, in this work we propose a novel fine-grained CPU-GPU hybrid algorithm that partitions a graph into two partitions, one each for the CPU and the GPU. Our method performs BC computations for the graph on both the CPU and GPU resources simultaneously, resulting in a very small number of CPU-GPU synchronizations and hence less time spent on communication. The BC algorithm consists of two phases, the forward phase and the backward phase. In the forward phase, we initially find the paths that are needed by either partition, after which each partition is executed on its processor in an asynchronous manner. We first compute border matrices for each partition, which store the relative distances between each pair of border vertices in a partition. These matrices are used in the forward-phase calculations for all the sources. In this way, our hybrid BC algorithm leverages the multi-source property inherent in the BC problem. We present a proof of correctness and bounds on the number of iterations for each source. We also perform a novel hybrid and asynchronous backward phase, in which each partition communicates with the other only when there is a path that crosses the partition, hence performing minimal CPU-GPU synchronizations.
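For reference, the sketch below shows a single sequential source iteration of the standard Brandes algorithm on an unweighted graph, with the forward phase (a BFS that counts shortest paths) and the backward phase (dependency accumulation in reverse BFS order) marked; it is only a baseline illustration and does not reproduce the partitioned CPU-GPU kernels of the thesis.

```python
# A sequential reference sketch of one Brandes source iteration on an
# unweighted graph given as adjacency lists; the hybrid CPU-GPU
# partitioning of the thesis is not reproduced here.
from collections import deque

def brandes_single_source(adj, s):
    sigma = {v: 0 for v in adj}      # number of shortest s-v paths
    dist = {v: -1 for v in adj}
    preds = {v: [] for v in adj}
    sigma[s], dist[s] = 1, 0
    order, queue = [], deque([s])

    # Forward phase: BFS that counts shortest paths and records predecessors.
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if dist[w] < 0:
                dist[w] = dist[v] + 1
                queue.append(w)
            if dist[w] == dist[v] + 1:
                sigma[w] += sigma[v]
                preds[w].append(v)

    # Backward phase: accumulate dependencies in reverse BFS order.
    delta = {v: 0.0 for v in adj}
    for w in reversed(order):
        for v in preds[w]:
            delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
    delta[s] = 0.0
    return delta  # partial BC contributions from source s

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(brandes_single_source(adj, 0))
```

Summing these per-source contributions over all sources (and halving for undirected graphs) yields the full BC scores, which is why the multi-source structure mentioned above is so amenable to parallel and partitioned execution.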
We use a variety of implementations in our work, such as node-based and edge-based parallelism, including data-driven and topology-based techniques. We also show that our method works with a variable partitioning technique, which partitions the graph into unequal parts that account for the processing power of each processor; thanks to this technique, our implementations achieve almost equal utilization on both processors. For large-scale graphs the border matrix also becomes large, so we present various techniques to accommodate it. These techniques use properties inherent in the shortest-path problem for reduction. We discuss the drawbacks of performing shortest-path computations at large scale and provide various solutions to them.
Evaluations using a large number of graphs with different characteristics show that our hybrid approach, without variable partitioning and border matrix reduction, gives a 67% improvement in performance and 64-98.5% fewer CPU-GPU communications than the state-of-the-art hybrid algorithm based on the popular Bulk Synchronous Parallel (BSP) approach implemented in TOTEM. This demonstrates our algorithm's strength in reducing the need for large synchronizations. Adding variable partitioning, border matrix reduction, and backward-phase optimizations to our hybrid algorithm provides up to 10x speedup. We compare our optimized implementation with CPU-only and GPU-only codes based on our forward-phase and backward-phase kernels, showing around 2-8x speedup over the CPU-only code while accommodating large graphs that cannot fit in the GPU-only code. We also show that our method's performance is competitive with state-of-the-art multi-core CPU implementations and is 40-52% better than GPU implementations on large graphs. We discuss the drawbacks of CPU-only and GPU-only implementations and highlight the challenges that graph algorithms face in large-scale computing, suggesting that a hybrid or distributed approach is a better way of overcoming these hurdles.
|
16 |
Novel measures on directed graphs and applications to large-scale within-network classification
Mantrach, Amin, 25 October 2010
In recent years, networks have become an important source of information in fields as varied as the social sciences, physics, and mathematics. Moreover, the size of these networks has kept growing considerably. This has raised new challenges, such as the need for precise and intuitive measures to characterize and analyze these large networks in a reasonable time.

The first part of this thesis introduces a new similarity measure between two nodes of a weighted directed network: the sum-over-paths covariance. It has a clear and precise interpretation: counting all possible paths, two nodes are considered highly correlated if they often appear on the same, preferably short, path. This measure depends on a probability distribution, defined over the countably infinite set of paths in the graph, obtained by minimizing the expected total cost between all pairs of nodes in the graph while the total relative entropy injected into the network is fixed a priori. The entropy parameter allows the probability distribution to be biased over a wide spectrum: from natural random walks, where all paths are equiprobable, to walks biased towards shortest paths. This measure is then applied to semi-supervised classification problems on medium-size networks and compared with the state of the art.

The second part of the thesis introduces three new algorithms for classifying nodes within a large, partially labeled network. These algorithms have a computing time linear in the number of nodes, classes and iterations, and can therefore be applied to large networks. They obtained competitive results compared with the state of the art on the large U.S. patents citation network and on eight other data sets. Furthermore, during the thesis we collected a new data set, already mentioned: the U.S. patents citation network. This data set is now available to the community for benchmarking purposes.

The final part of this thesis concerns the combination of a citation graph with the information present on its nodes. Empirically, we have shown that citation-based data provide better classification results than data based on textual content. Also empirically, we have shown that combining the different sources of information (content and citations) should be considered when facing a text classification task. For example, when categorizing journal papers, relying on a previously extracted citation graph can considerably improve performance. In another context, however, when the nodes of the citation network must themselves be classified directly, relying on the information present on the nodes will not necessarily improve performance.

The theory, algorithms, and applications presented in this thesis provide interesting perspectives in various fields.

In recent years, networks have become a major data source in various fields ranging from the social sciences to the mathematical and physical sciences. Moreover, the size of available networks has grown substantially as well. This has brought with it a number of new challenges, like the need for precise and intuitive measures to characterize and analyze large-scale networks in a reasonable time.

The first part of this thesis introduces a novel measure between two nodes of a weighted directed graph: the sum-over-paths covariance. It has a clear and intuitive interpretation: two nodes are considered as highly correlated if they often co-occur on the same, preferably short, paths. This measure depends on a probability distribution over the (usually infinite) countable set of paths through the graph, which is obtained by minimizing the total expected cost between all pairs of nodes while fixing the total relative entropy spread in the graph. The entropy parameter allows the probability distribution to be biased over a wide spectrum: going from natural random walks (where all paths are equiprobable) to walks biased towards shortest paths. This measure is then applied to semi-supervised classification problems on medium-size networks and compared to state-of-the-art techniques.

The second part introduces three novel algorithms for within-network classification in large-scale networks, i.e. classification of nodes in partially labeled graphs. The algorithms have a computing time linear in the number of edges, classes and steps and hence can be applied to large-scale networks. They obtained competitive results in comparison to state-of-the-art techniques on the large-scale U.S. patents citation network and on eight other data sets. Furthermore, during the thesis, we collected a novel benchmark data set: the U.S. patents citation network. This data set is now available to the community for benchmarking purposes.

The final part of the thesis concerns the combination of a citation graph with information on its nodes. We show that citation-based data provide better results for classification than content-based data. We also show empirically that combining both sources of information (content-based and citation-based) should be considered when facing a text categorization problem. For instance, when classifying journal papers, extracting an external citation graph may considerably boost performance. However, in another context, when we have to directly classify the citation network's nodes, features on the nodes will not necessarily improve the results.

The theory, algorithms and applications presented in this thesis provide interesting perspectives in various fields. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
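One way to picture the path distribution described above is the generic randomized-shortest-paths construction, in which each path is weighted by the exponential of minus its cost times an inverse temperature: small values of the parameter recover the natural random walk, large values concentrate the mass on shortest paths. The sketch below computes the corresponding fundamental matrix on a tiny invented graph and is offered under that reading of the abstract, not as the thesis's exact sum-over-paths covariance formula.

```python
# A small numerical sketch of the path-weighting idea in the abstract:
# each step is weighted by exp(-theta * cost), so an inverse temperature
# theta interpolates between a natural random walk (small theta) and
# shortest-path behaviour (large theta). Generic illustration only.
import numpy as np

# Hypothetical directed graph; costs[i, j] is the cost of edge i -> j
# (np.inf marks a missing edge).
costs = np.array([
    [np.inf, 1.0,    4.0],
    [np.inf, np.inf, 1.0],
    [1.0,    np.inf, np.inf],
])

# Reference random walk: uniform transitions over outgoing edges.
A = np.isfinite(costs).astype(float)
P_ref = A / A.sum(axis=1, keepdims=True)

def path_kernel(theta):
    """Fundamental matrix Z = (I - W)^-1 with W[i, j] = P_ref[i, j] * exp(-theta * c[i, j]).

    Z[i, j] aggregates every path from i to j, each weighted by the
    exponential of minus theta times its total cost.
    """
    finite_costs = np.where(np.isfinite(costs), costs, 0.0)
    W = P_ref * np.exp(-theta * finite_costs)   # zero where there is no edge
    return np.linalg.inv(np.eye(costs.shape[0]) - W)

for theta in (0.1, 1.0, 10.0):
    print(f"theta = {theta}")
    print(np.round(path_kernel(theta), 3))
```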
|