  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Game Theoretic Models of Connectivity Among Internet Access Providers

Badasyan, Narine 22 June 2004 (has links)
The Internet has a loosely hierarchical structure. At the top of the hierarchy are the backbones, also called Internet Access Providers (hereafter IAPs). The second layer of the hierarchy is comprised of Internet Service Providers (hereafter ISPs). At the bottom of the hierarchy are the end users, consumers who browse the web, and websites. To provide access to the whole Internet, providers must interconnect with each other and share their network infrastructure. Two main forms of interconnection have emerged: peering, under which providers carry each other's traffic without any payments, and transit, under which the downstream provider pays the upstream provider a settlement payment for carrying its traffic. This dissertation develops three game-theoretic models of the interconnection agreements among providers and analyzes them from two alternative modeling perspectives: as purely non-cooperative games and from a network perspective. The dissertation makes two original contributions. First, we model the formation of peering/transit contracts explicitly as a decision variable in a non-cooperative game, a modeling technique the current literature does not employ. Second, we apply network analysis to examine the interconnection decisions of providers, which yields more realistic results. Chapter 1 provides a brief description of Internet history, architecture and infrastructure, as well as the economic literature. In Chapter 2 we develop a model in which IAPs decide on private peering agreements, comparing the benefits of private peering relative to being connected only through National Access Points (hereafter NAPs). The model is formulated as a multistage game. Private peering agreements reduce congestion in the Internet and so improve the IAPs' service quality.
The results show that even though profits are lower with private peering, due to the large investments required, the network in which all providers privately peer is the stable network. Chapter 3 discusses the interconnection arrangements among ISPs. Intra-backbone peering refers to peering between ISPs connected to the same backbone, whereas inter-backbone peering refers to peering between ISPs connected to different backbones. We formulate the model as a two-stage game. Peering affects profits through two channels: reduction of backbone congestion and the ability to send traffic circumventing congested backbones. The relative magnitude of these factors helps or hinders peering. In Chapter 4 we develop a game-theoretic model to examine how providers decide whom they want to peer with and who has to pay for transit. There is no regulation of providers' interconnection policies, though there is a general convention that providers peer if they perceive equal benefits from peering and enter transit arrangements otherwise. The model derives a set of conditions that determine the formation of peering and transit agreements. We argue that market forces determine the terms of interconnection, and there is no need for regulation to encourage peering. Moreover, a Pareto optimum is achieved under the transit arrangements. / Ph. D.
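The stability notion used above can be illustrated with a toy pairwise-stability check over all networks on three providers. The payoff function below is invented for illustration (diminishing returns to a provider's own peering links plus a congestion-relief spillover from links between the other providers), not the dissertation's calibration:

```python
from itertools import combinations, chain

PLAYERS = [0, 1, 2]
ALL_LINKS = list(combinations(PLAYERS, 2))

def payoff(i, g):
    """Hypothetical payoff: diminishing returns to own peering links,
    plus a spillover from links between other providers (less NAP congestion)."""
    deg = sum(1 for l in g if i in l)
    spill = sum(1 for l in g if i not in l)
    return 4 * deg - deg ** 2 + spill

def pairwise_stable(g):
    g = set(g)
    # no player gains by deleting one of its own links
    for l in g:
        for i in l:
            if payoff(i, g - {l}) > payoff(i, g):
                return False
    # no pair of players both (weakly) gain by adding their link
    for l in ALL_LINKS:
        if l not in g:
            gains = [payoff(i, g | {l}) - payoff(i, g) for i in l]
            if min(gains) >= 0 and max(gains) > 0:
                return False
    return True

networks = chain.from_iterable(combinations(ALL_LINKS, k) for k in range(4))
stable = [set(g) for g in networks if pairwise_stable(g)]
print(stable)  # with these payoffs, only the fully peered triangle survives
```

With these illustrative numbers the only pairwise-stable network is the fully peered one, echoing the chapter's finding that universal private peering emerges as the stable outcome.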
2

The economics of internet peering interconnections

Lodhi, Aemen Hassaan 12 January 2015 (has links)
The Internet at the interdomain level is a complex network of approximately 50,000 Autonomous Systems (ASes). ASes interconnect through two types of links: (a) transit (customer-provider) and (b) peering links. Recent studies have shown that despite being optional for most ASes, a rich and dynamic peering fabric exists among ASes. Peering has also grown into one of the main instruments for coping with asymmetric traffic driven by CDNs, online video, performance requirements, and the like. Moreover, peering has been in the spotlight recently because of peering conflicts between major ISPs and Content Providers. Such conflicts have led to calls for intervention by communication regulators and legislation at the highest levels of government. Peering disputes have also sometimes resulted in partitioning of the Internet. Despite the broad interest and intense debate about peering, several fundamental questions remain elusive. The objective of this thesis is to study peering from a techno-economic perspective. We explore the following questions: 1- What are the main sources of complexity in Internet peering that defy the development of an automated approach to assess peering relationships? 2- What is the current state of the peering ecosystem, e.g., which categories of ASes are more inclined towards peering? What are the most popular peering strategies among ASes in the Internet? 3- What can we say about the economics of contemporary peering practices, e.g., what is the impact of using different peering traffic ratios as a strategy to choose peers? Is the general notion that peering saves network costs always valid? 4- Can we propose novel methods for peering that result in more stable and fair peering interconnections? We have used game-theoretic modeling, large-scale computational agent-based modeling, and analysis of publicly available peering data to answer the above questions.
The main contributions of this thesis include: 1- Identification of fundamental complexities underlying the evaluation of peers and the formation of stable peering links in the interdomain network. 2- An empirical study of the state of the peering ecosystem from August 2010 to August 2013. 3- Development of a large-scale agent-based computational model to study the formation and evolution of Internet peering interconnections. 4- A plausible explanation for the gravitation of Internet transit providers towards Open peering and a prediction of its future consequences. 5- A proposed variant of the Open peering policy and a new policy based on cost-benefit analysis to replace contemporary simplistic policies.
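The kind of cost-benefit peering decision the thesis studies can be sketched in a few lines. The rule below compares metered transit billing against a flat IXP port fee, subject to a traffic-ratio requirement; every price, fee, and threshold here is invented for illustration, not taken from the thesis:

```python
def prefer_peering(out_mbps, in_mbps, transit_usd_per_mbps=2.0,
                   ixp_port_usd=1000.0, max_ratio=3.0):
    """Hypothetical cost-benefit check in the spirit of a ratio-based
    peering policy: peer only if traffic is balanced enough and the flat
    IXP port fee undercuts metered transit for that volume."""
    ratio = max(out_mbps, in_mbps) / max(min(out_mbps, in_mbps), 1e-9)
    if ratio > max_ratio:  # too asymmetric: a typical peering policy rejects
        return False
    # crude stand-in for 95th-percentile transit billing on the larger direction
    transit_cost = transit_usd_per_mbps * max(out_mbps, in_mbps)
    return ixp_port_usd < transit_cost

print(prefer_peering(800, 600))  # balanced, high volume -> True
print(prefer_peering(900, 100))  # 9:1 asymmetry -> False
```

The second call shows why traffic ratios matter as a screening strategy: even when peering would save money, a strongly asymmetric pair fails the ratio test.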
3

Measuring The Evolving Internet Ecosystem With Exchange Points

Ahmad, Mohammad Zubair 01 January 2013 (has links)
The Internet ecosystem, comprising thousands of Autonomous Systems (ASes), now includes Internet eXchange Points (IXPs) as another critical component of the infrastructure. Peering plays a significant part in driving the economic growth of ASes and is contributing to a variety of structural changes in the Internet. IXPs are a primary component of this peering ecosystem and are playing an increasing role not only in the topology evolution of the Internet but also in inter-domain path routing. In this dissertation we study and analyze the overall effects of peering and IXP infrastructure on the Internet. We observe that IXP peering is enabling a quicker flattening of the Internet topology and leading to over-utilization of popular inter-AS links. Indiscriminate peering at these locations is leading to higher end-to-end path latencies for ASes peering at an exchange point, an effect magnified at the most popular worldwide IXPs. We first study the effects of recently discovered IXP links on inter-AS routes using graph-based approaches and find that they point towards the changing and flattening landscape of the Internet's topology evolution. We then investigate further IXP effects by using measurements to assess the network benefits of peering. We propose and implement a measurement framework which identifies default paths through IXPs and compares them with alternate paths that isolate the IXP hop. Our system runs continuously, recording default and alternate path latencies, and is publicly available. We model the probability of an alternate path performing better than a default path through an IXP by identifying the underlying factors influencing end-to-end path latency.
Our first-of-its-kind modeling study, which uses a combination of statistical and machine learning approaches, shows that path latencies depend on the popularity of the particular IXP, the size of the provider ASes of the networks peering at common locations, and the relative position of the IXP hop along the path. An in-depth comparison of end-to-end path latencies reveals a significant percentage of alternate paths outperforming the default route through an IXP. This characteristic of higher path latencies is magnified at the popular continental exchanges, as measured in our case study of the largest regional IXPs. We continue by studying another effect of peering with numerous applications in overlay routing: Triangle Inequality Violations (TIVs). These TIVs in the Internet delay space are created by peering, and we compare their essential characteristics with those of overlay paths such as detour routes. They are identified and analyzed from existing measurement datasets, but on a scale not attempted before. The implementation demonstrates the effectiveness of GPUs in analyzing big data sets, while the TIVs studied show that a set of common inter-AS links creates them. This result provides new insight into the development of TIVs, obtained by analyzing a very large data set using GPGPUs. Overall, our work presents numerous insights into the inner workings of the Internet's peering ecosystem. Our measurements show the effects of exchange points on the evolving Internet and demonstrate their importance to Internet routing.
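A triangle inequality violation of the kind studied here is easy to state in code: a direct delay that exceeds the sum of two detour legs. The sketch below scans a small, made-up RTT matrix (the dissertation analyzes far larger measured datasets, on GPUs):

```python
# Hypothetical pairwise RTT matrix (ms) among five vantage points;
# the numbers are made up purely for illustration.
RTT_MS = [
    [ 0, 30, 80, 40, 95],
    [30,  0, 25, 50, 70],
    [80, 25,  0, 60, 20],
    [40, 50, 60,  0, 45],
    [95, 70, 20, 45,  0],
]

def triangle_violations(d):
    """Return (a, b, c) triples where the direct path a-c is slower than a
    detour through b, i.e. d[a][c] > d[a][b] + d[b][c]."""
    n = len(d)
    return [(a, b, c)
            for a in range(n) for c in range(a + 1, n) for b in range(n)
            if b not in (a, c) and d[a][c] > d[a][b] + d[b][c]]

print(triangle_violations(RTT_MS))  # -> [(0, 1, 2), (0, 3, 4), (1, 2, 4)]
```

Each reported triple is a detour route that outperforms the direct path, exactly the raw material for the overlay-routing applications the abstract mentions.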
4

Pricing in a Multiple ISP Environment with Delay Bounds and Varying Traffic Loads

Gabrail, Sameh January 2008 (has links)
In this thesis, we study different Internet pricing schemes and how they can be applied to a multiple-ISP environment. We first look at the current Internet architecture and the different classes that make up the Internet hierarchy. We then examine peering among Internet Service Providers (ISPs) and when it is a good idea for an ISP to consider peering, discussing the advantages and disadvantages of peering along with speculations on the evolution of the Internet peering ecosystem. We then consider different pricing schemes that have been proposed and study the factors that make up a good pricing plan. Finally, we apply some game-theoretic concepts to discuss how different ISPs could interact. We choose a pricing model based on a Stackelberg game that takes into account the effect of traffic variation among different customers in a multiple-ISP environment. It allows customers to specify their desired QoS in terms of a maximum allowable end-to-end delay, and customers only pay for the portion of traffic that meets this delay bound. Moreover, we show the effectiveness of adopting this model through a comparison with a model that does not take traffic variation into account. We also develop a naïve case and compare it to our more sophisticated approach.
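A minimal sketch of the Stackelberg structure described above, assuming M/M/1 queueing delay, a linear follower demand, and invented parameter values (none of which come from the thesis): the leader posts a price, the follower responds with traffic, and the leader is paid only when the resulting delay meets the bound.

```python
def mm1_delay(load, capacity):
    """Average M/M/1 queueing delay; infinite once the link saturates."""
    return float('inf') if load >= capacity else 1.0 / (capacity - load)

def follower_traffic(price, value=20.0):
    """Hypothetical customer best response: demand falls linearly in price."""
    return max(value - price, 0.0)

def leader_revenue(price, capacity=12.0, delay_bound=0.2):
    """The ISP (leader) is paid only for traffic meeting the delay bound."""
    load = follower_traffic(price)
    if mm1_delay(load, capacity) > delay_bound:
        return 0.0  # bound violated: customers do not pay for this traffic
    return price * load

# The leader searches a price grid for its Stackelberg optimum.
best = max((p / 10 for p in range(0, 201)), key=leader_revenue)
print(best, leader_revenue(best))  # the delay bound binds: price 13.0, revenue 91.0
```

Note how the delay bound disciplines the leader: prices below 13.0 attract so much traffic that the bound is violated and nothing is billed, so the optimum sits exactly where the QoS constraint starts to hold.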
6

Formulating Taiwan's Internet IP Peering Mechanism from Two-Sided Market Perspectives

Tai, Tzu-cheng 10 February 2010 (has links)
We propose that the industry structure of the Taiwan broadband market is a two-sided market. In this framework, the networks need to be completely interconnected in order to ensure unhindered information flow. Based on a two-sided market model, we analyze the IP peering mechanism for the Taiwan Internet market. We show that IP peering access charges should be a very low constant amount to reflect the unique structure of the Taiwan broadband industry. Furthermore, to attract more Internet content providers (ICPs) and end users to provide more content services and Internet applications, Internet service providers (ISPs) should provide free broadband services to ICPs. Though these results contradict the 'user-pays' principle, they are more profitable for ISPs and ICPs, and, most importantly, they improve overall social welfare. Last, we show that a more efficacious framework for ensuring network neutrality in a vertically integrated monopoly market, such as the Taiwan broadband industry, is the Efficient Component Pricing Rule (ECPR).
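The cross-group externality behind the "free broadband for ICPs" result can be sketched numerically. The linear participation functions and all parameters below are hypothetical, serving only to illustrate why subsidizing the content side can enlarge the platform on both sides:

```python
def participation(own_price, other_side, base=100.0, ext=0.5):
    """Hypothetical linear participation: falls in own price, rises with the
    size of the other side (the cross-group externality)."""
    return max(base - 10.0 * own_price + ext * other_side, 0.0)

def equilibrium(p_user, p_icp, iters=200):
    users, icps = 100.0, 100.0
    for _ in range(iters):  # iterate to the fixed point of the two sides
        users, icps = participation(p_user, icps), participation(p_icp, users)
    return users, icps

def welfare(p_user, p_icp):
    users, icps = equilibrium(p_user, p_icp)
    return users + icps  # crude proxy: total participation on the platform

print(welfare(2.0, 2.0))  # both sides pay
print(welfare(2.0, 0.0))  # free broadband for the ICP side enlarges the platform
```

Because each extra ICP attracts more users and vice versa, dropping the ICP price to zero raises participation on both sides, which is the mechanism (not the calibration) behind the paper's welfare claim.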
7

Network Formation and Economic Applications

Chakrabarti, Subhadip 29 September 2004 (has links)
Networks, generically, refer to any application of graph theory in economics. Consider an undirected graph where nodes represent players and links represent relationships between them. Players can both form and delete links, by which we mean that they can both form new relationships and terminate existing ones. A stable network is one in which no incentives exist to change the network structure. There can be various forms of stability depending on how many links players are allowed to form or delete at a time. Under strong pairwise stability, each player is allowed to delete any number of links at a time while any pair of players can form one link at a time. We introduce a network-value function, which assigns to each possible network a certain value. The value is allocated according to the component-wise egalitarian allocation rule, which divides the value generated by a component equally among members of the component (where a component refers to a maximally connected subgraph). An efficient network is one that maximizes the network-value function. We show that there is an underlying conflict between strong pairwise stability and efficiency: efficient networks are not necessarily strongly pairwise stable. This conflict can be resolved only if value functions satisfy a certain property called "middlemen-security". We further find that there is a broad class of networks called "middlemen-free networks" for which the above condition is automatically satisfied under all possible value functions. We also look at three network applications. A peering contract is an arrangement between Internet Service Providers under which they exchange traffic with one another free of cost. We analyze incentives for peering contracts among Internet service providers using the notion of pairwise stability. A hierarchy is a directed graph with an explicit top-down structure in which each pair of linked agents has a superior-subordinate relationship.
We apply the notion of conjunctive permission value to demonstrate the formation of hierarchical firms in a competitive labor market. Comparative or targeted advertising is defined as any form of advertising where a firm directly or indirectly names a competitor. We also examine a model of targeted advertising between oligopolistic firms using non-cooperative game theoretic tools. / Ph. D.
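The component-wise egalitarian rule described above is straightforward to compute: find the maximally connected components, then split each component's value equally among its members. The value function used here (a component is worth the square of its size) is a placeholder for illustration, not one from the dissertation:

```python
def components(nodes, links):
    """Maximally connected subgraphs of an undirected network."""
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:  # depth-first search from `start`
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(m for l in links for m in l if n in l and m != n)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def cw_egalitarian(nodes, links, value):
    """Component-wise egalitarian rule: each component's value is split
    equally among its members. `value` maps a component to its worth."""
    alloc = {}
    for comp in components(nodes, links):
        share = value(comp) / len(comp)
        for n in comp:
            alloc[n] = share
    return alloc

v = lambda comp: len(comp) ** 2  # hypothetical value function
print(cw_egalitarian({1, 2, 3, 4, 5}, {(1, 2), (2, 3), (4, 5)}, v))
# each member of component {1, 2, 3} gets 3.0; each member of {4, 5} gets 2.0
```

With this rule in hand, a stability check like the one in the abstract amounts to asking whether any player could raise its own share by deleting links or any pair by adding one.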
8

Proposition de nouveaux mécanismes de protection contre l'usurpation d'identité pour les fournisseurs de services Internet / Proposal for new protections against identity theft for ISPs

Biri, Aroua 25 February 2011 (has links)
More and more organizations are computerized, and the larger an organization is, the more likely it is to be the target of Internet attacks. Internet users also increasingly rely on the Internet to shop on e-commerce sites, to access online government services, to vote electronically, and so on. Moreover, many of them own a growing number of electronic devices that can be connected to the Internet from various locations (home, car, workplace, etc.). These devices form a so-called personal network that enables new user-centered applications, and Internet service providers can expand their service offerings by providing security for such networks. According to the Arbor Networks report "Worldwide Infrastructure Security Report", the most severe threats are distributed denial-of-service attacks. This type of attack aims to make a service unavailable by preventing legitimate users from using it. It relies on spoofing, the creation of packets (IP, ARP, etc.) with a forged source address in order to impersonate a computer system or the identity of a sender. Spoofing thus makes it possible to render a service unavailable, to eavesdrop on, corrupt, or block the traffic of Internet users, or to undermine the correct operation of routing protocols and of customers' personal networks. Spoofing is also used for activities prohibited by the French "Hadopi" law, such as illegal downloading. ISPs therefore have a duty to protect their customers from attacks based on spoofing. 
ISPs rely on the routing protocols they run to deliver their customers' data. However, the intra-domain protocol OSPF and the inter-domain protocol BGP are vulnerable to spoofing attacks, which can lead to packets being delivered to illegitimate destinations or to denial of service. We therefore propose two mechanisms, dedicated respectively to OSPF and to BGP. To protect OSPF routers against spoofing attacks, we advocate storing identities and cryptographic material in an electronic safe, namely a smart card. The cards then run a key-derivation algorithm with the cards of neighboring routers and with that of the designated router, and the derived keys are used to sign OSPF messages and to authenticate at the MAC level. We describe the demonstrator platform and the test scenarios used to evaluate the performance of our prototype and compare it with the Quagga software on three criteria: the time required to process a link-state advertisement, the convergence time, and the time to recompute a routing table after a change. These times increase only slightly with the introduction of the smart card implementing the proposed security functions, so the solution strengthens the security of OSPF with a reasonable impact on performance. 
To protect BGP routers against spoofing attacks, we advocate clustering Internet domains and securing the links between and within clusters through the web-of-trust paradigm and certificateless cryptography […] 
The mechanisms of protection against spoofing attacks on access networks are crucial for customer adoption of the new applications offered by Internet service providers. This part of the doctoral thesis falls within the European project "MAGNET Beyond", whose vision is to put the concept of personal networks into practice, with the ultimate objective of designing, developing, prototyping and validating the concept. For the case of user equipment accessing an ISP's network from a public place, we propose a cross-layer protocol based on the principles of information theory. This protocol fixes a security hole not addressed by other proposals, namely the identity-theft attack that occurs at the beginning of a communication, and thus protects users against man-in-the-middle attacks. We propose that a person who wants secure Internet access must be on a specific circle, called the "RED POINT", so that an attacker cannot be on the same circle at the same time. The proposed cross-layer protocol can be divided into three phases: checking the position of the user, extracting the shared secret at the physical layer, and deriving the shared key at the MAC layer. We subsequently validate our solution with the formal tool AVISPA and present the results of its implementation. In a private context, communication between devices conveys users' personal data, which may be confidential, so equipment not belonging to the legitimate user must be prevented from accessing the user's network. We therefore propose two mechanisms of protection against spoofing-based attacks so that illegitimate equipment cannot impersonate legitimate equipment, the first dedicated to personal networks and the second to the particular case of medical networks. For personal networks, we propose a protocol based on an out-of-band channel to provide certificates to user equipment, and we derive bilateral keys for a personal network's equipment at the same site and between equipment at remote sites. 
For the particular case of medical networks, we propose to cover both their deployment and operational phases. This proposal was submitted to the IEEE 802.15.6 working group, which conducts research towards the standardization of medical networks […]
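As a rough illustration of bilateral key derivation between two pieces of equipment, here is an HMAC-based sketch using only the standard library. The thesis's actual smart-card protocol and physical/MAC-layer secret extraction are far more elaborate; the function name and identity encoding below are assumptions made for this example:

```python
import hashlib
import hmac

def derive_pairwise_key(shared_secret: bytes, id_a: bytes, id_b: bytes) -> bytes:
    """Illustrative HMAC-based derivation of a bilateral key bound to two
    equipment identities. Sorting the identities makes the derivation
    order-independent, so both ends compute the same key."""
    info = b"|".join(sorted([id_a, id_b]))
    return hmac.new(shared_secret, b"pairwise-key|" + info, hashlib.sha256).digest()

k1 = derive_pairwise_key(b"secret", b"router-A", b"router-B")
k2 = derive_pairwise_key(b"secret", b"router-B", b"router-A")
print(k1 == k2)  # both ends derive the same 32-byte key
```

Binding the key to both identities is what prevents a spoofed device from reusing it: an attacker presenting a different identity derives a different key even with the same shared secret.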
9

An analysis of the economic performance of the Johannesburg's small internet service providers from 2002 - 2006

Tenene, Sime Gabriel 03 1900 (has links)
This study investigates the economic performance of Johannesburg's small Internet service providers against the backdrop of regulatory conditions. It departs from the viewpoint that previous studies have not given particular attention to the economic performance of small Internet service providers and other impacting factors. The study employed a qualitative research approach with the aim of obtaining a deeper understanding and an insider view as related by the respondents. The analysis follows Neuman's (2006) guide, which proceeds from themes or concepts. The results provide the respondents' perspectives and the conclusions drawn by the researcher. The study ends with suggestions and recommendations for future research, prompted by the results and experiences encountered during the study. / M.A. (Communication Science)
10

Essai d'une théorie sur l'architecture normative du réseau Internet / Essay on a theory of the normative architecture of the internet network

Bamdé, Aurélien 10 October 2013 (has links)
Complex is undoubtedly the adjective that best summarises the issue of the normative architecture of the Internet. The issue is complex for two reasons: the first concerns the identification of the norms that make up this architecture; the second concerns their purpose. Identifying the norms proves extremely complex because the concept of a norm refers to such a wide range of realities that it is difficult to define. After establishing the existence of norms that govern the conduct of the network's builders, it is necessary to ask what kind of norms they are. Here again, the question is not as easy to resolve as it seems: no criterion for distinguishing the different kinds of norms commands unanimous support among authors. The second source of complexity is the purpose of these norms: the organisation of the digital society, which is itself a complex system. Accepting that idea entails accepting that the scheme followed by the norms through which such a system is controlled differs sharply from the scheme of the norms most familiar to us, the rules of law. While the genesis of the former rests on a self-organising mechanism, the creation of the latter proceeds from an act of will. The difference between the two schemes is significant: in one case spontaneity governs the production of rules of conduct, in the other reason does. This opposition between the two normative schemes reappears in the digital universe, where it takes the form of competition between the digital and legal orders. The normative architecture of the Internet network is therefore described through the competition between these two normative systems.
