  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Empowering bystanders to facilitate Internet censorship measurement and circumvention

Burnett, Samuel Read 27 August 2014 (has links)
Free and open exchange of information on the Internet is at risk: more than 60 countries practice some form of Internet censorship, and both the number of countries practicing censorship and the proportion of Internet users who are subject to it are on the rise. Understanding and mitigating these threats to Internet freedom is a continuous technological arms race with many of the most influential governments and corporations. By its very nature, Internet censorship varies drastically from region to region, which has impeded nearly all efforts to observe and fight it on a global scale. Researchers and developers in one country may find it very difficult to study censorship in another; this is particularly true for those in North America and Europe attempting to study notoriously pervasive censorship in Asia and the Middle East. This dissertation develops techniques and systems that empower users in one country, or bystanders, to assist in the measurement and circumvention of Internet censorship in another. Our work builds from the observation that there are people everywhere who are willing to help us if only they knew how. First, we develop Encore, which allows webmasters to help study Web censorship by collecting measurements from their sites' visitors. Encore leverages weaknesses in cross-origin security policy to collect measurements from a far more diverse set of vantage points than previously possible. Second, we build Collage, a technique that uses the pervasiveness and scalability of user-generated content to disseminate censored content. Collage's novel communication model is robust against censorship that is significantly more powerful than governments use today. Together, Encore and Collage help people everywhere study and circumvent Internet censorship.
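Encore's collection mechanism is only described in prose here. As a rough illustration of the aggregation step such a system needs, the sketch below flags URLs that fail to load for visitors in one country but load elsewhere; all names, thresholds, and the report format are hypothetical, not Encore's actual code.

```python
# Hypothetical aggregation sketch (not Encore's actual implementation):
# combine cross-origin load results reported by site visitors and flag
# URLs that appear blocked from a given country. Thresholds are illustrative.
from collections import defaultdict

def flag_blocked(reports, min_reports=10, max_success_rate=0.2):
    """reports: iterable of (country, url, loaded) tuples, where `loaded`
    is True if the visitor's browser fetched the cross-origin resource."""
    stats = defaultdict(lambda: [0, 0])  # (country, url) -> [successes, total]
    for country, url, loaded in reports:
        entry = stats[(country, url)]
        entry[0] += int(loaded)
        entry[1] += 1
    return {key for key, (ok, total) in stats.items()
            if total >= min_reports and ok / total <= max_success_rate}

reports = ([("CN", "http://example.org/img.png", False)] * 12
           + [("US", "http://example.org/img.png", True)] * 12)
print(flag_blocked(reports))  # {('CN', 'http://example.org/img.png')}
```

Requiring a minimum report count before flagging guards against noisy single-visitor failures, which matters when vantage points are uncontrolled site visitors.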

Large-scale Peer-to-peer Streaming: Modeling, Measurements, and Optimizing Solutions

Wu, Chuan 26 February 2009 (has links)
Peer-to-peer streaming has emerged as a killer application in today's Internet, delivering a large variety of live multimedia content to millions of users at any given time with low server cost. Though successfully deployed, the efficiency and optimality of the current peer-to-peer streaming protocols are still less than satisfactory. In this thesis, we investigate optimizing solutions to enhance the performance of the state-of-the-art mesh-based peer-to-peer streaming systems, utilizing both theoretical performance modeling and extensive real-world measurements. First, we model peer-to-peer streaming applications in both the single-overlay and multi-overlay scenarios, based on the solid foundation of optimization and game theories. Using these models, we design efficient and fully decentralized solutions to achieve performance optimization in peer-to-peer streaming. Then, based on a large volume of live measurements from a commercial large-scale peer-to-peer streaming application, we extensively study the real-world performance of peer-to-peer streaming over a long period of time. Highlights of our measurement study include the topological characterization of large-scale streaming meshes, the statistical characterization of inter-peer bandwidth availability, and the investigation of server capacity utilization in peer-to-peer streaming. Utilizing in-depth insights from our measurements, we design practical algorithms that advance the performance of key protocols in peer-to-peer live streaming. We show that our optimizing solutions fulfill their design objectives in various realistic scenarios, using extensive simulations and experiments.


Palash, Mijanur R 01 May 2018 (has links)
Multipath TCP (MPTCP) is a modification of the TCP protocol that enables a client to transfer data over multiple paths simultaneously under a single TCP connection, for improved throughput and fault resilience. However, MPTCP suffers some major drawbacks when applied in a wireless network. We found several cases where, despite improving an individual MPTCP client's throughput, MPTCP reduces the capacity of the overall wireless network due to MAC-level fairness and contention-based access schemes. Additionally, even if bandwidth improves, employing MPTCP in wireless networks can be energy inefficient due to the additional energy consumed by multiple interfaces. This creates a dilemma between bandwidth improvement and energy efficiency. This thesis research aims to solve these important issues for MPTCP in the wireless environment. We analyzed the root cause of these drawbacks and identified instances where they can arise. Two novel schemes, denoted MPWiFi and kMPTCP, are developed to solve the bandwidth degradation and energy efficiency issues respectively, while maintaining the promised benefits of MPTCP. MPWiFi assigns different priorities to the subflows and aggressively suppresses some of them according to its design logic. Similarly, kMPTCP adds an additional multipath subflow only if the bandwidth requirement cannot be fulfilled by a single path and the new subflow meets data rate and signal strength conditions. Moreover, kMPTCP keeps an additional subflow only as long as its signal strength remains in a good range and the subflow remains necessary to provide the bandwidth the application needs. These two schemes have been implemented on top of the Linux kernel MPTCP implementation. Extensive real-world deployment and NS-3 simulation show that the proposed schemes can effectively alleviate the adverse impacts of MPTCP-based multipath access in wireless networks.
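The abstract describes kMPTCP's admission rule only in prose. A minimal sketch of that style of decision logic might look as follows; the threshold values and the exact form of the rule are assumptions for illustration, not the thesis's actual parameters.

```python
# Sketch of a kMPTCP-style subflow admission test (thresholds and the
# precise rule are assumed; the thesis's real parameters are not given here).
def should_add_subflow(required_mbps, primary_mbps,
                       candidate_mbps, candidate_rssi_dbm,
                       min_rssi_dbm=-75.0):
    if primary_mbps >= required_mbps:
        return False  # a single path suffices; an extra radio wastes energy
    if candidate_rssi_dbm < min_rssi_dbm:
        return False  # weak signal: poor throughput per joule
    # the candidate must be able to close the remaining bandwidth gap
    return candidate_mbps >= required_mbps - primary_mbps

print(should_add_subflow(10.0, 12.0, 8.0, -60.0))  # False: primary is enough
print(should_add_subflow(10.0, 4.0, 8.0, -60.0))   # True: 6 Mbps gap, good link
print(should_add_subflow(10.0, 4.0, 8.0, -85.0))   # False: signal too weak
```

Gating on both the bandwidth gap and signal strength captures the dilemma the abstract describes: an extra subflow is only worth its energy cost when it is actually needed and usable.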

Examining Engineering & Technology Students' Acceptance Of Network Virtualization Technology Using The Technology Acceptance Model

Yousif, Wael K. 01 January 2010 (has links)
This causal and correlational study was designed to extend the Technology Acceptance Model (TAM) and to test its applicability to Valencia Community College (VCC) Engineering and Technology students as the target user group when investigating the factors influencing their decision to adopt and to utilize VMware as the target technology. In addition to the primary three indigenous factors: perceived ease of use, perceived usefulness, and intention toward utilization, the model was also extended with enjoyment, external control, and computer self-efficacy as antecedents to perceived ease of use. In an attempt to further increase the explanatory power of the model, the Task-Technology Fit (TTF) constructs were included as antecedents to perceived usefulness. The model was also expanded with subjective norms and voluntariness to assess the degree to which social influences affect students' decision for adoption and utilization. This study was conducted during the fall term of 2009, using 11 instruments: (1) VMware Tools Functions Instrument; (2) Computer Networking Tasks Characteristics Instrument; (3) Perceived Usefulness Instrument; (4) Voluntariness Instrument; (5) Subjective Norms Instrument; (6) Perceived Enjoyment Instrument; (7) Computer Self-Efficacy Instrument; (8) Perception of External Control Instrument; (9) Perceived Ease of Use Instrument; (10) Intention Instrument; and (11) a Utilization Instrument. The 11 instruments collectively contained 58 items. Additionally, a demographics instrument of six items was included to investigate the influence of age, prior experience with the technology, prior experience in computer networking, academic enrollment status, and employment status on student intentions and behavior with regard to VMware as a network virtualization technology. Data were analyzed using path analysis, regressions, and univariate analysis of variance in SPSS and AMOS for Windows.
The results suggest that perceived ease of use was the strongest determinant of student intention. The analysis also suggested that external control, measuring the facilitating conditions (knowledge, resources, etc.) necessary for adoption, was the strongest predictor of perceived ease of use. Consistent with previous studies, perceived ease of use was found to be the strongest predictor of perceived usefulness, followed by subjective norms as students continued to use the technology. Even though the integration of the task-technology fit construct was not helpful in explaining the variance in students' perceived usefulness of the target technology, it was statistically significant in predicting students' perception of ease of use. The study concluded with recommendations to investigate other factors (such as service quality and ease of implementation) that might contribute to explaining the variance in perceived ease of use as the primary driving force influencing students' decisions to adopt. A recommendation was also made to modify the task-technology fit construct instruments to improve the articulation and the specificity of the task. The need for further examination of the influence of the instructor on students' decisions to adopt a target technology was also emphasized.

Cell identity allocation and optimisation of handover parameters in self-organised LTE femtocell networks

Zhang, Xu January 2013 (has links)
A femtocell is a small cellular base station used by operators to extend indoor service coverage and enhance overall network performance. In Long Term Evolution (LTE), a femtocell works under macrocell coverage and combines with the macrocell to constitute a two-tier network. Compared to the traditional single-tier network, the two-tier scenario creates many new challenges, which have led the 3rd Generation Partnership Project (3GPP) to implement an automation technology called Self-Organising Network (SON) in order to achieve lower cost and enhanced network performance. This thesis focuses on inbound and outbound handovers (handover between femtocell and macrocell); in detail, it provides suitable solutions for predicting the intensity of femtocell handovers, for Physical Cell Identity (PCI) allocation, and for handover triggering parameter optimisation. Moreover, those solutions are implemented in the structure of SON. In order to efficiently manage radio resource allocation, this research investigates the conventional UE-based prediction model and proposes a cell-based prediction model to predict the intensity of a femtocell's handovers, which overcomes the drawbacks of the conventional models in the two-tier scenario. Then, the predictor is used in the proposed dynamic group PCI allocation approach in order to solve the problem of PCI allocation for the femtocells. In addition, based on SON, this approach is implemented in the structure of a centralised Automated Configuration of Physical Cell Identity (ACPCI). It overcomes the drawbacks of the conventional method by reducing inbound handover failure of Cell Global Identity (CGI). This thesis also tackles optimisation of the handover triggering parameters to minimise handover failure. A dynamic hysteresis-adjusting approach for each User Equipment (UE) is proposed, using the received average Reference Signal-Signal to Interference plus Noise Ratio (RS-SINR) of the UE as a criterion.
Furthermore, based on SON, this approach is implemented in the structure of hybrid Mobility Robustness Optimisation (MRO). It is able to offer a uniquely optimised hysteresis value to each individual UE in the network. In order to evaluate the performance of the proposed approach against existing methods, a System Level Simulation (SLS) tool, provided by the Centre for Wireless Network Design (CWiND) research group, is utilised, which models the structure of two-tier communication in LTE femtocell-based networks.
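As a rough illustration of a per-UE hysteresis adjustment driven by average RS-SINR, consider the sketch below. The mapping, its ranges, and the thresholds are invented for illustration; the thesis's actual adjustment rule is not reproduced here.

```python
# Illustrative per-UE hysteresis adaptation (lo/hi/hys ranges are assumed,
# not the thesis's actual parameters).
def hysteresis_db(avg_rs_sinr_db, lo=0.0, hi=10.0, hys_min=1.0, hys_max=6.0):
    """Strong RS-SINR -> larger hysteresis (delay handover, avoid ping-pong);
    weak RS-SINR -> smaller hysteresis (hand over sooner, avoid failure)."""
    if avg_rs_sinr_db <= lo:
        return hys_min
    if avg_rs_sinr_db >= hi:
        return hys_max
    frac = (avg_rs_sinr_db - lo) / (hi - lo)
    return hys_min + frac * (hys_max - hys_min)

def a3_triggered(neighbor_rsrp_dbm, serving_rsrp_dbm, hys_db):
    """LTE event A3: neighbour becomes better than serving by the hysteresis."""
    return neighbor_rsrp_dbm > serving_rsrp_dbm + hys_db

hys = hysteresis_db(5.0)
print(hys)                              # 3.5
print(a3_triggered(-90.0, -95.0, hys))  # True: handover would be triggered
```

The point of making hysteresis per-UE is visible even in this toy version: a cell-wide constant would either trigger too late for edge users or ping-pong for well-covered ones.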

Ensuring Network Designs Meet Performance Requirements under Failures

Yiyang Chang (6872636) 13 August 2019 (has links)
With the prevalence of web and cloud-based services, there is an ever growing requirement on the underlying network infrastructure to ensure that business critical traffic is continually serviced with acceptable performance. Networks must meet their performance requirements under failures. The global scale of cloud provider networks and the rapid evolution of these networks imply that failures are the norm in production networks today. Unplanned downtime can cost billions of dollars, and cause catastrophic consequences. The thesis is motivated by these challenges and aims to provide a principled solution to certifying network performance under failures. Network performance certification is complicated, due to both the variety of ways a network can fail, and the rich ways a network can respond to failures. The key contributions of this thesis are: (i) a general framework for robustly certifying the worst-case performance of a network across a given set of uncertain scenarios. A key novelty is that the framework models flexible network response enabled by recent emerging trends such as Software-Defined Networking; (ii) a toolkit which automates the key steps needed in robust certification making it suitable for use by a network architect, and which enables experimentation on a wide range of robust certification of practical interest; (iii) Slice, a general framework which efficiently classifies failure scenarios based on whether network performance is acceptable for those scenarios, and which allows reasoning about performance requirements that must be met over a given percentage of scenarios. We also show applications of our frameworks in synthesizing designs that are guaranteed to meet a performance goal over all or a desired percentage of a given set of scenarios. The thesis focuses on wide-area networks, but the approaches apply to data-center networks as well.
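The core robust-certification idea, taking the worst case of a performance metric over an explicit set of failure scenarios under some network response model, can be sketched in a few lines. The topology, the equal-split reroute model, and the utilization metric below are toy assumptions for illustration, not the thesis's framework.

```python
# Toy robust-certification sketch: enumerate a failure scenario set (here,
# all single-link failures), apply the network's response model, and report
# the worst-case maximum link utilization across scenarios.
def worst_case_utilization(capacities, route_fn, demand):
    worst = 0.0
    for failed in capacities:
        load = route_fn(failed, demand)  # per-link load after the response
        util = max(load[l] / capacities[l] for l in capacities if l != failed)
        worst = max(worst, util)
    return worst

capacities = {"a": 10.0, "b": 10.0}  # two parallel links (invented topology)

def equal_split_reroute(failed, demand):
    """Assumed response model: spread the demand over the surviving links."""
    alive = [l for l in capacities if l != failed]
    share = demand / len(alive)
    return {l: (share if l in alive else 0.0) for l in capacities}

worst = worst_case_utilization(capacities, equal_split_reroute, 8.0)
print(worst)         # 0.8
print(worst <= 1.0)  # certified: the demand fits under any single failure
```

Slice-style reasoning replaces the single max with a classification of each scenario as acceptable or not, so requirements over a percentage of scenarios can be checked the same way.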

Graph Algorithms for Network Tomography and Fault Tolerance

Gopalan, Abishek January 2013 (has links)
The massive growth and proliferation of media, content, and services on the Internet are driving the need for more network capacity as well as larger networks. With increasing bandwidth and transmission speeds, even small disruptions in service can result in a significant loss of data. Thus, it is becoming increasingly important to monitor networks for their performance and to be able to handle failures effectively. Doing so is beneficial from a network design perspective as well as in being able to provide a richer experience to the users of such networks. Network tomography refers to inference problems in large-scale networks wherein it is of interest to infer individual characteristics, such as link delays, through aggregate measurements, such as end-to-end path delays. In this dissertation, we establish a fundamental theory for a class of network tomography problems in which the link metrics of a network are modeled to be additive. We establish the necessary and sufficient conditions on the network topology, provide polynomial time graph algorithms that quantify the extent of identifiability, and algorithms to identify the unknown link metrics. We develop algorithms for all graph topologies classified on the basis of their connectivity. The solutions developed in this dissertation extend beyond networking and are applicable in areas such as nano-electronics and power systems. We then develop graph algorithms to handle link failures effectively and to provide multipath routing capabilities in IP as well as Ethernet based networks. Our schemes guarantee recovery and are designed to pre-compute alternate next hops that can be taken upon link failures. This allows for fast re-routing as we avoid the need to wait for (control plane) re-computations.
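For the additive-metrics model, identifiability amounts to the routing matrix of the measurement paths having full rank, in which case the link metrics follow by linear algebra. The small self-contained sketch below uses a toy triangle topology and plain Gauss-Jordan elimination; it illustrates the model, not the dissertation's algorithms.

```python
# Toy network-tomography solve: recover additive link metrics from
# end-to-end path sums when the path set makes the system identifiable
# (square routing matrix of full rank). Topology below is invented.
def solve_additive(paths, measurements):
    """paths: list of lists of link indices traversed by each path;
    measurements: the corresponding end-to-end sums (e.g. path delays)."""
    n = len(measurements)
    A = [[1.0 if j in paths[i] else 0.0 for j in range(n)] for i in range(n)]
    b = list(measurements)
    for col in range(n):
        # partial pivot; StopIteration here means the system is unidentifiable
        piv = next(r for r in range(col, n) if abs(A[r][col]) > 1e-12)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        inv = 1.0 / A[col][col]
        A[col] = [x * inv for x in A[col]]
        b[col] *= inv
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b

# Triangle of links 0,1,2; three paths measure pairwise sums of link delays:
# 5 = l0+l1, 7 = l0+l2, 8 = l1+l2  =>  l0=2, l1=3, l2=5
print(solve_additive([[0, 1], [0, 2], [1, 2]], [5.0, 7.0, 8.0]))
```

When fewer independent paths are available than links, only some combinations of link metrics are identifiable, which is exactly the extent-of-identifiability question the dissertation's graph algorithms quantify.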

Data-Driven Network Analysis and Applications

Tao, Narisu 14 September 2015 (has links)
No description available.

On Switchover Performance in Multihomed SCTP

Eklund, Johan January 2010 (has links)
The emergence of real-time applications, like Voice over IP and video conferencing, in IP networks poses a challenge to the underlying infrastructure. Several real-time applications have requirements on timeliness as well as on reliability, and are accompanied by signaling applications to set up, tear down and control the media sessions. Since neither of the traditional transport protocols responsible for end-to-end transfer of messages was found suitable for signaling traffic, the Stream Control Transmission Protocol (SCTP) was standardized. The focus of the protocol was initially on telephony signaling applications, but it was later widened to serve as a general-purpose transport protocol. One major new feature to enhance robustness in SCTP is multihoming, which enables more than one path within the same association. In this thesis we evaluate some of the mechanisms affecting transmission performance in case of a switchover between paths in a multihomed SCTP session. The major part of the evaluation concerns a failure situation, where the current path is broken. In case of failure, the endpoint does not get an explicit notification, but has to react upon missing acknowledgements. The challenge is to distinguish path failure from temporary congestion in order to decide when to switch to an alternate path. A too-fast switchover may be spurious, while a too-late switchover delays recovery; either reduces transmission performance. This tradeoff involves several protocol as well as network parameters, and we elaborate on these to give a coherent view of the parameters and their interaction. Further, we present a recommendation on how to tune the parameters to meet telephony signaling requirements, still without violating fairness to other traffic. We also consider another angle of switchover performance: the startup on the alternate path.
Since the available capacity is usually unknown to the sender, transmission on a new path starts at a low rate and then increases as acknowledgements of successful transmissions return. In case of a switchover in the middle of a media session, this startup phase could cause problems for the application. In multihomed SCTP, the availability of the alternate path makes it feasible for the end-host to estimate the available capacity on that path prior to the switchover, making a more efficient startup scheme possible. In this thesis we combine different switchover scenarios with relevant traffic. For these combinations, we analytically evaluate and quantify the potential performance gain from utilizing an ideal startup mechanism as compared to the traditional startup procedure.
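To get a feel for why failover tuning matters, the worst-case failure-detection time can be computed from the retransmission timeout (RTO) parameters and Path.Max.Retrans. The sketch below uses the RFC 4960 default values purely as illustrative inputs; real detection times depend on the measured RTT and the tuning this thesis studies.

```python
# Illustrative worst-case SCTP path-failure detection time: each lost
# retransmission doubles the RTO (capped at RTO.Max) until the error count
# exceeds Path.Max.Retrans. Defaults shown are RFC 4960's suggested values.
def failover_time(rto_initial=3.0, rto_max=60.0, path_max_retrans=5):
    total, rto = 0.0, rto_initial
    # the path is declared failed after path_max_retrans + 1 consecutive timeouts
    for _ in range(path_max_retrans + 1):
        total += rto
        rto = min(rto * 2.0, rto_max)
    return total

print(failover_time())                          # 153.0 seconds with defaults
print(failover_time(path_max_retrans=2))        # 21.0 with a more aggressive setting
```

The two calls show the tradeoff in the abstract directly: shrinking Path.Max.Retrans cuts detection time by an order of magnitude, but raises the risk of a spurious switchover during a mere congestion episode.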

Named Data Networking in Local Area Networks

Shi, Junxiao January 2017 (has links)
The Named Data Networking (NDN) is a new Internet architecture that changes the network semantic from packet delivery to content retrieval and promises benefits in areas such as content distribution, security, mobility support, and application development. While the basic NDN architecture applies to any network environment, local area networks (LANs) are of particular interest because of their prevalence on the Internet and the relatively low barrier to deployment. In this dissertation, I design NDN protocols and implement NDN software, to make NDN communication in LAN robust and efficient. My contributions include: (a) a forwarding behavior specification required on every NDN node; (b) a secure and efficient self-learning strategy for switched Ethernet, which discovers available contents via occasional flooding, so that the network can operate without manual configuration, and does not require a routing protocol or a centralized controller; (c) NDN-NIC, a network interface card that performs name-based packet filtering, to reduce CPU overhead and power consumption of the main system during broadcast communication on shared media; (d) the NDN Link Protocol (NDNLP), which allows the forwarding plane to add hop-by-hop headers, and provides a fragmentation-reassembly feature so that large NDN packets can be sent directly over Ethernet with limited MTU.
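NDN forwards Interests by longest-prefix match on hierarchical names rather than on IP addresses, which is the operation both the forwarding behavior specification and the self-learning strategy build on. A toy FIB lookup can be sketched as below; the name prefixes and face identifiers are invented for illustration.

```python
# Toy NDN FIB lookup: longest-prefix match over hierarchical name components.
# Prefixes and face names are hypothetical examples.
def longest_prefix_match(fib, name):
    """fib: dict mapping name-component tuples to outgoing faces.
    Returns the face registered for the longest matching prefix, or None."""
    components = tuple(name.strip("/").split("/"))
    for i in range(len(components), 0, -1):
        face = fib.get(components[:i])
        if face is not None:
            return face
    return None

fib = {
    ("edu", "arizona"): "eth0",
    ("edu", "arizona", "cs"): "eth1",
}
print(longest_prefix_match(fib, "/edu/arizona/cs/ndn/paper"))  # eth1
print(longest_prefix_match(fib, "/edu/arizona/ece"))           # eth0
print(longest_prefix_match(fib, "/com/example"))               # None
```

In a self-learning strategy of the kind the dissertation proposes, a lookup miss (the `None` case) is what triggers the occasional flooding that discovers where content lives, after which a learned entry is installed.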
