About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

SIMPLIFYING END POINT NETWORK MEASUREMENT ON INTERNET

Wen, Zhihua January 2009 (has links)
No description available.
2

Network Performance Evaluation within the Web Browser Sandbox

Janc, Artur Adam 19 January 2009 (has links)
With the rising popularity of Web-based applications, the Web browser platform is becoming the dominant environment in which users interact with Internet content. We investigate methods of discovering information about network performance characteristics through the use of the Web browser, requiring only minimal user participation (navigating to a Web page). We focus on the analysis of explicit and implicit network operations performed by the browser (JavaScript XMLHttpRequest and HTML DOM object loading) as well as by the Flash plug-in to evaluate network performance characteristics of a connecting client. We analyze the results of a performance study, focusing on the relative differences and similarities between download, upload and round-trip time results obtained in different browsers. We evaluate the accuracy of browser events indicating incoming data, comparing their timing to information obtained from the network layer. We also discuss alternative applications of the developed techniques, including measuring packet reception variability in a simulated streaming protocol. Our results confirm that browser-based measurements closely correspond to those obtained using standard tools in most scenarios. Our analysis of implicit communication mechanisms suggests that it is possible to make enhancements to existing “speedtest” services by allowing them to reliably determine download throughput and round-trip time to arbitrary Internet hosts. We conclude that browser-based measurement using techniques developed in this work can be an important component of network performance studies.
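The core timing idea behind such browser-based measurements can be sketched in a few lines: throughput is estimated from the timestamps of progress events (as a browser might deliver via XMLHttpRequest `onprogress` callbacks) reporting cumulative bytes received. The event format below is a simplified assumption for illustration, not the thesis's actual instrumentation.

```python
def estimate_throughput_bps(events):
    """Estimate download throughput in bits/s from progress events.

    events: list of (t_seconds, cumulative_bytes) pairs, in the order a
    browser might record them in onprogress callbacks.
    """
    if len(events) < 2:
        raise ValueError("need at least two progress events")
    t0, b0 = events[0]
    t1, b1 = events[-1]
    if t1 <= t0:
        raise ValueError("event timestamps must increase")
    return (b1 - b0) * 8 / (t1 - t0)


# Example: 500 KB received over 2 seconds -> 2 Mbit/s
events = [(0.0, 0), (0.5, 100_000), (1.2, 300_000), (2.0, 500_000)]
```

In practice the first events are skewed by connection setup, which is why measurement tools commonly discard a warm-up prefix before fitting a rate.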
3

Measuring and Understanding TTL Violations in DNS Resolvers

Bhowmick, Protick 02 January 2024 (has links)
The Domain Name System (DNS) is a scalable, distributed caching architecture in which DNS records are cached at many DNS servers distributed around the globe. DNS records include a time-to-live (TTL) value that dictates how long a record can be stored before it is evicted from the cache. TTL holds significant importance for aspects of DNS security, such as determining the caching period for DNSSEC-signed responses, as well as performance, such as the responsiveness of CDN-managed domains. At a high level, TTL is crucial for ensuring efficient caching, load distribution, and network security in the Domain Name System, and setting appropriate TTL values is a key aspect of DNS administration. It is therefore important to measure how TTL violations occur in resolvers. However, assessing how DNS resolvers worldwide handle TTL is not easy and typically requires access to multiple nodes distributed globally. In this work, we introduce a novel methodology for measuring TTL violations in DNS resolvers that leverages a residential proxy service called Brightdata, enabling us to evaluate more than 27,000 resolvers across 9,500 Autonomous Systems (ASes). Among the 8,524 resolvers that had at least five distinct exit nodes, we found that 8.74% arbitrarily extend TTLs. Additionally, we find that the DNSSEC standard is being disregarded by 44.1% of DNSSEC-validating resolvers, as they continue to provide DNSSEC-signed responses even after the RRSIGs have expired. / Master of Science / The Domain Name System (DNS) works as a global phonebook for the internet, helping your computer find websites by translating human-readable names into numerical IP addresses. This system uses a caching architecture spread across servers worldwide to store DNS records. Each record comes with a time-to-live (TTL) value, essentially a timer that decides how long the information should stay in the cache before being replaced.
TTL is crucial for both security and performance in the DNS world. It plays a role in securing responses and determines the responsiveness of load-balancing schemes employed by Content Delivery Networks (CDNs). In simple terms, TTL ensures efficient caching, even network load, and overall security in the Domain Name System. For DNS to work smoothly, it is important to set the right TTL values and for resolvers to strictly honor them. However, figuring out how well DNS servers follow these rules globally is challenging. In this study, we introduce a new way to measure TTL violations in DNS servers using a proxy service called Brightdata. This allows us to check over 27,000 servers across 9,500 networks. Our findings reveal that 8.74% of these servers extend TTLs arbitrarily. Additionally, we discovered that 44.1% of servers that should be following a security standard (DNSSEC) are not doing so properly, providing signed responses even after they are supposed to expire. This research sheds light on how DNS servers around the world extend TTLs and the potential performance and security risks involved.
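The violation being measured can be thought of as follows: if a record stays in a resolver's cache for its full lifetime, the resolver must never report more remaining TTL than the authoritative TTL minus the time elapsed since it cached the record. A toy checker under that simplifying assumption (the actual measurement methodology through Brightdata exit nodes is considerably more involved):

```python
def extends_ttl(observations, authoritative_ttl):
    """observations: (elapsed_seconds_since_first_query, reported_ttl)
    pairs from repeated queries through one resolver.

    Simplifying assumption: the record stays cached for its full
    lifetime. Then a reported TTL must never exceed the authoritative
    value, and while the record is still within its lifetime it must
    not exceed the time remaining.
    """
    for elapsed, ttl in observations:
        if ttl > authoritative_ttl:
            return True  # TTL larger than the zone ever allowed
        remaining = authoritative_ttl - elapsed
        if 0 < remaining < ttl:
            return True  # served from cache but TTL not decremented
    return False
```

A compliant resolver decrements the TTL with each passing second and refetches after expiry; a TTL-extending resolver keeps serving the stale record with an inflated timer.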
4

Feasibility Study of a SLA Driven Transmission Service

Sun, Zhichao January 2015 (has links)
Network-based services are currently expanding in scale at an unprecedented speed. As users' dependence on these services continues to grow, performance issues are becoming more and more important. A Service Level Agreement (SLA) is a negotiated contract between a service provider and a customer covering service quality, priority, responsibility, and so on. In this thesis, we designed and implemented a prototype for an SLA-driven transmission service, which can deliver a file from one host to another using a combination of different transport protocols. The proposed service measures the network conditions and, based on these and the user's requirements, dynamically evaluates whether it can meet the user's SLA. Once a transmission has been accepted, it uses this information to adjust the usage of different transport-layer protocols in order to meet the agreed SLA. The thesis work is based on an investigation of network theory and on experimental results. We research how the SLA-driven transmission service is affected by various factors, including the user's requirements, network conditions, and service performance. We design and implement an evaluation model for network performance, which reveals how network performance is influenced by different network metrics such as Round-Trip Time (RTT), throughput, and Packet Loss Rate (PLR). We implement the transmission service on a real test bed, a controllable environment in which we can alter the network metrics and the measuring frequency of our evaluation model. We then evaluate these changes with our evaluation model and improve the performance of the transmission service, after which we propose a method for calculating the service cost. Finally, we summarize the feasibility of this SLA-driven transmission service. In the experiments, we obtain the delivery time and packet loss of the transmission service, which vary with the RTT and PLR of the network.
We analyze the performance of the transmission service when it uses TCP, UDP, and SCTP separately. We also identify a suitable measuring frequency and the cost of using the transmission service at that frequency. Statistical analysis of the experimental results shows that such an SLA-driven transmission service is feasible: it brings improved performance with respect to the user's requirements. In addition, we offer suggestions and outline future work for the transmission service.
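The admission decision described above can be sketched as a simple model: estimate delivery time from measured throughput, RTT, and PLR, and accept the transfer only if the estimate meets the deadline. The formula here (retransmissions inflate the transferred volume by 1/(1-PLR), plus one RTT of setup) is an illustrative assumption, not the thesis's exact evaluation model.

```python
def estimated_delivery_time(size_bytes, throughput_bps, plr, rtt_s):
    """Rough delivery-time estimate for a file transfer.

    Assumes lost packets are retransmitted, inflating the transferred
    volume by 1/(1 - plr), and charges one RTT for connection setup.
    """
    if not 0 <= plr < 1:
        raise ValueError("PLR must be in [0, 1)")
    effective_bits = size_bytes * 8 / (1 - plr)
    return rtt_s + effective_bits / throughput_bps


def accept_transfer(size_bytes, deadline_s, throughput_bps, plr, rtt_s):
    """SLA admission check: accept only if the estimate meets the deadline."""
    est = estimated_delivery_time(size_bytes, throughput_bps, plr, rtt_s)
    return est <= deadline_s
```

For example, a 1 MB file over an 8 Mbit/s path with no loss and 100 ms RTT is estimated at about 1.1 s, so it would be accepted against a 2 s deadline but rejected against a 1 s deadline.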
5

User-centric traffic engineering in software defined networks

Bakhshi, Taimur January 2017 (has links)
Software defined networking (SDN) is a relatively new paradigm that decouples individual network elements from the control logic, offering real-time network programmability by translating high-level policy abstractions into low-level device configurations. The framework comprises the data (forwarding) plane, which incorporates the network devices, while the control logic and network services reside in the control and application planes respectively. Operators can optimize the network fabric to yield performance gains for individual applications and services utilizing flow metering and application-awareness, the default traffic management method in SDN. Existing approaches to traffic optimization, however, do not explicitly consider user application trends. Recent SDN traffic engineering designs either offer improvements for typical time-critical applications or focus on devising monitoring solutions aimed at measuring performance metrics of the respective services. The performance caveats of isolated service differentiation on the end users may be substantial considering the growth in Internet and network applications on offer and the resulting diversity in user activities. Application-level flow metering schemes therefore fall short of fully exploiting the real-time network provisioning capability offered by SDN, instead relying on rather static traffic control primitives common in legacy networking. For individual users, SDN may lead to substantial improvements if the framework allows operators to allocate resources while accounting for a user-centric mix of applications. This thesis explores user traffic application trends in different network environments and proposes a novel user traffic profiling framework to aid the SDN control plane (controller) in accurately configuring network elements for a broad spectrum of users without impeding specific application requirements.
This thesis starts with a critical review of existing traffic engineering solutions in SDN and highlights recent and ongoing work in network optimization studies. The predominant existing application-policy-based controls in SDN do not consider the cost of isolated application gains on parallel SDN services, nor the resulting consequences for users with varying application usage. Therefore, attention is given to investigating techniques which may capture user behaviour for possible integration into SDN traffic controls. To this end, profiling of user application traffic trends is identified as a technique which may offer insight into the inherent diversity in user activities and which lends itself to incorporation into SDN-based traffic engineering. A series of user traffic profiling studies is carried out in this regard, employing network flow statistics collected from residential and enterprise network environments. Utilizing machine learning techniques, including the prominent unsupervised k-means cluster analysis, user-generated traffic flows are cluster-analysed and the derived profiles in each networking environment are benchmarked for stability before integration into SDN control solutions. In parallel, a novel flow-based traffic classifier is designed to yield high accuracy in identifying user application flows, and the traffic profiling mechanism is automated. The core functions of the novel user-centric traffic engineering solution are validated by the implementation of traffic-profiling-based SDN network control applications in residential, data center and campus-based SDN environments. A series of simulations highlighting varying traffic conditions and profile-based policy controls is designed and evaluated in each network setting, using traffic profiles derived from realistic environments, to demonstrate the effectiveness of the traffic management solution.
The overall network performance metrics per profile show substantive gains, proportional to operator-defined user profile prioritization policies, despite high traffic load conditions. The proposed user-centric SDN traffic engineering framework therefore dynamically provisions data plane resources among different user traffic classes (profiles), capturing user behaviour to define and implement network policy controls and going beyond isolated application management.
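The profiling step described above can be illustrated with a plain Lloyd's k-means over per-user flow feature vectors (here, assumed fractions of traffic per application class). This stdlib-only sketch stands in for the thesis's pipeline, which uses richer flow features, stability benchmarking, and a dedicated traffic classifier.

```python
import random


def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means. points: list of equal-length tuples.

    Returns (assignments, centroids), where assignments[i] is the
    cluster index of points[i].
    """
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return assign, centroids


# Two hypothetical user profiles: streaming-heavy vs. browsing-heavy,
# each vector = (fraction streaming traffic, fraction web traffic).
users = [(0.9, 0.1), (0.85, 0.15), (0.1, 0.9), (0.2, 0.8)]
```

An SDN controller could then map each derived profile to a policy, for instance prioritizing the streaming-heavy cluster during peak hours.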
6

A Longitudinal Evaluation of HTTP Traffic

Callahan, Thomas Richard 22 May 2012 (has links)
No description available.
7

A Network Measurement Tool for Handheld Devices

Tan, SiewYeen Agnes 04 June 2003 (has links)
This thesis describes a performance measurement tool that allows a user to measure network performance using a handheld device. The measurement tool consists of a client program that runs on a Microsoft Pocket PC device and a server program that runs on a regular Microsoft Windows computer. Both programs are Windows applications implemented in C/C++ using the Microsoft Embedded Visual Tool and Microsoft Visual Studio. The use of a Pocket PC device provides mobility to users, which can save time and energy when performing experiments. The thesis describes the design of the performance measurement application, implementation issues, and tests conducted using the tool. / Master of Science
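A generic sketch of the client/server round-trip measurement pattern such a tool uses, run here over a loopback UDP echo (the thesis's actual implementation is a C/C++ Pocket PC application, so this is an illustration of the idea, not a port):

```python
import socket
import threading
import time


def run_echo_server(sock):
    """Reflect each datagram back to its sender until b'quit' arrives."""
    while True:
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)
        if data == b"quit":
            break


def measure_rtt(server_addr, payload=b"ping"):
    """Send one datagram and time the echo; returns seconds."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        start = time.perf_counter()
        s.sendto(payload, server_addr)
        s.recvfrom(2048)
        return time.perf_counter() - start


# Example on the loopback interface:
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # OS picks a free port
addr = server.getsockname()
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()
rtt = measure_rtt(addr)
measure_rtt(addr, b"quit")  # stop the server
server.close()
```

On a real deployment the server would run on a fixed host and the handheld client would average several probes to smooth out scheduling jitter.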
8

On the Bleeding Edge: Debloating Internet Access Networks

Høiland-Jørgensen, Toke January 2016 (has links)
As ever more devices are connected to the internet, and applications turn ever more interactive, it becomes more important that the network can be counted on to respond reliably and without unnecessary delay. However, this is far from always the case today, as there can be many potential sources of unnecessary delay. In this thesis we focus on one of them: Excess queueing delay in network routers along the path, also known as bufferbloat. We focus on the home network, and treat the issue in three stages. We examine latency variation and queueing delay on the public internet and show that significant excess delay is often present. Then, we evaluate several modern AQM algorithms and packet schedulers in a residential setting, and show that modern AQMs can almost entirely eliminate bufferbloat and extra queueing latency for wired connections, but that they are not as effective for WiFi links. Finally, we go on to design and implement a solution for bufferbloat at the WiFi link, and also design a workable scheduler-based solution for realising airtime fairness in WiFi. Also included in this thesis is a description of Flent, a measurement tool used to perform most of the experiments in the other papers, and also used widely in the bufferbloat community. / HITS, 4707
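The bufferbloat symptom targeted above, queueing delay that stays high under load, is what AQMs in the CoDel family detect: they react only when packet sojourn time has remained above a small target for a full interval. A toy detector under that assumption (CoDel's actual control law, with its drop-rate schedule, is more subtle):

```python
def sustained_delay(samples, target=0.005, interval=0.100):
    """samples: (timestamp_s, sojourn_delay_s) pairs in time order.

    Returns True if sojourn delay stays above `target` continuously for
    at least `interval` seconds, i.e. the condition under which a
    CoDel-style AQM would begin dropping packets.
    """
    above_since = None
    for t, delay in samples:
        if delay > target:
            if above_since is None:
                above_since = t
            elif t - above_since >= interval:
                return True
        else:
            above_since = None  # delay dipped below target; reset
    return False
```

The default 5 ms target and 100 ms interval are the commonly cited CoDel parameters; brief delay spikes reset the timer and are deliberately tolerated.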
9

Robust and Scalable Sampling Algorithms for Network Measurement

Wang, Xiaoming 2009 August 1900 (has links)
Recent growth of the Internet in both scale and complexity has imposed a number of difficult challenges on existing measurement techniques and approaches, which are essential for both network management and many ongoing research projects. For any measurement algorithm, achieving both accuracy and scalability is very challenging given hard resource constraints (e.g., bandwidth, delay, physical memory, and CPU speed). My dissertation research tackles this problem by first proposing a novel mechanism called residual sampling, which intentionally introduces a predetermined amount of bias into the measurement process. We show that such biased sampling can be extremely scalable; moreover, we develop residual estimation algorithms that can unbiasedly recover the original information from the sampled data. Utilizing these results, we further develop two versions of the residual sampling mechanism: a continuous version for characterizing the user lifetime distribution in large-scale peer-to-peer networks and a discrete version for monitoring flow statistics (including per-flow counts and the flow size distribution) in high-speed Internet routers. For the former application in P2P networks, this work presents two methods: ResIDual-based Estimator (RIDE), which takes single-point snapshots of the system and assumes systems with stationary arrivals, and Uniform RIDE (U-RIDE), which takes multiple snapshots and adapts to systems with arbitrary (including non-stationary) arrival processes. For the latter application in traffic monitoring, we introduce Discrete RIDE (D-RIDE), which allows one to sample each flow with a geometric random variable. Our numerous simulations and experiments with P2P networks and real Internet traces confirm that these algorithms produce accurate estimates of the monitored metrics while simultaneously meeting hard resource constraints.
These results show that residual sampling indeed provides an ideal solution to balancing between accuracy and scalability.
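The flavor of geometric sampling with unbiased recovery can be illustrated with inverse-probability weighting: a flow of size s is captured only if a per-packet trial with probability p fires within its s packets, which happens with probability 1-(1-p)^s, and dividing the captured count by that probability recovers an estimate of the true count. This is a simplified stand-in for the D-RIDE estimators, not the dissertation's construction.

```python
import random


def capture_flows(flow_sizes, p, seed=0):
    """Sample each packet independently with probability p; a flow is
    captured if any of its packets is sampled."""
    rnd = random.Random(seed)
    captured = []
    for s in flow_sizes:
        if any(rnd.random() < p for _ in range(s)):
            captured.append(s)
    return captured


def estimate_flow_count(num_captured, s, p):
    """Unbiased estimate of the number of size-s flows: divide the
    captured count by the capture probability 1 - (1-p)^s."""
    return num_captured / (1 - (1 - p) ** s)


# 10,000 flows of 3 packets each, per-packet sampling probability 0.5:
# only ~87.5% are captured, yet the weighted estimate recovers ~10,000.
flows = [3] * 10_000
captured = capture_flows(flows, 0.5)
estimate = estimate_flow_count(len(captured), 3, 0.5)
```

The scalability win is that small flows, which dominate Internet traffic, are mostly skipped, while the reweighting keeps the aggregate statistics honest.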
10

On the Quality of Computer Network Measurements / Om kvaliteten på datornätverks mätningar

Arlos, Patrik January 2005 (has links)
Due to the complex diversity of contemporary Internet services, computer network measurements have gained considerable interest during recent years, since they supply network research, development and operations with data important for network traffic modelling, performance analysis, trend analysis, and more. The quality of these measurements affects the results of these activities and thus the perception of the network and its services. This thesis contains a systematic investigation of computer network measurements and a comprehensive overview of factors influencing the quality of performance parameters obtained from computer network measurements. This is done using a novel network performance framework consisting of four modules: Generation, Measurement, Analysis and Visualization. These modules cover all major aspects controlling the quality of computer network measurements and thus the validity of all kinds of conclusions based on them. One major source of error is the timestamp accuracy obtained from measurement hardware and software; therefore, a method is presented that estimates this accuracy. The method has been used to evaluate the timestamp accuracy of some commonly used hardware (Agilent J6800/J6830A and Endace DAG 3.5E) and software (Packet Capture Library). Furthermore, the influence of analysis on the quality of performance parameters is discussed, and an example demonstrates how the quality of a performance metric (bitrate) is affected by different measurement tools and analysis methods. The thesis also contains performance evaluations of traffic generators, of how accurately application-level measurements describe network behaviour, and of the quality of performance parameters obtained from PING and J-OWAMP. The major conclusion is that measurement systems and tools must be calibrated, verified and validated for the task of interest before using them for computer network measurements.
A guideline is presented on how to obtain performance parameters at a desired quality level. / Computer networks are used more and more in our daily lives: we use them to make phone calls, read newspapers, watch TV, shop, book trips, and so on. Because of this diversity of services, measurements have become popular in recent years, as they supply network research, development and operations with data used for traffic modelling, performance analysis and trend analysis. The quality of these measurements therefore directly affects the results of these activities, and hence our perception of the network and its services. In this thesis we give a systematic overview of computer network measurements and a comprehensive overview of the factors that affect the quality of performance parameters obtained through measurements. This is done using a new framework describing the four modules that influence measurement quality: generation, measurement, analysis and visualization. One of the major sources of quality problems is the accuracy of timestamps, which record when events occurred in the network. We therefore present a method that can estimate the timestamp accuracy obtainable from measurement tools, both hardware and software. The method is used to evaluate the accuracy of some common tools: two hardware-based systems (Agilent J6800/J6830A and Endace DAG 3.5E) and a software-based system (Packet Capture Library). Furthermore, we discuss the influence of analysis on quality, with an example of how a performance metric (bitrate) is affected by the measurement system (hardware/software) and the analysis method. The thesis also contains evaluations of traffic generators, application-level measurements, and the quality of measurements made with PING and J-OWAMP. The main conclusion of this work is that measurement systems and tools must be calibrated, verified and validated before they are used. Based on this, we present a guideline on how to do so.
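The timestamp-accuracy idea can be illustrated by feeding a measurement system a packet train with a known, fixed inter-departure interval and examining the error in the recorded inter-arrival times; the spread of that error bounds the usable accuracy. A stdlib-only sketch under these simplifying assumptions (the thesis's method accounts for much more, such as clock resolution and hardware behavior):

```python
import statistics


def interarrival_error(timestamps, true_interval):
    """timestamps: recorded arrival times of a packet train sent with a
    known, fixed inter-departure interval (seconds).

    Returns (mean_error, stdev_error): systematic offset and jitter of
    the recorded inter-arrival times relative to the true interval.
    """
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    errors = [d - true_interval for d in deltas]
    return statistics.mean(errors), statistics.stdev(errors)


# Train sent every 1 ms, recorded with small timestamping jitter
recorded = [0.0, 0.00100, 0.00201, 0.00299, 0.00400]
mean_err, std_err = interarrival_error(recorded, 0.001)
```

A large mean error suggests systematic bias (e.g., drift between sender and capture clocks), while a large standard deviation indicates jitter from the timestamping mechanism itself; metrics derived from such timestamps, like bitrate over short windows, inherit both.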
