About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Predicting Network Performance for Internet Activities Using a Web Browser

Zeljkovic, Mihajlo 26 April 2012 (has links)
Internet measurements have previously been done mostly from research labs and universities. The number of home users is growing rapidly, and we need a good way to measure their network performance. This thesis focuses on building a web application that allows users to check how well their network supports the online activities they are interested in. The application minimizes barriers to use by requiring only a Web browser. The list of online activities users can choose from includes browsing website categories such as news or social networks, holding voice and video conferences, playing online games, and other activities.
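As an illustration of the kind of check such an application might perform, here is a minimal sketch (in Python, not from the thesis) that compares measured latency and throughput against rough per-activity requirements; the activity names and threshold values are assumptions for demonstration only.

```python
# Illustrative sketch: compare measured network performance against rough
# per-activity requirements. Thresholds are invented for demonstration,
# not values from the thesis.

ACTIVITY_REQUIREMENTS = {
    # activity: (max round-trip time in ms, min downstream throughput in Mbit/s)
    "news_browsing":    (500, 1.0),
    "social_networks":  (400, 2.0),
    "voice_conference": (150, 0.1),
    "video_conference": (150, 2.0),
    "online_gaming":    (100, 0.5),
}

def rate_activities(measured_rtt_ms: float, measured_mbps: float) -> dict:
    """Return a per-activity verdict based on the measured values."""
    verdicts = {}
    for activity, (max_rtt, min_mbps) in ACTIVITY_REQUIREMENTS.items():
        ok = measured_rtt_ms <= max_rtt and measured_mbps >= min_mbps
        verdicts[activity] = "suitable" if ok else "likely problematic"
    return verdicts

if __name__ == "__main__":
    # Example: 80 ms RTT and 5 Mbit/s downstream.
    for activity, verdict in rate_activities(80, 5.0).items():
        print(f"{activity}: {verdict}")
```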
2

Network-Based Monitoring of Quality of Experience

Junaid, Junaid January 2015 (has links)
Recent years have seen a tremendous shift from technology-centric assessment to user-centric assessment of network services. Consequently, measurement and modelling of Quality of Experience (QoE) have attracted many contributions from researchers and practitioners. Generally, QoE is assessed via active and passive measurements. While the former usually allows QoE assessment on test traffic, the latter opens avenues for continuous QoE assessment on the real traffic generated by users. This thesis contributes towards passive assessment of QoE. The document begins with a background on the fundamentals of network management and objective QoE assessment. It extends the discussion to QoE-centric monitoring and management of networks, complemented by details of the QoE estimator agent developed within the Celtic project QuEEN (Quality of Experience Estimators in Network). The discussion of findings starts with results from subjective tests conducted to understand the relationship between waiting times and user subjective feedback over time. These results strengthen the understanding of the timescales on which users react, as well as the effect of user memory on QoE. The findings show that QoE drops significantly when the user faces recurring waiting times of 0.5 s to 4 s in the case of video streaming and web browsing services. With recurring network disturbances within every 8 s – 16 s interval, the user's tolerance to waiting times decreases steadily, a sign of user memory of recent disturbances. Subsequently, the document introduces and evaluates a passive wavelet-based QoE monitoring method. The method detects the timescales on which transient outages occur frequently. A study presents results from qualitative measurements, showing the ability of the wavelet analysis to differentiate on the fly between “Good” and “Bad” traffic streams. A follow-up quantitative study systematically evaluates wavelet-based metrics. The subjective evaluation and wavelet analysis of 5–6 minute video streaming sessions on mobile networks then show that wavelet-based metrics are indeed useful for passive monitoring of QoE issues. Finally, this thesis investigates a method for passive monitoring of user reactions to degrading network performance, based on TCP termination flags. With a systematic evaluation in a test environment, the results characterise the termination of data transfers under different user actions in the web browser.
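To illustrate the wavelet-based monitoring idea described above, the following minimal sketch (not the QuEEN implementation) computes Haar detail-coefficient energy per dyadic timescale for a throughput time series; high energy at a scale suggests frequent fluctuations, such as transient outages, on that timescale. The sample data and function names are illustrative assumptions.

```python
import numpy as np

def haar_energy_per_scale(series: np.ndarray, max_level: int) -> list[float]:
    """Energy of Haar detail coefficients per dyadic timescale.

    High energy at a given level suggests frequent fluctuations
    (e.g. transient outages) on that timescale.
    """
    x = series.astype(float)
    energies = []
    for _ in range(max_level):
        if len(x) < 2:
            break
        x = x[: len(x) // 2 * 2]                 # truncate to even length for pairing
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        energies.append(float(np.mean(detail ** 2)))
        x = approx                               # recurse on the approximation
    return energies

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 1-second throughput samples with one transient outage.
    throughput = rng.normal(10.0, 0.5, 512)
    throughput[100:104] = 0.0                    # a 4 s outage
    for level, energy in enumerate(haar_energy_per_scale(throughput, 6), start=1):
        print(f"timescale ~{2 ** level} s: detail energy {energy:.3f}")
```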
3

Head into the Cloud: An Analysis of the Emerging Cloud Infrastructure

Chandrasekaran, Balakrishnan January 2016 (has links)
We are witnessing a paradigm shift in computing: people are increasingly using Web-based software for tasks that only a few years ago were carried out using software running locally on their computers. The increasing use of mobile devices, which typically have limited processing power, is catalyzing the idea of offloading computations to the cloud. It is within this context of cloud computing that this thesis attempts to address a few key questions: (a) With more computations moving to the cloud, what is the state of the Internet's core? In particular, do routing changes and consistent congestion in the Internet's core affect end users' experiences? (b) With software-defined networking (SDN) principles increasingly being used to manage cloud infrastructures, are the software solutions robust (i.e., resilient to bugs)? With service outage costs being prohibitively expensive, how can we support network operators in experimenting with novel ideas without crashing their SDN ecosystems? (c) How can we build a large-scale passive IP geolocation system to geolocate the entire IP address space at once so that cloud-based software can utilize the geolocation database in enhancing the end-user experience? (d) Why is the Internet so slow? Since a low-latency network allows more offloading of computations to the cloud, how can we reduce the latency in the Internet? / Dissertation
4

Sharing network measurements on peer-to-peer networks

Fan, Bo, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2007 (has links)
With the extremely rapid development of the Internet in recent years, emerging peer-to-peer network overlays are meeting the requirements of a more sophisticated communications environment, providing a useful substrate for applications such as scalable file sharing, data storage, large-scale multicast, web caching, and publish-subscribe services. Due to their design flexibility, peer-to-peer networks can offer features including self-organization, fault-tolerance, scalability, load-balancing, locality and anonymity. As the Internet grows, there is an urgent need to understand real-time network performance degradation. The measurement tools currently used are ping, traceroute and variations of these; SNMP (Simple Network Management Protocol) is also used by network administrators to monitor local networks. However, ping and traceroute can only be used temporarily, SNMP can only be deployed at certain points in networks, and these tools are incapable of sharing network measurements among end-users. Due to the distributed nature of network performance data, peer-to-peer overlay networks present an attractive platform for distributing this information among Internet users. This thesis investigates the desirable locality property of peer-to-peer overlays in order to create an application for sharing Internet performance measurements. When measurement data are distributed amongst users, they need to be localized in the network, allowing users to retrieve them when external Internet links fail. Thus, network locality and robustness are the most desirable properties. Although some unstructured overlays also integrate locality in their design, they fail to reach rarely located data items. Consequently, structured overlays are chosen because they can locate a rare data item deterministically and can perform well during network failures. Among structured peer-to-peer overlays, Tapestry, Pastry and Chord with proximity neighbour selection were studied due to their explicit notion of locality. To differentiate the level of locality and resiliency in these protocols, P2Psim simulations were performed. The results show that Tapestry is the most suitable peer-to-peer substrate on which to build such an application, due to its superior data-localization performance. Furthermore, due to the routing similarity between Tapestry and Pastry, an implementation that shares network measurement information was developed on FreePastry, verifying the application's feasibility. This project also contributes an extension of P2Psim to integrate with GT-ITM and link failures.
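The proximity neighbour selection that distinguishes these overlays can be illustrated with a small sketch (not the thesis's P2Psim models): at each hop, among routing-table candidates whose IDs extend the prefix shared with the destination by one more digit, the nearest candidate by measured latency is chosen. The node IDs and latencies below are invented.

```python
# Illustrative Pastry/Tapestry-style next-hop choice: among candidates that
# extend the shared prefix with the destination by one more digit, pick the
# one with the lowest measured latency (proximity neighbour selection).

def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    for ca, cb in zip(a, b):
        if ca != cb:
            break
        n += 1
    return n

def next_hop(current: str, dest: str, candidates: dict[str, float]) -> str | None:
    """candidates maps node ID -> measured latency (ms) from the current node."""
    need = shared_prefix_len(current, dest) + 1
    eligible = {nid: lat for nid, lat in candidates.items()
                if shared_prefix_len(nid, dest) >= need}
    if not eligible:
        return None  # a real overlay would fall back to leaf-set / surrogate routing
    return min(eligible, key=eligible.get)

if __name__ == "__main__":
    routing_candidates = {"3af2": 40.0, "3a91": 12.0, "3b77": 5.0}
    # Prints "3a91": the nearest candidate that extends the shared prefix "3a".
    print(next_hop(current="31c0", dest="3a95", candidates=routing_candidates))
```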
5

Correct, Efficient, and Realistic Wireless Network Simulations

Subbareddy, Dheeraj Reddy 10 January 2007 (has links)
Simulating wireless networks accurately is a non-trivial task because of the large parameter space that affects the performance of such networks. Increasing the amount of detail in the simulation model increases these requirements many times over. Hence there is a need to develop suitable abstractions that maintain the accuracy of the simulation while keeping the computational resource requirements low. The topic of wireless network simulation models is explored in this research, concentrating on the medium access control and physical layers. In recent years, a large amount of research has focused on various kinds of wireless networks to fit various application domains; Mobile Ad-Hoc Networks (MANETs), Wireless Local Area Networks (WLANs), and sensor networks are a few examples. The IEEE 802.11 Physical Layer (PHY) and Medium Access Control (MAC) layer are the most popular wireless technologies in practice. Consequently, most implementations use the IEEE 802.11 specifications as the basis for higher-layer protocol design and analyses. In this dissertation, we explore the correctness, efficiency, and realism of wireless network simulations. We concentrate on 802.11-based wireless network simulations, although the methods and results can also be used for various other wireless network simulations. While many simulators model IEEE 802.11 wireless networks, almost all of them tend to make some abstractions to lessen the computational burden and to obtain reasonable results. A comparative study of three wireless simulators is made with respect to the correctness of their ideal behavior as well as their behavior under a high degree of load. Further, the physical-layer abstraction in wireless network simulations tends to be very simplistic because of the huge computational requirements needed to accurately model the various propagation, fading, and shadowing effects. When mobility is taken into account, several other issues, such as the Doppler effect, should also be accounted for. This dissertation explores an empirical way to model the physical layer which cumulatively accounts for all these effects. From a network protocol designer's perspective, it is the cumulative effect of all these parameters that is of interest. Our major contribution has been the investigation of novel empirical models of the wireless physical layer, which account for node mobility and other effects in an outdoor environment. These models are relatively more realistic and efficient when implemented in a simulation environment. Our simulation experiments validate the models and provide simulation results which closely match our outdoor experiments. Another significant contribution is in the understanding and design of wireless network simulation models.
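Below is a minimal sketch of what such an empirical physical-layer model might look like in a simulator, assuming a lookup table of measured packet error rates indexed by distance and node speed; the table values are invented placeholders, not the dissertation's measurements.

```python
import bisect
import random

# Hypothetical measured packet error rates (PER), indexed by
# (distance bucket in metres, speed bucket in m/s). Placeholder values only.
DISTANCE_BUCKETS = [25, 50, 100, 200]
SPEED_BUCKETS = [0, 5, 15]
PER_TABLE = {
    (25, 0): 0.01, (25, 5): 0.02, (25, 15): 0.05,
    (50, 0): 0.03, (50, 5): 0.06, (50, 15): 0.12,
    (100, 0): 0.10, (100, 5): 0.18, (100, 15): 0.30,
    (200, 0): 0.35, (200, 5): 0.50, (200, 15): 0.70,
}

def lookup_per(distance_m: float, speed_mps: float) -> float:
    """Pick the nearest enclosing bucket; saturate at the largest bucket."""
    d_idx = min(bisect.bisect_left(DISTANCE_BUCKETS, distance_m), len(DISTANCE_BUCKETS) - 1)
    s_idx = min(bisect.bisect_left(SPEED_BUCKETS, speed_mps), len(SPEED_BUCKETS) - 1)
    return PER_TABLE[(DISTANCE_BUCKETS[d_idx], SPEED_BUCKETS[s_idx])]

def packet_received(distance_m: float, speed_mps: float, rng=random) -> bool:
    """Bernoulli trial against the empirical PER, as a simulator might do."""
    return rng.random() >= lookup_per(distance_m, speed_mps)

if __name__ == "__main__":
    random.seed(1)
    received = sum(packet_received(80, 10) for _ in range(1000))
    print(f"delivered {received}/1000 packets at 80 m, 10 m/s")
```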
6

On the Quality of Computer Network Measurements

Arlos, Patrik January 2005 (has links)
Due to the complex diversity of contemporary Internet services, computer network measurements have gained considerable interest during recent years, since they supply network research, development and operations with data important for network traffic modelling, performance and trend analysis, etc. The quality of these measurements affects the results of these activities and thus the perception of the network and its services. This thesis contains a systematic investigation of computer network measurements and a comprehensive overview of factors influencing the quality of performance parameters obtained from computer network measurements. This is done using a novel network performance framework consisting of four modules: Generation, Measurement, Analysis and Visualization. These modules cover all major aspects controlling the quality of computer network measurements and thus the validity of all kinds of conclusions based on them. One major source of error is the timestamp accuracy obtained from measurement hardware and software. Therefore, a method is presented that estimates the timestamp accuracy obtained from measurement hardware and software. The method has been used to evaluate the timestamp accuracy of some commonly used hardware (Agilent J6800/J6830A and Endace DAG 3.5E) and software (Packet Capture Library). Furthermore, the influence of analysis on the quality of performance parameters is discussed. An example demonstrates how the quality of a performance metric (bitrate) is affected by different measurement tools and analysis methods. The thesis also contains performance evaluations of traffic generators, of how accurately application-level measurements describe network behaviour, and of the quality of performance parameters obtained from PING and J-OWAMP. The major conclusion is that measurement systems and tools must be calibrated, verified and validated for the task of interest before using them for computer network measurements. A guideline is presented on how to obtain performance parameters at a desired quality level. / Computer networks are used more and more in our daily lives: we use them to make phone calls, read newspapers, watch TV, shop, and book trips. Because of this diversity of services, measurements have become popular in recent years, as they provide network research, development and operations with data used for traffic modelling, performance and trend analysis. The quality of these measurements therefore directly affects the results of these activities and thus our perception of the network and its services. In this thesis we give a systematic overview of computer network measurements and a comprehensive overview of the factors that influence the quality of performance parameters obtained through measurements. This is done via a new framework describing the four modules that affect measurement quality: generation, measurement, analysis and visualization. One of the major sources of quality problems is the accuracy of timestamps, which record when events occurred in the network. We therefore present a method that can estimate the timestamp accuracy obtainable from measurement tools, both hardware and software. The method is used to evaluate the accuracy of some common tools: two hardware-based systems (Agilent J6800/J6830A and Endace DAG 3.5E) and a software-based system (Packet Capture Library).
Furthermore, the influence of analysis on quality is discussed, and an example is given of how a performance metric (bitrate) is affected by the measurement system (hardware/software) and analysis method. The thesis also contains evaluations of traffic generators, application-level measurements, and the quality of measurements made with PING and J-OWAMP. The main conclusion of the work is that measurement systems and tools must be calibrated, verified and validated before they are used. Based on this, a guideline is presented on how to do so.
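The timestamp-accuracy idea summarized above can be illustrated in miniature: feed the measurement system traffic with a known, constant inter-packet gap and examine how far the recorded inter-arrival times deviate from it. A minimal sketch under that assumption (synthetic data and illustrative names, not the thesis's actual method):

```python
import statistics

def timestamp_error_stats(timestamps_s: list[float], nominal_gap_s: float) -> dict:
    """Deviation of recorded inter-arrival times from a known constant gap.

    Large or strongly quantized deviations point at limited timestamp
    accuracy in the capturing hardware or software.
    """
    gaps = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    errors = [g - nominal_gap_s for g in gaps]
    return {
        "mean_error_us": statistics.mean(errors) * 1e6,
        "stdev_error_us": statistics.stdev(errors) * 1e6,
        "max_abs_error_us": max(abs(e) for e in errors) * 1e6,
    }

if __name__ == "__main__":
    import random
    random.seed(0)
    nominal = 0.001                                    # packets generated every 1 ms
    t, recorded = 0.0, []
    for _ in range(1000):
        t += nominal
        recorded.append(t + random.gauss(0, 20e-6))    # 20 us timestamping jitter
    print(timestamp_error_stats(recorded, nominal))
```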
7

Analysis of Passive End-to-End Network Performance Measurements

Simpson, Charles Robert, Jr. 02 January 2007 (has links)
NETI@home, a distributed network measurement infrastructure that collects passive end-to-end network measurements from Internet end-hosts, was developed and discussed. The data collected by this infrastructure, as well as other datasets, were used to conduct studies on the behavior of the network and of network users, as well as on the security issues affecting the Internet. A flow-based comparison of honeynet traffic, representing malicious traffic, and NETI@home traffic, representing typical end-user traffic, was conducted. This comparison showed that a large portion of flows in both datasets were failed and potentially malicious connection attempts. We additionally found that worm activity can linger for more than a year after the initial release date. Malicious traffic was also found to originate from across the allocated IP address space. Other security-related observations include the suspicious use of ICMP packets and attacks on our own NETI@home server. Utilizing observed TTL values, studies were also conducted into the distance of Internet routes and the frequency with which they vary. The frequency and use of network address translation and of the private IP address space were also discussed. Various protocol options and flags were analyzed to determine their adoption and use by the Internet community. Network-independent empirical models of end-user network traffic were derived for use in simulation. Two such models were created: the first models traffic for a specific TCP or UDP port, and the second models all TCP or UDP traffic for an end-user. These models were implemented and used in GTNetS. Further anonymization of the dataset and the public release of the anonymized data and their associated analysis tools were also discussed.
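The TTL-based route-distance analysis mentioned above rests on a simple inference: subtract the observed TTL from the nearest common initial TTL. A minimal sketch of that idea (the helper name and assumed defaults are illustrative):

```python
# Estimate how many hops a packet traversed from its observed TTL,
# assuming the sender started from one of the common initial TTLs.
COMMON_INITIAL_TTLS = (64, 128, 255)

def estimated_hops(observed_ttl: int) -> int | None:
    """Distance = inferred initial TTL minus observed TTL."""
    for initial in COMMON_INITIAL_TTLS:
        if observed_ttl <= initial:
            return initial - observed_ttl
    return None  # a TTL above 255 is not valid

if __name__ == "__main__":
    for ttl in (52, 113, 240):
        print(f"observed TTL {ttl}: ~{estimated_hops(ttl)} hops")
```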
8

Performance of 3G data services over Mobile Networks in Sweden

Kommalapati, Ravichandra January 2010 (has links)
The emerging technologies in the field of telecommunications enable us to access high-speed data services through mobile handsets and portable modems over mobile networks. Recent statistics also show that the use of mobile broadband services is increasing and gaining popularity. In this thesis we have investigated the impact of payload size and data rate on one-way delay and packet loss in operational 3G mobile networks, through network-level measurements. To collect the network-level traces, an experimental testbed was developed. For accurate measurements, Endace DAG cards together with GPS synchronization were used. Results were gathered from three different commercial mobile operators in Sweden. From the results it is concluded that the combination of maximum payload size and data rate resulted in the minimum one-way delay. It is also observed that, within the large payload-size category, the percentage of packet loss is lower compared to the smaller payload sizes. Such findings will help application developers meet the challenges of UMTS network conditions.
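With GPS-synchronized capture at both ends, one-way delay and loss follow from matching each packet in the sender-side trace against the receiver-side trace. A minimal sketch of that matching step, assuming packets carry a unique sequence number (synthetic trace format, not the DAG processing pipeline used in the thesis):

```python
def one_way_delay_stats(sent: dict[int, float], received: dict[int, float]) -> dict:
    """sent/received map packet sequence number -> GPS-synchronized timestamp (s)."""
    delays = [received[seq] - sent[seq] for seq in sent if seq in received]
    lost = len(sent) - len(delays)
    return {
        "packets_sent": len(sent),
        "packet_loss_pct": 100.0 * lost / len(sent),
        "min_owd_ms": min(delays) * 1e3,
        "mean_owd_ms": sum(delays) / len(delays) * 1e3,
        "max_owd_ms": max(delays) * 1e3,
    }

if __name__ == "__main__":
    # Tiny synthetic example: packet 2 is lost, the rest arrive ~45 ms later.
    sent = {0: 0.000, 1: 0.020, 2: 0.040, 3: 0.060}
    received = {0: 0.045, 1: 0.066, 3: 0.104}
    print(one_way_delay_stats(sent, received))
```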
9

Mapping of User Quality-of-Experience to Application Perceived Performance for Web Application.

Shinwary, Ashfaq Ahmad January 2010 (has links)
Web browsing holds a major share of activities on the Internet. Heavy usage of web browsing makes Web Quality of Experience (QoE) one of the critical factors in deciding the overall success of network services. Amongst other factors, Web QoE can be affected by network delays that result in higher application download times. In this thesis work, an effort has been made to map application-level download times to Quality of Experience. A subjective analysis of user perception in the domain of web browsing has been carried out. For this purpose, a testbed was developed at Blekinge Institute of Technology on which different users were tested. Specific sequences of delays were introduced into the network, which resulted in the desired application download times. Regression analysis was performed and a mapping between user QoE and application download times was carried out. Based on the results, conclusions were drawn, which are presented in this thesis report.
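The regression step described above can be illustrated under a common assumption in the QoE literature, that ratings decrease roughly logarithmically with waiting time; this is not necessarily the exact model fitted in the thesis, and the data below are hypothetical. A minimal sketch fitting MOS = a + b·ln(t) to subjective-test data:

```python
import numpy as np

def fit_log_model(download_times_s, mos_ratings):
    """Fit MOS = a + b*ln(t) by least squares and return (a, b)."""
    t = np.log(np.asarray(download_times_s, dtype=float))
    y = np.asarray(mos_ratings, dtype=float)
    b, a = np.polyfit(t, y, 1)   # polyfit returns slope first, then intercept
    return a, b

if __name__ == "__main__":
    # Hypothetical subjective-test data: download time (s) vs mean opinion score (1-5).
    times = [0.5, 1, 2, 4, 8, 16]
    mos = [4.6, 4.3, 3.8, 3.2, 2.5, 1.9]
    a, b = fit_log_model(times, mos)
    print(f"MOS ~= {a:.2f} {b:+.2f} * ln(t)")
    print("predicted MOS at 3 s:", round(a + b * np.log(3), 2))
```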
