  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Effect of Network OFF Times on Web Browsing QoE

Rajasekaran, Arunkumar, Cherry, Velanginichakravarthy January 2013 (has links)
Web users expect a high Quality of Service (QoS) from their Internet Service Provider (ISP) in order to obtain the best Quality of Experience (QoE). User satisfaction and feedback are among the most important inputs service providers use to assess their QoS and improve network performance. Service providers are increasingly interested in QoE because delivering a better service helps retain customers in a competitive market. Little work has been conducted on QoE for web browsing, and only a few studies collect user feedback, so ISPs find it difficult to assess user experience in a live network. ISPs can measure network-level performance for QoS and collect user feedback for QoE, but no prior study relates the two; linking network-level performance to user perception is a difficult task for service providers. In this study we correlate network-level traffic performance with user experience. In our experiment, user QoE is tested by applying various OFF times to specific packets. Our main aim is to evaluate network-level performance and correlate it with user feedback. Network traffic is then analysed for different sessions with OFF times applied to the DNS response, the base file response and the object responses. In the results we correlate the different OFF-time sessions with the users' Mean Opinion Score (MOS) feedback. We also discuss how the network OFF time relates to the number of requests sent from client to server and to the number of TCP flags (SYN-ACK, FIN-ACK and RST) exchanged between client and server, and how users suffer under varying long response times.
Finally, we conclude from our results which factor most strongly affects user feedback and the user's interest in using the service again.
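The correlation between applied OFF times and MOS feedback can be illustrated with a short sketch. The OFF-time and MOS values below are invented for illustration, not measurements from this experiment:

```python
# Invented per-session values: applied OFF time (s) and user MOS (1-5 scale)
off_times = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]
mos = [4.6, 4.3, 3.9, 3.2, 2.4, 1.6]

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"Pearson r between OFF time and MOS: {pearson(off_times, mos):.3f}")
```

A strongly negative coefficient, as in this invented data, would indicate that longer OFF times degrade perceived quality.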
92

Evaluation of Smartphone Network Performance

Tekelmariam, Hailay, Chane, Mekides January 2012 (has links)
Nowadays most desktop-based software, operating system (OS) and application features are being adapted to smartphones (SMPs). The simplicity and mobility of SMPs make them attractive platforms for running network applications. To develop mobile applications that are functionally efficient and competitive in the market, application developers need knowledge of the network performance of SMPs. In this thesis, an experiment-based methodology is provided to evaluate the effect of transmission patterns on One-Way Delay (OWD), throughput and packet loss across a designed setup for different SMPs. Based on these metrics, the SMPs are compared with each other under the same experimental settings using relatively high-accuracy measurement techniques. For accurate measurement, a DAG 3.6E card together with GPS synchronisation is used to capture traffic for further analysis. To avoid non-deterministic inputs, the experiment is run in a controlled and continuously monitored wireless local area network. Moreover, the nodes and sub-networks constituting the entire network are evaluated to understand their effects on the estimated output. From this work, application developers can design their applications according to the network performance of SMPs, and users are better able to select suitable applications and SMPs.
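The three metrics can be derived from timestamped captures along the following lines. This is a minimal sketch with invented packet records; it assumes sender and receiver clocks are already GPS-synchronised, as in the DAG-based setup:

```python
# Minimal sketch: OWD, loss and throughput from toy packet records.
# Timestamps are in seconds on synchronised clocks (assumed); seq 3 is lost.
sent = {1: 0.000, 2: 0.010, 3: 0.020, 4: 0.030}        # seq -> send timestamp
received = {1: 0.012, 2: 0.023, 4: 0.041}              # seq -> receive timestamp
PKT_BYTES = 1000                                       # assumed packet size

owds = [received[s] - sent[s] for s in received]       # one-way delays
loss = 1 - len(received) / len(sent)                   # packet loss ratio
duration = max(received.values()) - min(sent.values())
throughput = len(received) * PKT_BYTES * 8 / duration  # bits per second

print(f"mean OWD {sum(owds)/len(owds)*1000:.1f} ms, loss {loss:.0%}")
```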
93

The mining and visualisation of application services data

Knoetze, Ronald Morgan January 2005 (has links)
Many network monitoring tools do not provide sufficiently in-depth and useful reports on network usage, particularly in the domain of application services data. The optimisation of network performance is only possible if the networks are monitored effectively. Techniques that identify patterns of network usage can assist in the successful monitoring of network performance. The main goal of this research was to propose a model to mine and visualise application services data in order to support effective network management. To demonstrate the effectiveness of the model, a prototype, called NetPatterns, was developed using data for the Integrated Tertiary Software (ITS) application service collected by a network monitoring tool on the NMMU South Campus network. Three data mining algorithms for application services data were identified for the proposed model. The data mining algorithms used are classification (decision tree), clustering (K-Means) and association (correlation). Classifying application services data serves to categorise combinations of network attributes to highlight areas of poor network performance. The clustering of network attributes serves to indicate sparse and dense regions within the application services data. Association indicates the existence of any interesting relationships between different network attributes. Three visualisation techniques were selected to visualise the results of the data mining algorithms. The visualisation techniques selected were the organisation chart, bubble chart and scatterplots. Colour and a variety of other visual cues are used to complement the selected visualisation techniques. The effectiveness and usefulness of NetPatterns was determined by means of user testing. The results of the evaluation clearly show that the participants were highly satisfied with the visualisation of network usage presented by NetPatterns. 
All participants successfully completed the prescribed tasks and indicated that NetPatterns is a useful tool for the analysis of network usage patterns.
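The clustering step of such a model can be sketched as follows. This is a generic two-cluster K-Means over invented (response time, bytes transferred) records, not the actual NetPatterns implementation:

```python
# Hypothetical application-service records: (response time ms, bytes transferred)
points = [(12, 300), (15, 280), (14, 320), (210, 5000), (190, 5200), (205, 4800)]

def kmeans(points, iters=20):
    """Two-cluster K-Means with deterministic initialisation."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    # initialise with the first point and the point farthest from it
    centroids = [points[0], max(points, key=lambda p: d2(p, points[0]))]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            i = 0 if d2(p, centroids[0]) <= d2(p, centroids[1]) else 1
            clusters[i].append(p)
        # recompute each centroid as the mean of its cluster
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(points)
print("dense/sparse region centres:", centroids)
```

The two centroids separate the dense low-latency region from the sparse high-volume one, which is the kind of structure the clustering component of the model is meant to surface.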
94

Convergence of broadcast and mobile broadband: modeling and performance evaluation of an LTE/DVB-T2 hybrid network

Cornillet, Nicolas 18 December 2013 (has links)
During the last few years, the growing popularity of smarter and smarter mobile devices has led to a tremendous growth in cellular data traffic. In this context, the deployment of fourth-generation networks based on the LTE (Long Term Evolution) standard, with capacities significantly higher than previous-generation networks, can be seen as an ideal solution. However, when the number of users requiring a given service is large, this standard, despite the availability of the eMBMS (Evolved Multimedia Broadcast Multicast Services) technology, is not necessarily the most suitable. Meanwhile, television has completed its transition to digital transmission in many countries. The analog switch-off has not only allowed a better quality of service but has also freed some spectrum. In France, part of this spectrum has already been allocated to the deployment of LTE networks. This thesis introduces another way to use this spectrum for the benefit of mobile data networks: the hybrid network. The hybrid network consists of an LTE cellular network and a DVB-T2 (Digital Video Broadcasting – Second Generation Terrestrial) transmitter. The coverage areas of the two components overlap, and a service can be delivered to users located in these areas by either one of the components. This concept compensates for a weakness of the LTE standard: the ability to deliver the same service efficiently to a large number of users simultaneously. After a thorough study of the two standards, a mathematical model of the hybrid network is proposed.
The model is based on the geometric properties of the hybrid network, the performance of the two types of signal, and several types of user distribution, and it evaluates the hybrid network against different criteria. The first criterion is energy efficiency. The proposed model compares the two components in terms of the energy needed to deliver a service as a function of its number of users. The DVB-T2 component outperforms the LTE component when the number of users exceeds a threshold whose value depends on the geometric properties of the network and on the path-loss model attenuating the signals. In some circumstances, the energy efficiency of the system can be improved further by using both components together. The second criterion is cellular network congestion. A service consumed by a great number of users can induce significant data traffic on the LTE network; transmitting such a service through the DVB-T2 component reduces the cellular load, even when the DVB-T2 component does not cover the whole area to be served. These studies bring out the benefits and drawbacks of broadcast and unicast networks; in particular, they demonstrate the interest of a hybrid approach exploiting the complementarity of the broadcast and unicast components.
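The energy-efficiency crossover can be illustrated with a toy model in which the unicast cost grows with the audience while the broadcast cost is flat. The constants below are arbitrary placeholders, not values from the thesis:

```python
# Toy energy model with assumed constants (not values from the thesis):
# unicast (LTE) cost scales with audience size, broadcast (DVB-T2) cost is flat.
E_LTE_PER_USER = 1.5   # energy per user served over LTE (arbitrary units)
E_DVB_FIXED = 60.0     # fixed energy of the DVB-T2 transmission (arbitrary units)

def best_component(n_users):
    """Return the more energy-efficient component for a given audience size."""
    return "DVB-T2" if E_DVB_FIXED < E_LTE_PER_USER * n_users else "LTE"

threshold = E_DVB_FIXED / E_LTE_PER_USER  # audience size where DVB-T2 wins
print(f"crossover at {threshold:.0f} users:",
      best_component(10), "for 10 users,", best_component(100), "for 100 users")
```

In the thesis the threshold additionally depends on the network geometry and the path-loss model; this sketch only shows the basic fixed-cost versus per-user-cost trade-off.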
95

Enhancing Mobility Support in Cellular Networks With Device-Side Intelligence

Haotian Deng (9451796) 16 December 2020 (has links)
The Internet has gone mobile, with billions of users accessing it through their smartphones. Cellular networks play an essential role in providing "anytime, anywhere" network access as the only large-scale wireless network infrastructure in operation. Mobility support is the salient feature that ensures seamless Internet connectivity to mobile devices wherever they go. Cellular network operators deploy a huge number of cell towers, each with limited radio coverage; when a device moves out of the radio coverage of its serving cell(s), mobility support hands the device over to another cell, thereby ensuring uninterrupted network access.

Despite its broad success, we uncover that state-of-the-practice mobility support in operational cellular networks suffers from a variety of issues that cause unnecessary performance degradation on mobile devices. In this thesis, we dive into these issues and explore possible solutions requiring no or small changes to the existing network infrastructure.

We take a new perspective to study and enhance mobility support: we directly examine, troubleshoot and enhance the underlying handoff procedure, instead of the higher-layer (application/transport) exploration and optimization pursued in existing studies. Rather than clean-slate network-side solutions, we focus on device-side solutions that are compatible with 3GPP standards and the operational network infrastructure, promising immediate benefits without requiring any changes on the network side.

In particular, we address three technical questions by leveraging the power of the devices. First, how is mobility support performed in reality? We leverage device-side observation to monitor the handoff procedures between the network and the device. We unveil that operator-specific configurations and policies play a decisive role under the standard mechanism, and conduct a large-scale measurement study to characterize the extremely complex and diverse handoff configurations used by operators around the world. Second, what is wrong with existing mobility support? We conduct model-based reasoning and empirical study to examine network performance issues (e.g., handoff instability, unreachability and missed performance) caused by improper handoffs. Finally, how can mobility support be enhanced? We turn passive devices into proactive ones: as a showcase, we exploit device-side inputs to intervene in the default handoff procedure and thus indirectly influence the cell-selection decision, improving data speed for mobile devices. All the results in this thesis have been validated or evaluated in real networks (over top-tier US carriers such as AT&T, Verizon and T-Mobile, and some in carrier networks worldwide).
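A handoff trigger of the kind these operator configurations parameterise can be sketched as follows. The hysteresis and time-to-trigger values are illustrative placeholders, and the logic is a simplification of a 3GPP-style neighbour-better-than-serving event, not the measurement code used in the thesis:

```python
# Simplified handoff trigger (assumed parameters): hand over once a neighbour
# cell's signal exceeds the serving cell's by a hysteresis margin for a
# sustained run of measurement reports (a time-to-trigger window).
HYST_DB = 3.0      # hysteresis margin in dB (assumed value)
TTT_SAMPLES = 3    # consecutive reports the condition must hold (assumed)

def should_handoff(serving_rsrp, neighbour_rsrp):
    """True once the neighbour beats the serving cell by the margin
    for TTT_SAMPLES consecutive reports."""
    streak = 0
    for s, n in zip(serving_rsrp, neighbour_rsrp):
        streak = streak + 1 if n > s + HYST_DB else 0
        if streak >= TTT_SAMPLES:
            return True
    return False

# A steadily strengthening neighbour triggers the handoff; a fluctuating
# one does not, which is the instability the configurations try to avoid.
print(should_handoff([-95] * 6, [-93, -91, -90, -89, -88, -87]))
```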
96

Measurement and Analysis of Networking Performance in Virtualised Environments

Chauhan, Maneesh January 2014 (has links)
Mobile cloud computing, having embraced ideas like computation offloading, mandates a low-latency, high-speed network to satisfy the quality-of-service and usability assurances of mobile applications. The networking performance of clouds based on Xen and VMware virtualisation solutions has been extensively studied, although mostly with a focus on network throughput and bandwidth metrics. This work focuses on the measurement and analysis of the networking performance of VMs in a small KVM-based data centre, emphasising the role of virtualisation overheads in the Host-VM latency and, eventually, in the overall latency experienced by remote clients. We also present some useful tools, Driftanalyser, VirtoCalc and Trotter, that we developed for carrying out specific measurements and analyses. Our work proves that an increase in a VM's CPU workload has direct implications for network round-trip times. We also show that virtualisation overheads (VO) have a significant bearing on end-to-end latency and can contribute up to 70% of the round-trip time between the host and the VM. Furthermore, we thoroughly study latency due to virtualisation overheads as a networking performance metric and analyse the impact of CPU loads and networking workloads on it. We also analyse resource-sharing patterns and their effects amongst VMs of different sizes on the same host. Finally, having observed a dependency between a VM's network performance and the host CPU load, we suggest that in a KVM-based cloud installation, workload profiling and an optimal processor-pinning mechanism can be effectively utilised to regulate the network performance of VMs. The findings from this research are applicable to optimising latency-oriented VM provisioning in cloud data centres, which would benefit latency-sensitive mobile cloud applications.
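The VO share of the round-trip time can be estimated by subtracting a bare-metal baseline from the measured Host-VM RTT. The sample values below are invented, chosen only to reproduce a share of roughly the 70% reported above:

```python
# Invented Host-to-VM RTT samples (microseconds) and a bare-metal baseline
rtt_host_vm = [410, 395, 430, 500, 445]
RTT_BASELINE = 130  # host-to-host RTT without virtualisation (assumed)

def vo_share(rtt_samples, baseline):
    """Fraction of the mean RTT attributable to virtualisation overheads."""
    avg = sum(rtt_samples) / len(rtt_samples)
    return (avg - baseline) / avg

print(f"virtualisation overhead share: {vo_share(rtt_host_vm, RTT_BASELINE):.0%}")
```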
97

Quality of service routing using decentralized learning

Heidari, Fariba. January 2009 (has links)
No description available.
98

Relating Naturalistic Global Positioning System (GPS) Driving Data with Long-Term Safety Performance of Roadways

Loy, James Michael 01 August 2013 (has links) (PDF)
This thesis describes a research study relating naturalistic Global Positioning System (GPS) driving data with long-term traffic safety performance for two classes of roadways. These two classes are multilane arterial streets and limited access highways. GPS driving data used for this study was collected from 33 volunteer drivers from July 2012 to March 2013. The GPS devices used were custom GPS data loggers capable of recording speed, position, and other attributes at an average rate of 2.5 hertz. Linear Referencing in ESRI ArcMAP was performed to assign spatial and other roadway attributes to each GPS data point collected. GPS data was filtered to exclude data with high horizontal dilution of precision (HDOP), incorrect heading attributes or other GPS communication errors. For analysis of arterial roadways, the Two-Fluid model parameters were chosen as the measure for long-term traffic safety analysis. The Two-Fluid model was selected based on previous research which showed correlation between the Two-Fluid model parameters n and Tm and total crash rate along arterial roadways. Linearly referenced GPS data was utilized to obtain the total travel time and stop time for several half-mile long trips along two arterial roadways, Grand Avenue and California Boulevard, in San Luis Obispo. Regression between log transformed values of these variables (total travel time and stop time) were used to derive the parameters n and Tm. To estimate stop time for each trip, a vehicle “stop” was defined when the device was traveling at less than 2 miles per hour. Results showed that Grand Avenue had a higher value for n and a lower value for Tm, which suggests that Grand Avenue may have worse long-term safety performance as characterized by long-term crash rates. However, this was not verified with crash data due to incomplete crash data in the TIMS database. 
Analysis of arterial roadways concluded by verifying the GPS data collected in the California Boulevard study against sample data collected using a traditional "car chase" methodology, which showed no significant difference between the two data sources when trips included noticeable stop times. For analysis of highways, the derived measurement of vehicle jerk, the rate of change of acceleration, was calculated to explore its relationship with the long-term traffic safety performance of highway segments. The decision to use jerk comes from previous research that utilized high-magnitude jerk events as crash surrogates, or near-crash events. Instead of using jerk for near-crash analysis, the measurement was used to determine the percentage of GPS data observed below a certain negative jerk threshold for several highway segments, ¼ mile and ½ mile long. A preliminary exploration was conducted with 39 ¼-mile segments of US Highway 101 within the San Luis Obispo city limits. First, Pearson's correlation coefficients were estimated between the rate of 'high' jerk occurrences on these segments (with the definition of 'high' depending on the jerk threshold) and an estimate of crash rates based on long-term historical crash data. The trends in the correlation coefficients as the thresholds varied led to further analysis based on a jerk threshold of -2 ft/sec³ for the ¼-mile segments and -1 ft/sec³ for the ½-mile segments. Through a negative binomial regression model, it was shown that the derived jerk-percentage measure had a significant correlation with the total number of historical crashes observed along US Highway 101. The analysis also showed that other characteristics of the roadway, including the presence of a curve, the presence of weaving (indicated by auxiliary lanes), and average daily traffic (ADT), did not have a significant correlation with observed crashes.
Similar analysis was repeated for 19 ½-mile segments in the same study area, and the percentage of high negative jerk was again significantly associated with historical crashes. In the ½-mile negative binomial regression, the presence of a curve was also a significant variable; however, the standard error of this estimate was very high due to the low number of analysis segments without curves. The results of this research show the potential benefit that naturalistic GPS driving data can provide for long-term traffic safety analysis, even when the data are unaccompanied by additional data (such as live video) collected with expensive vehicle instrumentation. The methodology of this study is repeatable with many GPS devices found in consumer electronics, including many newer smartphones.
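Computing jerk from a GPS speed trace and thresholding it can be sketched as follows. The speed samples are invented; only the roughly 2.5 Hz sampling rate and the -2 ft/sec³ threshold come from the study:

```python
# Hypothetical speed trace sampled at 2.5 Hz (dt = 0.4 s), in ft/s
speeds = [88.0, 88.0, 86.0, 80.0, 70.0, 62.0, 58.0, 57.0]
DT = 0.4

# acceleration (ft/s^2) and jerk (ft/s^3) by successive differencing
accel = [(v2 - v1) / DT for v1, v2 in zip(speeds, speeds[1:])]
jerk = [(a2 - a1) / DT for a1, a2 in zip(accel, accel[1:])]

THRESHOLD = -2.0  # ft/s^3, the 1/4-mile threshold from the study
frac = sum(1 for j in jerk if j < THRESHOLD) / len(jerk)
print(f"fraction of samples below {THRESHOLD} ft/s^3: {frac:.2f}")
```

The per-segment fraction computed this way is the kind of jerk-percentage measure that the study feeds into the negative binomial regression.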
99

Building more performant large scale networks for the Internet of Things

Ghosh, Saibal January 2022 (has links)
No description available.
100

A Peer-to-Peer Internet Measurement Platform and Its Applications in Content Delivery Networks

Triukose, Sipat 21 February 2014 (has links)
No description available.
