31

Masquage de pertes de paquets en voix sur IP / Packet loss concealment on voice over IP

Koenig, Lionel 28 January 2011 (has links)
Voice-over-IP telephone calls suffer from packet loss caused by routing problems at network nodes. Losing a voice packet means losing a segment of the speech signal (typically 10 ms per lost packet). Given the wide variety of speech coders, this thesis proposes a generic packet loss concealment method that is independent of the speech coder in use: concealment is applied to the reconstructed speech signal after decoding, which makes it coder-agnostic. The proposed system relies on a classical hidden Markov model (HMM) to track the acoustic evolution of speech. To our knowledge, only one previous study has used hidden Markov models in this setting [4]; however, Rødbro used two models, one for voiced speech and one for unvoiced segments, which raises the problem of voiced/unvoiced classification. Our approach uses a single hidden Markov model. To the conventional features (10 linear prediction coefficients in the cepstral domain (LPCC) and their first derivatives) we add a new continuous voicing indicator [1, 2]. Searching for the best path with missing observations leads to a modified version of the Viterbi algorithm that estimates these observations. The contributions of this thesis (voicing indicator, acoustic-phonetic decoding and signal reconstruction) are evaluated [3] in terms of over- and under-segmentation rates, recognition rate and distances between the expected and the estimated observations. An indication of speech quality is also given through a perceptual measure, PESQ. / Packet loss due to misrouted or delayed packets in voice over IP leads to severe voice quality degradation. Packet loss concealment algorithms try to enhance the perceived quality of the speech. The wide variety of vocoders leads us to propose a generic framework working directly on the speech signal available after decoding. The proposed system relies on a single hidden Markov model to capture the time evolution of acoustic features. An original continuous voicing indicator is added to the conventional parameters (linear predictive cepstral coefficients) in order to handle voiced and unvoiced sounds. Finding the best path with missing observations leads to one major contribution: a modified version of the Viterbi algorithm tailored to estimating missing observations. All contributions are assessed using both perceptual criteria and objective metrics.
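
As an illustration of the modified Viterbi decoding described in this abstract, the sketch below treats a lost frame as a missing observation whose emission term is simply dropped, so only the transition structure constrains the path through it. This is an editorial sketch, not the author's implementation; the model matrices and emission function are placeholders.

```python
import numpy as np

def viterbi_with_missing(log_A, log_pi, log_B, obs):
    """Viterbi decoding where obs[t] may be None (lost frame).

    log_A  : (S, S) log transition matrix
    log_pi : (S,)   log initial state distribution
    log_B  : callable(state, observation) -> log emission likelihood
    obs    : list of observations; None marks a missing frame
    """
    S = log_A.shape[0]
    T = len(obs)
    delta = np.full((T, S), -np.inf)
    psi = np.zeros((T, S), dtype=int)

    for s in range(S):
        delta[0, s] = log_pi[s] + (0.0 if obs[0] is None else log_B(s, obs[0]))

    for t in range(1, T):
        for s in range(S):
            scores = delta[t - 1] + log_A[:, s]
            psi[t, s] = int(np.argmax(scores))
            # Missing frame: the emission term is uninformative (zero log-likelihood).
            emit = 0.0 if obs[t] is None else log_B(s, obs[t])
            delta[t, s] = scores[psi[t, s]] + emit

    # Backtrack the most likely state sequence; missing frames can then be
    # reconstructed from the emission model of the states chosen for them.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```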
32

BSM Message and Video Streaming Quality Comparative Analysis Using Wave Short Message Protocol (WSMP)

Win, Htoo Aung 08 1900 (has links)
Vehicular ad-hoc networks (VANETs) are used for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The IEEE 802.11p/WAVE (Wireless Access in Vehicular Environments) standard, together with the WAVE Short Message Protocol (WSMP), has been proposed as the standard protocol stack for designing VANET applications. These protocols must be thoroughly tested before reliable and efficient applications can be built on top of them. In this work, we perform on-road experiments in a variety of scenarios to evaluate the performance of the standard. We use commercial VANET devices with 802.11p/WAVE-compliant chipsets for both basic safety message (BSM) and video streaming applications, using WSMP as the communication protocol. We show that while the standard performs well for the BSM application under lightly loaded conditions, performance degrades as traffic load and other stress factors increase. Furthermore, we show that the standard is not suitable for video streaming due to the bursty nature of the traffic and bandwidth throttling, which is a major shortcoming for V2X applications.
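
The BSM results above reduce to packet delivery ratio and latency computed from sender and receiver logs. A minimal sketch, assuming a hypothetical log format of sequence-number-to-timestamp mappings (the actual devices and log formats used in the experiments are not reproduced here):

```python
def bsm_metrics(sent, received):
    """Compute packet delivery ratio and mean latency for BSM traffic.

    sent, received: dicts mapping BSM sequence number -> timestamp (seconds).
    The record format is hypothetical; it assumes both on-board units log each
    WSMP-encapsulated BSM with a sequence number against a synchronized clock.
    """
    delivered = [seq for seq in sent if seq in received]
    pdr = len(delivered) / len(sent) if sent else 0.0
    latencies = [received[seq] - sent[seq] for seq in delivered]
    mean_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return pdr, mean_latency
```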
33

Optimalizace a analýza závislostí komunikačních služeb na zpoždění / Optimization and analysis of communication services latency dependencies

Zikmund, Lukáš January 2014 (has links)
This master's thesis focuses on ensuring Quality of Service (QoS) in wireless networks based on the IEEE 802.11a/b/g/n standards. The first part covers the theory of the topic: methods of data transfer in computer networks and the individual transfer parameters, especially those needed to guarantee quality of service. It also describes the standards for wireless data transmission and the protocols for real-time data transmission. The second part is devoted to OPNET Modeler and to simulations built in this tool. The simulations focus on real-time data transfer and compare the standards in terms of delay and jitter.
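
The jitter compared in these simulations is conventionally the RFC 3550 interarrival jitter estimate; the sketch below evaluates that standard estimator and is not tied to the OPNET models used in the thesis.

```python
def rfc3550_jitter(send_times, recv_times):
    """Interarrival jitter estimate as defined in RFC 3550.

    send_times, recv_times: per-packet timestamps in seconds, same order.
    D(i-1, i) = (R_i - R_{i-1}) - (S_i - S_{i-1});  J += (|D| - J) / 16
    """
    jitter = 0.0
    for i in range(1, len(send_times)):
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        jitter += (abs(d) - jitter) / 16.0
    return jitter
```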
34

Podpora kvalitativních požadavků služeb v operačních systémech unixového typu pro provoz v bezdrátových sítích WiFi / Quality of service support in Unix-like operating systems for communication in WiFi networks

Mizera, Josef January 2014 (has links)
This diploma thesis focuses on Quality of Service (QoS) support in wireless networks, especially under Linux operating systems. The topic concerns not only the OS but also the wireless standard that provides QoS in wireless networks, IEEE 802.11e. QoS is needed above all for time-sensitive data transfers in real time. The theoretical part covers the issue of QoS support: it describes the parameters involved in providing quality of service and the classes of service used to transmit data across computer networks. It also describes QoS support in wireless networks according to 802.11e, its implementation, and the medium-access methods with and without traffic prioritization. This is followed by a description of QoS support in Unix operating systems: how QoS support is designed in these systems and the specific tools used to control data flows under Linux. The theoretical part closes with the different queue types and queuing disciplines used in Linux. In the practical part of the thesis, several topologies and scenarios are designed to verify the functionality of QoS support in wireless networks using a Unix system. These chapters present the results of tests on selected time-sensitive data streams and verify the cooperation of QoS mechanisms between devices operating at the network and data link layers. The output of this work is a laboratory exercise for the Network Architecture course, aimed at familiarization with QoS support in wireless networks and in Unix-like operating systems. The chapter also describes the devices and programs needed for the exercise and the procedure for preparing the measuring station; a manual for the laboratory task is included in the annex.
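
Among the Linux traffic-control tools referred to above, shaping is commonly built on the token-bucket idea (as in the tc TBF queuing discipline). The sketch below illustrates only that idea; it is not the kernel implementation, and the parameters are arbitrary.

```python
class TokenBucket:
    """Illustrative token-bucket shaper (the idea behind Linux tc's TBF qdisc).

    rate_bps : sustained rate in bytes per second
    burst    : bucket depth in bytes
    """
    def __init__(self, rate_bps, burst):
        self.rate = rate_bps
        self.burst = burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, packet_len, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True   # packet conforms, send immediately
        return False      # packet exceeds the budget, queue or drop it
```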
35

Characterization of SIP Signaling-Messages Over OpenSIPS Running On Multicore Server

Awan, Naser Saeed January 2012 (has links)
Over the course of the last decade, the demand for VoIP (Voice over Internet Protocol) applications has increased significantly among enterprises and individuals due to its low cost. This increasing demand has resulted in a significant increase in users who require reliable VoIP communication systems. QoS (Quality of Service) is a major issue in VoIP implementation and a driver for the development of real-time multimedia services such as VoIP and videoconferencing. However, there are certain challenges in achieving QoS for VoIP applications that need special attention, such as latency and packet loss. VoIP servers that run on a single-core software/hardware model suffer from high latency and packet loss because of their limited processing bandwidth. A multicore software/hardware model is a way to cope with the increasing demands of VoIP and is still an active research area in telecommunications. Using a multicore software/hardware model for VoIP raises several challenges; one of them is to design and implement a QoS benchmarking module for the VoIP client and server on multicore hardware. This thesis focuses on the latency and packet loss of SIP messages on an OpenSIPS server. Stress testing is performed for QoS benchmarking, where the delay and call drop rate are calculated for SIP (Session Initiation Protocol) signaling messages on a parallel VoIP client-server model. The model is built in C for multicore machines and is used as a simulation tool. SIP is a widely deployed protocol for call establishment, maintenance and termination in VoIP.
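
The benchmarking figures described here come down to call setup delay and call drop rate over the SIP dialogs. A minimal sketch, assuming a hypothetical per-call record of INVITE and 200 OK timestamps (not the thesis's C implementation):

```python
def sip_stress_metrics(calls):
    """calls: list of dicts with 'invite_ts' and optional 'ok_ts' (seconds).

    A call with no 200 OK recorded is counted as dropped. The record format
    is hypothetical; in practice it would be derived from SIP traces on the
    OpenSIPS server or the client side.
    """
    completed = [c for c in calls if c.get("ok_ts") is not None]
    drop_rate = 1.0 - len(completed) / len(calls) if calls else 0.0
    delays = [c["ok_ts"] - c["invite_ts"] for c in completed]
    mean_setup_delay = sum(delays) / len(delays) if delays else float("nan")
    return mean_setup_delay, drop_rate
```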
36

Investigating Users' Quality of Experience in Mobile Cloud Games

Blomqvist, Markus January 2023 (has links)
Mobile cloud gaming (MCG) is an emerging concept that aims to deliver video games on demand to users with the use of cloud technologies. Cloud technology allows computation to be offloaded from a less powerful user device, or thin client, to more powerful cloud servers, minimizing power consumption and providing additional cloud services such as storage. MCG can therefore reduce the cost of expensive hardware, but the challenge is that it requires a high Quality of Service (QoS) to stream and play the games with a high Quality of Experience (QoE) for the users. The goal of the study is to investigate how users' QoE is affected by network conditions while playing MCG and to compare the results with a previous study. A testbed was built to conduct subjective tests in which users play Counter-Strike: Global Offensive (CS:GO) on a smartphone using Steam Remote Play. The testbed consists of a router, a tablet, a smartphone, a headset, an Xbox controller, a USB-C multi-port adapter and four different PCs. Participants on campus, both students and non-students, were invited to take part in the experiment. A total of 24 participants completed the tests; however, results from two participants were excluded due to software issues. There were 23 network conditions tested for each user, covering factors such as round-trip time (RTT), packet loss, bursty jitter and random jitter, as well as combinations of these factors. A multi-platform tool, ALTRUIST, was used to control the applications and collect data from the devices, and NetEm changed the network conditions. The results showed that the network condition [bj(rtt200i15)], 200 milliseconds of bursty jitter every 15 seconds, had the highest mean opinion score (MOS) for QoE at 4.5. The worst network condition tested, with the lowest QoE rating of 1.4, was [rtt25pl12], with 25 milliseconds of RTT and 12% packet loss. There were differences between the male and female participants: the MOS was up to 1.5 points higher for the females than for the males in network conditions combining RTT with packet loss. However, the sample size was small, with only 5 female participants compared to 18 male participants. Separating participants who play less than 10 hours per week from those who play 10 or more hours per week showed no significant differences in MOS, with the largest QoE rating difference being 0.5 points. Network conditions [rtt25pl12] and [rtt2pl35] had the largest differences in MOS QoE ratings compared to the previous study, although neither was compared against exactly the same corresponding condition. The largest difference for the same network condition was [bj(rtt200i15)], rated 1.1 points higher on the MOS QoE scale than in the previous study.
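
The per-condition mean opinion scores reported above are averages of the participants' ratings for each network condition. A minimal sketch of that aggregation; the condition labels follow the bracketed naming in the abstract, and the ratings in the example are placeholders, not the study's data.

```python
from collections import defaultdict

def mean_opinion_scores(ratings):
    """ratings: iterable of (condition_label, score) pairs, score on a 1-5 scale."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for condition, score in ratings:
        sums[condition] += score
        counts[condition] += 1
    return {c: sums[c] / counts[c] for c in sums}

# Placeholder ratings for illustration only (not the study's data).
mos = mean_opinion_scores([("bj(rtt200i15)", 5), ("bj(rtt200i15)", 4),
                           ("rtt25pl12", 1), ("rtt25pl12", 2)])
```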
37

Capacity Enhancement Approaches for Long Term Evolution networks: Capacity Enhancement-Inspired Self-Organized Networking to Enhance Capacity and Fairness of Traffic in Long Term Evolution Networks by Utilising Dynamic Mobile Base-Stations

Alrowili, Mohammed F.H. January 2018 (has links)
The long-term evolution (LTE) network was proposed to provide better network capacity than the earlier 3G network. Driven by the market, the conventional LTE (3G) network standard could not meet the expectations of the International Mobile Telecommunications-Advanced (IMT-Advanced) standard. To close this gap, LTE-Advanced was introduced with additional network functionalities to meet the IMT-Advanced standard. In addition, to minimize operational expenditure (OPEX) and reduce human intervention, wireless cellular networks are required to be self-aware, self-reconfigurable, self-adaptive and smart. An example of such a network involves base transceiver stations (BTSs) within a self-organizing network (SON). Despite these breakthroughs, conventional LTE and LTE-Advanced networks were not designed with the intelligence to scale capacity during sudden demographic changes, such as football matches, events at malls or worship centres, or religious and cultural festivals. Since most of these events cannot be predicted, modern cellular networks must be able to scale capacity and coverage during such unpredictable demographic surges. Thus, the use of dynamic BTSs is proposed for modern and future cellular networks for crowd and demographic-change management. Dynamic BTSs complement the capability of SONs to search for, identify and deploy less crowded or idle BTSs to densely crowded cells for scalable capacity management. The mobile BTSs discover areas of dark coverage and fill the gaps in cellular service. The proposed network relieves the LTE network from overloading, reducing packet loss and delay and improving fair load sharing. To find the best (least-cost) path, a bio-inspired swarm optimization algorithm is proposed over the dynamic BTS network: the ant colony optimization algorithm (ACOA) is used to find the least-cost path. A comparison between an optimized path and an un-optimized path showed large gains in terms of delay, fair load sharing and the percentage of packet loss.
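
The ant colony optimization step named above can be illustrated with a compact sketch that searches for a least-cost path in a weighted graph; the graph, cost metric and pheromone parameters below are illustrative and not the thesis's configuration.

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=20, n_iters=50,
                      alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Toy ant colony optimization for a least-cost path in a weighted graph.

    graph: dict node -> dict of neighbour -> edge cost (e.g. delay).
    Returns the best path found and its cost. Parameters are illustrative.
    """
    pheromone = {u: {v: 1.0 for v in nbrs} for u, nbrs in graph.items()}
    best_path, best_cost = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            node, path, visited = src, [src], {src}
            while node != dst:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:
                    path = None  # dead end, abandon this ant
                    break
                weights = [pheromone[node][v] ** alpha * (1.0 / graph[node][v]) ** beta
                           for v in choices]
                node = random.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path is not None:
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                tours.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost

        # Evaporate, then deposit pheromone inversely proportional to path cost.
        for u in pheromone:
            for v in pheromone[u]:
                pheromone[u][v] *= (1.0 - rho)
        for path, cost in tours:
            for a, b in zip(path, path[1:]):
                pheromone[a][b] += q / cost

    return best_path, best_cost
```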
38

VIRTUAL PRIVATE NETWORKS: An Analysis of the Performance in State-of-the-Art Virtual Private Network solutions in Unreliable Network Conditions

Habibovic, Sanel January 2019 (has links)
This study aimed to identify the differences between state-of-the-art VPN solutions on different operating systems. It was done because a novel VPN protocol is in the early stages of release, and a comparison of it to other current VPN solutions is of interest: current VPN solutions are well established and have existed for a while, and the new protocol stirs the pot in the VPN field. A contemporary comparison between them could therefore aid system administrators when choosing which VPN to implement. Choosing the right VPN solution for the occasion could increase performance for users and save costs for organizations that wish to deploy VPNs. With the remote workforce increasing, issues of network reliability also increase, due to wireless connections and networks beyond the control of companies. This demands an answer to the question: how do VPN solutions differ in performance on stable and unstable networks? This work attempted to answer that question. The study concerns VPN performance in general, but mainly how the specific solutions perform under unreliable network conditions. This was achieved by reviewing past comparisons of VPN solutions to identify which metrics to analyze and which VPN solutions have been recommended. A test bed was then created in a lab network so that the network could be controlled during testing and the different VPN implementations and operating systems would have the same premise. To establish baseline results, performance testing was first done on the network without VPNs; the VPNs were then tested under reliable network conditions and finally under unreliable network conditions, and the results were compared and analyzed. The results show a difference in the performance of the different VPNs; there is also a difference depending on which operating system is used, and there are differences between the VPNs with the unreliability aspects switched on. The novel VPN protocol looks promising, as it has good results overall, but the comparison is not conclusive because the current VPN solutions can be configured differently depending on which operating system and settings are chosen. With this set-up, VPNs on Linux performed much better under unreliable network conditions than setups using other operating systems. The outcome of this work is that the novel VPN protocol may perform better, and that certain combinations of VPN implementation and OS perform better than others with the default configuration. This work also points out how to improve the testing and what aspects to consider when comparing VPN implementations.
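
The comparison described above boils down to measuring each VPN/OS combination against its no-VPN baseline under the same impairment. A minimal sketch of that bookkeeping, with a hypothetical result structure (the actual metrics and tools used in the study are not reproduced here):

```python
def relative_degradation(results, baseline):
    """Fraction of baseline throughput lost for each VPN/OS/impairment combination.

    results : dict mapping (vpn, os, condition) -> measured throughput in Mbit/s
    baseline: dict mapping (os, condition) -> no-VPN throughput in Mbit/s
    Both structures are hypothetical placeholders for the test bed's output.
    """
    return {key: 1.0 - mbps / baseline[(key[1], key[2])]
            for key, mbps in results.items()}
```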
39

Stochastic Modeling and Simulation of the TCP protocol

Olsén, Jörgen January 2003 (has links)
The success of the current Internet relies to a large extent on cooperation between the users and the network. The network signals its current state to the users by marking or dropping packets. The users then strive to maximize the sending rate without causing network congestion. To achieve this, the users implement a flow-control algorithm that controls the rate at which data packets are sent into the Internet. More specifically, the Transmission Control Protocol (TCP) is used by the users to adjust the sending rate in response to changing network conditions. TCP uses the observation of packet loss events and estimates of the round trip time (RTT) to adjust its sending rate. In this thesis we investigate and propose stochastic models for TCP. The models are used to estimate network performance measures such as throughput, link utilization, and packet loss rate. The first part of the thesis introduces the TCP protocol and contains an extensive TCP modeling survey that summarizes the most important TCP modeling work. Reviewed models are categorized as renewal theory models, fixed-point methods, fluid models, processor sharing models or control theoretic models. The merits of each category are discussed and guidelines are given for which framework to use in future TCP modeling. The second part of the thesis contains six papers on TCP modeling. Within the renewal theory framework we propose single-source TCP-Tahoe and TCP-NewReno models. We investigate the performance of these protocols in both a DropTail and a RED queuing environment. The aspects of TCP performance that are inherently dependent on the actual implementation of the flow-control algorithm are singled out from those that depend on the queuing environment. Using the fixed-point framework, we propose models that estimate packet loss rate and link utilization for a network with multiple TCP-Vegas, TCP-SACK and TCP-Reno on/off sources. The TCP-Vegas model is novel and is the first model capable of estimating the network's operating point for TCP-Vegas sources sending on/off traffic. All TCP and network models in the contributed research papers are validated via simulations with the network simulator ns-2. This thesis serves both as an introduction to TCP and as an extensive orientation about state-of-the-art stochastic TCP models.
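
As a concrete example of the renewal-theory style of model surveyed in this thesis, the classic "square-root" approximation (Mathis et al.) relates steady-state TCP throughput to the segment size, round-trip time and packet loss probability. The sketch below merely evaluates that well-known formula; it is not one of the thesis's own models.

```python
from math import sqrt

def tcp_throughput_mathis(mss_bytes, rtt_s, loss_prob):
    """Classic 'square-root' approximation of steady-state TCP throughput:
    rate ≈ (MSS / RTT) * sqrt(3 / (2 p)), returned in bytes per second."""
    return (mss_bytes / rtt_s) * sqrt(3.0 / (2.0 * loss_prob))

# e.g. MSS = 1460 B, RTT = 100 ms, 1% loss -> roughly 179 kB/s (about 1.4 Mbit/s)
rate = tcp_throughput_mathis(1460, 0.1, 0.01)
```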
