261

Resource Allocation for Multiple Access and Broadcast Channels under Quality of Service Requirements Based on Strategy Proof Pricing

Shen, Fei 14 November 2014 (has links)
The efficient allocation of power is a major concern in today's wireless communication systems. Due to the high demand for data rate and the scarcity of wireless resources such as power, multi-user communication systems like the multiple access channel (MAC) and the broadcast channel (BC) have become highly competitive environments for the users as well as for the system itself. Microeconomic theory and game theory provide suitable analytical tools for such conflicts between selfish behavior and social welfare. Instead of maximizing the system sum rate, the proposed framework fulfills the utility (rate) requirements of all users through efficient power allocation. The users formulate signal-to-interference-plus-noise ratio (SINR) based quality-of-service (QoS) requirements. We propose a framework that allocates power to each user via universal pricing mechanisms. The prices act as control signals and are assumed to be a virtual currency in the wireless system; they can steer the physical-layer operating points to meet the desired utility requirements. Centralized and distributed power allocation frameworks are discussed separately in the thesis, with different pricing schemes. In wireless systems, users are rational in the game-theoretic sense of making decisions consistently in pursuit of their own individual objectives. Each user's objective is to maximize the expected value of its own payoff, measured on a certain utility scale. Selfishness, or self-interest, is an important implication of rationality. Therefore, mobiles sharing the same spectrum have an incentive to misrepresent their private information in order to obtain more utility. They might behave selfishly and also act maliciously by creating increased interference for other mobiles. It is therefore important to supervise and influence the operation of the system through pricing and priority (weight) optimization. In the centralized resource allocation, we study the general MAC and BC (with linear and nonlinear receivers) with three types of agents: the regulator, the system optimizer and the mobile users. The regulator ensures the QoS requirements of all users by clever pricing and prevents cheating. The simple system optimizer solves a system utility maximization problem to allocate power given the prices and weights (priorities). Linear and nonlinear pricing mechanisms are analyzed. It is shown that linear pricing is a universal pricing scheme only if successive interference cancellation (SIC) for uplink transmission or dirty paper coding (DPC) for downlink transmission is applied at the base station (BS). For the MAC without SIC, a nonlinear pricing scheme that is logarithmic in power and linear in the prices is universal. The prices, the resulting cost terms, and the optimal power allocation that achieves each user's QoS requirement within the feasible rate region are derived in closed form for the MAC with and without SIC, using the linear and nonlinear pricing frameworks, respectively. Users may attempt to maximize their achievable rate and minimize their power cost by falsely reporting their channel state information (CSI). By predicting the best cheating strategy of the malicious users, the regulator is able to detect the misbehavior and punish the cheaters. An infinitely repeated game (RG) with a trigger strategy based on a trigger price is proposed as a countermeasure.
We show that, by anticipating the total payoff of the proposed RG, the users have no incentive to cheat, and our framework is therefore strategy-proof. In the distributed resource allocation, each user allocates its own power by optimizing an individual utility function. A noncooperative game among the users is formulated. Individual prices are introduced into the utility function of each user to shift the Nash equilibrium (NE) power allocation to the desired point. We show that, under implicit control of the proposed prices, the best-response (BR) power allocation of each user converges rapidly. The Shannon-rate-based QoS requirement of each user is achieved with minimum power at the unique NE point. We analyze different behavior types of the users, especially the malicious behavior of misrepresenting the user utility function. The resulting NE power allocation and achievable rates of all users are derived when malicious behavior exists. A strategy-proof mechanism is designed using punishment prices once the types of the malicious users are detected. An algorithm for the strategy-proof noncooperative game is proposed. We illustrate the convergence of the BR dynamic and the Price of Malice (PoM) by numerical simulations. Uplink transmission within a single cell of a heterogeneous network follows exactly the MAC model. Therefore, the results of the pricing-based power allocation for the MAC can be applied to heterogeneous networks. Femtocells deployed in the macrocell network provide better indoor coverage to user equipments (UEs) with low power consumption and maintenance costs. Industrial vendors show great interest in the hybrid access mode, in which macrocell UEs (MUEs) can be served by a nearby femtocell access point (FAP). By adopting hybrid access in the femtocell, the system energy efficiency is improved due to the short distance between the FAP and the MUEs, while at the same time the QoS requirements are better guaranteed. However, both the macrocell base station (MBS) and the FAP are rational and selfish, each maximizing its own utility. A framework that successfully applies hybrid access in the femtocell and fulfills the QoS requirement of each UE is therefore important. We propose two novel compensation frameworks to motivate the hybrid access of femtocells. To save energy, the MBS is willing to motivate the FAP toward hybrid access by offering compensation. A Stackelberg game is formulated in which the MBS acts as the leader and the FAP as the follower. The MBS maximizes its utility by choosing the compensation prices; the FAP optimizes its utility by selecting the number of MUEs in hybrid access. By choosing the proper compensation price, the optimal number of MUEs served by the FAP to maximize the utility of the MBS coincides with that which maximizes the utility of the FAP. Extensive simulation results show that the proposed compensation frameworks lead to a win-win solution. In this thesis, based on game theory, mechanism design and pricing frameworks, efficient power allocation schemes are proposed to guarantee the QoS requirements of all users in wireless networks. The results are applicable to multi-user systems such as heterogeneous networks. Both centralized and distributed allocation schemes, suitable for different communication scenarios, are analyzed.
/ Aufgrund der hohen Nachfrage nach Datenrate und wegen der Knappheit an Ressourcen in Funknetzen ist die effiziente Allokation von Leistung ein wichtiges Thema in den heutigen Mehrnutzer-Kommunikationssystemen. Die Spieltheorie bietet Methoden, um egoistische und soziale Konfliktsituationen zu analysieren. Das vorgeschlagene System befasst sich mit der Erfüllung der auf Signal-zu-Rausch-und-Interferenz-Verhältnis (SINR) basierenden Quality-of-Service (QoS)-Anforderungen aller Nutzer mittels effizienter Leistungsallokation, anstatt die Übertragungsrate zu maximieren. Es wird ein Framework entworfen, um die Leistungsallokation mittels universellen Pricing-Mechanismen umzusetzen. In der Dissertation werden zentralisierte und verteilte Leistungsallokationsalgorithmen unter Verwendung verschiedener Pricing-Ansätze diskutiert. Die Nutzer in Funksystemen handeln rational im spieltheoretischen Sinne, indem sie ihre eigenen Nutzenfunktionen maximieren. Die mobilen Endgeräte, die dasselbe Spektrum nutzen, haben den Anreiz durch bewusste Fehlinterpretation ihrer privaten Informationen das eigene Ergebnis zu verbessern. Daher ist es wichtig, die Funktionalität des Systems zu überwachen und durch Optimierung des Pricings und Priorisierungsgewichte zu beeinflussen. Für den zentralisierten Ressourcenallokationsansatz werden der allgemeine Mehrfachzugriffskanal (Multiple Access Channel, MAC) und der Broadcastkanal (BC) mit linearen bzw. nichtlinearen Empfängern untersucht. Die Preise, die resultierenden Kostenterme und die optimale Leistungsallokation, mit der die QoS-Anforderungen in der zulässigen Ratenregion erfüllt werden, werden in geschlossener Form hergeleitet. Lineare und nichtlineare Pricing-Ansätze werden separat diskutiert. Das unendlich oft wiederholte Spiel wird vorgeschlagen, um Spieler vom Betrügen durch Übermittlung falscher Kanalinformationen abzuhalten. Für die verteilten Ressourcenvergabe wird das nichtkooperative Spiel in Normalform verwendet und formuliert. Die Nutzer wählen ihre Sendeleistung zur Maximierung ihrer eigenen Nutzenfunktion. Individuelle Preise werden eingeführt und so angepasst, dass die QoS-Anforderungen mit der Leistungsallokation im eindeutigen Nash-Gleichgewicht erfüllt werden. Verschiedene Arten des Nutzerverhaltens werden bezüglich der Täuschung ihrer Nutzenfunktion analysiert, und ein Strategy-Proof-Mechanismus mit Strafen wird entwickelt. Die Ergebnisse für den MAC sind anwendbar auf heterogene Netzwerke, wobei zwei neuartige Ansätze zur Kompensation bereitgestellt werden, die den hybriden Zugang zu Femtozell-Netzwerken motivieren. Mithilfe des Stackelberg-Spiels wird gezeigt, dass die vorgeschlagenen Ansätze in einer Win-Win-Situation resultieren.
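A minimal sketch of the kind of price-steered, QoS-targeted power allocation this abstract describes for the distributed setting. The channel gains, noise power, SINR targets and the generic pricing utility log(1 + SINR_i) − c_i·p_i are illustrative assumptions, not the formulation derived in the thesis:

```python
import numpy as np

# Illustrative sketch only: all parameter values below are assumed.
K = 3
h = np.array([1.0, 0.6, 0.3])      # uplink channel gains (assumed)
N0 = 0.1                            # receiver noise power (assumed)
g = np.array([0.49, 0.35, 0.22])    # SINR targets, chosen to be feasible

# Step 1: distributed fixed-point iteration p_i <- g_i * I_i(p) / h_i,
# which converges to the minimum-power allocation meeting every target.
p = np.zeros(K)
for _ in range(200):
    I = N0 + np.dot(h, p) - h * p   # noise plus interference seen by each user
    p = g * I / h

# Step 2: back out the individual price c_i that makes p_i the maximizer of
# log(1 + h_i p_i / I_i) - c_i * p_i, i.e. the price supporting this QoS
# point as each user's selfish best response.
I = N0 + np.dot(h, p) - h * p
c = h / (I * (1.0 + g))

print("powers :", p.round(4))
print("SINR   :", (h * p / I).round(3), "targets:", g)
print("prices :", c.round(4))
```

The fixed-point step reaches the minimum-power allocation meeting the targets, and the backed-out prices are the control signals under which that allocation is each user's selfish optimum, which is the role the abstract assigns to individual prices at the Nash equilibrium.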
262

Characterization of SIP Signaling-Messages Over OpenSIPS Running On Multicore Server

Awan, Naser Saeed January 2012 (has links)
Over the course of the last decade, the demand for VoIP (Voice over Internet Protocol) applications has increased significantly among enterprises and individuals due to its low cost. This increasing demand has resulted in a significant increase in users who require reliable VoIP communication systems. QoS (Quality of Service) is a major issue in VoIP implementation and a driving factor in the development of real-time multimedia services such as VoIP and videoconferencing. However, achieving QoS for VoIP applications poses certain challenges that need special attention, such as latency and packet loss. VoIP servers running on a single-core software/hardware model suffer from high latency and packet loss due to their limited processing bandwidth. A multicore software/hardware model is a solution for coping with the increasing demands of VoIP and remains an active research area in telecommunications. Using a multicore software/hardware model for VoIP brings several challenges; one of them is to design and implement a QoS benchmarking module for the VoIP client and server on multicore hardware. In this thesis the focus is on the latency and packet loss of SIP messages on the OpenSIPS server. This is done by performing stress testing for QoS benchmarking, where delay and call drop rate are calculated for SIP (Session Initiation Protocol) signaling messages on a parallel VoIP client-server model. The model is built in C for multicore and is used as a simulation tool. SIP is a widely deployed protocol for call establishment, maintenance and termination in VoIP.
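The benchmarking tool described above is written in C; purely as an illustration of the two reported metrics, the sketch below computes call-setup delay and call drop rate from hypothetical per-call timestamps (the record layout and values are assumptions):

```python
from statistics import mean

# Hypothetical per-call records (timestamps in seconds), assumed for illustration.
calls = [
    {"invite_sent": 0.000, "ok_received": 0.182},
    {"invite_sent": 0.010, "ok_received": 0.240},
    {"invite_sent": 0.020, "ok_received": None},   # no 200 OK: counted as dropped
]

setup_delays = [c["ok_received"] - c["invite_sent"]
                for c in calls if c["ok_received"] is not None]
drop_rate = sum(c["ok_received"] is None for c in calls) / len(calls)

print(f"mean call-setup delay: {mean(setup_delays) * 1000:.1f} ms")
print(f"call drop rate       : {drop_rate:.1%}")
```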
263

Link Criticality Characterization for Network Optimization : An approach to reduce packet loss rate in packet-switched networks

Zareafifi, Farhad January 2019 (has links)
Network technologies are continuously advancing and attracting ever-growing interest from industry and society. Network users expect better experience and performance every day. Consequently, network operators need to improve the quality of their services. One way to achieve this goal entails over-provisioning the network resources, which is not economically efficient as it imposes unnecessary costs. Another way is to employ Traffic Engineering (TE) solutions to optimally utilize the current underlying resources by managing traffic distribution in the network. In this thesis, we consider packet-switched networks (PSN), which allow messages to be split across multiple packets as in today’s Internet. Traffic engineering in PSN is a well-known topic, yet current solutions fail to make efficient use of the network resources. The goal of the TE process is to compute a traffic distribution in the network that optimizes a given objective function while satisfying the network capacity constraints (e.g., do not overflow the link capacity with an excessive amount of traffic). A critical aspect of TE tools is the ability to capture the impact of routing a certain amount of traffic through a certain link, also referred to as the link criticality function. Today’s TE tools rely on simplistic link criticality functions that are inaccurate in capturing the network-wide performance of the computed traffic distribution. A good link criticality function allows the TE tools to distribute the traffic in a way that achieves close-to-optimal network performance, e.g., in terms of packet loss and possibly packet latencies. In this thesis, we embark upon the study of link criticality functions and introduce four different criticality functions: 1) LeakyCap, 2) LeakyReLU, 3) SoftCap, and 4) Softplus. We compare and evaluate these four functions against the traditional link criticality function defined by Fortz and Thorup, which aims at capturing the performance degradation of a link given its utilization. To assess the proposed link criticality functions, we designed 57 network scenarios and showed how the link criticality functions affect network performance in terms of packet loss. We used different topologies and considered both constant and bursty types of traffic. Based on our results, the most reliable and effective link criticality function for determining traffic distribution rates is Softplus. Softplus outperformed the Fortz function in 79% of the experiments and was comparable in the remaining 21% of the cases. / Nätverksteknik är ett område under snabb utveckling som röner ett stort och växande intresse från såväl industri som samhälle. Användare av nätverkskommunikation förväntar sig ständigt ökande prestanda och därför behöver nätverksoperatörerna förbättra sina tjänster i motsvarande grad. Ett sätt att möta användarnas ökade krav är att överdimensionera nätverksresurserna, vilket dock leder till onödigt höga kostnader. Ett annat sätt är att använda sig av trafikstyrningslösningar med målet att utnyttja de tillgängliga resurserna så bra som möjligt. I denna avhandling undersöker vi paketswitchade nätverk (PSN) i vilka meddelanden kan delas upp i multipla paket, vilket är den rådande paradigmen för dagens Internet. Även om trafikstyrning (TS) för PSN är ett välkänt ämne så finns det utrymme för förbättringar relativt de lösningar som är kända idag. 
Målet för TS-processen är att beräkna en trafikfördelning i nätverket som optimerar en given målfunktion, samtidigt som nätverkets kapacitetsbegränsningar inte överskrids. En kritisk aspekt hos TS-verktygen är förmågan att fånga påverkan av att sända en viss mängd trafik genom en specifik länk, vilket vi kallar länkkritikalitetsfunktionen. Dagens TS-verktyg använder sig av förenklade länkkritikalitetsfunktioner som inte väl nog beskriver trafikfördelningens påverkan på hela nätverkets prestanda. En bra länkkritikalitetsfunktion möjliggör för TS-verktygen att fördela trafiken på ett sätt som närmar sig optimal nätverksprestanda, till exempel beskrivet som låg paketförlust och låg paketlatens. I denna avhandling undersöker vi länkkritikalitetsfunktioner och föreslår fyra olika funktioner som vi kallar 1) LeakyCap, 2) LeakyReLU, 3) SoftCap, och 4) Softplus. Vi jämför och utvärderar dessa fyra funktioner och inkluderar även klassiska länkkritikalitetsfunktioner som Fortz och Thorup, vilka avser fånga prestandadegraderingen av en länk över graden av utnyttjande. Vi har undersökt 57 olika nätverksscenarier för att bestämma hur de olika länkkritikalitetsfunktionerna påverkar nätverksprestanda i form av paketförlust. Olika topologier har använts och vi har studerat såväl konstant som stötvis flödande trafik. Enligt våra resultat är Softplus den mest tillförlitliga och effektiva länkkritikalitetsfunktionen för att fördela trafiken i ett nätverk. Softplus presterade bättre än Fortz i 79% av våra tester, och var jämförbar i övriga 21%.
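The exact definitions of LeakyCap, LeakyReLU, SoftCap and Softplus are given in the thesis and are not reproduced here; the sketch below only contrasts the well-known Fortz–Thorup piecewise-linear link penalty with a generic softplus-style penalty of link utilization, with assumed shape parameters, to illustrate what a smoother criticality function looks like:

```python
import math

# Fortz-Thorup piecewise-linear penalty of link utilization u (standard slopes).
def fortz_thorup(u: float) -> float:
    starts = [0.0, 1/3, 2/3, 9/10, 1.0, 11/10]
    slopes = [1, 3, 10, 70, 500, 5000]
    cost = 0.0
    for i, (lo, slope) in enumerate(zip(starts, slopes)):
        hi = starts[i + 1] if i + 1 < len(starts) else float("inf")
        if u <= lo:
            break
        cost += slope * (min(u, hi) - lo)
    return cost

# Generic softplus-style penalty; steepness k and knee theta are assumptions.
def softplus_penalty(u: float, k: float = 25.0, theta: float = 0.9) -> float:
    return math.log1p(math.exp(k * (u - theta))) / k

for u in (0.3, 0.6, 0.9, 1.0, 1.1):
    print(f"u={u:.1f}  fortz={fortz_thorup(u):8.3f}  softplus={softplus_penalty(u):6.3f}")
```

The piecewise-linear curve jumps to steep slopes at fixed breakpoints, whereas the softplus-style curve rises smoothly near capacity, which is the qualitative difference the comparison above explores.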
264

WebRTC Quality Control in Contextual Communication Systems

Wang, Wei January 2018 (has links)
Audio and video communication is a universal task with a long history of technologies. Recent examples of these technologies include Skype video calling, Apple's FaceTime, and Google Hangouts. Today, these services offer everyday users the ability to have an interactive conference with both audio and video streams. However, many of these solutions depend on extra plugins or applications being installed on the user's personal computer or mobile device. Some of them are also subject to licensing, which introduces a huge barrier for developers and restrains new companies from entering this area. The aim of Web Real-Time Communications (WebRTC) is to provide direct access to multimedia streams in the browser, thus making it possible to create rich media applications using web technology without the need for plugins or for developers to pay technology license fees. Ericsson develops communication solutions targeting professional and business users. With the increasing possibilities to gather data (via cloud-based applications) about the quality experienced by users in their video conferences, new demands are placed on the infrastructure to handle this data. Additionally, there is the question of how these statistics should be used to automatically control the quality of service (QoS) in WebRTC communication systems. The thesis project deployed a WebRTC quality control service with methods of data processing and modeling to assess the perceived video quality of the ongoing session, and further to produce appropriate actions to remedy poor quality. Lastly, after evaluation on the Ericsson contextual test platform, the project verified that two of the stats parameters used for assessing QoS (network delay and packet loss percentage) have a negative effect on the perceived video quality, but to different degrees. Moreover, the available bandwidth turned out to be an important factor, which should be added as an additional stats parameter to improve the performance of a WebRTC quality control service. / Ljud och videokommunikation är en universell uppgift med en lång historia av teknik. Exempel på dessa teknologier är Skype-videosamtal, Apples FaceTime och Google Hangouts. Idag erbjuder dessa tjänster vardagliga användare möjligheten att ha en interaktiv konferens med både ljud- och videoströmmar. Men många av dessa lösningar beror på extra plugins eller applikationer som installeras på användarens personliga dator eller mobila enhet. Vissa av dem är också föremål för licensiering, införande av ett stort hinder för utvecklare och att hindra nya företag att komma in i detta område. Syftet med Web Real-Time Communications (WebRTC) är att ge direkt åtkomst till multimediaströmmar i webbläsaren, vilket gör det möjligt att skapa rich media-applikationer med webbteknik utan att plugins eller utvecklare behöver betala licensavgifter för teknik. Ericsson utvecklar lösningar för kommunikationsriktning för professionella och företagsanvändare. Med de ökande möjligheterna att samla data (via molnbaserade applikationer) om kvaliteten hos användare på sina videokonferenser ställs nya krav på infrastrukturen för att hantera dessa data. Dessutom är det fråga om hur statistiken ska användas för att automatiskt kontrollera kvaliteten på tjänsten (QoS) i WebRTC-kommunikationssystem. 
Avhandlingsprojektet tillämpade en WebRTC-kvalitetskontrolltjänst med metoder för databehandling och modellering för att bedöma upplevd videokvalitet av den pågående sessionen och vidare producera lämpliga åtgärder för att avhjälpa dålig kvalitet. Slutligen, efter utvärdering på Ericssons kontextuella testplattform, verifierade projektet att två av statistikparametrarna (nätverksfördröjning och paketförlustprocent) för bedömning av QoS har den negativa effekten på upplevd videokvalitet men med olika inflytningsgrad. Dessutom visade den tillgängliga bandbredd att vara en viktig faktor, som bör läggas till som en extra statistikparameter för att förbättra prestanda för en WebRTC-kvalitetskontrolltjänst.
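Purely as an illustration of a stats-driven control step, the sketch below maps reported round-trip delay, packet loss and available bandwidth to a coarse quality score and a remedial action. The thresholds, weights and actions are assumptions, not the model fitted in the thesis or any WebRTC API:

```python
# All thresholds, weights and actions below are assumptions for illustration.
def assess(rtt_ms: float, loss_pct: float, avail_kbps: float) -> tuple[float, str]:
    score = 5.0
    score -= min(2.5, rtt_ms / 200.0)      # network delay degrades quality
    score -= min(2.5, loss_pct * 0.4)      # packet loss degrades it as well
    if avail_kbps < 500:                   # available bandwidth as an extra stat
        score -= 1.0
    score = max(1.0, score)
    if score >= 4.0:
        action = "keep current encoding"
    elif score >= 2.5:
        action = "lower video resolution / bitrate"
    else:
        action = "drop video, keep audio only"
    return round(score, 2), action

print(assess(rtt_ms=80,  loss_pct=0.5, avail_kbps=1200))   # good conditions
print(assess(rtt_ms=350, loss_pct=4.0, avail_kbps=300))    # degraded conditions
```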
265

New quality of service routing algorithms based on local state information. The development and performance evaluation of new bandwidth-constrained and delay-constrained quality of service routing algorithms based on localized routing strategies.

Aldosari, Fahd M. January 2011 (has links)
The exponential growth of Internet applications has created new challenges for the control and administration of large-scale networks, which consist of heterogeneous elements under dynamically changing traffic conditions. These emerging applications need guaranteed service levels, beyond those supported by best-effort networks, to deliver the intended services to the end user. Several models have been proposed for a Quality of Service (QoS) framework that can provide the means to transport these services. It is desirable to find efficient routing strategies that can meet the strict routing requirements of these applications. QoS routing is considered one of the major components of the QoS framework in communication networks. In QoS routing, paths are selected based upon knowledge of the resource availability at network nodes and the QoS requirements of traffic. Several QoS routing schemes have been proposed that differ in the way they gather information about the network state and the way they select paths based on this information. The biggest downside of current QoS routing schemes is the frequent maintenance and distribution of global state information across the network, which imposes huge communication and processing overheads. Consequently, scalability is a major issue in designing efficient QoS routing algorithms, due to the high costs of the associated overheads. Moreover, the inaccuracy and staleness of global state information, caused by relatively long update intervals, is another problem that can significantly deteriorate routing performance. Localized QoS routing, where source nodes make routing decisions based solely on statistics collected locally, was proposed relatively recently as a viable alternative to global QoS routing. It has shown promising results in achieving good routing performance, while at the same time eliminating many scalability-related problems. In localized QoS routing, each source-destination pair needs to determine a set of candidate paths from which a path will be selected to route incoming flows. The goal of this thesis is to enhance the scalability of QoS routing by investigating and developing new models and algorithms based on the localized QoS routing approach. For this thesis, we have extensively studied the localized QoS routing approach and demonstrated that it can achieve higher routing performance with lower overheads than global QoS routing schemes. Existing localized routing algorithms, Proportional Sticky Routing (PSR) and Credit-Based Routing (CBR), use the blocking probability of candidate paths as the criterion for selecting routing paths, based on either flow proportions or a crediting mechanism, respectively. Routing based on the blocking probability of candidate paths may not always reflect the most accurate state of the network. This has motivated the search for alternative localized routing algorithms, and to this end we have made the following contributions. First, three localized bandwidth-constrained QoS routing algorithms have been proposed: two are based on a source routing strategy and the third on a distributed routing strategy. All algorithms utilize the quality of links rather than the quality of paths in order to make routing decisions. Second, a dynamic precautionary mechanism was used with the proposed algorithms to prevent candidate paths from reaching critical quality levels. Third, a localized delay-constrained QoS routing algorithm was proposed to provide routing with an end-to-end delay guarantee. 
We compared the performance of the proposed localized QoS routing algorithms with other localized and global QoS routing algorithms under different network topologies and different traffic conditions. Simulation results show that the proposed algorithms outperform the other algorithms in terms of routing performance and resource balancing, and have superior computational complexity and scalability features. / Umm AlQura University, Saudi Arabia
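As an illustration of localized, link-quality-based path selection with a precautionary mechanism, the sketch below picks the candidate path with the widest bottleneck while refusing paths whose residual bandwidth would fall below a reserve threshold. The data structures and numbers are assumptions, not the thesis's actual algorithms:

```python
# Illustrative sketch only: candidate-path residuals and the reserve are assumed.
CAUTION = 2.0  # Mb/s of residual bandwidth to keep in reserve (assumed)

candidate_paths = {                       # locally maintained link residuals
    "P1": [9.0, 6.5, 7.2],
    "P2": [4.0, 3.5, 8.0],
    "P3": [2.4, 9.0, 9.0],
}

def select_path(paths, demand_mbps):
    usable = {name: min(links) for name, links in paths.items()
              if min(links) - demand_mbps >= CAUTION}
    if not usable:
        return None                       # block the flow rather than overload
    return max(usable, key=usable.get)    # widest remaining bottleneck wins

print(select_path(candidate_paths, demand_mbps=1.5))   # -> "P1"
print(select_path(candidate_paths, demand_mbps=6.0))   # -> None (blocked)
```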
266

Novel localised quality of service routing algorithms. Performance evaluation of some new localised quality of service routing algorithms based on bandwidth and delay as the metrics for candidate path selection.

Alghamdi, Turki A. January 2010 (has links)
The growing demand for a variety of Internet applications requires the management of large-scale networks by efficient Quality of Service (QoS) routing, which contributes considerably to the QoS architecture. The biggest contemporary drawback in the maintenance and distribution of the global state is the increase in communication overheads. Imbalance in the network, due to the frequent use of the links assigned to the shortest path, which retain most of the network load, is regarded as a major problem for best-effort service. Localised QoS routing, where the source nodes use statistics collected locally, is already described in contemporary sources as more advantageous. Scalability, however, is still one of the main concerns of existing localised QoS routing algorithms. The main aim of this thesis is to present and validate new localised algorithms in order to improve the scalability of QoS routing. Existing localised routing algorithms, Credit-Based Routing (CBR) and Proportional Sticky Routing (PSR), use the blocking probability as a factor in selecting routing paths and work with either credits or flow proportions, respectively, which makes it impossible to have up-to-date information. Therefore, our proposed Highest Minimum Bandwidth (HMB) and Highest Average Bottleneck Bandwidth History (HABBH) algorithms utilise bandwidth as the direct QoS criterion to select routing paths. We introduce an Integrated Delay-Based Routing and Admission Control mechanism. Using this technique, Minimum Total Delay (MTD), Low Fraction Failure (LFF) and Low Path Failure (LPF) were compared against the global QoS routing scheme Dijkstra and the localised High Path Credit (HPC) scheme, and showed superior performance. Simulation with non-uniformly distributed traffic reduced the blocking probability of the proposed algorithms. Therefore, we advocate the algorithms presented in the thesis as a scalable approach to controlling large networks. We strongly suggest that bandwidth and mean delay are feasible QoS constraints for selecting optimal paths from locally collected information. We have demonstrated that a few good candidate paths can be selected to balance the load in the network and minimise communication overhead by applying the disjoint-paths method, recalculation of the candidate path set and a dynamic path selection method. Thus, localised QoS routing can be used as a load-balancing tool to improve network resource utilisation. A combination of delay and bandwidth is one of the future prospects of our work, and the positive results presented in the thesis suggest that further development of a distributed approach to candidate path selection may enhance the proposed localised algorithms. / Umm AlQura University in Mecca
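In the spirit of HABBH (whose exact definition is given in the thesis, not here), the sketch below keeps a sliding window of locally observed bottleneck bandwidths per candidate path and routes on the highest windowed average; the window size and update rule are assumptions:

```python
from collections import defaultdict, deque

# Illustrative sketch only: window length and observations are assumptions.
WINDOW = 5
history = defaultdict(lambda: deque(maxlen=WINDOW))

def observe(path, bottleneck_mbps):
    history[path].append(bottleneck_mbps)   # locally collected measurement

def pick_path(paths):
    scored = {p: sum(history[p]) / len(history[p]) for p in paths if history[p]}
    return max(scored, key=scored.get) if scored else None

for sample in [("P1", 6.0), ("P1", 5.5), ("P2", 7.0), ("P2", 2.0), ("P1", 6.2)]:
    observe(*sample)

print(pick_path(["P1", "P2"]))   # -> "P1" (higher average bottleneck history)
```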
267

The Extended Quality-of-Service Resource Allocation Model

Bopanna, Sumanth M. January 2005 (has links)
No description available.
268

Quality of service analysis for distributed multimedia systems in a local area networking environment

Chung, Edward Chi-Fai January 1996 (has links)
No description available.
269

Modellierung des QoS-QoE-Zusammenhangs für mobile Dienste und empirische Bestimmung in einem Netzemulations-Testbed / Modelling of the Relation between QoS and QoE for mobile Services and an empirical Evaluation in a Testbed for Network Emulation

Kurze, Albrecht 03 June 2016 (has links) (PDF)
In der theoretischen Auseinandersetzung mit mobilen Internet-Diensten sind Quality of Service (QoS) und Quality of Experience (QoE) als hochkomplexe und verbundene Konzepte zu erkennen. QoS umfasst dabei die technische Sicht auf das Telekommunikationsnetz, charakterisiert durch leistungsrelevante Parameterwerte (z. B. Durchsatz und Latenz). QoE hingegen bezieht sich auf die Bewertung des Nutzererlebnisses (z. B. Zufriedenheit und Akzeptanz). Zur gemeinsamen Erklärung bedarf es einer multi- bzw. interdisziplinären Betrachtung zwischen Ingenieurs- und Humanwissenschaften, da neben der Technik auch der Mensch als Nutzer in den QoS-QoE-Zusammenhang involviert ist. Ein mehrschichtiges Modell erfasst die relevanten Einflussfaktoren und internen Zusammenhänge zwischen QoS und QoE sowohl aus Netz- als auch Nutzersicht. Zur Quantifizierung des Zusammenhangs konkreter Werte in einer empirischen QoE-Evaluation wurde ein umfangreiches psychophysikalisches Laborexperiment konzipiert. Das dafür entwickelte Netzemulations-Testbed erlaubt mobiltypische Netz- und Nutzungssituationen gezielt in einem Testparcours zusammenzubringen. Die formulierten Prinzipien zur Testrelevanz, -eignung und -effizienz berücksichtigen hierbei die Besonderheiten des Testaufbaus und -designs mit echten Endgeräten und Diensten. Die Ergebnisse von über 200 Probanden bestätigen die vorhergesagten QoS-QoE-Charakteristiken der sechs untersuchten Dienste als kontinuierlich-elastisch bzw. sprunghaft-fest. Dienstspezifisch lässt sich jeweils von einem angestrebten Grad der Nutzerzufriedenheit auf die notwendigen Werte der QoS-Netzparameter schließen, woraus sich ein QoS-QoE-Zufriedenheitskorridor zwischen einem unteren und oberen Schwellwert ergibt. Teilweise sind dabei QoS-unabhängige Faktoren, z. B. die Art der Präsentation der Stimuli in der App auf dem Endgerät, als ebenso relevant zu erkennen wie die QoS-Netzparameter selbst. / The thesis is centered on the relationship of Quality of Service (QoS) and Quality of Experience (QoE) for mobile Internet services. While QoS covers the technical view on the telecommunications network characterized by performance-related parameter values (e.g. throughput and latency), QoE refers to the assessment of the user experience (e.g. satisfaction and acceptability) in the use of the services. In the thesis QoS and QoE are revealed as highly complex and related concepts in theoretical contemplation. Integrating both concepts requires a multidisciplinary or interdisciplinary approach between engineering and human sciences to consider both - technological aspects of the network as well the human user. The designed multilayered model appropriately integrates the technical network view as well as the user's perspective by considering all relevant factors of influence and all internal relationships between QoS and QoE. The conducted extensive psychophysical laboratory experiment with real users, devices and services quantifies the relationship between specific QoS values and specific QoE values. A testbed developed for network emulation allows combining typical mobile network situations with typical usage situations in a controlled and focused manner. The three elaborated principles to test for relevance, suitability and efficiency take into account the special features of the test setup and test design. Test results gained from more than 200 volunteers confirm the predicted QoS-QoE-characteristics of the six tested mobile services to be either elastic or non-elastic. 
From the desired degree of user satisfaction, the necessary values of the QoS network parameters can be inferred for each service, which yields a QoS-QoE satisfaction corridor between a lower and an upper threshold value. The findings also show that QoS-independent factors, e.g. the type of presentation of the stimuli in the app on the user’s device, can be as relevant for QoE as the evaluated QoS network parameters themselves.
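As an illustration of elastic versus non-elastic service characteristics and of a satisfaction corridor, the sketch below uses an assumed logistic-style QoE curve for an elastic service and a step-like curve for a non-elastic one, and inverts the elastic curve at a target satisfaction level to obtain a lower threshold. All shapes and numbers are assumptions, not the models fitted in the thesis:

```python
import math

# Assumed service models for illustration (not the thesis's fitted curves).
def qoe_elastic(throughput_kbps: float, half: float = 800.0) -> float:
    """Gradual, logistic-style satisfaction on a 1..5 scale."""
    return 1.0 + 4.0 / (1.0 + math.exp(-(throughput_kbps - half) / 250.0))

def qoe_nonelastic(throughput_kbps: float, minimum: float = 1200.0) -> float:
    """Step-like satisfaction: the service either works acceptably or not."""
    return 4.5 if throughput_kbps >= minimum else 1.5

def lower_threshold(target_qoe: float, half: float = 800.0) -> float:
    """Invert the elastic curve: throughput needed to reach a target QoE."""
    return half - 250.0 * math.log(4.0 / (target_qoe - 1.0) - 1.0)

print("QoE at 600 kbps :", round(qoe_elastic(600), 2), "(elastic)",
      qoe_nonelastic(600), "(non-elastic)")
print("corridor lower edge for target QoE 4.0:", round(lower_threshold(4.0)), "kbps")
```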
270

Contribution à l'amélioration de la qualité de service dans les réseaux sans-fil multi-sauts / Contribution to improving quality of service in multi-hop wireless networks

Moad, Dalil 12 November 2015 (has links)
Les réseaux sans fil 802.11 sont en train d'être considérés comme étant la pierre angulaire des systèmes de communication autonomes, en permettant aux usagers de communiquer les uns avec les autres via des stations de base fixées à des endroits bien précis, par l'intermédiaire de protocoles de communication comme les protocoles de routage ad hoc. Le standard IEEE 802.11 propose des spécifications pour les deux couches basses (MAC et Physique) du modèle OSI. La couche MAC (Medium Access Control) introduit deux mécanismes d'accès au médium sans fil qui sont différents l'un de l'autre. Avec le mécanisme DCF (accès au canal distribué ou Distributed Coordination Function), l'accès au canal s'exécute dans chaque station sans faire appel à une unité centrale. Avec le mécanisme PCF (Point Coordination Function), contrairement au mécanisme DCF, l'accès au canal se fait à l'aide d'une unité centrale. Le mécanisme le plus utilisé par la norme 802.11 est DCF vu qu'il ne nécessite pas d'infrastructure au déploiement. Pour améliorer la qualité de service dans les réseaux sans fil multi-sauts, cette thèse aborde cette problématique dans deux couches de la pile protocolaire, à savoir la couche routage et la couche MAC. Elle améliore le routage à QoS en utilisant le protocole de routage à état de liens optimisé (OLSR) et améliore aussi l'efficacité de l'accès au médium sans fil lors du fonctionnement de la couche MAC dans le mode le plus courant, DCF. Pour l'amélioration du routage, nous proposons une approche basée sur le graphe de conflit pour l'estimation de la bande passante partagée entre les nœuds adjacents. Pour la couche MAC, nous proposons un nouveau schéma de backoff, nommé l'algorithme Backoff de Padovan (PBA), pour améliorer l'efficacité de l'accès au médium sans fil dans les réseaux sans fil mobiles ad hoc (MANETs). / IEEE 802.11 based wireless networks are considered the cornerstone of autonomous communication systems. These networks allow users to communicate with each other via base stations deployed in specific locations through a set of dedicated communication protocols like ad hoc routing protocols. The IEEE 802.11 standard proposes specifications for both the physical and MAC layers of the OSI model. The MAC layer defines different types of access to the wireless medium, as explained below. In the DCF (Distributed Coordination Function) mechanism, access to the medium is executed locally in each station. In the PCF (Point Coordination Function) method, unlike the DCF mechanism, access to the medium is managed by a central unit. The most widespread mechanism among them is the DCF mode, as it does not require any infrastructure deployment. To improve the Quality of Service (QoS) offered to the different applications in multi-hop wireless networks, this thesis proposes original solutions to enhance the efficiency of certain protocols in two different layers of the OSI stack, i.e., the routing and MAC layers. More specifically, our proposed solutions enable higher efficiency of the OLSR protocol and ensure more efficient usage of the available bandwidth through the designed Padovan-based medium access scheme operating in DCF mode. The routing approach used in OLSR is improved by applying conflict graphs to acquire a more accurate estimation of the bandwidth shared with the adjacent nodes. At the MAC layer, the number of collisions in dense networks is significantly reduced by designing a new backoff scheme dubbed the Padovan Backoff Algorithm (PBA).
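The thesis's exact PBA rules are not reproduced here; as an illustration of why a Padovan-sequence schedule (P(n) = P(n−2) + P(n−3), growth ratio ≈ 1.32) backs off more gently than binary exponential doubling, the sketch below compares the two contention-window schedules. CW_MIN, the cap and the index offset are assumptions:

```python
import random

# Padovan sequence: P(0) = P(1) = P(2) = 1, P(n) = P(n-2) + P(n-3).
def padovan(n: int) -> int:
    p = [1, 1, 1]
    for i in range(3, n + 1):
        p.append(p[i - 2] + p[i - 3])
    return p[n]

CW_MIN, CW_MAX = 16, 1024    # typical 802.11-style bounds (assumed here)

def cw_beb(retries: int) -> int:
    return min(CW_MAX, CW_MIN * 2 ** retries)            # classic doubling

def cw_padovan(retries: int) -> int:
    return min(CW_MAX, CW_MIN * padovan(retries + 2))    # index offset assumed

for r in range(6):
    print(f"retry {r}: BEB CW = {cw_beb(r):4d}   Padovan CW = {cw_padovan(r):4d}")

# A station would then wait a uniformly drawn number of slots, e.g.:
print("backoff slots after 3 collisions:", random.randrange(cw_padovan(3)))
```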
