31
Best effort QoS support routing in mobile ad hoc networks. Luo, Heng. January 2012 (has links)
In the past decades, mobile traffic generated by devices such as smartphones, iPhones, laptops and mobile gateways has been growing rapidly. While traditional direct connection techniques evolve to provide better access to the Internet, a new type of wireless network, the mobile ad hoc network (MANET), has emerged. A MANET differs from a direct connection network in that it is multi-hopping and self-organizing and thus able to operate without the help of any fixed infrastructure. However, challenges such as dynamic topology, unreliable wireless links and resource constraints impede the wide application of MANETs. Routing in a MANET is complex because it has to react efficiently to unfavourable conditions and support traditional IP services. In addition, Quality of Service (QoS) provision is required to support the rapid growth of video in mobile traffic. As a consequence, tremendous efforts have been devoted to the design of QoS routing in MANETs, leading to the emergence of a number of QoS support techniques. However, the application-independent nature of QoS routing protocols results in the absence of a one-for-all solution for MANETs. Meanwhile, the relative importance of QoS metrics in real applications is not considered in many studies. A Best Effort QoS support (BEQoS) routing model, which evaluates and ranks alternative routing protocols by considering the relative importance of multiple QoS metrics, is proposed in this thesis. BEQoS comprises two algorithms, SAW-AHP and FPP, for different scenarios. The former is suitable for cases where uncertainty factors such as the standard deviation can be neglected, while the latter considers the uncertainty of the problem. SAW-AHP is a combination of Simple Additive Weighting and the Analytic Hierarchy Process, in which the decision maker or network operator first assigns his/her preference for each metric as a specific number according to given rules. The comparison matrices are composed accordingly, from which the synthetic weights for the alternatives are derived. The alternative with the highest weight is the optimal protocol. The reliability and efficiency of SAW-AHP are validated through simulations. An integrated architecture using the evaluation results of SAW-AHP is proposed, which incorporates ad hoc technology into the existing WLAN and therefore provides a solution to the last-mile access problem. The costs and gains induced by protocol selection are also discussed. The thesis concludes by describing the potential application areas of the proposed method. SAW-AHP is further extended to fuzzy SAW-AHP to accommodate the vagueness of the decision maker and the complexity of the problem, such as the standard deviation in simulations. Triangular fuzzy numbers are used to replace the crisp numbers in the comparison matrices of traditional AHP, and Fuzzy Preference Programming (FPP) is employed to obtain crisp synthetic weights for the alternatives, based on which they are ranked. The reliability and efficiency of SAW-FPP are demonstrated by simulations.
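To make the ranking step concrete, here is a minimal sketch (not the thesis's implementation) of the SAW-AHP idea: an AHP-style pairwise comparison matrix over three QoS metrics is reduced to importance weights, and Simple Additive Weighting combines normalised per-protocol scores into a ranking. The metric names, example values and the geometric-mean approximation of the priority vector are illustrative assumptions.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) expressing the
# operator's relative preference among three QoS metrics:
# delay vs. packet delivery ratio (PDR) vs. throughput.
comparison = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP priority weights, approximated by the normalised geometric mean of rows.
geo_mean = comparison.prod(axis=1) ** (1.0 / comparison.shape[1])
weights = geo_mean / geo_mean.sum()

# Illustrative per-protocol metric values gathered from simulation
# (delay in ms is a cost; PDR and throughput are benefits).
protocols = {
    "AODV": {"delay": 120.0, "pdr": 0.92, "throughput": 1.8},
    "DSR":  {"delay":  95.0, "pdr": 0.88, "throughput": 1.5},
    "OLSR": {"delay": 140.0, "pdr": 0.95, "throughput": 2.1},
}

def saw_score(metrics: dict) -> float:
    """Simple Additive Weighting over min-max normalised metric scores."""
    delays = [m["delay"] for m in protocols.values()]
    pdrs = [m["pdr"] for m in protocols.values()]
    thrs = [m["throughput"] for m in protocols.values()]
    s_delay = min(delays) / metrics["delay"]        # cost: lower is better
    s_pdr = metrics["pdr"] / max(pdrs)              # benefit: higher is better
    s_thr = metrics["throughput"] / max(thrs)       # benefit: higher is better
    return float(np.dot(weights, [s_delay, s_pdr, s_thr]))

ranking = sorted(protocols, key=lambda p: saw_score(protocols[p]), reverse=True)
print("Protocol ranking (best first):", ranking)
```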
32
On the Quality of Service of mobile cloud gaming using GamingAnywhere. Grandhi, Veera Venkata Santosh Surya Ganesh. January 2016 (has links)
In recent years, mobile gaming has grown tremendously because of its rich entertainment features. Mobile cloud gaming is a promising technology that overcomes inherent device restrictions such as limited computational capacity and battery life. GamingAnywhere is an open-source cloud gaming system which is used in this thesis to measure the Quality of Service (QoS) of mobile cloud gaming. The aim of the thesis is to measure the QoS of GamingAnywhere for mobile cloud gaming. Games are streamed from the server to the mobile client. In our study, QoS is measured using the Differentiated Services (DiffServ) architecture for traffic shaping. The research method is carried out using an experimental testbed. Dummynet is used for traffic shaping. Performance is measured in terms of bitrate, packet loss, jitter, and frame rate. Different resolutions of the game are considered in our empirical research, and our results show that the frame rate and bitrate increased under the impact of network delay.
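As an illustration of how two of the measured indicators, packet loss and inter-arrival jitter, can be derived from a captured packet trace, a minimal sketch follows; the trace values and field layout are invented, and the thesis itself relied on Dummynet shaping and its own measurement tooling.

```python
# Minimal sketch: compute packet loss and mean inter-arrival jitter from a
# (hypothetical) list of (sequence_number, arrival_time_seconds) tuples
# captured at the mobile client.

received = [
    (0, 0.000), (1, 0.021), (2, 0.039), (4, 0.085), (5, 0.101),  # seq 3 lost
]

sent_count = 6  # packets emitted by the game server in this window (assumed)

loss_ratio = 1.0 - len(received) / sent_count

# Jitter as the mean absolute variation of consecutive inter-arrival gaps.
gaps = [t2 - t1 for (_, t1), (_, t2) in zip(received, received[1:])]
jitter = sum(abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:])) / max(len(gaps) - 1, 1)

print(f"packet loss: {loss_ratio:.1%}, mean jitter: {jitter * 1000:.2f} ms")
```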
33
End-to-End Quality of Service Guarantees for Wireless Sensor Networks. Dobslaw, Felix. January 2015 (has links)
Wireless sensor networks have been a key driver of innovation and societal progress over the last three decades. They allow for simplicity because they eliminate cabling complexity while increasing the flexibility of extending or adjusting networks to changing demands. Wireless sensor networks are a powerful means of filling the technological gap for ever-larger industrial sites of growing interconnection and broader integration. Nonetheless, the management of wireless networks is difficult in situations wherein communication requires application-specific, network-wide quality of service guarantees. A minimum end-to-end reliability for packet arrival close to 100% in combination with latency bounds in the millisecond range must be fulfilled in many mission-critical applications. The problem addressed in this thesis is the demand for algorithmic support for end-to-end quality of service guarantees in mission-critical wireless sensor networks. Wireless sensors have traditionally been used to collect non-critical periodic readings; however, the intriguing advantages of wireless technologies in terms of their flexibility and cost effectiveness justify the exploration of their potential for control and mission-critical applications, subject to the requirements of ultra-reliable communication, in harsh and dynamically changing environments such as manufacturing factories, oil rigs, and power plants. This thesis provides three main contributions in the scope of wireless sensor networks. First, it presents a scalable algorithm that guarantees end-to-end reliability through scheduling. Second, it presents a cross-layer optimization/configuration framework that can be customized to meet multiple end-to-end quality of service criteria simultaneously. Third, it proposes an extension of the framework used to enable service differentiation and priority handling. Adaptive, scalable, and fast algorithms are proposed. The cross-layer framework is based on a genetic algorithm that assesses the quality of service of the network as a whole and integrates the physical layer, medium access control layer, network layer, and transport layer. Algorithm performance and scalability are verified through numerous simulations on hundreds of convergecast topologies by comparing the proposed algorithms with other recently proposed algorithms for ensuring reliable packet delivery. The results show that the proposed SchedEx scheduling algorithm is both significantly more scalable and better performing than the competing slot-based scheduling algorithms. The integrated solving of routing and scheduling using a genetic algorithm further improves on the original results by more than 30% in terms of latency. The proposed framework provides live graphical feedback about potential bottlenecks and may be used for analysis and debugging as well as the planning of green-field networks. SchedEx is found to be an adaptive, scalable, and fast algorithm that is capable of ensuring the end-to-end reliability of packet arrival throughout the network. SchedEx-GA successfully identifies network configurations, thus integrating the routing and scheduling decisions for networks with diverse traffic priority levels. Further, directions for future research are presented, including the extension of simulations to experimental work and the consideration of alternative network topologies.
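To illustrate the kind of guarantee that reliability-aware scheduling targets, the following is a back-of-the-envelope sketch (not SchedEx itself): if each hop on a convergecast path is scheduled a number of transmission slots and each attempt succeeds independently with an assumed probability, the end-to-end reliability can be bounded and the minimum per-hop slot count for a target derived. All numbers are assumptions.

```python
# Hedged sketch: lower-bound the end-to-end reliability of a multi-hop path
# when hop i is scheduled slots[i] transmission attempts and every attempt
# succeeds independently with probability p. Values are illustrative.

def hop_reliability(p: float, slots: int) -> float:
    """Probability that at least one of `slots` attempts on a hop succeeds."""
    return 1.0 - (1.0 - p) ** slots

def end_to_end_reliability(p: float, slots_per_hop: list[int]) -> float:
    """Product of per-hop reliabilities along the path."""
    r = 1.0
    for s in slots_per_hop:
        r *= hop_reliability(p, s)
    return r

def min_slots_for_target(p: float, hops: int, target: float) -> int:
    """Smallest uniform per-hop slot count meeting an end-to-end target."""
    slots = 1
    while end_to_end_reliability(p, [slots] * hops) < target:
        slots += 1
    return slots

if __name__ == "__main__":
    p = 0.8            # assumed per-attempt link success probability
    hops = 5           # path length towards the sink (assumed)
    target = 0.999     # "close to 100%" end-to-end reliability requirement
    print(min_slots_for_target(p, hops, target), "slots per hop needed")
```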
At the time of the doctoral defence the following papers were unpublished: paper 4 (manuscript under review), paper 5 (manuscript under review).
34
Sustainable Throughput Measurements for Video Streaming. Nutalapati, Hima Bindu. January 2017 (has links)
With the increase in demand for video streaming services on handheld mobile terminals with limited battery life, it is important to maintain the user's Quality of Experience (QoE) while taking resource consumption into consideration. The goal is therefore to offer as good a quality as feasible while avoiding as much user annoyance as possible, which means delivering the video without uncontrollable quality distortions. This is possible when an optimal (or desirable) throughput value is chosen such that exceeding that threshold would enter a region of unstable QoE. Hence, the concept of QoE-aware sustainable throughput is introduced as the maximal value of the desirable throughput that avoids disturbances in QoE due to delivery issues, or keeps them at an acceptable minimum. The thesis aims at measuring sustainable throughput values when video streams of different resolutions are streamed from the server to a mobile client over wireless links, in the presence of the network disturbances packet loss and delay. The video streams are collected at the client side for quality assessment, and the maximal throughput at which the QoE problems can still be kept at a desired level is determined. Scatter plots were generated for the individual opinion scores and their corresponding throughput values for each disturbance case, and regression analysis was performed to find the best fit for the observed data. Logarithmic, exponential, linear and power regressions were considered in this thesis. The R-squared value was calculated for each regression model, and the model with the R-squared value closest to 1 was determined to be the best fit; the power and logarithmic regression models had the R-squared values closest to 1. Better quality ratings were observed for the low-resolution videos in the presence of packet loss and delay for the considered test cases. It can be observed that the QoE disturbances can be kept at a desirable level for the low-resolution videos; among the test cases considered, the 360px video is more resilient in the case of high delay and packet loss values and has better opinion scores. Hence, the throughput is sustainable at this threshold.
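The model-selection step described above can be sketched as follows; the opinion-score data points are invented placeholders, and scipy's curve_fit stands in for whatever fitting tooling was actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (throughput in Mbit/s, mean opinion score) pairs for one test case.
x = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
y = np.array([1.8, 2.6, 3.3, 3.7, 4.0, 4.3, 4.4])

# The four candidate regression models considered in the thesis.
models = {
    "linear":      lambda x, a, b: a * x + b,
    "logarithmic": lambda x, a, b: a * np.log(x) + b,
    "exponential": lambda x, a, b: a * np.exp(b * x),
    "power":       lambda x, a, b: a * np.power(x, b),
}

def r_squared(y_obs, y_pred):
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
    return 1.0 - ss_res / ss_tot

# Fit each model and report its coefficient of determination (R^2).
for name, f in models.items():
    params, _ = curve_fit(f, x, y, p0=[1.0, 0.5], maxfev=10000)
    print(f"{name:12s} R^2 = {r_squared(y, f(x, *params)):.3f}")
```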
35
Energy efficiency heterogeneous wireless communication network with QoS support. Hou, Ying. January 2013 (has links)
The overarching goal of this thesis is to investigate network architectures and find the trade-off between low overall energy use and maintaining, or even improving, the level of quality of service (QoS). The ubiquitous wireless communications environment supports the exploration of different network architectures and techniques, the so-called heterogeneous network. Two kinds of heterogeneous architecture are considered: a combined cellular and femtocell network, and a combined cellular, femtocell and Wireless Local Area Network (WLAN) network. This thesis concludes that the investigated heterogeneous networks can significantly reduce the overall power consumption, depending on the uptake of femtocells and WLANs. QoS also remains high while the power consumption drops. The main energy saving comes from reducing the macrocell base station's embodied and operational energy. When QoS is evaluated for the combined cellular and femtocell architecture, it is suggested that the use of resource scheduling for femtocells within the macrocell is crucial, since femtocell performance is significantly affected by interference when femtocells are installed in a co-channel system. Additionally, the femtocell transmission power mode is investigated using either a variable or a fixed power level. To achieve both energy efficiency and QoS, the choice of system configuration should change according to the density of the femtocell deployment. When the deployment of femtocells is combined with WLANs, more users are able to experience a higher QoS. Due to the future increase in data traffic and smartphone usage, WLANs become more important for offloading data from the macrocell, reducing power consumption and also increasing the available bandwidth. The localised heterogeneous network is a promising technique for achieving a power-efficient, high-QoS system.
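A toy numerical sketch of the trade-off discussed above is shown below; the power figures and offloading fractions are assumptions chosen purely to illustrate how offloading macrocell load onto femtocells and WLANs can lower total consumption, and are not values from the thesis.

```python
# Toy model (illustrative numbers only): total downlink power of a macrocell
# whose load-dependent consumption shrinks as traffic is offloaded to
# femtocells and WLAN access points.

MACRO_FIXED_W = 800.0   # assumed macrocell overhead (cooling, baseband, ...)
MACRO_LOAD_W = 500.0    # assumed extra power at full macrocell load
FEMTO_W = 8.0           # assumed power per femtocell
WLAN_W = 6.0            # assumed power per WLAN access point

def total_power(offload_fraction: float, n_femto: int, n_wlan: int) -> float:
    """Macrocell power scaled by remaining load, plus small-cell power."""
    macro = MACRO_FIXED_W + MACRO_LOAD_W * (1.0 - offload_fraction)
    return macro + n_femto * FEMTO_W + n_wlan * WLAN_W

for frac, femto, wlan in [(0.0, 0, 0), (0.3, 20, 0), (0.6, 30, 15)]:
    print(f"offload {frac:.0%}: {total_power(frac, femto, wlan):7.1f} W")
```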
36
Operational benefit of implementing VoIP in a tactical environment / Operational benefit of implementing Voice Over Internet Protocol in a tactical environment. Lewis, Rosemary. 06 1900 (has links)
Approved for public release; distribution is unlimited. / In this thesis, Voice over Internet Protocol (VoIP) technology will be explored and a recommendation on the operational benefit of VoIP will be provided. A network model will be used to demonstrate the improvement of voice end-to-end delay achieved by implementing quality of service (QoS) controls. An overview of VoIP requirements will be covered and recommended standards will be reviewed. A clear definition of a Battle Group will be presented and an overview of current analog RF voice technology will be given. A comparison of RF voice technology and VoIP will be modeled using OPNET Modeler 9.0. / Lieutenant, United States Navy
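As background to the end-to-end delay metric modelled in the thesis, the following sketch adds up a typical one-way VoIP delay budget and checks it against the commonly cited ITU-T G.114 guideline of 150 ms; the component values are illustrative assumptions, not OPNET results.

```python
# Hedged sketch: one-way VoIP delay budget (milliseconds). Component values
# are illustrative; the 150 ms target follows the ITU-T G.114 guideline.

budget_ms = {
    "codec_and_lookahead": 30.0,   # assumed encoding delay for a low-rate codec
    "packetization":       20.0,
    "network_transit":     60.0,   # assumed propagation + queuing across the Battle Group links
    "jitter_buffer":       40.0,
}

one_way_delay = sum(budget_ms.values())
verdict = "within" if one_way_delay <= 150 else "exceeds"
print(f"one-way delay: {one_way_delay:.0f} ms ({verdict} the 150 ms G.114 guideline)")
```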
37
QoS-aware adaptive resource management in OFDMA networks. Li, Aini. January 2017 (has links)
One important feature of the future communication network is that users in the network are required to experience a guaranteed high quality of service (QoS) due to the popularity of multimedia applications. This thesis studies QoS-aware radio resource management schemes in different OFDMA network scenarios. Motivated by the fact that in current 4G networks QoS provisioning is severely constrained by the availability of radio resources, especially the scarce spectrum as well as the unbalanced traffic distribution from cell to cell, a joint antenna and subcarrier management scheme is proposed to maximise user satisfaction with load balancing. An antenna pattern update mechanism is further investigated for moving users. Combining network densification with cloud computing technologies, the cloud radio access network (C-RAN) has been proposed as the emerging 5G network architecture, consisting of a baseband unit (BBU) pool, remote radio heads (RRHs) and fronthaul links. With cloud-based information sharing through the BBU pool, a joint resource block and power allocation scheme is proposed to maximise the number of satisfied users whose required QoS is achieved. In this scenario, users are served by high-power nodes only. With spatial reuse of the system bandwidth through network densification, users' QoS provisioning can be ensured, but this introduces energy and operating efficiency issues. Therefore, two network energy optimisation schemes with QoS guarantees are further studied for C-RANs: an energy-effective network deployment scheme is designed for C-RAN-based small cells, and a joint RRH selection and user association scheme is investigated in heterogeneous C-RANs. Thorough theoretical analysis is conducted in the development of all proposed algorithms, and the effectiveness of all proposed algorithms is validated via comprehensive simulations.
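A minimal sketch of the kind of allocation problem addressed here, greedily assigning OFDMA resource blocks until each user's required rate is met, is given below; the channel gains, bandwidth and rate targets are invented, and the thesis's actual joint optimisation is considerably more elaborate.

```python
import math

# Hedged sketch: greedily hand out OFDMA resource blocks (RBs) to the user
# furthest from its required rate, using Shannon capacity per RB.
RB_BANDWIDTH_HZ = 180e3       # LTE-style resource block width
TX_POWER_PER_RB_W = 0.1       # assumed equal power per RB
NOISE_W = 1e-13               # assumed noise power per RB

users = {                      # assumed channel gains and required rates (bit/s)
    "u1": {"gain": 1e-10, "required": 2e6, "rate": 0.0},
    "u2": {"gain": 5e-11, "required": 1e6, "rate": 0.0},
    "u3": {"gain": 2e-10, "required": 3e6, "rate": 0.0},
}

def rb_rate(gain: float) -> float:
    """Achievable rate on one RB for a given channel gain (Shannon capacity)."""
    snr = TX_POWER_PER_RB_W * gain / NOISE_W
    return RB_BANDWIDTH_HZ * math.log2(1.0 + snr)

for _ in range(50):                               # 50 RBs available (assumed)
    unsatisfied = {u: d for u, d in users.items() if d["rate"] < d["required"]}
    if not unsatisfied:
        break
    # Give the next RB to the user with the largest remaining rate deficit.
    u = max(unsatisfied, key=lambda k: users[k]["required"] - users[k]["rate"])
    users[u]["rate"] += rb_rate(users[u]["gain"])

satisfied = [u for u, d in users.items() if d["rate"] >= d["required"]]
print("satisfied users:", satisfied)
```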
38
QoE evaluation across a range of user age groups in video applications. Roshan, Mujtaba. January 2018 (has links)
Quality of Service (QoS) measures are network parameters: delay, jitter, and loss; they do not reflect the actual quality of the service received by the end user. To get an actual view of the performance from a user's perspective, the Quality of Experience (QoE) measure is now used. Traditionally, QoS network measurements are carried out on actual network components, such as routers and switches, since these are the key network components. In this thesis, however, the experimentation has been done on real video traffic. The experimental setup made use of a very popular network tool, the Network Emulator (NetEm) created by the Linux Foundation. NetEm allows network emulation without using actual network devices such as routers and traffic generators. The commonly offered NetEm features are those that have been used by researchers in the past. These have the same limitation as a traditional simulator, which is the inability of the NetEm delay-jitter model to represent realistic network traffic, that is, to reflect the behaviour of real-world networks. The NetEm default method of inputting delay and jitter adds or subtracts a fixed amount of delay on the outgoing traffic. NetEm also allows the user to add this variation in a correlated fashion. However, using this technique, the output packet delays are generated in such a way as to be very limited and hence not much like real Internet traffic, which has a vast range of delays. The standard alternative that NetEm allows is to generate the delays from either a normal (Gaussian) or a Pareto distribution. This research, however, has shown that using a Gaussian or Pareto distribution also has very severe limitations, and these are fully discussed and described in Chapter 5 on page 68 of this thesis. This research adopts another approach that is also allowed (with more difficulty) by NetEm: by measuring a very large number of packet delays generated from a double exponential distribution, a packet delay profile is created that far better imitates the actual delays seen in Internet traffic. In this thesis, a large set of statistical delay values was gathered and used to create delay distribution tables. Additionally, to overcome another default behaviour of NetEm, the re-ordering of packets once jitter is implemented, the PFIFO queuing discipline has been deployed to retain the original packet order regardless of the level of implemented jitter. Furthermore, this advancement in NetEm's functionality also incorporates the ability to combine delay, jitter, and loss, which is not allowed in NetEm by default. In the literature, no work has been found to have utilised NetEm previously with such an advancement. Focusing on Video on Demand (VoD), it was discovered that the reported QoE may differ widely for users of different age groups, and that the most demanding age group (the youngest) can require an order-of-magnitude lower packet loss probability (PLP) to achieve the same QoE than is required by the most widely studied age group of users. A bottleneck TCP model was then used to evaluate the capacity cost of achieving an order-of-magnitude decrease in PLP, and it was found that (almost always) a 3-fold increase in link capacity was required. The results are potentially very useful to service providers and network designers, enabling them to provide a satisfactory service to their customers and, in return, maintain a prosperous business.
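To illustrate the delay-profile idea, the following sketch draws a large number of packet delays from a double exponential (Laplace) distribution and summarises them as an empirical percentile table; turning such samples into an actual NetEm distribution table requires separate iproute2 tooling and a specific binary format, so this is only the statistical front end, with assumed parameters.

```python
import numpy as np

# Hedged sketch: sample a large set of packet delays from a double exponential
# (Laplace) distribution and summarise them as an empirical percentile table,
# the statistical starting point for a custom delay profile.

MEAN_DELAY_MS = 40.0   # assumed mean one-way delay
SCALE_MS = 8.0         # assumed Laplace scale (controls jitter spread)

rng = np.random.default_rng(seed=1)
delays = rng.laplace(loc=MEAN_DELAY_MS, scale=SCALE_MS, size=1_000_000)
delays = np.clip(delays, 0.0, None)          # no negative delays

for pct in (1, 25, 50, 75, 99, 99.9):
    print(f"{pct:5}th percentile: {np.percentile(delays, pct):6.2f} ms")
```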
39
A Runtime Verification and Validation Framework for Self-Adaptive Software. Sayre, David B. 01 January 2017 (has links)
The concepts that make self-adaptive software attractive also make it more difficult for users to gain confidence that these systems will consistently meet their goals under uncertain contexts. To improve user confidence in self-adaptive behavior, machine-readable conceptual models have been developed to instrument the adaptation behavior of the target software system and its primary feedback loop. By comparing these machine-readable models to the self-adaptive system, runtime verification and validation may be introduced as another method to increase confidence in self-adaptive systems; however, the existing conceptual models do not provide the semantics needed to institute this runtime verification or validation. This research confirms that the introduction of runtime verification and validation for self-adaptive systems requires the expansion of existing conceptual models with quality of service metrics, a hierarchy of goals, and states with temporal transitions. Based on these expanded semantics, runtime verification and validation was introduced as a second-level feedback loop to improve the performance of the primary feedback loop and quantitatively measure the quality of service achieved in a state-based, self-adaptive system. A web-based purchasing application running in a cloud-based environment was the focus of experimentation. In order to meet changing customer purchasing demand, the self-adaptive system monitored external context changes and increased or decreased the number of available application servers. The runtime verification and validation system operated as a second-level feedback loop to monitor quality of service goals based on internal context, and corrected self-adaptive behavior when goals were violated. Two competing quality of service goals were introduced to maintain customer satisfaction while minimizing cost. The research demonstrated that the addition of a second-level runtime verification and validation feedback loop did quantitatively improve self-adaptive system performance, even with simple, static monitoring rules.
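A minimal sketch of the second-level loop idea, checking two competing quality of service goals against runtime observations and requesting a correction when either is violated, follows; the goal thresholds, state names and correction actions are invented for illustration.

```python
from dataclasses import dataclass

# Hedged sketch of a second-level verification/validation loop: it watches the
# primary adaptation loop's state and QoS metrics, and requests a correction
# whenever one of two competing goals (satisfaction vs. cost) is violated.
# All thresholds and actions are illustrative assumptions.

@dataclass
class Observation:
    state: str                # e.g. "steady", "scaling_out"
    response_time_ms: float   # proxy for customer satisfaction
    hourly_cost: float        # proxy for operating cost

MAX_RESPONSE_MS = 800.0
MAX_HOURLY_COST = 12.0

def verify(obs: Observation) -> str:
    if obs.response_time_ms > MAX_RESPONSE_MS:
        return "violated: satisfaction goal -> add application server"
    if obs.hourly_cost > MAX_HOURLY_COST:
        return "violated: cost goal -> remove application server"
    return "goals satisfied"

for obs in [Observation("steady", 450.0, 9.0),
            Observation("steady", 950.0, 9.0),
            Observation("scaling_out", 500.0, 14.5)]:
    print(f"{obs.state:12s} -> {verify(obs)}")
```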
40
Quality of service for high-speed interconnection networks onboard spacecraft. Ferrer Florit, Albert. January 2013 (has links)
State-of-the-art onboard spacecraft avionics use SpaceWire networks to interconnect payload data-handling sub-systems. This includes high data-rate sensors and instruments, processing units, and memory devices. SpaceWire is an interconnection network composed of nodes and routers connected by bi-directional, point-to-point, high-speed, serial-data communication links. SpaceWire is established as one of the main data-handling protocols and is being used on many ESA, NASA and JAXA spacecraft. SpaceWire has been very successful because it is fast, flexible and simple to use and implement. However, it does not implement Quality of Service mechanisms, which aim to provide guarantees of reliability and timely delivery for the data generated by network clients. Quality of Service is increasingly being deployed in commercial ground technologies, and its availability for space applications, which require high reliability and performance, is of great interest to the space community. This thesis researches how Quality of Service can be provided in existing SpaceWire networks. Existing solutions for ground-based technologies cannot be directly used because of the constraints imposed by the limitations of space-qualified electronics. Due to these limitations, SpaceWire uses wormhole routing, which has many benefits but makes it more challenging to obtain timing guarantees and to achieve deterministic behaviour. These challenges are addressed in this work with a careful analysis of existing Quality of Service techniques and the implementation of a novel set of protocols specifically designed for SpaceWire networks. These new protocols target specific use cases and utilise different mechanisms to achieve the required reliability, timely delivery and determinism. Traditional and novel techniques are deployed for the first time in SpaceWire networks. In particular, segmentation, acknowledgements, retry, time-division multiplexing and cross-layer techniques are considered, analysed, implemented and evaluated with extensive prototyping efforts. SpaceWire provides high-rate data transfers, but the next generation of payload instruments is going to require multi-gigabit capabilities. SpaceFibre is a new onboard networking technology under development which aims to satisfy these new requirements while keeping compatibility with SpaceWire user applications. As a new standard, SpaceFibre offers the opportunity to implement Quality of Service techniques without the limitations imposed by the SpaceWire standard. The last part of this thesis focuses on the specification of the SpaceFibre standard in order to provide the Quality of Service required by the next generation of space applications. This work includes analytical studies, software simulations, and hardware prototyping of new concepts which are the basis of the Quality of Service mechanisms defined in the new SpaceFibre standard. Therefore, a critical contribution is made to the definition and evaluation of a novel Quality of Service solution which provides high reliability, bandwidth reservation, priority and deterministic delivery to SpaceFibre links.
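As a rough illustration of two of the techniques named above, segmentation combined with acknowledgement and retry, the sketch below estimates a packet's delivery probability and the expected attempts per segment for an assumed segment error rate; it is a back-of-the-envelope model, not the protocol implementations developed in the thesis.

```python
# Hedged sketch: a packet is segmented, each segment is acknowledged and
# retried up to `max_retries` times over a link with segment error rate `per`.
# Numbers are illustrative; real SpaceWire/SpaceFibre QoS involves far more.

def segment_delivery_prob(per: float, max_retries: int) -> float:
    """Probability one segment gets through within its retry budget."""
    return 1.0 - per ** (1 + max_retries)

def packet_delivery_prob(packet_bytes: int, segment_bytes: int,
                         per: float, max_retries: int) -> float:
    """All segments of the packet must be delivered."""
    segments = -(-packet_bytes // segment_bytes)     # ceiling division
    return segment_delivery_prob(per, max_retries) ** segments

def expected_transmissions_per_segment(per: float) -> float:
    """Mean attempts with unlimited retries (geometric distribution)."""
    return 1.0 / (1.0 - per)

if __name__ == "__main__":
    per, retries = 1e-3, 2            # assumed segment error rate and retry budget
    p = packet_delivery_prob(packet_bytes=64 * 1024, segment_bytes=256,
                             per=per, max_retries=retries)
    print(f"packet delivery probability: {p:.9f}")
    print(f"expected attempts per segment: {expected_transmissions_per_segment(per):.5f}")
```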