21

Distributed multimedia quality : the user perspective

Gulliver, Stephen Richard January 2004 (has links)
Distributed multimedia supports a symbiotic infotainment duality, i.e. the ability to transfer information to the user while also providing the user with a level of satisfaction. As multimedia is ultimately produced for the education and/or enjoyment of viewers, the user's perspective on presentation quality is surely of equal importance to objective Quality of Service (QoS) technical parameters in defining distributed multimedia quality. In order to measure the user perspective of multimedia video quality extensively, we introduce an extended model of distributed multimedia quality that segregates quality into three discrete levels: the network-level, the media-level and the content-level, using two distinct quality perspectives: the user-perspective and the technical-perspective. Since experimental questionnaires do not provide continuous monitoring of user attention, eye tracking was used in our study to provide a better understanding of the role that the human element plays in the reception, analysis and synthesis of multimedia data. Results showed that video content adaptation results in disparity in user video eye-paths when: i) no single or obvious point of focus exists; or ii) the point of attention changes dramatically. Accordingly, appropriate technical- and user-perspective parameter adaptation is implemented for all quality abstractions of our model, i.e. the network-level (via simulated delay and jitter), the media-level (via a technical- and user-perspective manipulated region-of-interest attentive display) and the content-level (via display type and video clip type). Our work has shown that user-perceived distributed multimedia quality cannot be ensured by means of purely technical-perspective QoS parameter adaptation.
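One simple way to quantify the eye-path disparity mentioned above is the mean distance between two viewers' gaze points sampled at the same instants. The Python sketch below applies that measure to synthetic fixation coordinates; the thesis's own eye-tracking analysis may use a different disparity metric, so the function and data here are purely illustrative.

```python
import math

def eye_path_disparity(path_a, path_b):
    """Mean Euclidean distance between two gaze paths sampled at the same
    instants, as one simple way to quantify eye-path disparity.

    Each path is a list of (x, y) fixation coordinates in pixels.
    """
    assert len(path_a) == len(path_b), "paths must be sampled at the same instants"
    dists = [math.dist(a, b) for a, b in zip(path_a, path_b)]
    return sum(dists) / len(dists)

# Two synthetic viewers: similar paths while a single focus point exists,
# diverging after a hypothetical scene change at sample 3.
viewer_1 = [(320, 240), (330, 245), (340, 250), (600, 120), (620, 110)]
viewer_2 = [(325, 238), (332, 250), (338, 247), (150, 400), (140, 420)]
print(f"mean eye-path disparity: {eye_path_disparity(viewer_1, viewer_2):.1f} px")
```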
22

Integration of Wireless Sensor Networks Into a Commercial Off-the-Shelf (COTS) Multimedia Network.

Molineux, Jeffrey S. 25 July 2012
As the primary military operating environment shifts from traditional battlefields to a more diverse urban environment, the use of remote wireless sensors is increasing. Traditional development and procurement methods are not capable of meeting the changing requirements and time constraints of commanders. To minimize the time to develop and deploy new systems, commercial solutions must be examined. The focus of this thesis is on the integration of Commercial Off-the-Shelf (COTS) components into a wireless multimedia sensor network. Because components from multiple vendors were utilized, different operating systems and transmission protocols had to be integrated across the network. The network must be capable of providing a varying Quality of Service (QoS) level depending on the active sensors in the network. To ensure the QoS level is met, an adaptive QoS algorithm was implemented in the wireless IEEE 802.11 router to monitor and measure the outgoing transmission interface, from which it determines the latency and transmission jitter. Based on these measurements, the program can adjust the bandwidth as necessary. Finally, a user interface was developed that allows end users to monitor the network. The performance of the network is evaluated in terms of the end-to-end throughput, latency and jitter it exhibits.
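The adaptive QoS mechanism described above lends itself to a simple control loop. The Python sketch below illustrates the general idea only: the threshold values, step sizes, and the measure_outgoing_interface() placeholder are assumptions standing in for the router-side measurements the thesis performs on the outgoing IEEE 802.11 interface.

```python
import random
import time

# Illustrative thresholds and step sizes only; the thesis does not publish
# its actual configuration, so these values are assumptions.
LATENCY_LIMIT_MS = 150.0
JITTER_LIMIT_MS = 30.0
STEP_KBPS = 256
MIN_KBPS, MAX_KBPS = 512, 8192

def measure_outgoing_interface():
    """Placeholder for probing the router's outgoing IEEE 802.11 interface.

    A real implementation would timestamp packets on the transmit queue;
    here we return synthetic (latency_ms, jitter_ms) samples.
    """
    return random.uniform(20, 250), random.uniform(1, 60)

def adapt_bandwidth(current_kbps, latency_ms, jitter_ms):
    """Shrink the allocation when QoS targets are violated, grow it back slowly."""
    if latency_ms > LATENCY_LIMIT_MS or jitter_ms > JITTER_LIMIT_MS:
        return max(MIN_KBPS, current_kbps - STEP_KBPS)
    return min(MAX_KBPS, current_kbps + STEP_KBPS // 2)

if __name__ == "__main__":
    allocated = 2048  # kbps, arbitrary starting point
    for _ in range(5):
        lat, jit = measure_outgoing_interface()
        allocated = adapt_bandwidth(allocated, lat, jit)
        print(f"latency={lat:6.1f} ms  jitter={jit:5.1f} ms  -> {allocated} kbps")
        time.sleep(0.1)
```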
23

Best effort QoS support routing in mobile ad hoc networks

Luo, Heng January 2012 (has links)
In the past decades, mobile traffic generated by devices such as smartphones, iPhones, laptops and mobile gateways has been growing rapidly. While traditional direct connection techniques evolve to provide better access to the Internet, a new type of wireless network, the mobile ad hoc network (MANET), has emerged. A MANET differs from a direct connection network in that it is multi-hopping and self-organizing and thus able to operate without any pre-existing fixed infrastructure. However, challenges such as dynamic topology, unreliable wireless links and resource constraints impede the wide application of MANETs. Routing in a MANET is complex because it has to react efficiently to unfavourable conditions and support traditional IP services. In addition, Quality of Service (QoS) provision is required to support the rapid growth of video in mobile traffic. As a consequence, tremendous efforts have been devoted to the design of QoS routing in MANETs, leading to the emergence of a number of QoS support techniques. However, the application-independent nature of QoS routing protocols results in the absence of a one-for-all solution for MANETs. Meanwhile, the relative importance of QoS metrics in real applications is not considered in many studies. A Best Effort QoS support (BEQoS) routing model, which evaluates and ranks alternative routing protocols by considering the relative importance of multiple QoS metrics, is proposed in this thesis. BEQoS comprises two algorithms, SAW-AHP and FPP, for different scenarios. The former is suitable for cases where uncertainty factors such as standard deviation can be neglected, while the latter considers the uncertainty of the problem. SAW-AHP is a combination of Simple Additive Weighting and the Analytic Hierarchy Process, in which the decision maker or network operator first assigns a numerical preference to each metric according to given rules. The comparison matrices are composed accordingly, and from them the synthetic weights for the alternatives are obtained. The alternative with the highest weight is the optimal protocol. The reliability and efficiency of SAW-AHP are validated through simulations. An integrated architecture using the evaluation results of SAW-AHP is proposed, which incorporates ad hoc technology into the existing WLAN and therefore provides a solution to the last-mile access problem. The costs and gains induced by protocol selection are also discussed. The thesis concludes by describing the potential application areas of the proposed method. SAW-AHP is extended to a fuzzy version to accommodate the vagueness of the decision maker and the complexity of problems, such as the standard deviation observed in simulations. Triangular fuzzy numbers are used to replace the crisp numbers in the comparison matrices of traditional AHP. Fuzzy Preference Programming (FPP) is employed to obtain crisp synthetic weights for the alternatives, based on which they are ranked. The reliability and efficiency of SAW-FPP are demonstrated by simulations.
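To make the SAW-AHP ranking step concrete, the minimal Python sketch below walks through the procedure the abstract describes: a pairwise comparison matrix on Saaty's 1-9 scale yields criterion weights (here via the row geometric-mean approximation), and Simple Additive Weighting then scores each candidate protocol. The comparison entries, protocol names and metric figures are invented for illustration and are not the thesis's data.

```python
import numpy as np

# Hypothetical pairwise comparison of three QoS metrics (delay, jitter, loss)
# on Saaty's 1-9 scale; the actual operator preferences in the thesis may differ.
comparison = np.array([
    [1.0, 3.0, 5.0],    # delay vs (delay, jitter, loss)
    [1/3, 1.0, 3.0],    # jitter
    [1/5, 1/3, 1.0],    # loss
])

# AHP criterion weights via the row geometric-mean approximation.
geo_means = comparison.prod(axis=1) ** (1.0 / comparison.shape[1])
weights = geo_means / geo_means.sum()

metrics = ["delay", "jitter", "loss"]
# Invented per-protocol metric values (lower is better for all three).
protocols = {
    "AODV": {"delay": 120.0, "jitter": 25.0, "loss": 0.04},
    "DSR":  {"delay": 140.0, "jitter": 18.0, "loss": 0.06},
    "OLSR": {"delay":  95.0, "jitter": 30.0, "loss": 0.05},
}

def saw_scores(values, weights):
    """Simple Additive Weighting with min-based normalisation for cost metrics."""
    matrix = np.array([[values[p][m] for m in metrics] for p in values])
    normalised = matrix.min(axis=0) / matrix   # smaller raw value -> score nearer 1
    return normalised @ weights

for name, score in sorted(zip(protocols, saw_scores(protocols, weights)),
                          key=lambda pair: -pair[1]):
    print(f"{name}: synthetic weight {score:.3f}")
```

The alternative with the highest synthetic weight would be selected; the fuzzy variant would replace the crisp comparison entries with triangular fuzzy numbers and derive the weights with FPP instead.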
24

On the Quality of Service of mobile cloud gaming using GamingAnywhere

Grandhi, Veera Venkata Santosh Surya Ganesh January 2016 (has links)
In recent years, mobile gaming has grown tremendously because of its entertainment features. Mobile cloud gaming is a promising technology that overcomes inherent restrictions of mobile devices such as limited computational capacity and battery life. GamingAnywhere is an open-source cloud gaming system, which is used in this thesis to measure the Quality of Service (QoS) of mobile cloud gaming, with games streamed from the server to the mobile client. In our study, QoS is measured using the Differentiated Services (DiffServ) architecture for traffic shaping. The research is carried out on an experimental testbed, with Dummynet used for traffic shaping. Performance is measured in terms of bitrate, packet loss, jitter, and frame rate. Different resolutions of the game are considered in our empirical research, and our results show that the frame rate and bitrate increased under the impact of network delay.
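As a small illustration of the kind of client-side measurement involved, the sketch below estimates interarrival jitter from per-packet timestamps in the style of RFC 3550. The timestamps are synthetic, and the measurement tooling used in the thesis may compute jitter differently, so this is only an assumed example.

```python
def rfc3550_jitter(send_times, recv_times):
    """Running interarrival-jitter estimate in the style of RFC 3550.

    send_times/recv_times are per-packet timestamps in milliseconds.
    """
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0   # exponential smoothing, gain 1/16
        prev_transit = transit
    return jitter

# Example: a short synthetic trace with mild delay variation.
send = [0, 33, 66, 100, 133]      # ms, roughly 30 fps sending instants
recv = [40, 75, 112, 141, 180]    # ms, arrival instants at the client
print(f"estimated jitter: {rfc3550_jitter(send, recv):.2f} ms")
```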
25

End-to-End Quality of Service Guarantees for Wireless Sensor Networks

Dobslaw, Felix January 2015 (has links)
Wireless sensor networks have been a key driver of innovation and societal progress over the last three decades. They allow for simplicity because they eliminate cabling complexity while increasing the flexibility of extending or adjusting networks to changing demands. Wireless sensor networks are a powerful means of filling the technological gap for ever-larger industrial sites of growing interconnection and broader integration. Nonetheless, the management of wireless networks is difficult in situations wherein communication requires application-specific, network-wide quality of service guarantees. A minimum end-to-end reliability for packet arrival close to 100% in combination with latency bounds in the millisecond range must be fulfilled in many mission-critical applications. The problem addressed in this thesis is the demand for algorithmic support for end-to-end quality of service guarantees in mission-critical wireless sensor networks. Wireless sensors have traditionally been used to collect non-critical periodic readings; however, the intriguing advantages of wireless technologies in terms of their flexibility and cost effectiveness justify the exploration of their potential for control and mission-critical applications, subject to the requirements of ultra-reliable communication, in harsh and dynamically changing environments such as manufacturing factories, oil rigs, and power plants. This thesis provides three main contributions in the scope of wireless sensor networks. First, it presents a scalable algorithm that guarantees end-to-end reliability through scheduling. Second, it presents a cross-layer optimization/configuration framework that can be customized to meet multiple end-to-end quality of service criteria simultaneously. Third, it proposes an extension of the framework used to enable service differentiation and priority handling. Adaptive, scalable, and fast algorithms are proposed. The cross-layer framework is based on a genetic algorithm that assesses the quality of service of the network as a whole and integrates the physical layer, medium access control layer, network layer, and transport layer. Algorithm performance and scalability are verified through numerous simulations on hundreds of convergecast topologies by comparing the proposed algorithms with other recently proposed algorithms for ensuring reliable packet delivery. The results show that the proposed SchedEx scheduling algorithm is both significantly more scalable and better performing than the competing slot-based scheduling algorithms. The integrated solving of routing and scheduling using a genetic algorithm further improves on the original results by more than 30% in terms of latency. The proposed framework provides live graphical feedback about potential bottlenecks and may be used for analysis and debugging as well as the planning of green-field networks. SchedEx is found to be an adaptive, scalable, and fast algorithm that is capable of ensuring the end-to-end reliability of packet arrival throughout the network. SchedEx-GA successfully identifies network configurations, thus integrating the routing and scheduling decisions for networks with diverse traffic priority levels. Further, directions for future research are presented, including the extension of simulations to experimental work and the consideration of alternative network topologies.
At the time of the doctoral defence the following papers were unpublished: paper 4 (manuscript under review), paper 5 (manuscript under review).
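As a toy illustration of reliability-aware scheduling of the kind the abstract refers to (and not the SchedEx algorithm itself), the Python sketch below keeps adding a retransmission slot to the weakest hop until a target end-to-end packet-arrival reliability is met. The per-hop success probabilities, the target, and the slot budget are assumed values.

```python
from math import prod

def hop_reliability(p_success, slots):
    """Probability that a packet crosses one hop given `slots` transmission attempts."""
    return 1.0 - (1.0 - p_success) ** slots

def end_to_end_reliability(link_probs, slot_alloc):
    return prod(hop_reliability(p, s) for p, s in zip(link_probs, slot_alloc))

def allocate_slots(link_probs, target, max_total_slots=100):
    """Greedy allocation: give extra slots to the hop that currently limits
    end-to-end reliability until the target is reached (toy heuristic only)."""
    slots = [1] * len(link_probs)
    while end_to_end_reliability(link_probs, slots) < target:
        if sum(slots) >= max_total_slots:
            raise RuntimeError("target not reachable within the slot budget")
        weakest = min(range(len(slots)),
                      key=lambda i: hop_reliability(link_probs[i], slots[i]))
        slots[weakest] += 1
    return slots

links = [0.9, 0.8, 0.95, 0.85]   # assumed per-hop single-attempt success probabilities
alloc = allocate_slots(links, target=0.999)
print("slots per hop:", alloc)
print("end-to-end reliability:", round(end_to_end_reliability(links, alloc), 5))
```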
26

Sustainable Throughput Measurements for Video Streaming

Nutalapati, Hima Bindu January 2017 (has links)
With the increase in demand for video streaming services on hand-held mobile terminals with limited battery life, it is important to maintain the user Quality of Experience (QoE) while taking resource consumption into consideration. The goal is therefore to offer as good a quality as feasible while avoiding as much user annoyance as possible, which requires delivering the video without uncontrollable quality distortions. This is possible when an optimal (or desirable) throughput value is chosen such that exceeding that threshold would push the service into a region of unstable QoE. Hence, the concept of QoE-aware sustainable throughput is introduced as the maximal value of the desirable throughput that avoids disturbances in the QoE due to delivery issues, or keeps them at an acceptable minimum. The thesis aims at measuring the sustainable throughput values when video streams of different resolutions are streamed from the server to a mobile client over wireless links, in the presence of the network disturbances packet loss and delay. The video streams are collected at the client side for quality assessment, and the maximal throughput at which the QoE problems can still be kept at a desired level is determined. Scatter plots were generated for the individual opinion scores and their corresponding throughput values for each disturbance case, and regression analysis was performed to find the best fit for the observed data. Logarithmic, exponential, linear and power regressions were considered in this thesis. The R-squared value was calculated for each regression model, and the model with the R-squared value closest to 1 was determined to be the best fit; the power and logarithmic regression models had the R-squared values closest to 1. Better quality ratings were observed for the low-resolution videos in the presence of packet loss and delay for the considered test cases. QoE disturbances can thus be kept at a desirable level for the low-resolution videos; among the test cases considered for the investigation, the 360px video is the most resilient to high delay and packet loss values and has better opinion scores. Hence, the throughput is sustainable at this threshold.
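The regression-selection step described above can be reproduced with a few lines of NumPy. In the sketch below the throughput/opinion-score pairs are synthetic stand-ins for the thesis's measurement data; the four model families are fitted by linearisation and compared on R-squared, with the model closest to 1 taken as the best fit.

```python
import numpy as np

# Synthetic (throughput Mbit/s, mean opinion score) pairs; the real subjective
# test data in the thesis differ from these illustrative values.
throughput = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
mos        = np.array([1.8, 2.6, 3.3, 3.8, 4.0, 4.3, 4.4])

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_models(x, y):
    """Fit the four regression families by linearisation and score each with R^2."""
    fits = {}
    a, b = np.polyfit(x, y, 1)                   # linear: y = a*x + b
    fits["linear"] = r_squared(y, a * x + b)
    a, b = np.polyfit(np.log(x), y, 1)           # logarithmic: y = a*ln(x) + b
    fits["logarithmic"] = r_squared(y, a * np.log(x) + b)
    a, b = np.polyfit(x, np.log(y), 1)           # exponential: y = exp(b)*exp(a*x)
    fits["exponential"] = r_squared(y, np.exp(b) * np.exp(a * x))
    a, b = np.polyfit(np.log(x), np.log(y), 1)   # power: y = exp(b)*x^a
    fits["power"] = r_squared(y, np.exp(b) * x ** a)
    return fits

for name, r2 in sorted(fit_models(throughput, mos).items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} R^2 = {r2:.3f}")
```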
27

Energy efficiency heterogeneous wireless communication network with QoS support

Hou, Ying January 2013 (has links)
The overarching goal of this thesis is to investigate network architectures and find the trade-off between low overall energy use and maintaining, or even improving, the level of quality of service (QoS). The ubiquitous wireless communications environment supports the exploration of different network architectures and techniques in combination, the so-called heterogeneous network. Two kinds of heterogeneous architectures are considered: a combined cellular and femtocell network, and a combined cellular, femtocell and Wireless Local Area Network (WLAN) network. This thesis concludes that the investigated heterogeneous networks can significantly reduce the overall power consumption, depending on the uptake of femtocells and WLANs, and that QoS remains high while the power consumption drops. The main energy saving comes from reducing the macrocell base station's embodied and operational energy. When QoS is evaluated for the combined cellular and femtocell architecture, it is suggested that the use of resource scheduling for femtocells within the macrocell is crucial, since femtocell performance is affected significantly by interference when installed in a co-channel system. Additionally, the femtocell transmission power mode is investigated using either a variable power level or a fixed power level. To achieve both energy efficiency and QoS, the choice of system configuration should change according to the density of the femtocell deployment. When the deployment of femtocells is combined with WLANs, more users are able to experience a higher QoS. Due to the future increase in data traffic and smartphone usage, WLANs become more important for offloading data from the macrocell, reducing power consumption and also increasing the bandwidth. The localised heterogeneous network is a promising technique for achieving a power-efficient, high-QoS system.
28

Operational benefit of implementing VoIP in a tactical environment / Operational benefit of implementing Voice Over Internet Protocol in a tactical environment

Lewis, Rosemary 06 1900 (has links)
Approved for public release, distribution is unlimited / In this thesis, Voice over Internet Protocol (VoIP) technology will be explored and a recommendation on the operational benefit of VoIP will be provided. A network model will be used to demonstrate the improvement in voice end-to-end delay achieved by implementing quality of service (QoS) controls. An overview of VoIP requirements will be covered and recommended standards will be reviewed. A clear definition of a Battle Group will be presented and an overview of current analog RF voice technology will be explained. A comparison of RF voice technology and VoIP will be modeled using OPNET Modeler 9.0. / Lieutenant, United States Navy
29

QoS-aware adaptive resource management in OFDMA networks

Li, Aini January 2017 (has links)
One important feature of the future communication network is that users in the network are required to experience a guaranteed high quality of service (QoS) due to the popularity of multimedia applications. This thesis studies QoS-aware radio resource management schemes in different OFDMA network scenarios. Motivated by the fact that in current 4G networks QoS provisioning is severely constrained by the availability of radio resources, especially the scarce spectrum as well as the unbalanced traffic distribution from cell to cell, a joint antenna and subcarrier management scheme is proposed to maximise user satisfaction with load balancing. An antenna pattern update mechanism is further investigated for moving users. Combining network densification with cloud computing technologies, the cloud radio access network (C-RAN) has been proposed as the emerging 5G network architecture, consisting of a baseband unit (BBU) pool, remote radio heads (RRHs) and fronthaul links. With cloud-based information sharing through the BBU pool, a joint resource block and power allocation scheme is proposed to maximise the number of satisfied users whose required QoS is achieved. In this scenario, users are served by high-power nodes only. With spatial reuse of the system bandwidth through network densification, users' QoS provisioning can be ensured, but it introduces energy and operating efficiency issues. Therefore two network energy optimisation schemes with QoS guarantees are further studied for C-RANs: an energy-efficient network deployment scheme is designed for C-RAN based small cells, and a joint RRH selection and user association scheme is investigated in heterogeneous C-RANs. Thorough theoretical analysis is conducted in the development of all proposed algorithms, and the effectiveness of all proposed algorithms is validated via comprehensive simulations.
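As a rough illustration of what maximising the number of satisfied users can look like at the resource-block level, the toy greedy heuristic below assigns each resource block to the unsatisfied user for whom it closes the largest rate gap. The per-RB rates and QoS targets are invented, and the thesis's joint RB-and-power allocation scheme is considerably more sophisticated; this sketches the objective, not the method.

```python
# rate_per_rb[u][b]: achievable rate (Mbit/s) if RB b is given to user u,
# assumed fixed here instead of depending on the allocated power.
rate_per_rb = [
    [1.2, 0.8, 0.5, 1.0],   # user 0
    [0.6, 1.1, 0.9, 0.4],   # user 1
    [0.9, 0.7, 1.3, 0.8],   # user 2
]
required_rate = [2.0, 1.5, 2.5]   # per-user QoS targets (Mbit/s), illustrative

num_users, num_rbs = len(rate_per_rb), len(rate_per_rb[0])
achieved = [0.0] * num_users
assignment = {}

for rb in range(num_rbs):
    # Give the RB to the unsatisfied user for whom it closes the largest gap.
    candidates = [u for u in range(num_users) if achieved[u] < required_rate[u]]
    if not candidates:
        break
    best = max(candidates, key=lambda u: min(rate_per_rb[u][rb],
                                             required_rate[u] - achieved[u]))
    assignment[rb] = best
    achieved[best] += rate_per_rb[best][rb]

satisfied = sum(a >= r for a, r in zip(achieved, required_rate))
print("assignment:", assignment)
print("achieved rates:", achieved)
print(f"satisfied users: {satisfied}/{num_users}")
```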
30

QoE evaluation across a range of user age groups in video applications

Roshan, Mujtaba January 2018 (has links)
Quality of Service (QoS) measures are the network parameters delay, jitter, and loss, and they do not reflect the actual quality of the service received by the end user. To get an actual view of performance from a user's perspective, the Quality of Experience (QoE) measure is now used. Traditionally, QoS network measurements are carried out on actual network components, such as routers and switches, since these are the key network components. In this thesis, however, the experimentation has been done on real video traffic. The experimental setup made use of a very popular network tool, the Network Emulator (NetEm) created by the Linux Foundation. NetEm allows network emulation without using actual network devices such as routers and traffic generators. The commonly offered NetEm features are those that have been used by researchers in the past. These have the same limitation as a traditional simulator, which is the inability of NetEm's delay-jitter model to represent realistic network traffic, i.e. to reflect the behaviour of real-world networks. The NetEm default method of inputting delay and jitter adds or subtracts a fixed amount of delay from the outgoing traffic. NetEm also allows the user to add this variation in a correlated fashion. However, using this technique the output packet delays are generated in such a way as to be very limited and hence not much like real Internet traffic, which has a vast range of delays. The standard alternative that NetEm allows is to generate the delays from either a normal (Gaussian) or a Pareto distribution. This research, however, has shown that using a Gaussian or Pareto distribution also has very severe limitations, and these are fully discussed and described in Chapter 5 on page 68 of this thesis. This research adopts another approach that is also allowed (with more difficulty) by NetEm: by measuring a very large number of packet delays generated from a double exponential distribution, a packet delay profile is created that far better imitates the actual delays seen in Internet traffic. In this thesis a large set of statistical delay values was gathered and used to create delay distribution tables. Additionally, to overcome another default behaviour of NetEm, the re-ordering of packets once jitter is implemented, a PFIFO queuing discipline was deployed to retain the original packet order regardless of the level of implemented jitter. Furthermore, this advancement in NetEm's functionality also incorporates the ability to combine delay, jitter, and loss, which is not allowed in NetEm by default. In the literature, no work has been found to have utilised NetEm previously with such an advancement. Focusing on Video on Demand (VOD), it was discovered that the reported QoE may differ widely for users of different age groups, and that the most demanding age group (the youngest) can require an order of magnitude lower PLP to achieve the same QoE than is required by the most widely studied age group of users. A bottleneck TCP model was then used to evaluate the capacity cost of achieving an order of magnitude decrease in PLP, and it was found that (almost always) a 3-fold increase in link capacity was required. The results are potentially very useful for service providers and network designers in providing a satisfactory service to their customers and, in return, maintaining a prosperous business.
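The motivation for the double exponential (Laplace) delay profile can be seen by comparing tail behaviour across the candidate distributions. The NumPy sketch below draws delay samples from normal, Laplace and a shifted Pareto distribution with the same nominal mean and prints their upper percentiles; the mean, spread and Pareto shape are arbitrary illustrative choices, and converting such samples into the custom distribution table that NetEm actually consumes is a separate step described in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters only: a 100 ms mean delay with ~20 ms spread.
MEAN_MS, SPREAD_MS, N = 100.0, 20.0, 100_000

samples = {
    "normal":  rng.normal(MEAN_MS, SPREAD_MS, N),
    # Laplace = "double exponential"; heavier tails than the normal distribution.
    "laplace": rng.laplace(MEAN_MS, SPREAD_MS / np.sqrt(2), N),
    # Shifted Pareto (shape 3) arranged to have the same mean, for comparison.
    "pareto":  MEAN_MS - SPREAD_MS + rng.pareto(3.0, N) * 2 * SPREAD_MS,
}

for name, delays in samples.items():
    delays = np.clip(delays, 0, None)   # no negative delays
    p50, p99, p999 = np.percentile(delays, [50, 99, 99.9])
    print(f"{name:8s} mean={delays.mean():6.1f} ms  "
          f"p50={p50:6.1f}  p99={p99:6.1f}  p99.9={p999:6.1f}")
```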
