61

User-centric quality of service provisioning in IP networks

Culverhouse, Mark January 2012 (has links)
The Internet has become the preferred transport medium for almost every type of communication, continuing to grow both in terms of the number of users and of delivered services. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of providing priority to specific traffic types over coexisting services, whether through explicit resource reservation or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (DiffServ). This reliance on static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture that considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on any specific traffic type; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to deliver a QoS-optimised experience to each Internet user, not just to those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real-network ISP infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with Best-Effort Internet, traditional DiffServ and Weighted RED (WRED) configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, through the avoidance of static resource allocation, can also adapt with the Internet user as their use of services changes.
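The abstract stops short of algorithmic detail, so purely as an illustration of the user-centric idea, here is a minimal sketch of per-user adaptive weighting: responsive classes keep their observed share while unresponsive classes are scaled back as congestion rises. The names (UserProfile, update_weights) and the penalty rule are assumptions for illustration, not the thesis's actual CAPS mechanism.

```python
# Illustrative sketch only: a per-user adaptive weight update in the spirit of
# CAPS (adapting service shares to each user's traffic mix). All names and the
# penalty rule are assumptions, not the thesis's implementation.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    # observed bytes per application class over the last measurement window
    observed_bytes: dict = field(default_factory=dict)
    # whether the class reacts to congestion (e.g. TCP) or not (e.g. some P2P/UDP)
    responsive: dict = field(default_factory=dict)


def update_weights(profile: UserProfile, congestion: float) -> dict:
    """Return per-class scheduling weights for one user.

    Responsive classes keep their observed share; unresponsive, bandwidth-
    intensive classes are scaled down as congestion (0..1) rises, so they are
    managed fairly without starving coexisting services.
    """
    total = sum(profile.observed_bytes.values()) or 1
    weights = {}
    for cls, bytes_seen in profile.observed_bytes.items():
        share = bytes_seen / total
        if profile.responsive.get(cls, True):
            weights[cls] = share
        else:
            weights[cls] = share * max(0.0, 1.0 - congestion)
    norm = sum(weights.values()) or 1
    return {cls: w / norm for cls, w in weights.items()}


# Example: a user mixing VoIP, web and P2P while the access link is 60% congested.
profile = UserProfile(
    observed_bytes={"voip": 2_000, "web": 10_000, "p2p": 50_000},
    responsive={"voip": True, "web": True, "p2p": False},
)
print(update_weights(profile, congestion=0.6))
```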
62

A Credit-based Home Access Point (CHAP) to Improve Application Quality on IEEE 802.11 Networks

Lee, Choong-Soo 23 June 2010 (has links)
"Increasing availability of high-speed Internet and wireless access points has allowed home users to connect not only their computers but various other devices to the Internet. Every device running different applications requires unique Quality of Service (QoS). It has been shown that delay- sensitive applications, such as VoIP, remote login and online game sessions, suffer increased latency in the presence of throughput-sensitive applications such as FTP and P2P. Currently, there is no mechanism at the wireless AP to mitigate these effects except explicitly classifying the traffic based on port numbers or host IP addresses. We propose CHAP, a credit-based queue management technique, to eliminate the explicit configuration process and dynamically adjust the priority of all the flows from different devices to match their QoS requirements and wireless conditions to improve application quality in home networks. An analytical model is used to analyze the interaction between flows and credits and resulting queueing delays for packets. CHAP is evaluated using Network Simulator (NS2) under a wide range of conditions against First-In-First- Out (FIFO) and Strict Priority Queue (SPQ) scheduling algorithms. CHAP improves the quality of an online game, a VoIP session, a video streaming session, and a Web browsing activity by 20%, 3%, 93%, and 51%, respectively, compared to FIFO in the presence of an FTP download. CHAP provides these improvements similar to SPQ without an explicit classification of flows and a pre- configured scheduling policy. A Linux implementation of CHAP is used to evaluate its performance in a real residential network against FIFO. CHAP reduces the web response time by up to 85% compared to FIFO in the presence of a bulk file download. Our contributions include an analytic model for the credit-based queue management, simulation, and implementation of CHAP, which provides QoS with minimal configuration at the AP."
63

A Real-Time Communication Framework for Wireless Sensor Networks

AAL SALEM, MOHAMMED January 2009 (has links)
Doctor of Philosophy (PhD) / Recent advances in miniaturization and low-power design have led to a flurry of activity in wireless sensor networks. Sensor networks have different constraints than traditional wired networks. A wireless sensor network is a special network with large numbers of nodes equipped with embedded processors, sensors, and radios. These nodes collaborate to accomplish a common task such as environment monitoring or asset tracking. In many applications, sensor nodes will be deployed in an ad-hoc fashion without careful planning. They must organize themselves to form a multihop, wireless communication network. In sensor network environments, much research has been conducted in areas such as power consumption, self-organisation techniques, routing between the sensors, and the communication between the sensor and the sink. On the other hand, real-time communication with the Quality of Service (QoS) concept in wireless sensor networks is still an open research field. Most protocols either ignore real-time constraints or simply attempt to process packets as fast as possible and hope that this speed is sufficient to meet the deadlines. However, the introduction of real-time communication has created additional challenges in this area. A sensor node spends most of its life routing packets from one node to another until the packet reaches the sink; therefore, the node functions as a small router most of the time. Since sensor networks deal with time-critical applications, it is often necessary for communication to meet real-time constraints. However, research that deals with providing QoS guarantees for real-time traffic in sensor networks is still in its infancy. This thesis presents a real-time communication framework to provide quality of service in sensor network environments. The proposed framework consists of four components. First, it presents an analytical model for implementing Priority Queuing (PQ) in a sensor node to calculate the queuing delay; the exact packet delay for the corresponding classes is calculated, and the analytical results are validated through an extensive simulation study. Second, it reports on a novel analytical model based on a limited-service polling discipline. The model is based on an M/D/1 queuing system (a special class of M/G/1 queuing systems), which takes into account two different classes of traffic in a sensor node. The proposed model implements two queues in a sensor node that are served in a round-robin fashion; the exact queuing delay in a sensor node for the corresponding classes is calculated, and the analytical results are again validated through an extensive simulation study. Third, it presents a novel packet delivery mechanism, the Multiple Level Stateless Protocol (MLSP), as a real-time protocol for sensor networks to guarantee the traffic in wireless sensor networks. MLSP improves the packet loss rate and the handling of holes in a sensor network considerably compared with its counterpart, MMSPEED, and introduces the k-limited polling model for the first time. In addition, the total number of packets sent drops significantly compared to MMSPEED, which in turn reduces power consumption. Fourth, it describes a new framework for moving data from the sink to the user, at low cost and low power, using the Universal Mobile Telecommunications System (UMTS), the standard for the Third Generation Mobile System (3G). The integration of sensor networks with the 3G mobile network infrastructure will reduce the cost of building new infrastructures and enable the large-scale deployment of sensor networks.
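For reference, the mean waiting time of the underlying M/D/1 system is given by the standard textbook result below (a special case of the Pollaczek–Khinchine formula); the thesis's class-specific and polling-based derivations refine this, and are not reproduced here.

```latex
% M/D/1 mean waiting time: Poisson arrivals at rate \lambda, deterministic
% service time D = 1/\mu, utilisation \rho = \lambda/\mu < 1.
\[
  W_q \;=\; \frac{\rho}{2\mu\,(1-\rho)},
  \qquad
  T \;=\; \frac{1}{\mu} + W_q .
\]
```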
64

A Quality of Service Monitoring System for Service Level Agreement Verification

Ta, Xiaoyuan January 2006 (has links)
Master of Engineering by Research / Service level agreement (SLA) monitoring measures network Quality of Service (QoS) parameters to evaluate whether the service performance complies with the SLAs. It is becoming increasingly important for both Internet service providers (ISPs) and their customers. However, the rapid expansion of the Internet makes SLA monitoring a challenging task. As an efficient method to reduce both complexity and overheads for QoS measurements, sampling techniques have been used in SLA monitoring systems. In this thesis, I conduct a comprehensive study of sampling methods for network QoS measurements. I develop an efficient sampling strategy, which makes the measurements less intrusive and more efficient, and I design network performance monitoring software that monitors QoS parameters such as packet delay, packet loss and jitter for SLA monitoring and verification. The thesis starts with a discussion of the characteristics of QoS metrics related to the design of the monitoring system and the challenges in monitoring these metrics. Major measurement methodologies for monitoring these metrics are introduced. Existing monitoring systems can be broadly classified into two categories: active and passive measurements. The advantages and disadvantages of both methodologies are discussed, and an active measurement methodology is chosen to realise the monitoring system. Secondly, the thesis describes the most common sampling techniques, such as systematic sampling, Poisson sampling and stratified random sampling. Theoretical analysis is performed on the fundamental limits of sampling accuracy and on the performance of the sampling techniques, and is validated using simulation with real traffic. Both the theoretical analysis and the simulation results show that stratified random sampling with optimum allocation achieves the best performance compared with the other sampling methods. However, stratified sampling with optimum allocation requires extra statistics from the parent traffic traces, which cannot be obtained in real applications. In order to overcome this shortcoming, a novel adaptive stratified sampling strategy is proposed, based on stratified sampling with optimum allocation. A least-mean-square (LMS) linear prediction algorithm is employed to predict the required statistics from past observations. Simulation results show that the proposed adaptive stratified sampling method closely approaches the performance of stratified sampling with optimum allocation. Finally, a detailed introduction to the SLA monitoring software design is presented. Measurement results are presented and used to calibrate the systematic error in the measurements. Measurements between various remote sites demonstrate the impressively good QoS provided by Australian ISPs for premium services.
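As a rough illustration of the adaptive strategy described above, the sketch below combines optimum (Neyman) allocation with an LMS one-step prediction of each stratum's standard deviation from past observations. The filter order, step size and stratum setup are assumptions for illustration, not the thesis's configuration.

```python
# Sketch: stratified sampling with optimum (Neyman) allocation, where the
# per-stratum standard deviations are predicted from past observations with a
# simple LMS filter. All parameter choices are illustrative assumptions.
import numpy as np


def lms_predict(history, order=4, mu=0.01):
    """One-step LMS prediction of the next value of a scalar series."""
    w = np.zeros(order)
    for t in range(order, len(history)):
        x = history[t - order:t][::-1]          # most recent samples first
        e = history[t] - w @ x                  # prediction error
        w += mu * e * x                         # LMS weight update
    return float(w @ history[-order:][::-1])    # predicted next value


def neyman_allocation(n_total, stratum_sizes, predicted_std):
    """Allocate n_total probes across strata proportionally to N_h * sigma_h.
    Rounding means the total may differ slightly from n_total."""
    weights = np.asarray(stratum_sizes, dtype=float) * np.asarray(predicted_std)
    return np.maximum(1, np.round(n_total * weights / weights.sum())).astype(int)


# Example: three strata (e.g. time-of-day periods) with past delay-std observations.
rng = np.random.default_rng(0)
histories = [rng.normal(loc, 0.1, size=50) for loc in (1.0, 2.0, 4.0)]
sigma_hat = [max(1e-6, lms_predict(h)) for h in histories]
print(neyman_allocation(n_total=100, stratum_sizes=[600, 600, 600],
                        predicted_std=sigma_hat))
```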
65

A Network Conditions Estimator for Voice Over IP Objective Quality Assessment

Nocito, Carlos Daniel 22 November 2011 (has links)
Objective quality evaluation is a key element for the success of the emerging Voice over IP (VoIP) technologies. Although there are extensive economic incentives for the convergence of voice, data, and video networks, packet networks such as the Internet have inherent incompatibilities with the transport of real-time services. Under this paradigm, network planners and administrators are interested in ongoing mechanisms to measure and ensure the quality of these real-time services. Objective quality assessment algorithms can be broadly divided into a) intrusive methods, which require a reference signal, and b) non-intrusive methods, which do not. The latter group typically requires knowledge of the network conditions (level of delay, jitter, packet loss, etc.) and has been a very active area of research in the past decade. State-of-the-art methods for objective non-intrusive quality assessment achieve high correlations with subjective tests. However, current large voice transport networks are in a hybrid state, where the necessary network parameters cannot easily be observed from the packet traffic between nodes. This thesis proposes a new process, the Network Conditions Estimator (NCE), which can serve as a bridge element to real-world hybrid networks. Two classification systems, an artificial neural network and a C4.5 decision tree, were developed using speech from a database collected from experiments under controlled network conditions. The database comprised a group of four female speakers and three male speakers, who conducted unscripted conversations without knowledge of the details of the experiment. Using mel-frequency cepstral coefficients (MFCCs) as the feature set, an accuracy of about 70% was achieved in detecting the presence of jitter or packet loss on the channel. The resulting classifier can be incorporated as an input to the E-Model in order to properly estimate the QoS of a network in real time. Additionally, rather than just providing an estimate of the subjective quality of service delivered, the NCE provides insight into the cause of low performance.
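The general feature/classifier pipeline can be sketched as follows: MFCC features computed on speech frames feed a decision-tree classifier that flags degraded network conditions. The thesis used C4.5 and a neural network on recorded conversational speech; here a CART tree (scikit-learn) and synthetic audio stand in purely to show the data flow, and the labels and signals are fabricated for illustration.

```python
# Hedged sketch of an MFCC-based degradation classifier (not the thesis's
# actual NCE implementation). Requires numpy, librosa and scikit-learn.
import numpy as np
import librosa
from sklearn.tree import DecisionTreeClassifier

sr = 8000
rng = np.random.default_rng(1)


def mfcc_frames(y):
    # 13 MFCCs per frame, transposed to shape (frames, features)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T


# "Clean" speech stand-in vs. a copy with zeroed-out bursts mimicking packet loss.
clean = rng.standard_normal(sr * 2)
lossy = clean.copy()
lossy[4000:4400] = 0.0      # crude stand-in for a lost packet burst
lossy[9000:9300] = 0.0

X_clean, X_lossy = mfcc_frames(clean), mfcc_frames(lossy)
X = np.vstack([X_clean, X_lossy])
y = np.concatenate([np.zeros(len(X_clean)), np.ones(len(X_lossy))])  # 0 = clean, 1 = impaired

clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
print("training accuracy:", clf.score(X, y))
```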
66

On Admission Control for IP Networks Based on Probing

Más, Ignacio January 2008 (has links)
The current Internet design is based on a best-effort service, which combines high utilization of network resources with architectural simplicity. As a consequence of this design, the Internet is unable to provide guaranteed or predictable quality of service (QoS) to real-time services that have constraints on end-to-end delay, delay jitter and packet loss. To add QoS capabilities to the present Internet, the new functions need to be simple to implement, while allowing high network utilization. In recent years, different methods have been investigated to provide the required QoS. Most of these methods include some form of admission control, so that new flows are only admitted to the network if the admission does not decrease the quality of connections already in progress below some defined level. To achieve the required simplicity, a new family of admission control methods, called end-to-end measurement-based admission control, moves the admission decision to the edges of the network. This thesis presents a set of methods for admission control based on measurements of packet loss. The thesis studies how to deploy admission control in an incremental way: First, admission control is included in the audiovisual real-time applications, without any support from the network. Second, admission control is enabled at the transport layer to differentiate between elastic and inelastic flows, by embedding the probing mechanism in UDP and using the inherent congestion control of TCP. Finally, admission control is deployed at the network layer by providing differentiated scheduling in the network for probe and data packets, which then allows the operator to control the blocking probability for the inelastic flows and the average throughput for the elastic flows. The thesis offers a description of the incremental steps to provide QoS on a DiffServ-based Internet. It analyzes the proposed schemes and provides extensive performance figures based on simulations and on real implementations. It also shows how the admission control can be used in multicast sessions by making the admission decision at the receiver. The thesis also provides two different mathematical analyses of the network-layer admission control, which enable operators to obtain initial configuration parameters for the admission decision, such as queue sizes, based on the forecasted or measured traffic volume. The thesis ends by considering a new method for overload control in WLAN cells, closely based on the admission control ideas presented in the rest of the articles.
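The core decision in end-to-end measurement-based admission control can be sketched very simply: before a flow is admitted, the end host sends probe packets along the path, measures the probe loss, and accepts the flow only if the loss stays below a target threshold. The probe count, threshold and probe-sending interface below are illustrative assumptions, not the thesis's exact scheme.

```python
# Sketch of a probe-based admission decision (illustrative only).
import random


def measure_probe_loss(send_probe, n_probes=200):
    """send_probe() returns True if the probe was acknowledged, False if lost."""
    lost = sum(0 if send_probe() else 1 for _ in range(n_probes))
    return lost / n_probes


def admit_flow(send_probe, loss_target=0.01):
    """End-to-end measurement-based admission control decision."""
    return measure_probe_loss(send_probe) <= loss_target


# Toy example: a path that currently drops about 0.5% of probe packets.
path = lambda: random.random() > 0.005
print("admit:", admit_flow(path))
```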
67

Exposing and Aggregating Non-functional Properties in SOA from the Perspective of the Service Consumer

Becha, Hanane 18 October 2012 (has links)
Non-functional properties (NFPs) represent an important facet of service descriptions, especially when a Service Oriented Architecture (SOA) approach is used. An effective SOA service development approach requires the identification, specification, implementation, aggregation, management and monitoring of service-related NFPs. However, at this point in time, NFPs are either not handled at all or handled partially in proprietary ways. The goal of this thesis is to encourage their availability for use. In this thesis, the focus is on the NFPs relevant from the perspective of service consumers, as opposed to the perspective of service providers (or developers) or to multiple perspectives. In other words, the scope covers only the NFPs that need to be published to help service consumers determine whether a given service is an appropriate one for their needs (e.g., descriptions of NFPs to be attached to the service along with the functionality description). This thesis provides the following contributions to the SOA knowledge base: definition of a domain-independent catalogue comprising 17 NFPs relevant to the descriptions of atomic services from the perspective of service consumers, derived from a literature review and validated via a two-step survey; formalization of NFP representation by defining data structures to enable quantifying and codifying them, together with a corresponding XML schema; definition, implementation and validation of algorithms to aggregate the NFPs of a composite service based on the NFPs of its underlying services, with a discussion of the NFP aggregation limitations; definition of a modeling approach for the NFP-aware selection of services, which involves aspect-oriented modeling with the User Requirements Notation, in the context of SOA; integration of NFP descriptions into the Web Services Description Language (WSDL); and definition and use of the discriminator operator in service composition, to enable the creation of fault-tolerant composite services. Overall, this work contributes to research by providing better insight into the nature, relevance, and composability of NFPs in a service engineering context. As for industrial impact, this work contributes a validated collection of NFPs with a concrete syntax and composition algorithms ready to be used for defining, selecting, and composing NFP-driven services and for evolving current SOA-related standards.
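To illustrate what NFP aggregation for a composite service means in practice, here is a minimal sketch for a sequential composition using common textbook rules: response time and cost add up, availability multiplies, and throughput is limited by the slowest service. The property set and rules shown are assumptions for illustration, not the thesis's full 17-property catalogue or its algorithms.

```python
# Sketch of NFP aggregation for services invoked sequentially (illustrative).
import math
from dataclasses import dataclass


@dataclass
class NFP:
    response_time_ms: float
    cost: float
    availability: float      # probability in [0, 1]
    throughput_rps: float    # requests per second


def aggregate_sequential(services):
    return NFP(
        response_time_ms=sum(s.response_time_ms for s in services),   # delays add
        cost=sum(s.cost for s in services),                           # costs add
        availability=math.prod(s.availability for s in services),     # all must be up
        throughput_rps=min(s.throughput_rps for s in services),       # bottleneck
    )


# Example: composing two services invoked one after the other.
a = NFP(response_time_ms=120.0, cost=0.002, availability=0.999, throughput_rps=500.0)
b = NFP(response_time_ms=80.0, cost=0.001, availability=0.995, throughput_rps=300.0)
print(aggregate_sequential([a, b]))
```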
68

QoS evaluation of bandwidth schedulers in IPTV networks offered SRD fluid video traffic: a simulation study

Islam, Md Rashedul January 2009 (has links)
IPTV is now offered by several operators in Europe, the US and Asia using broadcast video over private IP networks that are isolated from the Internet. IPTV services rely on transmission of live (real-time) video and/or stored video. Video on Demand (VoD) and Time-shifted TV are implemented by IP unicast, while Broadcast TV (BTV) and Near Video on Demand are implemented by IP multicast. IPTV services require QoS guarantees and can tolerate no more than a 10⁻⁶ packet loss probability, 200 ms delay, and 50 ms jitter. Low delay is essential for satisfactory trick-mode performance (pause, resume, fast forward) for VoD, and for fast channel change times for BTV. Internet Traffic Engineering (TE) is defined in RFC 3272 and involves both capacity management and traffic management. Capacity management includes capacity planning, routing control, and resource management. Traffic management includes (1) nodal traffic control functions such as traffic conditioning, queue management, and scheduling, and (2) other functions that regulate traffic flow through the network or that arbitrate access to network resources. An IPTV network architecture includes multiple networks (core network, metro network, access network and home network) that connect devices (super head-end, video hub office, video serving office, home gateway, set-top box). Each IP router in the core and metro networks implements some queueing and packet scheduling mechanism at the output link controller. Popular schedulers in IP networks include Priority Queueing (PQ), Class-Based Weighted Fair Queueing (CBWFQ), and Low Latency Queueing (LLQ), which combines PQ and CBWFQ. The thesis analyzes several packet scheduling algorithms that can optimize the trade-off between system capacity and end-user performance for the traffic classes. The FIFO, PQ and GPS queueing methods had previously been implemented in the simulator; this thesis implements the LLQ scheduler in the simulator and evaluates the performance of these packet schedulers. The simulator, provided by Ernst Nordström, was built in the Visual C++ 2008 environment and tested and analyzed in MATLAB 7.0 under Windows Vista.
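The scheduling decision LLQ makes can be sketched as follows: one strict-priority queue for real-time traffic, policed to a configured budget, combined with class-based weighted fair queueing for the remaining classes, here approximated by deficit round robin. Class names, weights and the policing budget are illustrative assumptions, not the simulator's configuration.

```python
# Simplified sketch of a Low Latency Queueing (LLQ) dequeue decision.
from collections import deque


class LLQ:
    def __init__(self, weights, priority_budget_bytes):
        self.priority = deque()                       # strict-priority (EF) queue
        self.classes = {c: deque() for c in weights}  # CBWFQ classes
        self.weights = weights                        # quantum (bytes) per visit
        self.deficit = {c: 0 for c in weights}
        self.prio_budget = priority_budget_bytes      # policer for the EF class
        self.prio_used = 0                            # reset every policing interval

    def enqueue(self, cls, pkt, size):
        (self.priority if cls == "EF" else self.classes[cls]).append((pkt, size))

    def dequeue(self):
        # 1) serve the priority queue first, as long as it is within its budget
        if self.priority and self.prio_used < self.prio_budget:
            pkt, size = self.priority.popleft()
            self.prio_used += size
            return pkt
        # 2) otherwise serve the CBWFQ classes (deficit round robin, simplified
        #    to one packet per call)
        for cls, q in self.classes.items():
            self.deficit[cls] += self.weights[cls]
            if q and q[0][1] <= self.deficit[cls]:
                pkt, size = q.popleft()
                self.deficit[cls] -= size
                return pkt
        return None


# Example: an EF (VoIP) packet bypasses the bulk classes up to the policed budget.
sched = LLQ(weights={"video": 3000, "best_effort": 1500}, priority_budget_bytes=10_000)
sched.enqueue("best_effort", "web-pkt", 1200)
sched.enqueue("EF", "voip-pkt", 200)
print(sched.dequeue())   # -> "voip-pkt"
```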
69

Fast Reroute with Pre-established Bypass Tunnels in MPLS

Cheng, Chen-Chang 01 September 2003 (has links)
This paper proposes a new approach to support restoration of Label Switched Paths (LSPs) set up in an MPLS network. The proposed scheme tries to establish all possible bypass tunnels according to the maximum bandwidth between the two LSRs around the protected Label Switched Router (LSR). The scheme uses the idea of the maximum bandwidth between two LSRs and establishes bypass tunnels passing through the critical links that affect this maximum bandwidth. Every LSP affected by an LSR failure or a link failure can then choose a bypass tunnel that fits its QoS constraints for rerouting. This paper also compares the proposed bypass tunnels with link-disjoint bypass tunnels. The simulation results show that the proposed approach suffers less packet loss during rerouting and allows more affected LSPs to be rerouted, compared with RSVP and efficient Pre-Qualify. The proposed bypass tunnels thus perform better than link-disjoint bypass tunnels.
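The "maximum bandwidth between two LSRs" that the scheme relies on can be computed with a widest-path variant of Dijkstra's algorithm, which maximises the bottleneck bandwidth instead of minimising total cost. The sketch below shows only that computation under an assumed graph format; the paper's tunnel-placement logic is not reproduced here.

```python
# Widest-path (maximum bottleneck bandwidth) computation, illustrative only.
import heapq


def widest_path_bandwidth(graph, src, dst):
    """graph: {node: {neighbour: available_bandwidth}}. Returns the maximum
    bottleneck bandwidth achievable on any path from src to dst (0 if none)."""
    best = {src: float("inf")}
    heap = [(-float("inf"), src)]               # max-heap on bottleneck bandwidth
    while heap:
        neg_bw, node = heapq.heappop(heap)
        bw = -neg_bw
        if node == dst:
            return bw
        if bw < best.get(node, 0):
            continue                            # stale heap entry
        for nbr, link_bw in graph.get(node, {}).items():
            cand = min(bw, link_bw)             # bottleneck along this extension
            if cand > best.get(nbr, 0):
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return 0.0


# Example: widest bypass from A to D around a protected LSR that has been
# removed from the graph.
topo = {"A": {"C": 40, "E": 60}, "C": {"D": 40}, "E": {"D": 30}, "D": {}}
print(widest_path_bandwidth(topo, "A", "D"))    # 40, via A-C-D
```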
70

Study of Supporting Per-Class Differentiated Service on MPLS VPN

Wu, Jung-Chieh 10 August 2004 (has links)
Nowadays, MPLS VPN has become a widely used solution for providing QoS guarantees in the face of unexpected changes in network environments. This thesis investigates the system performance of a BGP-based MPLS VPN that supports per-class differentiated services. The results are compared with those of a network without VPN. In this study, the target network is simulated with the OPNET simulator. By adjusting the network parameters and creating different scenarios, such as network congestion and disconnection, we make statistical analyses based on the simulation results. It is observed that, in addition to increasing the number of labels carried in each packet, MPLS VPN requires PE routers to support additional functions, such as looking up IP routing tables and exchanging the various tables used for routing. Therefore, introducing VPN may increase the processing load and the overhead for data transmission. On the other hand, MPLS VPN may take a longer convergence time than a non-VPN network to establish the complete routing information. However, when a network disconnection occurs, the former achieves better throughput than the latter owing to its shorter convergence time in finding new routes. Also, if the network becomes congested, the transmission delay of EF traffic in MPLS VPN is smaller, since an alternative LSP for it is pre-established. For the same disconnected LSP route, the MPLS VPN network also achieves better throughput, because the guaranteed EF traffic can be rerouted via different LSP routes. According to our simulation results, the background traffic suffers the largest throughput decrease, while the EF traffic suffers the least.
