31

Traffic engineering for quality of service provisioning in IP networks

Trimintzios, Panagiotis January 2004 (has links)
No description available.
32

Semantic web service generation for text classification

Ball, Stephen Wayne January 2006 (has links)
No description available.
33

Resource allocation policies for service provisioning systems

Palmer, Jennie January 2006 (has links)
This thesis is concerned with maximising the efficiency of hosting of service provisioning systems consisting of clusters or networks of servers. The tools employed are those of probabilistic modelling, optimization and simulation. First, a system where the servers in a cluster may be switched dynamically and preemptively from one kind of work to another is examined. The demand consists of two job types joining separate queues, with different arrival and service characteristics, and also different relative importance represented by appropriate holding costs. The switching of a server from queue i to queue j incurs a cost which may be monetary or may involve a period of unavailability. The optimal switching policy is obtained numerically by solving a dynamic programming equation. Two heuristic policies - one static and one dynamic - are evaluated by simulation and compared to the optimal policy. The dynamic heuristic is shown to perform well over a range of parameters, including changes in demand. The model, analysis and evaluation are then generalized to an arbitrary number, M, of job types. Next, the problem of how best to structure and control a distributed computer system containing many processors is considered. The performance trade-offs associated with different tree structures are evaluated approximately by applying appropriate queueing models. It is shown that, for a given set of parameters and job distribution policy, there is an optimal tree structure that minimizes the overall average response time. This is obtained numerically through comparison of average response times. A simple heuristic policy is shown to perform well under certain conditions. The last model addresses the trade-offs between reliability and performance. A number of servers, each of which goes through alternating periods of being operative and inoperative, offer services to an incoming stream of demands. The objective is to evaluate and optimize performance and cost metrics. A large real-life data set containing information about server breakdowns is analyzed first. The results indicate that the durations of the operative periods are not distributed exponentially. However, hyperexponential distributions are found to be a good fit for the observed data. A model based on these distributions is then formulated, and is solved exactly using the method of spectral expansion. A simple approximation which is accurate for heavily loaded systems is also proposed. The results of a number of numerical experiments are reported.
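For readers unfamiliar with the dynamic programming formulation mentioned in this abstract, the sketch below runs value iteration on a truncated, uniformised version of a two-queue switching model of this kind. All rates, costs, the truncation level and the discounting are illustrative assumptions, not the parameters studied in the thesis.

```python
# A minimal value-iteration sketch of a two-queue server-switching model:
# two job types with holding costs, one server that can be switched (at a
# cost) between queues, truncated to finite queue lengths and uniformised
# so a discounted-cost recursion applies. All numbers are assumptions.
import numpy as np

LAM = (0.4, 0.3)     # arrival rates of the two job types (assumed)
MU = (1.0, 0.8)      # service rates when the server works on queue 0 / 1
HOLD = (3.0, 1.0)    # holding cost rates expressing relative importance
SWITCH_COST = 2.0    # cost incurred when the server changes queue
N = 15               # truncation level for each queue
GAMMA = sum(LAM) + max(MU)   # uniformisation constant
BETA = 0.95          # discount factor per uniformised step

def value_iteration(iters=500):
    # V[n0, n1, s]: discounted cost-to-go with n_i jobs queued, server at s.
    V = np.zeros((N + 1, N + 1, 2))
    for _ in range(iters):
        Vn = np.empty_like(V)
        for n0 in range(N + 1):
            for n1 in range(N + 1):
                for s in (0, 1):
                    best = np.inf
                    for a in (0, 1):       # a: queue the server attends next
                        cost = HOLD[0] * n0 + HOLD[1] * n1
                        cost += SWITCH_COST if a != s else 0.0
                        ev = LAM[0] * V[min(n0 + 1, N), n1, a]
                        ev += LAM[1] * V[n0, min(n1 + 1, N), a]
                        if (n0, n1)[a] > 0:   # a departure is possible
                            ev += MU[a] * V[n0 - (a == 0), n1 - (a == 1), a]
                        else:
                            ev += MU[a] * V[n0, n1, a]
                        ev += (GAMMA - LAM[0] - LAM[1] - MU[a]) * V[n0, n1, a]
                        best = min(best, cost + BETA * ev / GAMMA)
                    Vn[n0, n1, s] = best
        V = Vn
    return V

V = value_iteration()
print(V[5, 5, 0], V[5, 5, 1])   # cost-to-go from a moderately loaded state
```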
34

Design and implementation of a QoS-Supportive system for reliable multicast

Di Ferdinando, Antonio January 2007 (has links)
As the Internet is increasingly being used by business companies to offer and procure services, providers of networked system services are expected to assure customers of the specific Quality of Service (QoS) they can offer. This leads to scenarios where users prefer to negotiate required QoS guarantees prior to accepting a service, and service providers assess their ability to provide the customer with the requested QoS on the basis of existing resource availability. A system to be deployed in such scenarios should, in addition to providing the services, (i) monitor resource availability, (ii) be able to assess whether or not requested QoS can be met, and (iii) adapt to QoS perturbations (e.g., node failures) which undermine any assumptions made on continued resource availability. This thesis focuses on building such a QoS-Supportive system for reliably multicasting messages within a group of crash-prone nodes connected by loss-prone networks. System design involves developing a Reliable Multicast protocol and analytically estimating the multicast performance in terms of protocol parameters. It considers two cases regarding message size: small messages that fit into a single packet and large ones that need to be fragmented into multiple packets. Analytical estimations are obtained through stochastic modelling and approximation, and their accuracy is demonstrated using simulations. They allow the affordability of the requested QoS to be numerically assessed for a given set of performance metrics of the underlying network, and also indicate the values to be used for the protocol parameters if the affordable QoS is to be achieved. System implementation takes a modular approach, and the major sub-systems built include the QoS negotiation component, the network monitoring component and the reliable multicast protocol component. Two prototypes have been built. The first is built as a middleware system in its own right and is used to test our ideas over a group of geographically distant nodes on PlanetLab. The second prototype is developed as part of the JGroups Reliable Communication Toolkit and provides, besides an example of a scenario directly benefiting from such technology, an example integration of our subsystem into an already-existing system.
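The kind of affordability assessment this abstract describes can be pictured with a back-of-the-envelope calculation of the following form. The independent-loss model, the one-RTT-per-round latency estimate and all numbers below are illustrative assumptions, not the stochastic model developed in the thesis.

```python
# A rough sketch of a QoS affordability check for reliable multicast: given
# measured network characteristics, decide whether a requested (reliability,
# latency) pair can be met and with how many retransmission rounds.
# Per-receiver losses are assumed independent, which is an illustrative
# simplification.

def min_rounds(loss_prob: float, receivers: int, target_reliability: float,
               max_rounds: int = 10):
    """Smallest number of send/retransmit rounds r such that every receiver
    gets the message with probability >= target_reliability."""
    for r in range(1, max_rounds + 1):
        p_all = (1.0 - loss_prob ** r) ** receivers
        if p_all >= target_reliability:
            return r
    return None  # not affordable within max_rounds

def affordable(loss_prob, receivers, rtt, target_reliability, deadline):
    r = min_rounds(loss_prob, receivers, target_reliability)
    if r is None:
        return False, None
    latency = r * rtt          # crude latency estimate: one RTT per round
    return latency <= deadline, r

# Example: 2% loss, 20 receivers, 80 ms RTT, 99.9% delivery within 400 ms.
ok, rounds = affordable(0.02, 20, 0.080, 0.999, 0.400)
print(ok, rounds)
```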
35

Routing and transfers amongst parallel queues

Martin, Simon P. January 2008 (has links)
This thesis is concerned with maximizing the performance of policies for routing and transferring jobs in systems of heterogeneous servers. The tools used are probabilistic modelling, optimization and simulation. First, a system is studied where incoming jobs are allocated to the queue belonging to one of a number of servers, each of which goes through alternating periods of being operative and inoperative. The objective is to evaluate and optimize performance and cost metrics. Jobs incur costs for the amount of time that they spend in a queue before commencing service. The optimal routing policy for incoming jobs is obtained numerically by solving dynamic programming equations. A number of heuristic policies are compared against the optimal, and one dynamic routing policy is shown to perform well over a large range of parameters. Next, the problem of how best to deal with the transfer of jobs is considered. Jobs arrive externally into the queue attached to one of a number of servers, and on arrival are assigned a time-out period. A job whose time-out period expires before it commences service is instantaneously transferred to the end of another queue, based on a routing policy. Upon transfer, a transfer cost is incurred. An approximation to the optimal routing policy is computed and compared with a number of heuristic policies. One heuristic policy is found to perform well over a large range of parameters. The last model considered is the case where incoming jobs are allocated to the queue attached to one of a number of servers, each of which goes through periods of being operative and inoperative. Additionally, each job is assigned a time-out on arrival into a queue. Any job whose time-out period expires before it commences service is instantaneously transferred to the end of another queue, based on a transfer policy. The objective is to evaluate and optimize performance and cost metrics. Jobs incur costs for the amount of time that they spend in a queue before commencing service, and additionally incur a cost for each transfer they experience. A number of heuristic transfer policies are evaluated, and one heuristic is observed to perform well over a wide range of parameters.
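As a rough illustration of the dynamic routing heuristics evaluated in work of this kind, the sketch below routes each arrival to the server with the smallest estimated delay, penalising currently inoperative servers by their expected remaining repair time. The cost expression and parameter values are assumptions for the sake of the example, not the policies defined in the thesis.

```python
# An illustrative "shortest expected delay" routing heuristic for parallel
# queues whose servers alternate between operative and inoperative periods.
from dataclasses import dataclass

@dataclass
class Server:
    queue_len: int      # jobs currently waiting
    mu: float           # service rate when operative
    operative: bool     # current state
    mean_repair: float  # mean remaining inoperative period

def route(servers):
    """Return the index of the server a new arrival should join."""
    def expected_delay(s: Server) -> float:
        delay = (s.queue_len + 1) / s.mu      # crude workload estimate
        if not s.operative:
            delay += s.mean_repair            # penalty for a broken server
        return delay
    return min(range(len(servers)), key=lambda i: expected_delay(servers[i]))

# Example usage with three heterogeneous servers.
pool = [Server(4, 1.0, True, 2.0),
        Server(2, 0.5, True, 5.0),
        Server(1, 0.8, False, 3.0)]
print(route(pool))   # picks the server with the smallest estimated delay
```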
36

Support of elastic TCP traffic over broadband geostationary satellite networks

Karaliopoulos, Merkourios January 2004 (has links)
The last decade has seen the clear dominance of the Internet Protocol in data communication networks. Although the killer applications have changed significantly during this time, from file transfer and email to Web browsing and more recently to file sharing, TCP has been consistently responsible for the overwhelming majority of Internet traffic. More significantly, recent protocol design efforts within the Internet community adopt fundamental features of the protocol, strongly suggesting that TCP and TCP-like traffic will continue contributing the majority of Internet data traffic, at least in the short-term future. Given the clear IP dominance, broadband satellite networks may be viewed as yet another subnet over which the TCP/IP suite in general, and TCP traffic in particular, have to be efficiently supported. In this Thesis, we investigate issues relevant to the support of elastic TCP traffic over satellite networks. We focus our attention on geostationary satellite networks, which have almost monopolized the interest of the satcom community over the last decade, in light of the high risks and investment involved in the development of satellite constellations. We investigate mechanisms available at the satellite network for the provision of service differentiation to TCP flows. We demonstrate that fundamental satellite access network capabilities provide enough flexibility for the provision of qualitative service differentiation for TCP flows over these networks without necessitating per-flow state at the MAC layer and/or the computational overhead of prediction methods that are not straightforward for TCP traffic. Moreover, the split-TCP mechanism, despite the security-related and reliability-related concerns it raises, provides the network operator with significant additional flexibility in the treatment of TCP traffic. The mechanism forms a transport-layer differentiation mechanism that, when combined with lower-layer capabilities, can give rise to separate bearer services over the satellite network. We make two contributions to the study of the split-TCP concept. The first contribution is an analytical model for the estimation of split-TCP latency and the buffer requirements at the intermediate node that hosts the split-TCP agent. The second contribution is a split-TCP scheme, called split-Delayed Duplicate Acknowledgments (split-DDA), which draws heavily on DDA, a TCP variant presented and evaluated earlier as an end-to-end scheme in the context of terrestrial wireless networks. The provision of quantitative Quality-of-Service guarantees to TCP flows necessitates some form of access control to the satellite network, and the presence of TCP performance enhancing proxy agents can assist this task significantly. We present and evaluate a heuristic implicit admission control algorithm for TCP flows over split-TCP satellite networks that can preserve the target requirements in terms of TCP steady-state throughput and TCP latency. We describe generic fixed-point approximations for the performance of TCP flows in geostationary satellite networks. We provide examples of the method's applicability to various satellite network configurations, evaluate the method against simulation results in the context of MAC-shared satellite links with dynamic bandwidth allocation mechanisms, and discuss its strong and weak points. The utility of these approximations is further demonstrated in the case of the algorithm we introduce for the dynamic control of the TCP maximum receive window variable in split-TCP satellite networks. The algorithm accelerates TCP transfers at low load without unnecessarily overloading the MAC buffers at high load. Finally, as an addendum, we provide a case study of Web browsing over bandwidth-on-demand satellite links. Spanning three layers, namely the application, transport and access layers, the study demonstrates the impact of the radio interface mechanisms upon the performance perceived at the application layer.
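To give a flavour of receive-window control of the kind this abstract refers to, the sketch below shrinks the window a split-TCP proxy advertises as its MAC buffer towards the satellite link fills. The scaling rule, thresholds and link parameters are assumptions made for illustration, not the algorithm introduced in the thesis.

```python
# A hedged sketch of dynamic receive-window control at a split-TCP proxy:
# advertise a large window (up to the bandwidth-delay product) when the MAC
# buffer towards the satellite link is lightly used, and shrink it as the
# buffer fills. All constants below are illustrative assumptions.

MSS = 1460                 # bytes per segment
LINK_RATE = 2_000_000      # satellite link rate towards the terminal, bit/s
PROP_RTT = 0.56            # geostationary round-trip propagation delay, s
MAX_BUFFER = 256 * 1024    # MAC buffer size at the proxy/gateway, bytes

def advertised_window(buffer_occupancy: int) -> int:
    """Maximum receive window (bytes) to advertise for one connection."""
    bdp = LINK_RATE / 8 * PROP_RTT             # bandwidth-delay product
    free_fraction = max(0.0, 1.0 - buffer_occupancy / MAX_BUFFER)
    window = bdp * free_fraction               # shrink as the buffer fills
    window = max(2 * MSS, min(window, 64 * 1024))  # keep within sane bounds
    return int(window // MSS) * MSS            # round down to whole segments

print(advertised_window(0))            # lightly loaded: close to the cap
print(advertised_window(200 * 1024))   # heavily loaded: small window
```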
37

Admission control and bandwidth management in IP differentiated services networks

Georgoulas, Stylianos January 2006 (has links)
The ever increasing importance of IP networks to home and business users, the steadily growing number of devices and services that run on them, and the evolution of the Internet towards the global multiservice network of the future make the efficient utilization of resources an issue of great importance and the capability to provide Quality of Service (QoS) an important challenge. It is widely accepted that the current Internet, using the simple best effort service model, is not able to support in a satisfactory fashion emerging services and market demands, such as Voice over IP (VoIP), videoconferencing and real-time traffic in general. The latter has QoS requirements that the current best effort Internet cannot provide in a resource and, consequently, cost efficient manner, e.g. without massive overprovisioning. Differentiated Services (DiffServ) is seen as the emerging technology to support QoS in IP networks without the inherent scalability problems of Integrated Services (IntServ). This is achieved by grouping traffic with similar QoS requirements into a finite number of traffic classes, allocating bandwidth to these classes, and differentiating their forwarding treatment in the network. However, by simply providing forwarding differentiation, DiffServ does not fundamentally solve the problem of controlling congestion. If the amount of traffic injected in a given class is not controlled through admission control, overload situations will occur and all traffic flows in that class will suffer a potentially harsh QoS degradation. The main objectives of this thesis are to investigate issues related to bandwidth allocation for provisioning real-time traffic classes and to propose admission control functions that can prevent overload situations, so that the designated QoS guarantees are provided while at the same time improving the utilization of the allocated resources under any offered traffic load conditions. We begin by investigating certain bandwidth management issues with respect to bandwidth allocation and admission control schemes for the support of real-time traffic in DiffServ networks, relating to the performance of the schemes as a function of topological placement and assumed multiplexing gains. We validate our study using simulations with the publicly available ns-2 simulator. Taking into account the implications of our bandwidth management study, we then proceed to present our approach towards admission control for real-time traffic in DiffServ network domains, covering both the case where the traffic originates and terminates within the same domain (intra-domain traffic) and the case where the traffic has to traverse a sequence of domains before reaching its destination (inter-domain traffic). By means of simulations we show that our proposed schemes perform very well and that they compare favourably against other schemes found in the literature. Keywords: Admission Control, Bandwidth Management, Differentiated Services (DiffServ), Quality of Service (QoS).
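The admission decision at the heart of such schemes can be illustrated with a generic measurement-based rule: admit a new flow only if the measured class load plus the flow's declared peak rate stays within a utilisation target on the bandwidth allocated to the class. This is a textbook rule used purely for illustration; it is not the specific algorithm proposed in the thesis, and the class names and numbers are assumed.

```python
# A generic measurement-based admission control sketch for a DiffServ
# real-time class, showing the kind of decision taken at the edge of a domain.

class RealTimeClassAC:
    def __init__(self, allocated_bw_bps: float, utilisation_target: float = 0.9):
        self.allocated_bw = allocated_bw_bps
        self.target = utilisation_target
        self.measured_load = 0.0   # updated by a traffic measurement process

    def update_measurement(self, load_bps: float):
        """Feed in the current measured aggregate rate of admitted flows."""
        self.measured_load = load_bps

    def admit(self, declared_peak_bps: float) -> bool:
        """Admit the new flow only if the class stays below its target load."""
        return (self.measured_load + declared_peak_bps
                <= self.target * self.allocated_bw)

# Example: a 10 Mb/s class receiving VoIP-like flows of 96 kb/s peak rate.
ac = RealTimeClassAC(10_000_000)
ac.update_measurement(8_600_000)
print(ac.admit(96_000))   # True: 8.696 Mb/s fits under the 9 Mb/s target
print(ac.admit(500_000))  # False: would exceed the utilisation target
```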
38

Dynamic discovery, creation and invocation of type adaptors for Web Service workflow harmonisation

Szomszor, Martin January 2007 (has links)
Service-oriented architectures have evolved to support the composition and utilisation of heterogeneous resources, such as services and data repositories, whose deployments can span both physical and organisational boundaries. The Semantic Web Service paradigm facilitates the construction of workflows over such resources using annotations that express the meaning of the service through a shared conceptualisation. While this aids non-expert users in the composition of meaningful workflows, sophisticated middleware is required to cater for the fact that service providers and consumers often assume different data formats for conceptually equivalent information. When syntactic mismatches occur, some form of workflow harmonisation is required to ensure that data incompatibilities are resolved, a step we refer to as syntactic mediation. Current solutions are entirely manual; users must consider the low-level interoperability issues and insert Type Adaptor components into the workflow by hand, contradicting the Semantic Web Service ideology. By exploiting the fact that services are connected together based on shared conceptual interfaces, it is possible to associate a canonical data model with these shared concepts, providing the basis for workflow harmonisation through this intermediary data model. To investigate this hypothesis, we have developed a formalism to express the mapping of elements between data models in a modular and composable fashion. To utilise such a formalism, we propose an additional architecture that facilitates the discovery of declarative mediation rules and the subsequent on-the-fly construction of Type Adaptors that can translate data between different syntactic representations. This formalism and proposed architecture have been implemented and evaluated against bioinformatics data sources to demonstrate a scalable and efficient solution that offers composability with virtually no overhead. This novel mediation approach scales well as the number of compatible data formats increases, promotes the sharing and reuse of mediation rules, and facilitates the automatic inclusion of Type Adaptor components into workflows.
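The mediation idea can be sketched in miniature as follows: each format registers a mapping to and from the canonical model, and an adaptor between any two formats is composed on demand. The formats and field names below are hypothetical examples; the thesis's own formalism for mapping rules is richer than this toy dictionary-based version.

```python
# A toy sketch of syntactic mediation through a canonical intermediary model:
# each data format needs only a pair of mappings to and from the shared model,
# and Type Adaptors between any two formats are then composed on the fly.

# Canonical (conceptual) model: a plain dictionary with agreed field names.
to_canonical = {
    "fasta_like": lambda s: {"id": s["identifier"], "sequence": s["seq"]},
    "tabular":    lambda s: {"id": s["acc"],        "sequence": s["residues"]},
}
from_canonical = {
    "fasta_like": lambda c: {"identifier": c["id"], "seq": c["sequence"]},
    "tabular":    lambda c: {"acc": c["id"],        "residues": c["sequence"]},
}

def build_adaptor(source_fmt: str, target_fmt: str):
    """Compose a Type Adaptor source -> canonical -> target on the fly."""
    up, down = to_canonical[source_fmt], from_canonical[target_fmt]
    return lambda record: down(up(record))

# Example: harmonise a "tabular" service output into the "fasta_like" input
# expected by the next service in the workflow.
adaptor = build_adaptor("tabular", "fasta_like")
print(adaptor({"acc": "P12345", "residues": "MKTAYIAK"}))
```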
39

Formal patterns for Web-based systems design

Rezazadeh, Abdolbaghi January 2006 (has links)
The ubiquitous and simple interface of Web browsers has opened the door for the development of a new class of distributed applications which have become known as Web applications. As more and more systems become Web-enabled, we become increasingly dependent on Web applications. Therefore, the reliability of such systems is a crucial factor for the successful operation of many modern organisations and institutes. In the first part of this thesis we review how Web systems have evolved from simple static pages, in their early days, to their current situation as distributed applications with sophisticated functionalities. We also examine how design methods have evolved to align with the rapid changes both in emerging technologies and in growing functionalities. Although design approaches for Web applications have improved during the last decade, we conclude that dependability should be given more consideration. In Chapter 2 we explain how this could be achieved through the application of formal methods, and accordingly we provide an overview of dependability and formal methods in this chapter. In the second part of this research we follow a practical approach to the formal modelling of Web applications. Accordingly, in Chapter 3 we develop a series of formal models for an integrated holiday booking system. Our main objectives are to gain some common knowledge of the domain and to identify some key areas and features with regard to our formal modelling approach. Formal modelling of large Web applications can be a very complex process. In Chapter 4 we introduce the idea of formal patterns for specification and refinement, to accelerate the modelling process and to help alleviate the burden of formal modelling. In a further attempt to tackle the complexity of the formal modelling of Web applications, we introduce the idea of specification partitioning in Chapter 5. Specification partitioning is closely related to the notion of composition. In this chapter we extend some CSP-like composition techniques to build the system specification from subsystems or parts. A summary of our research, related findings and some suggestions for future work are presented in Chapter 6.
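The CSP-like composition mentioned here can be pictured with the standard alphabetised-parallel trace law; the subsystem names below (Booking, Payment) are hypothetical placeholders, and this is textbook CSP rather than the thesis's own notation.

```latex
% A generic trace-semantics illustration of building a system specification
% from two subsystem specifications that synchronise on their shared events.
\[
  \mathit{System} \;=\; \mathit{Booking} \parallel \mathit{Payment}
\]
\[
  \mathrm{traces}(\mathit{System}) \;=\;
  \bigl\{\, t \in (\alpha \mathit{Booking} \cup \alpha \mathit{Payment})^{*} \;\bigm|\;
     t \upharpoonright \alpha \mathit{Booking} \in \mathrm{traces}(\mathit{Booking})
     \;\wedge\;
     t \upharpoonright \alpha \mathit{Payment} \in \mathrm{traces}(\mathit{Payment}) \,\bigr\}
\]
```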
40

Performance evaluation of active network-based unicast and multicast congestion control protocols

Sari, Riri Fitri January 2003 (has links)
This thesis investigates the application of the Active Networks (AN) paradigm in congestion control. ANs provide an alternative paradigm for solving network problems by allowing the network elements to perform computation. Thus, AN is a promising paradigm for shortening the deployment time of new protocols. Congestion control is a vital element for the Internet to avoid undesirable situations such as congestion collapse. The complexity and importance of congestion control have attracted many researchers to approach it in different ways, i.e. queueing theory, control theory, and recently game theory (pricing). This thesis is concerned with the performance evaluation of AN-based congestion control protocols, which have been classified according to their modes of operation, i.e. unicast, multicast single rate, and multicast multirate protocols. The research phase includes modelling and simulation experiments with the ns-2 network simulator. The first area of interest in this thesis is unicast congestion control protocols. We integrate, run and test the novel active queue management scheme called Random Early Marking (REM) over an AN-based unicast congestion controlled network called Active Congestion Control Transmission Control Protocol (ACC TCP). It can be concluded that the implementation of the AN paradigm in congestion control, enhanced by the application of the REM queueing policy, improves the performance of the network in terms of low buffer occupancy and stability compared with one using the Random Early Detection (RED) queue management algorithm. Results of simulation studies comparing the performance of conventional protocols with those of AN-based protocols are presented. We investigate the TCP-friendliness behaviour of an AN-based single rate multicast congestion control protocol called Active Error Recovery/Nominee Congestion Avoidance (AER/NCA), which uses active services to recover from loss at the point of loss and to assist congestion control. The use of AN helps the multicast application to achieve optimal data rates. We compare the TCP-friendliness results of AER/NCA to those of a single rate multicast protocol called Pragmatic General Multicast Congestion Control (PGMCC). Our simulations revealed that AER/NCA achieves the desirable property of TCP-friendliness. We also calculated AER/NCA's fairness index. For multicast multirate congestion control protocols we compare an adaptive AN-based layered multicast protocol called ALMA (Active Layered Multicast Adaptation) with a non-active-network-based one called Packet-pair Receiver-driven Layered Multicast (PLM). ALMA performs layered multicast congestion control using an AN approach, whereas PLM uses packet-pair techniques and a fair scheduler. Experimental results show that ALMA reacts differently from PLM in sharing the bottleneck link with CBR and TCP flows. We found that ALMA has fast convergence properties and provides flexibility to manage the network using the price function. Our experiments show that ALMA is not a TCP-friendly protocol and has a low inter-protocol fairness index.
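For reference, the sketch below follows the published REM formulation (a link price driven by backlog and rate mismatch, with marking probability 1 - phi^(-price) and the target backlog taken as zero). The parameter values are illustrative, and this is not the ns-2 integration carried out in the thesis.

```python
# A compact sketch of the Random Early Marking (REM) active queue management
# update mentioned above. Prices grow while backlog or input rate exceed what
# the link can clear, and the marking probability follows from the price.

PHI = 1.001     # marking base, phi > 1
GAMMA = 0.01    # price step size (illustrative)
ALPHA = 0.1     # weight of queue backlog relative to rate mismatch

class RemQueue:
    def __init__(self, capacity_pps: float):
        self.capacity = capacity_pps   # link capacity in packets per second
        self.price = 0.0

    def update_price(self, backlog_pkts: float, input_rate_pps: float):
        """Per-interval price update: never allowed to fall below zero."""
        mismatch = ALPHA * backlog_pkts + input_rate_pps - self.capacity
        self.price = max(0.0, self.price + GAMMA * mismatch)

    def mark_probability(self) -> float:
        """Probability of marking (or dropping) an arriving packet."""
        return 1.0 - PHI ** (-self.price)

# Example: a persistently overloaded link sees its marking probability rise.
q = RemQueue(capacity_pps=1000)
for _ in range(200):
    q.update_price(backlog_pkts=150, input_rate_pps=1200)
print(round(q.mark_probability(), 3))
```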
