About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A framework for developing and deploying business-to-business virtual communities

Tickle, Matthew January 2011 (has links)
The last decade has seen a growing interest in virtual communities (VCs) as a way of transferring and generating knowledge within organisations (Michaelides et al., 2010). The MySpace phenomenon and the increased use of VCs by large international organisations such as IBM and Procter and Gamble (P&G) confirm the importance of VCs in today's society and the global economy. The holistic approach of combining modern Internet tools and technologies with social networks presents both opportunities and challenges in the modern era (Tickle et al., 2007). A VC can be defined as a community of people with a common interest but not necessarily a common geographic location (Sands, 2003). In their most basic form, VCs are websites that allow their users to interact with each other using tools such as discussion forums, 'Blog Spaces', real-time chat and trading areas. VCs allow companies to build stronger, more cost-effective connections between themselves, their partners and their customers (Roberts, 2006). If planned and executed correctly, VCs can benefit businesses by improving resource allocation, customer service and revenues, as well as lowering operating costs. Furthermore, VCs can act as bridges between companies and their customers by fostering product awareness, providing forums for questions and concerns, and serving as conduits for feedback to improve future company products. It is therefore imperative that companies embrace business-to-business (B2B) VCs in order to remain competitive. Despite these benefits, numerous challenges stand to impede the success of each VC; the landscape is littered with VC projects that have failed to meet expectations due to poor decision-making at the development and subsequent deployment stages (Roberts, 2006). This research uses four qualitative case studies to create a holistic Framework with the aim of aiding practitioners wishing to develop and deploy their own B2B VCs. The Framework highlights the various decisions that must be made during the lifecycle of a VC before emphasising each decision's respective consequences. Contrary to common belief, this research has found that the technological aspect of a VC's development and deployment is not the most important factor - it is, in fact, the establishment of a community culture that dictates the success of a B2B VC.
2

Resource allocation policies for service provisioning systems

Palmer, Jennie January 2006 (has links)
This thesis is concerned with maximising the efficiency of hosting of service provisioning systems consisting of clusters or networks of servers. The tools employed are those of probabilistic modelling, optimization and simulation. First, a system where the servers in a cluster may be switched dynamically and preemptively from one kind of work to another is examined. The demand consists of two job types joining separate queues, with different arrival and service characteristics, and also different relative importance represented by appropriate holding costs. The switching of a server from queue i to queue j incurs a cost, which may be monetary or may involve a period of unavailability. The optimal switching policy is obtained numerically by solving a dynamic programming equation. Two heuristic policies - one static and one dynamic - are evaluated by simulation and compared to the optimal policy. The dynamic heuristic is shown to perform well over a range of parameters, including changes in demand. The model, analysis and evaluation are then generalized to an arbitrary number, M, of job types. Next, the problem of how best to structure and control a distributed computer system containing many processors is considered. The performance trade-offs associated with different tree structures are evaluated approximately by applying appropriate queueing models. It is shown that, for a given set of parameters and job distribution policy, there is an optimal tree structure that minimizes the overall average response time. This is obtained numerically through comparison of average response times. A simple heuristic policy is shown to perform well under certain conditions. The last model addresses the trade-offs between reliability and performance. A number of servers, each of which goes through alternating periods of being operative and inoperative, offer services to an incoming stream of demands. The objective is to evaluate and optimize performance and cost metrics. A large real-life data set containing information about server breakdowns is analyzed first. The results indicate that the durations of the operative periods are not distributed exponentially. However, hyperexponential distributions are found to be a good fit for the observed data. A model based on these distributions is then formulated, and is solved exactly using the method of spectral expansion. A simple approximation which is accurate for heavily loaded systems is also proposed. The results of a number of numerical experiments are reported.
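The switching model in the first part of this abstract lends itself to a compact numerical illustration. The sketch below runs value iteration for a discounted, uniformized two-queue Markov decision process with a single switchable server; all parameters (arrival and service rates, holding costs, switching cost, truncation level, discount factor) are hypothetical placeholders rather than values from the thesis, and the thesis's own formulation may differ in detail.

```python
import numpy as np

# Hypothetical parameters, not taken from the thesis.
lam = (0.3, 0.4)    # arrival rates of the two job types
mu = (1.0, 0.8)     # service rate when the server works on queue 0 / queue 1
hold = (2.0, 1.0)   # holding costs encoding relative importance
K = 5.0             # cost of switching the server between queues
N = 20              # truncate each queue length at N
beta = 0.99         # discount factor (the thesis may use average cost instead)

Lam = sum(lam) + max(mu)            # uniformization rate
V = np.zeros((N + 1, N + 1, 2))     # value over states (n1, n2, served queue)

def expected_future(V, n1, n2, srv):
    """Expected next-step value if the server works on queue `srv`."""
    f = lam[0] * V[min(n1 + 1, N), n2, srv]       # arrival to queue 0
    f += lam[1] * V[n1, min(n2 + 1, N), srv]      # arrival to queue 1
    n = [n1, n2]
    if n[srv] > 0:
        n[srv] -= 1                               # service completion
        f += mu[srv] * V[n[0], n[1], srv]
    else:
        f += mu[srv] * V[n1, n2, srv]             # empty queue: fictitious event
    f += (Lam - lam[0] - lam[1] - mu[srv]) * V[n1, n2, srv]  # dummy transition
    return f / Lam

for _ in range(400):                              # value iteration sweeps
    Vn = np.empty_like(V)
    for n1 in range(N + 1):
        for n2 in range(N + 1):
            cost = (hold[0] * n1 + hold[1] * n2) / Lam  # one-period holding cost
            for s in (0, 1):
                stay = beta * expected_future(V, n1, n2, s)
                switch = K + beta * expected_future(V, n1, n2, 1 - s)
                Vn[n1, n2, s] = cost + min(stay, switch)
    V = Vn

# The (approximately) optimal action in state (n1, n2, s) is "switch" exactly
# when K + beta*E[V | other queue] < beta*E[V | current queue].
```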
3

Design and implementation of a QoS-Supportive system for reliable multicast

Di Ferdinando, Antonio January 2007 (has links)
As the Internet is increasingly being used by businesses to offer and procure services, providers of networked system services are expected to assure customers of the specific Quality of Service (QoS) they can offer. This leads to scenarios where users prefer to negotiate required QoS guarantees prior to accepting a service, and service providers assess their ability to provide the customer with the requested QoS on the basis of existing resource availability. A system to be deployed in such scenarios should, in addition to providing the services, (i) monitor resource availability, (ii) be able to assess whether or not the requested QoS can be met, and (iii) adapt to QoS perturbations (e.g., node failures) which undermine any assumptions made on continued resource availability. This thesis focuses on building such a QoS-supportive system for reliably multicasting messages within a group of crash-prone nodes connected by loss-prone networks. System design involves developing a Reliable Multicast protocol and analytically estimating the multicast performance in terms of protocol parameters. It considers two cases regarding message size: small messages that fit into a single packet and large ones that need to be fragmented into multiple packets. Analytical estimations are obtained through stochastic modelling and approximation, and their accuracy is demonstrated using simulations. They allow the affordability of the requested QoS to be assessed numerically for a given set of performance metrics of the underlying network, and also indicate the values to be used for the protocol parameters if the affordable QoS is to be achieved. System implementation takes a modular approach, and the major sub-systems built include the QoS negotiation component, the network monitoring component and the reliable multicast protocol component. Two prototypes have been built. The first is a middleware system in its own right, used to test our ideas over a group of geographically distant nodes on PlanetLab. The second is developed as part of the JGroups Reliable Communication Toolkit and provides both an example of a scenario that directly benefits from such technology and an example integration of our sub-system into an existing system.
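Although the thesis derives its QoS estimates from a detailed stochastic model, the flavour of the numerical affordability check can be conveyed with a much cruder calculation. The sketch below assumes independent per-receiver loss with probability p in each multicast round, so that after r rounds all n receivers hold the message with probability (1 - p^r)^n; the function names and target figures are illustrative assumptions, not the thesis's.

```python
def min_rounds(n, p, target):
    """Smallest r with (1 - p**r)**n >= target: the number of send/repair
    rounds needed so that all n receivers hold the message, assuming
    independent per-receiver loss probability p in every round."""
    r = 1
    while (1 - p ** r) ** n < target:
        r += 1
    return r

def qos_affordable(n, p, rtt, deadline, target=0.999):
    """Crude admission test: report whether the requested delivery deadline
    is affordable, together with the protocol parameter (rounds) to use."""
    r = min_rounds(n, p, target)
    return r * rtt <= deadline, r

ok, rounds = qos_affordable(n=20, p=0.05, rtt=0.08, deadline=0.5)
print(ok, rounds)   # True 4: accept the request and plan for 4 rounds
```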
4

Routing and transfers amongst parallel queues

Martin, Simon P. January 2008 (has links)
This thesis is concerned with maximizing the performance of policies for routing and transferring jobs in systems of heterogeneous servers. The tools used are probabilistic modelling, optimization and simulation. First, a system is studied where incoming jobs are allocated to the queue belonging to one of a number of servers, each of which goes through alternating periods of being operative and inoperative. The objective is to evaluate and optimize performance and cost metrics. Jobs incur costs for the amount of time that they spend in a queue before commencing service. The optimal routing policy for incoming jobs is obtained by numerically solving dynamic programming equations. A number of heuristic policies are compared against the optimal, and one dynamic routing policy is shown to perform well over a large range of parameters. Next, the problem of how best to deal with the transfer of jobs is considered. Jobs arrive externally into the queue attached to one of a number of servers, and on arrival are assigned a time-out period. A job whose time-out period expires before it commences service is instantaneously transferred to the end of another queue, chosen according to a routing policy; each transfer incurs a transfer cost. An approximation to the optimal routing policy is computed and compared with a number of heuristic policies. One heuristic policy is found to perform well over a large range of parameters. The last model considered is the case where incoming jobs are allocated to the queue attached to one of a number of servers, each of which goes through periods of being operative and inoperative. Additionally, each job is assigned a time-out on arrival into a queue. Any job whose time-out period expires before it commences service is instantaneously transferred to the end of another queue, based on a transfer policy. The objective is to evaluate and optimize performance and cost metrics. Jobs incur costs for the amount of time that they spend in a queue before commencing service, and additionally incur a cost for each transfer they experience. A number of heuristic transfer policies are evaluated, and one heuristic is observed to perform well over a wide range of parameters.
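As a flavour of what a dynamic routing heuristic of the kind evaluated here might look like, the sketch below sends each arrival to the server with the smallest expected wait, where a server's capacity is discounted by its long-run availability. The data layout and the specific estimate are illustrative assumptions; the thesis's heuristics and cost structure are richer.

```python
from dataclasses import dataclass

@dataclass
class Server:
    rate: float        # service rate when operative
    up_time: float     # mean length of an operative period
    down_time: float   # mean length of an inoperative period
    queue_len: int     # current number of jobs waiting

    @property
    def effective_rate(self) -> float:
        # long-run service capacity, discounted by availability
        avail = self.up_time / (self.up_time + self.down_time)
        return self.rate * avail

def route(servers):
    """Dynamic heuristic: send the arriving job to the server with the
    smallest expected wait, estimated as (jobs ahead + 1) / effective rate."""
    return min(range(len(servers)),
               key=lambda i: (servers[i].queue_len + 1) / servers[i].effective_rate)

pool = [Server(1.0, 100, 10, 5), Server(0.6, 200, 5, 2), Server(2.0, 50, 50, 4)]
i = route(pool)
pool[i].queue_len += 1   # the job joins the chosen queue
```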
5

Memory management architecture for next generation networks traffic managers

Zhang, Q. January 2012 (has links)
The trend of moving conventional IP networks towards Next Generation Networks (NGNs) has highlighted the need for more sophisticated Traffic Managers (TMs) to guarantee better network Quality of Service (QoS); these have to be scalable to support increasing link bandwidth and to cater for more diverse emerging applications. Current TM solutions, though, are limited and not flexible enough to support new TM functionality or QoS with increasing diversity at faster speeds. This thesis investigates efficient and flexible memory management architectures, which are critical in determining the scalability and upper limits of TM performance. The approach presented takes advantage of current FPGA technology, which now offers a high density of computational resources and flexible memory configurations, leading to what the author contends to be an ideal, programmable platform for distributed network management. The thesis begins with a survey of current TM solutions and their underlying technologies/architectures, the outcome of which indicates that memory and memory interfacing are the major factors in determining the scalability and upper limits of TM performance. An analysis of the implementation cost for a new TM with the capability of integrated queuing and scheduling further highlights the need to develop a more effective memory management architecture. A new on-demand Queue Manager (QM) architecture for programmable TMs is then proposed that can dynamically map the ongoing active flows to a limited number of physical queues. Compared to traditional QMs, it consumes far fewer memory resources, leading to a more scalable and efficient TM solution. Based on an analysis of the effect of varying Internet traffic on the proposed QM, a more robust and resilient QM architecture is derived that achieves higher scalability and performance by adapting its functionality to changing network conditions.
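The core of the on-demand idea (hold a physical queue only while a flow actually has packets buffered) can be sketched in a few lines. The class below is an illustrative toy with invented interfaces and an arbitrary reaction to pool exhaustion, not the FPGA architecture the thesis develops.

```python
class OnDemandQueueManager:
    """Sketch of on-demand queue management: only flows that currently have
    packets buffered occupy one of a small pool of physical queues; idle
    flows hold nothing, so far fewer queues are needed than active flows."""

    def __init__(self, num_physical_queues):
        self.free = list(range(num_physical_queues))  # unused physical queues
        self.mapping = {}                             # flow id -> queue index
        self.queues = [[] for _ in range(num_physical_queues)]

    def enqueue(self, flow_id, packet):
        if flow_id not in self.mapping:
            if not self.free:
                # A real design would drop, or share/borrow a queue here.
                raise RuntimeError("physical queue pool exhausted")
            self.mapping[flow_id] = self.free.pop()
        self.queues[self.mapping[flow_id]].append(packet)

    def dequeue(self, flow_id):
        q = self.queues[self.mapping[flow_id]]
        packet = q.pop(0)
        if not q:  # flow has gone idle: recycle its physical queue
            self.free.append(self.mapping.pop(flow_id))
        return packet
```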
6

Adaptive service provision and execution in mobile environments

AlShahwan, Feda A. January 2012 (has links)
Advances in mobile device manufacture, the rapid growth of Web services development and the progression of wireless communication, together with the widespread use of Internet applications, are the most recent trends in distributed information systems. The evolution of these trends yields the emergence of Mobile Web Services (MWS) technology. Adaptive service provision and execution in mobile environments is a new avenue in MWS. It emanates from the need to cope with mobile resource limitations so as to allow the reliable provision of useful services hosted on these mobile devices. This research argues that the mechanisms used to facilitate service distribution allow the efficient, non-interrupted provision of MWS. The objective of this research is to investigate these mechanisms and define a system for applying them. The main criteria for this system are flexibility, dynamicity and transparency. A Simple Object Access Protocol (SOAP)-based Mobile Host Web service Framework (MHWF) is reproduced and extended as part of this research, to allow deploying, providing and executing SOAP-based services. Correspondingly, a Representational State Transfer (RESTful) MHWF is defined and developed for providing RESTful-based services. Both frameworks have been analysed and compared in terms of performance, scalability, reliability and resource consumption. Moreover, both have been extended to allow service distribution through offloading, which is the first distribution mechanism explored. RESTful-based technology has been shown to be more convenient for the mobile environment. This research also classifies the distribution strategies into three classes: Contentment Distribution (CD), Simple Partial Distribution (SPD) and Complex Partial Distribution (CPD). The distinction emerges from variance in the types and complexity levels of services, which influence the quantity and quality of the distribution mechanisms' usage. Novel approaches are proposed in order to exploit these mechanisms and to define and set up the building blocks for the corresponding MHWFs. The correct behaviour of these frameworks is empirically validated, and their safety properties are also verified analytically using formal methods. This is complemented by a proof-of-concept demonstration. Furthermore, an evaluation of their performance is carried out by simulation. The evaluation results are interpreted as Fuzzy logic rules that are used to trigger and control distribution schemes. Last but not least, an innovative approach to partitioning and orchestrating the execution tasks of the distributed services is followed, based on the hierarchical structure used to represent the Uniform Resource Identifier (URI) of the invoked services.
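The closing point, that evaluation results are distilled into Fuzzy logic rules that trigger and control distribution, can be illustrated with a toy controller. The input variables, membership functions and threshold below are invented for illustration; the thesis derives its rules from its own performance evaluation.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b,
    and falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def offload_score(battery_pct, cpu_load_pct):
    """Toy fuzzy rule in the spirit of evaluation-driven distribution
    control; variables and shapes are illustrative assumptions."""
    battery_low = tri(battery_pct, -1, 0, 40)     # full membership at 0%
    load_high = tri(cpu_load_pct, 60, 100, 101)   # full membership at 100%
    # Rule: IF battery is low OR load is high THEN distribute the service.
    return max(battery_low, load_high)

if offload_score(battery_pct=10, cpu_load_pct=80) > 0.5:
    print("trigger distribution of the service to a peer host")
```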
7

Reconciling community resource requirements in user provided networks

Bury, Sara Elizabeth January 2011 (has links)
In recent times, broadband Internet connectivity has come to be presumed accessible to all, often shared throughout the home and between multiple users and devices. Despite the proliferation of online services, this perceived ubiquity is unfortunately false: many are still unable to receive high quality Internet access within their homes, due to infrastructure restrictions or geographical problems. One solution is the deployment of community networks to share Internet access, initiated, designed and managed by ordinary people with little or no technical background. This thesis takes an interdisciplinary approach to understanding the challenges faced by the users of such networks, and emphasises the importance of user-focussed techniques when designing network management solutions for community settings. It investigates the ways in which communal networks can be improved by encouraging more formalised resource sharing, and how users can be aided to better understand their network usage through the design and implementation of an appropriate system.
8

High frequency internet protocol for wide area networks

Kariyawasam, Sharadha January 2010 (has links)
The future success of high frequency (HF) communication systems relies on their ability to integrate and support IP diversity within multiple internet protocol (IP) based networks, such as satellite communication (SATCOM), local area network (LAN) and wide area network (WAN) bearers. The introduction of new and proposed standards on HF-IP in recent years has increased interest in the performance analysis of HF-IP communication systems and networks. A wide range of modern services rely on IP, and current HF-IP systems can support 2.4 to 19.2 kbps services such as e-mail and internet access. However, reliability and quality of service (QoS) remain issues of interest, particularly over longer-distance skywave channels. These modern services require a higher data rate, much better bandwidth utilisation and good QoS for successful implementation. This work investigated HF-IP systems with the aim of improving the performance of legacy, current and proposed future systems without modifications to existing hardware. Initially, the research involved practical measurements and analysis of HF-IP systems complying with the proposed NATO STANAG 5066 draft/edition 2 standard. Having investigated several NATO HF-IP standards (STANAG 5066 edition 1, STANAG 5066 draft/edition 2, STANAG 4539/4285/4529, etc.), a novel concept of error control coding (ECC) within the data link (DL) layer for HF-IP systems was proposed. The benefit of this concept is that it does not require hardware modifications to legacy and current systems to improve performance. For the application of this concept, high-performance low density parity check (LDPC) coding was considered. Two classes of short block length quasi-cyclic (QC) LDPC codes with a switchable-rate single encoder/decoder structure, based on finite fields, were designed and constructed. Several code rates were constructed within a single encoder/decoder structure, resulting in reduced implementation complexity. Both classes of codes were simulated using the HF channel model (ITU-R F.1487) covering latitudes and conditions for performance analysis. The simulation results show that the switchable-rate QC-LDPC coding scheme yields a coding gain of 2.4 dB over the existing STANAG 4539 convolutional coding scheme, demonstrating the high performance of the proposed scheme in the ITU-R F.1487 HF channel environment. In addition, the use of STANAG 5066 draft/edition 2 operating on a skywave multi-node HF-IP token ring (TR) WAN for a civilian disaster relief scenario was investigated. Here, a novel HF-IP network concept was proposed. The concept incorporates a multi-node HF-IP TR WAN as an inner network, supported by an outer network made up of a Digital Radio Mondiale (DRM) service operating on a single frequency within the HF band. As STANAG 5066 draft/edition 2 was primarily designed to support multi-node HF-IP networks, it was vital to understand the network reliability and the number of practical nodes such a network can support under different skywave HF channel conditions. A 3-node network based on skywave propagation covering a large geographical area was investigated. Using this scenario, the reliability of a skywave multi-node HF-IP network was analysed by simulations and practical measurements using the STANAG 5066 draft/edition 2 IP protocol and STANAG 4539 modem setups. This analysis showed that the skywave multi-node HF-IP TR network can reliably operate with between 3 and 5 nodes.
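The quasi-cyclic structure that makes these LDPC codes hardware-friendly is easy to exhibit: the parity-check matrix is a grid of cyclically shifted identity blocks, so an encoder/decoder only needs a small table of shift values. In the sketch below, the shift table and lifting factor are arbitrary placeholders; the thesis derives its shifts from finite-field constructions and designs the rate switching carefully.

```python
import numpy as np

def circulant_perm(z, shift):
    """z-by-z identity matrix with columns cyclically shifted."""
    return np.roll(np.eye(z, dtype=int), shift, axis=1)

def qc_ldpc_H(shifts, z):
    """Parity-check matrix assembled from circulant permutation blocks.
    A real design derives the shift values from finite fields, as the
    thesis does; the values here are just placeholders."""
    rows = [np.hstack([circulant_perm(z, s) for s in row]) for row in shifts]
    return np.vstack(rows)

# A 2x4 base matrix of shifts with lifting factor z=5 gives a 10 x 20 H,
# i.e. a rate-1/2 code if H has full rank. Varying how many block rows are
# kept is one simple way to realise several rates in a single structure.
H = qc_ldpc_H([[0, 1, 2, 3],
               [4, 3, 2, 1]], z=5)
print(H.shape)  # (10, 20)
```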
9

Traffic engineering for quality of service provisioning in IP networks

Trimintzios, Panagiotis January 2004 (has links)
No description available.
10

Semantic web service generation for text classification

Ball, Stephen Wayne January 2006 (has links)
No description available.
