Service Level Achievements - Test Data for Optimal Service Selection

Russ, Ricardo January 2016 (has links)
This bachelor’s thesis was written in the context of a joint research group that developed a framework for finding and providing the best-fit web service for a user. The research group’s problem lies in testing the developed framework sufficiently: the framework can be tested either with test data produced by real web services, which costs money, or with generated test data based on a simulation of web service behavior. The second approach was developed within this thesis in the form of a test data generator. The generator simulates a web service request by defining internal services, where each service has its own internal graph that reflects the structure of the service. A service can be atomic or can be composed of other services that are called in a specific manner (sequentially, in a loop, or conditionally). The test data is generated by randomly traversing the services, which results in variable response times, since the graph structure changes every time the system is initialized. The implementation process revealed problems that could not be solved within the time frame. These problems present interesting challenges for the dynamic generation of random graphs and should be targeted in further research.
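The traversal described above can be sketched as follows. This is a hedged illustration of the idea, not the thesis's actual generator; all names, latency ranges, and probabilities are assumptions:

```python
import random

def make_service(depth=2):
    """Randomly build a service graph; leaves are atomic services."""
    if depth == 0 or random.random() < 0.4:
        return ("atomic", random.uniform(5, 50))          # base latency in ms
    kind = random.choice(["sequence", "loop", "conditional"])
    children = [make_service(depth - 1) for _ in range(random.randint(1, 3))]
    return (kind, children)

def response_time(service):
    """Simulate one request by randomly walking the service graph."""
    kind, payload = service
    if kind == "atomic":
        return payload * random.uniform(0.8, 1.2)         # per-call jitter
    if kind == "sequence":                                # call children in order
        return sum(response_time(child) for child in payload)
    if kind == "loop":                                    # repeat the first child
        return sum(response_time(payload[0]) for _ in range(random.randint(1, 4)))
    return response_time(random.choice(payload))          # conditional: one branch

service = make_service()
samples = [response_time(service) for _ in range(100)]
```

Each call to `make_service` builds a fresh graph, so response-time distributions vary between initializations, mirroring the variability described in the abstract.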

Modelling of reliable service based operations support system (MORSBOSS)

Kogeda, Okuthe Paul January 2008 (has links)
Philosophiae Doctor - PhD / The underlying theme of this thesis is the identification, classification, detection and prediction of cellular network faults using state-of-the-art technologies, methods and algorithms.

Resource dimensioning in a mixed traffic environment

Roon, Selwin Jakobus Emiel 24 January 2006 (has links)
An important goal of modern data networks is to support multiple applications over a single network infrastructure. The combination of data, voice, video and conference traffic, each requiring a unique Quality of Service (QoS), makes resource dimensioning a very challenging task. To guarantee QoS by mere over-provisioning of bandwidth is not viable in the long run, as network resources are expensive. The aim of proper resource dimensioning is to provide the required QoS while making optimal use of the allocated bandwidth. Dimensioning parameters used by service providers today are based on best practice recommendations, and are not necessarily optimal. This dissertation focuses on resource dimensioning for the DiffServ network architecture. Four predefined traffic classes, i.e. Real Time (RT), Interactive Business (IB), Bulk Business (BB) and General Data (GD), needed to be dimensioned in terms of bandwidth allocation and traffic regulation. To perform this task, a study was made of the DiffServ mechanism and the QoS requirements of each class. Traffic generators were required for each class to perform simulations. Our investigations show that the dominating Transport Layer protocol for the RT class is UDP, while TCP is mostly used by the other classes. This led to a separate analysis and requirement for traffic models for UDP and TCP traffic. Analysis of real-world data shows that modern network traffic is characterized by long-range dependency, self-similarity and a very bursty nature. Our evaluation of various traffic models indicates that the Multi-fractal Wavelet Model (MWM) is best for TCP due to its ability to capture long-range dependency and self-similarity. The Markov Modulated Poisson Process (MMPP) is able to model occasional long OFF-periods and burstiness present in UDP traffic. Hence, these two models were used in simulations. A test bed was implemented to evaluate performance of the four traffic classes defined in DiffServ. 
Traffic was sent through the test bed, while delay and loss were measured. For single-class simulations, dimensioning values were obtained while conforming to the QoS specifications. Multi-class simulations investigated the effects of statistical multiplexing on the obtained values. Simulation results for various numerical provisioning factors (PF) were obtained. These factors are used to determine the link data rate as a function of the required average bandwidth and QoS. The use of class-based differentiation for QoS showed that strict delay and loss bounds can be guaranteed, even in the presence of very high (up to 90%) bandwidth utilization. Simulation results showed small deviations from best-practice recommendation PF values: a value of 4 is currently used for both the RT and IB classes, while 2 is used for the BB class. This dissertation indicates that 3.89 for RT, 3.81 for IB and 2.48 for BB achieve the prescribed QoS more accurately. It was concluded that either the bandwidth distribution among classes, or the quality guarantees for the BB class, should be adjusted, since the RT and IB classes over-performed while BB under-performed. The results contribute to the process of resource dimensioning by adding value to dimensioning parameters through simulation rather than mere intuition or educated guessing. / Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2007. / Electrical, Electronic and Computer Engineering / unrestricted
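The role of the provisioning factor can be illustrated with a small calculation. The PF values are the ones reported above; the average-bandwidth figures are made-up examples:

```python
def link_rate(avg_bandwidth_mbps, pf):
    """Link data rate for a class: required average bandwidth scaled by its PF."""
    return avg_bandwidth_mbps * pf

best_practice = {"RT": 4.0, "IB": 4.0, "BB": 2.0}     # PF values in use today
simulated = {"RT": 3.89, "IB": 3.81, "BB": 2.48}      # PF values from this study

avg_demand_mbps = {"RT": 10.0, "IB": 20.0, "BB": 15.0}  # hypothetical demands

for cls, demand in avg_demand_mbps.items():
    print(cls, link_rate(demand, best_practice[cls]),
          link_rate(demand, simulated[cls]))
```

For the RT and IB classes the simulated PF is slightly below the best-practice value (less over-provisioning), while for BB it is above it, consistent with the under-performance noted above.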

Performance comparison of two dynamic shared-path protection algorithms for WDM optical mesh networks

Sharma, Ameeth 26 January 2009 (has links)
Finding an optimal solution to the problem of fast and efficient provisioning of reliable connections and failure recovery in future intelligent optical networks is an ongoing challenge. In this dissertation, we investigate and compare the performance of an adapted shared-path protection algorithm with a more conventional approach; both are designed for survivable optical Wavelength Division Multiplexing (WDM) mesh networks. The effect of different classes of service on performance is also investigated. Dedicated path protection is a proactive scheme which reserves spare resources to combat single link failures. Conventional Shared-path Protection (CSP) is desirable due to the efficient utilization of resources which results from the sharing of backup paths. Availability is an important performance assessment factor which measures the probability that a connection is in an operational state at some point in time; it is the instantaneous counterpart of reliability. Therefore, connections that do not meet their availability requirements are considered to be unreliable. Reliability Aware Shared-path Protection (RASP) adopts the advantages of CSP by provisioning reliable connections efficiently, but provides protection for unreliable connections only. With the use of a link disjoint parameter, RASP also permits the routing of partial link disjoint backup paths. A simulation study, which evaluates four performance parameters, is undertaken using a South African mesh network. The parameters that are investigated are:

1. Blocking Probability (BP), the percentage of connection requests that are blocked;
2. Backup Success Ratio (BSR), the number of connections that are successfully provisioned with a backup protection path;
3. Backup Primary Resource Ratio (BPR), the ratio of resources utilized to cater for working traffic to the resources reserved for protection paths;
4. Reliability Satisfaction Ratio (RSR), the ratio of provisioned connections that meet their availability requirements to the total number of provisioned connections.

Under dynamic traffic conditions with varying network load, simulation results show that RASP can provision reliable connections and satisfy Service Level Agreement (SLA) requirements. A competitive BP and a lower BPR signify an improvement in resource utilization efficiency. A higher BSR was also achieved under high Quality of Service (QoS) constraints. The significance of different availability requirements is evaluated by creating three categories: high availability, medium availability and low availability. These three categories represent three classes of service, with availability used as the QoS parameter. Within each class, the performance of RASP and CSP is observed and analyzed, using the parameters described above. Results show that both the BP and BPR increase with an increase in the availability requirements. The RSR decreases as the reliability requirements increase, and a variation in BSR is also indicated. / Dissertation (MEng)--University of Pretoria, 2009. / Electrical, Electronic and Computer Engineering / unrestricted
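The four evaluation ratios might be computed from simulation counters roughly as follows. The function and counter names are hypothetical, and BPR follows the literal wording of the abstract (working-traffic resources over protection resources):

```python
def rasp_metrics(requested, blocked, with_backup,
                 primary_units, backup_units,
                 provisioned, reliable):
    """Evaluation ratios for a shared-path protection simulation (sketch)."""
    bp = blocked / requested                    # Blocking Probability
    bsr = with_backup / (requested - blocked)   # Backup Success Ratio
    bpr = primary_units / backup_units          # Backup Primary Resource Ratio
    rsr = reliable / provisioned                # Reliability Satisfaction Ratio
    return bp, bsr, bpr, rsr
```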

Optimizing PaaS provider profit under service level agreement constraints / Optimisation du profit des fournisseurs PaaS sous des contraintes de contrat de service

Dib, Djawida 07 July 2014 (has links)
Cloud computing is an emerging paradigm revolutionizing the use and marketing of information technology. As the number of cloud users and providers grows, the socio-economical impact of cloud solutions and particularly PaaS (platform as a service) solutions is becoming increasingly critical. The main objective of PaaS providers is to generate the maximum profit from the services they provide. This requires them to face a number of challenges such as efficiently managing the underlying resources and satisfying the SLAs of the hosted applications. This thesis considers a cloud-bursting PaaS environment where the PaaS provider owns a limited number of private resources and is able to rent public cloud resources, when needed. This environment enables the PaaS provider to have full control over services hosted on the private cloud and to take advantage of public clouds for managing peak periods. In this context, we propose a profit-efficient solution for managing the cloud-bursting PaaS system under SLA constraints. We define a profit optimization policy that, after each client request, evaluates the cost of hosting the application using public and private resources and chooses the option that generates the highest profit. During peak periods the optimization policy considers two more options. The first option is to take some resources from running applications, taking into account the payment of penalties if their promised quality of service is affected. The second option is to wait until private resources become available, taking into account the payment of penalties if the promised quality of service of the new application is affected. Furthermore, we designed and implemented an open cloud-bursting PaaS system, called Meryn, which integrates the proposed optimization policy and provides support for batch and MapReduce applications. The results of our evaluation show the effectiveness of our approach in optimizing the provider profit. Indeed, compared to a basic approach, our approach provides up to 11.59% and 9.02% more provider profit in, respectively, simulations and experiments.
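The decision at the heart of the policy can be sketched as a simple profit comparison. The option names, the penalty model, and all numbers are illustrative assumptions, not Meryn's actual interface:

```python
def choose_option(revenue, private_cost, public_cost,
                  borrow_penalty=None, wait_penalty=None):
    """Pick the hosting option with the highest expected profit."""
    options = {
        "private": revenue - private_cost,
        "public": revenue - public_cost,
    }
    if borrow_penalty is not None:    # peak: borrow from running apps, risk QoS penalties
        options["borrow"] = revenue - private_cost - borrow_penalty
    if wait_penalty is not None:      # peak: delay the new application, risk its QoS penalty
        options["wait"] = revenue - private_cost - wait_penalty
    return max(options, key=options.get)
```

During peak periods the two extra options are enabled, so the policy weighs expected SLA penalties against the higher cost of public resources.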

Service provisioning in next-generation networks: secure negotiation of an end-to-end service level covering quality of service and security / Offre de service dans les réseaux de nouvelle génération : négociation sécurisée d’un niveau de service de bout en bout couvrant la qualité de service et la sécurité

Chalouf, Mohamed Aymen 03 December 2009 (has links)
Based on the IP technology, the next generation network (NGN) must overcome the main drawbacks of this technology consisting in the lack of quality of service (QoS), security and mobility management. To ensure a service offer in an NGN, a protocol for negotiating service level can be used. However, most of the existing negotiation protocols allow the establishment of a service level which includes only QoS. As for security and mobility, they were often not covered by these negotiations, and therefore managed independently. However, securing a service can cause degradation of the QoS, and the mobility of a user can change the service needs in terms of QoS and security. Thus, we need to simultaneously manage QoS and security while taking into account user’s mobility. In this context, we propose to develop a signaling protocol that allows fixed and mobile users to negotiate a service level covering both QoS and security, in a dynamic, automatic and secure manner. Our contribution is achieved in three steps. Initially, we rely on a signaling protocol, which performs QoS negotiation using web services, to enable the negotiation of both security and QoS while taking into account the impact of security on QoS. Then, this negotiation is automated by basing it on a user profile. This allows adjusting the service level according to changes which can occur on the user context. Thus, the service offer is more dynamic and can be adapted to changes of access network resulting from the mobility of the user. Finally, we propose to secure the negotiation flows in order to prevent the different attacks that can target the exchanged messages during a negotiation process.

A Component-based Business Continuity and Disaster Recovery Framework

Somasekaram, Premathas January 2017 (has links)
IT solutions must be protected so that the business can continue, even in the case of fatal failures associated with disasters. Business continuity in the context of disaster implies that business cannot continue in the current environment but instead must continue at an alternate site or data center. However, the BC/DR concept today is too fragmented, as many different frameworks and methodologies exist. Furthermore, many of the application-specific solutions are provided and promoted by software vendors, while hardware vendors provide solutions for their hardware environments. Nevertheless, there are concerns that BC/DR solutions often do not connect to the technical components that are in the lower layers, which function as the foundation for any such solutions; hence, it is equally important to connect and map the requirements accordingly. Moreover, a shift in the hardware environment, such as cloud computing, as well as changes in operations management, such as outsourcing, add complexity that must be captured by a BC/DR solution. Furthermore, the integrated nature of IT-based business solutions also presents new challenges, as it is no longer one IT solution that must be protected but also other IT solutions that are integrated to deliver an individual business process. Thus, it will be difficult to employ a current BC/DR approach. Hence, the purpose of this thesis project is to design, develop, and present a novel way of addressing the BC/DR gaps, while supporting the requirements of a dynamic IT environment. The solution reuses most elements from the existing standards and solutions. However, it also includes new elements to capture and present the technical solution; hence, the complete solution is designated as a framework. The new framework can support many IT solutions since it will have a modular approach, and it is flexible, scalable, and platform and application independent, while addressing the solution on a component level. The new framework is applied to two application scenarios at the stakeholder site, and the results are studied and presented in this thesis.

An investigation into parallel job scheduling using service level agreements

Ali, Syed Zeeshan January 2014 (has links)
A scheduler, as a central component of a computing site, aggregates computing resources and is responsible for distributing the incoming load (jobs) among the resources. In such an environment, the optimum performance of the system against service level agreement (SLA) based workloads can be achieved by calculating the priority of SLA-bound jobs using an integrated heuristic. The SLA defines the service obligations and expectations for the use of the computational resources. The integrated heuristic combines the different SLA terms, with a specific weight for each term. The weights are computed by applying a parameter sweep technique in order to obtain the best schedule for the optimum performance of the system under the workload. The sweeping of parameters on the integrated heuristic was observed to be computationally expensive. The integrated heuristic becomes even more expensive if no value of the computed weights results in an improvement in performance with the resulting schedule; in such situations it incurs computation cost instead of yielding optimum performance. Therefore, there is a need to detect situations where the integrated heuristic can be exploited beneficially. For that reason, in this thesis we propose a metric based on the concept of utilization to evaluate SLA-based parallel workloads of independent jobs and detect any impact of the integrated heuristic on the workload.
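A minimal sketch of the weighted combination and the parameter sweep described above, under assumed SLA term names and a caller-supplied schedule score (both are illustrative, not the thesis's actual terms):

```python
import itertools

def priority(job, weights):
    """Integrated heuristic: weighted combination of a job's SLA terms."""
    return sum(weights[term] * job[term] for term in weights)

def sweep_weights(jobs, score, grid=(0.0, 0.5, 1.0)):
    """Parameter sweep: try every weight combination, keep the best schedule."""
    terms = ("urgency", "penalty", "runtime")   # assumed SLA terms
    best_score, best_weights = None, None
    for combo in itertools.product(grid, repeat=len(terms)):
        weights = dict(zip(terms, combo))
        # order jobs by descending priority under this weight combination
        schedule = sorted(jobs, key=lambda j: -priority(j, weights))
        s = score(schedule)
        if best_score is None or s > best_score:
            best_score, best_weights = s, weights
    return best_weights
```

The sweep is exhaustive over the weight grid, which is exactly why it is expensive: the cost grows exponentially in the number of SLA terms, motivating the metric proposed above for detecting when the sweep is worthwhile.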

PERFORMANCE ASSURANCE FOR CLOUD-NATIVE APPLICATIONS

Zabad, Bassam January 2021 (has links)
Preserving the performance of cloud services according to service level agreements (SLAs) is one of the most important challenges in cloud infrastructure. Since the workload is always increasing or decreasing, managing cloud resources efficiently is an important challenge in satisfying non-functional requirements such as high availability and cost. Although common approaches such as predictive autoscaling can address this problem, they are not very efficient because of constraints such as requiring a workload pattern as training data. Reinforcement learning (RL) can be considered a significant solution to this problem. Even though reinforcement learning needs some time to stabilize and requires many trials to decide the values of factors such as the discount rate, this approach can adapt to a dynamic workload. In this thesis, through a controlled-experiment research method, we show how a model-free reinforcement learning algorithm such as Q-learning can adapt to a dynamic workload by applying horizontal autoscaling to keep the performance of cloud services at the required level. Furthermore, the Amazon Web Services (AWS) platform is used to demonstrate the efficiency of the Q-learning algorithm in dealing with dynamic workload and achieving high availability.
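A toy model-free Q-learning loop for horizontal autoscaling might look like the following. The state encoding, reward, and environment are simplified assumptions, not the thesis's actual AWS setup:

```python
import random

ACTIONS = (-1, 0, 1)   # scale in, hold, scale out (one replica at a time)

def q_learning(env_step, episodes=300, alpha=0.5, gamma=0.9, eps=0.1):
    """Model-free Q-learning over (replicas, load-bucket) states."""
    q = {}                        # (state, action) -> estimated value
    state = (1, 0)                # start with one replica, idle load
    for _ in range(episodes):
        if random.random() < eps:               # explore
            action = random.choice(ACTIONS)
        else:                                   # exploit current estimates
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        next_state, reward = env_step(state, action)
        best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
        old = q.get((state, action), 0.0)
        # standard Q-learning update rule
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = next_state
    return q
```

A reward that penalizes both SLA breaches (too few replicas for the load) and running cost (too many replicas) lets the agent balance availability against cost, with no workload pattern required as training data.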

Využití controllingu v centru sdílených služeb / Application of Management Control System in a Shared Services Centre

Šteinhüblová, Katarína January 2018 (has links)
This master’s thesis deals with the status of the management control system in the shared services center of the analyzed company, in cooperation with the controlling department of the local branch office in Slovakia. The theoretical part of the thesis lays the groundwork for the analytical part, covering the theory of controlling, shared services centers in general, and the transfer of controlling activities to them. The analytical part analyzes the current state of the transfer of controlling activities and the cooperation between local controllers and controllers in the shared services center. The final part of the thesis proposes recommendations and suggestions for possible improvements of the management control system as a subsystem of management in the company.
