  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
391

A new cross-layer adaptive architecture to guarantee quality of service in WiMAX networks

Both, Cristiano Bonato January 2011
Wireless networks must provide quality of service to voice, video, and data applications. The IEEE 802.16 standard was defined to offer quality of service in such networks. To improve transmission quality, the standard relies on two main physical-layer mechanisms: (i) Orthogonal Frequency Division Multiple Access as the physical interface, and (ii) the ability to adjust transmission robustness against the physical impairments that may compromise the transmission. Moreover, the standard defines a set of components in the base station, such as allocators, schedulers, and connection admission controllers, that must be modeled to provide an architecture that guarantees quality of service. However, the standard defines neither the algorithm running inside each component nor the integration among the components. Investigations aiming to provide quality of service have been proposed in the context of IEEE 802.16 networks. The literature on mobile IEEE 802.16 networks shows that current research focuses on specific solutions for each component, or on solutions with partial integration, whose goal is to provide the best alternative for individual problems of a particular component. However, to the best of our knowledge, no research has so far addressed an overall quality-of-service architecture that considers both the diversity of application traffic requirements and the propagation conditions of the radio-frequency channel.
In this context, this thesis proposes a new architecture to guarantee quality of service in the base station, modeled using a cross-layer infrastructure able to adapt to the dynamics of traffic requirements as well as to the radio-frequency channel conditions. The aim is to integrate the components defined by the standard with the physical mechanisms. Another objective is to evaluate the proposed architecture through an evaluation methodology defined following the system-evaluation specification of the Worldwide Interoperability for Microwave Access forum. The analysis of the new cross-layer adaptive architecture shows efficient data allocation, as well as minimal delay and jitter for real-time applications.
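The adaptive-robustness mechanism described above can be illustrated with a minimal sketch of adaptive modulation and coding selection at the base station; the SNR thresholds and per-symbol efficiencies below are illustrative assumptions, not values taken from the IEEE 802.16 standard or from the thesis:

```python
# Hypothetical sketch: pick the most efficient modulation/coding scheme (MCS)
# that the measured channel quality supports, trading robustness for throughput.
# (SNR threshold in dB, data bits carried per OFDMA symbol) -- illustrative values.
MCS_TABLE = [
    (21.0, 4.5),   # e.g. 64-QAM 3/4: efficient, but needs a clean channel
    (16.0, 3.0),   # e.g. 16-QAM 3/4
    (11.0, 1.5),   # e.g. QPSK 3/4
    (6.0, 1.0),    # e.g. QPSK 1/2: robust fallback for poor channels
]

def select_mcs(snr_db: float) -> float:
    """Return the per-symbol capacity of the best MCS the channel supports."""
    for threshold, bits in MCS_TABLE:
        if snr_db >= threshold:
            return bits
    return 0.0  # channel too poor: defer transmission this frame
```

A cross-layer scheduler can then feed the selected efficiency back into its allocation decisions, which is the kind of integration between MAC components and physical mechanisms the thesis argues for.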
392

Object-based PON Access and Tandem Networking

January 2014
The upstream transmission of bulk data files in Ethernet passive optical networks (EPONs) arises from a number of applications, such as data back-up and multimedia file upload. Existing upstream transmission approaches lead to severe delays for conventional packet traffic when best-effort file and packet traffic are mixed. I propose and evaluate an exclusive interval for bulk transfer (EIBT) transmission strategy that reserves an EIBT for file traffic in an EPON polling cycle. I optimize the duration of the EIBT to minimize a weighted sum of packet and file delays. Through mathematical delay analysis and verifying simulation, it is demonstrated that the EIBT approach preserves small delays for packet traffic while efficiently serving bulk data file transfers.
Dynamic circuits are well suited for applications that require predictable service with a constant bit rate for a prescribed period of time, such as demanding e-science applications. Past research on upstream transmission in passive optical networks (PONs) has mainly considered packet-switched traffic and has focused on optimizing packet-level performance metrics, such as reducing mean delay. This study proposes and evaluates a dynamic circuit and packet PON (DyCaPPON) that provides dynamic circuits along with packet-switched service. DyCaPPON provides (i) flexible packet-switched service through dynamic bandwidth allocation in periodic polling cycles, and (ii) consistent circuit service by allocating each active circuit a fixed-duration upstream transmission window during each fixed-duration polling cycle. I analyze circuit-level performance metrics, including the blocking probability of dynamic circuit requests in DyCaPPON, through a stochastic knapsack-based analysis. Through this analysis I also determine the bandwidth occupied by admitted circuits; the remaining bandwidth is available for packet traffic, and I analyze the resulting mean delay of packet traffic. Through extensive numerical evaluations and verifying simulations, the circuit blocking and packet delay trade-offs in DyCaPPON are demonstrated. An extended version of DyCaPPON, designed for light-traffic situations, is also introduced. / Dissertation/Thesis / Ph.D. Electrical Engineering 2014
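The EIBT optimization described above — choosing the interval duration that minimizes a weighted sum of packet and file delays — can be sketched as a simple numerical search. The grid search and the caller-supplied delay models are an illustrative stand-in for the thesis' mathematical delay analysis:

```python
def optimal_eibt(cycle, weight_pkt, pkt_delay, file_delay, steps=1000):
    """Grid-search the EIBT duration t in [0, cycle] minimizing
    weight_pkt * pkt_delay(t) + (1 - weight_pkt) * file_delay(t).
    pkt_delay and file_delay are caller-supplied delay models."""
    best_t, best_cost = 0.0, float("inf")
    for i in range(steps + 1):
        t = cycle * i / steps  # candidate EIBT duration within the polling cycle
        cost = weight_pkt * pkt_delay(t) + (1 - weight_pkt) * file_delay(t)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```

With toy models where packet delay grows with the reserved interval and file delay shrinks, the search lands on the balance point between the two traffic classes.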
393

Knowledge plane for semantic networks: application to admission control

Ammar, Doreid 07 December 2012
Over the last few years, new usages such as streaming or live video watching have come to represent a significant share of Internet traffic. Network operators face the challenge of satisfying the quality of experience expected by end-users while, at the same time, avoiding the over-provisioning of transmission links. Bandwidth management offers a wide spectrum of policies to overcome this issue; possible options include congestion control, scheduling algorithms, traffic shaping, and admission control. The initial objective of this thesis was to design a new architecture for traffic management and quality of service for admission control. More precisely, we introduce a novel data-driven method based on a time-varying model that we refer to as Knowledge-Based Admission Control (KBAC). Our KBAC solution consists of three main stages: (i) collect measurements on the ongoing traffic over the communication link; (ii) maintain an up-to-date broad view of the link behavior, and feed it to a Knowledge Plane; (iii) model the observed link behavior by a mono-server queue whose parameters are set automatically and which predicts the expected QoS if a flow requesting admission were to be accepted. Our KBAC solution provides a probabilistic guarantee whose admission threshold is expressed either as a bounded delay or as a bounded loss rate. We ran extensive simulations using various traffic conditions to assess the behavior of our KBAC solution in the case of a delay threshold. The results show that our KBAC solution leads to a good trade-off between flow performance and resource utilization: the resulting admission control is neither too conservative nor too permissive.
This ability stems from the quick and automatic adjustment of its admission policy according to the actual variations in traffic conditions. Moreover, since our KBAC solution relies only on knowledge acquired over time, it avoids the critical step of precisely calibrating key parameters that classical admission control solutions require.
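The admission test at the heart of KBAC can be sketched with a minimal example. Purely for illustration, the mono-server queue is assumed here to be M/M/1 with known arrival and service rates, whereas the thesis learns the queue parameters automatically from the knowledge plane:

```python
def mm1_wait(lam, mu):
    """Mean queueing delay of an M/M/1 queue: W_q = lam / (mu * (mu - lam))."""
    if lam >= mu:
        return float("inf")  # unstable regime: delay grows without bound
    return lam / (mu * (mu - lam))

def admit(current_lam, flow_lam, mu, delay_bound):
    """Accept the new flow only if the predicted post-admission
    mean queueing delay stays within the agreed bound."""
    return mm1_wait(current_lam + flow_lam, mu) <= delay_bound
```

The same shape of test works with a loss-rate bound: swap the delay predictor for a loss-probability predictor and compare against the agreed loss threshold.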
394

Resource access fairness in best-effort shared systems

Goichon, François 16 December 2013
Over the last ten years, the IT services industry has gone through major transformations to comply with customers' ever-growing needs in terms of availability, performance, or storage capabilities of IT infrastructures. To cope with this demand, IT service providers tend to use shared systems, executing multiple workloads from distinct customers simultaneously on the same system. This technique allows service providers to reduce the maintenance cost of their infrastructure by sharing the resources at their disposal and thereby maximizing their utilization. However, it assumes that the system is able to prevent arbitrary workloads from having a significant impact on other workloads' performance. In this scenario, the operating system's resource-multiplexing layer tries to maximize resource consumption while enforcing a fair distribution among users. We refer to such systems as best-effort shared systems. In this work, we show that malicious users may attack a shared system's resources to significantly reduce the quality of service provided to other concurrent users. This weakness of resource-control layers in shared systems can be linked to the lack of generic accounting metrics, as well as to the natural trade-off such systems have to make between fairness and performance optimization. We introduce utilization time as a generic accounting metric, applicable to the different resources typically managed by best-effort shared systems.
This metric allows us to design a generic, transparent, and automated resource-control layer, which enables the specification of simple resource-management policies centered around fairness and resource-consumption maximization. We applied this approach to the swap subsystem, a traditional operating-system bottleneck, and implemented a prototype within the Linux kernel. Our results show significant performance enhancements under high memory pressure for typical workloads of best-effort shared systems. Moreover, our technique bounds the impact of abusive applications on other legitimate applications, as it naturally reduces uncertainties over execution durations.
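The utilization-time idea can be sketched as a toy accounting layer. The class name, the equal-share policy, and the 10% slack are illustrative assumptions, not the thesis' actual Linux-kernel control layer:

```python
from collections import defaultdict

class FairnessController:
    """Toy accounting layer: tracks per-user utilization time on a shared
    resource and flags users exceeding their fair share."""

    def __init__(self):
        self.usage = defaultdict(float)  # seconds of resource time per user

    def charge(self, user, seconds):
        """Account `seconds` of resource time to `user` (any resource type)."""
        self.usage[user] += seconds

    def over_fair_share(self, user, slack=1.10):
        """True if `user` consumed more than its equal share, with 10% slack
        so that throttling only kicks in under real contention."""
        total = sum(self.usage.values())
        fair = total / len(self.usage)  # equal share among active users
        return self.usage[user] > slack * fair
```

Because the metric is simply time, the same accounting applies whether the resource is CPU, disk, or the swap subsystem targeted by the prototype.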
395

Passenger-car equivalents for heavy vehicles on expressways

Fernando José Piva 19 June 2015
The objective of this study is to evaluate the impact of heavy vehicles on the quality of service on Brazilian expressways (freeways and divided multilane highways), using passenger-car equivalents (PCEs) for heavy vehicles (trucks and buses).
PCE estimates for expressways with three or more traffic lanes in each direction were obtained using traffic data collected over short time intervals (5 or 6 minutes) on expressways in the state of São Paulo. A total of 53,655 speed-flow observations, made at eight permanent traffic sensor installations during 2010 and 2011, were used in this study. A PCE estimate was calculated for each time interval, using an equation derived from Huber's method, based on the assumption that the quality of service is the same across all traffic lanes during the time interval over which the traffic data are collected. Basic flow (passenger cars only) was taken to be the observed traffic flow on the lane closest to the median, whereas mixed flow (passenger cars and heavy vehicles) was taken to be the observed traffic flow on the lane closest to the shoulder. The results indicate that: (1) in a significant portion of the time (52% of the observations), the quality of service is not the same across all traffic lanes; (2) the marginal impact of heavy vehicles decreases as the fraction of heavy vehicles in the traffic stream increases; and (3) the variations in PCE estimates due to the level of service are less evident on steeper grades, where the effect of heavy vehicles' poorer performance is greater. PCE estimates obtained in this study were compared with PCEs obtained using simulation; the comparison indicates that PCEs from empirical data are consistently higher than those estimated from simulation results.
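Under the equal-quality-of-service assumption, one common form of Huber's relation yields the PCE directly from the observed flows. This is a hedged sketch of the calculation; the exact equation used in the study may differ in detail:

```python
def huber_pce(basic_flow, mixed_flow, truck_fraction):
    """Passenger-car equivalent from a common form of Huber's relation,
    assuming both lanes operate at the same quality of service:
        E = 1 + (1/p) * (q_B / q_M - 1)
    where q_B is the cars-only (basic) flow, q_M the mixed flow,
    and p the fraction of heavy vehicles in the mixed flow."""
    if truck_fraction <= 0:
        raise ValueError("mixed flow must contain heavy vehicles")
    return 1.0 + (basic_flow / mixed_flow - 1.0) / truck_fraction
```

For example, a basic flow of 2000 veh/h against a mixed flow of 1600 veh/h with 25% trucks gives E = 2.0: each truck impedes the stream like two passenger cars.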
396

Approach for Quality of Service to Multi-Tenant Databases in the Cloud

Leonardo Oliveira Moreira 25 July 2014
Fundação de Amparo à Pesquisa do Estado do Ceará / Cloud computing is a well-established paradigm of computing-resource usage, whereby hardware infrastructure, software, and platforms for the development of new applications are offered as services available remotely and globally. Cloud computing users give up their own infrastructure to obtain it through the services offered by cloud providers, to which they delegate aspects of Quality of Service (QoS), assuming costs proportional to the amount of resources they use under a pay-per-use model. These QoS guarantees are established between the service provider and the user and are expressed through Service Level Agreements (SLAs): contracts that specify a level of quality that must be met, and penalties in case of failure. The majority of cloud applications are data-driven, and thus Database Management Systems (DBMSs) are potential candidates for cloud deployment. A cloud DBMS must serve a wide range of applications, or tenants. Multi-tenant models have been used to consolidate multiple tenants within a single DBMS, favoring the efficient sharing of resources and making it possible to manage a large number of tenants with irregular workload patterns. On the other hand, cloud providers must reduce operational costs while keeping quality at the agreed levels. For many applications, most of the time spent processing requests is related to the DBMS runtime, so it becomes important to apply a quality model to DBMS performance. Dynamic provisioning techniques are geared toward handling irregular workloads so that SLA violations are avoided; this requires a strategy to adjust the cloud whenever behavior that may violate the SLA of a given tenant (database) is predicted. Allocation techniques are applied to exploit the resources already in the environment before resorting to provisioning: based on monitoring systems and optimization models, they decide the best place to assign a given tenant to. To transfer a tenant efficiently, with minimal service interruption, live migration techniques are adopted.
It is believed that the combination of these three techniques can contribute to the development of a robust QoS solution for cloud databases that minimizes SLA violations. Faced with these challenges, this thesis proposes an approach, called PMDB, to improve DBMS QoS in multi-tenant clouds. The approach aims to reduce the number of SLA violations and to take advantage of the available resources, using techniques that perform workload prediction, allocation, and migration of tenants when they need resources of greater capacity. An architecture was proposed and a prototype implementing these techniques was developed, together with monitoring and QoS strategies oriented toward cloud database applications. Performance-oriented experiments were then specified to show the effectiveness of the approach.
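The allocation decision — choosing where to place a tenant given monitoring data — can be illustrated by a deliberately simple headroom policy. The data layout and the best-fit rule are assumptions for illustration, not the PMDB optimization model:

```python
def place_tenant(predicted_load, nodes):
    """Pick the node with the most headroom that can absorb the tenant's
    predicted load. `nodes` maps node name -> (capacity, current_load).
    Returns None when no node fits, signalling that provisioning
    (scaling out) is needed instead of allocation."""
    best, best_free = None, -1.0
    for name, (capacity, load) in nodes.items():
        free = capacity - load
        if free >= predicted_load and free > best_free:
            best, best_free = name, free
    return best
```

In the approach described above, the predicted load would come from the workload-prediction component, and a None result would trigger provisioning or live migration of other tenants.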
397

Mechanisms of self-configuration and self-optimization for virtualized architectures aiming at the provision of quality of service

Luis Hideo Vasconcelos Nakamura 19 April 2017
The purpose of this PhD project involves research on autonomic computing, focusing on the development of self-configuration and self-optimization mechanisms for virtualized architectures that aim to ensure the provision of quality of service. These mechanisms make use of autonomic elements that are aided by an ontology. Semantic Web tools are used so that the ontology can represent a knowledge base holding information about the computational resources. This information is used by optimization algorithms that, based on rules predefined by the administrator, decide on a new configuration that aims to optimize the architecture's performance. Configuration and optimization usually involve software elements that must be managed by Information Technology (IT) professionals, and part of this management consists of routine tasks, for example monitoring, reconfiguration, and performance checks. These tasks take time and therefore generate costs and strain for the professionals. This project thus aims at automating some of these routine tasks, facilitating the work of IT professionals and allowing them to focus on more critical tasks. To achieve this goal, a study was performed and distributed mechanisms based on autonomic computing and the Semantic Web were created, allowing the configuration and optimization of resources automatically. The individual results of each mechanism indicate that it is possible to achieve a satisfactory level of self-configuration and self-optimization for virtualized architectures. The self-configuration mechanism achieved better results with the resource-monitoring approach than with predictions, while the self-optimization mechanism proved that its methodology and algorithm are applicable in the search for an optimized configuration that meets the agreed SLA.
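One self-optimization iteration of the kind described — compare a monitored metric against the SLA and adjust the configuration — can be sketched as follows. The metric and knob names are hypothetical, and the thesis drives this decision from an ontology-backed knowledge base and administrator rules rather than a hard-coded policy:

```python
def autonomic_step(metrics, sla, config):
    """One illustrative monitor-analyze-plan-execute iteration:
    scale a virtual machine's resources based on the observed
    response time versus the SLA target (hypothetical knob names)."""
    new_config = dict(config)
    if metrics["response_time"] > sla["response_time"]:
        new_config["vcpus"] = config["vcpus"] + 1  # scale up to meet the SLA
    elif metrics["response_time"] < 0.5 * sla["response_time"] and config["vcpus"] > 1:
        new_config["vcpus"] = config["vcpus"] - 1  # reclaim over-provisioned capacity
    return new_config
```

Run in a loop against live monitoring data, a rule like this is the simplest form of the self-optimization behavior the mechanisms above automate.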
398

Novel optimization schemes for service composition in the cloud using learning automata-based matrix factorization

Shehu, Umar Galadima January 2015
Service Oriented Computing (SOC) provides a framework for the realization of loosely couple service oriented applications (SOA). Web services are central to the concept of SOC. They possess several benefits which are useful to SOA e.g. encapsulation, loose coupling and reusability. Using web services, an application can embed its functionalities within the business process of other applications. This is made possible through web service composition. Web services are composed to provide more complex functions for a service consumer in the form of a value added composite service. Currently, research into how web services can be composed to yield QoS (Quality of Service) optimal composite service has gathered significant attention. However, the number and services has risen thereby increasing the number of possible service combinations and also amplifying the impact of network on composite service performance. QoS-based service composition in the cloud addresses two important sub-problems; Prediction of network performance between web service nodes in the cloud, and QoS-based web service composition. We model the former problem as a prediction problem while the later problem is modelled as an NP-Hard optimization problem due to its complex, constrained and multi-objective nature. This thesis contributed to the prediction problem by presenting a novel learning automata-based non-negative matrix factorization algorithm (LANMF) for estimating end-to-end network latency of a composition in the cloud. LANMF encodes each web service node as an automaton which allows v it to estimate its network coordinate in such a way that prediction error is minimized. Experiments indicate that LANMF is more accurate than current approaches. 
The thesis also contributes to the QoS-based service composition problem by proposing four evolutionary algorithms: a network-aware genetic algorithm (INSGA), a K-means based genetic algorithm (KNSGA), a multi-population particle swarm optimization algorithm (NMPSO), and a non-dominated sort fruit fly algorithm (NFOA). The algorithms adopt different evolutionary strategies, coupled with the LANMF method, to search for low-latency and QoS-optimal solutions. They also employ a unique constraint handling method that penalizes solutions violating user-specified QoS constraints. Experiments demonstrate the efficiency and scalability of the algorithms in a large-scale environment; they also outperform other evolutionary algorithms in terms of optimality and scalability. In addition, the thesis contributes to QoS-based web service composition in a dynamic environment, motivated by the ineffectiveness of the four proposed algorithms in a dynamically changing QoS environment such as a real-world scenario. Hence, a new cellular automata-based genetic algorithm (CellGA) is proposed to address the issue. Experimental results show the effectiveness of CellGA in solving QoS-based service composition in a dynamic QoS environment.
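The penalty-based constraint handling mentioned above can be sketched as a fitness function. This is a minimal, hypothetical formulation, not the thesis's exact one: the aggregation rules (additive latency, multiplicative availability) are standard in QoS composition, but the score weights and penalty factor are illustrative.

```python
# Minimal sketch of penalty-based constraint handling for QoS-aware
# composition: a candidate's fitness is its aggregated QoS, degraded in
# proportion to how far it violates each user-specified constraint.
# Weights and the scoring scheme are illustrative assumptions.

def fitness(services, max_latency, min_availability, penalty_weight=10.0):
    """services: list of (latency_ms, availability) per selected service."""
    latency = sum(l for l, _ in services)      # additive QoS attribute
    availability = 1.0
    for _, a in services:
        availability *= a                      # multiplicative QoS attribute
    score = -latency + 100.0 * availability    # prefer fast, reliable plans
    # Penalize constraint violations by their magnitude, so an
    # evolutionary search can still rank infeasible candidates.
    violation = max(0.0, latency - max_latency)
    violation += max(0.0, min_availability - availability)
    return score - penalty_weight * violation

ok = fitness([(20, 0.99), (30, 0.98)], max_latency=100, min_availability=0.9)
bad = fitness([(80, 0.99), (70, 0.98)], max_latency=100, min_availability=0.9)
```

Graded penalties (rather than discarding infeasible candidates outright) keep selection pressure pointing toward the feasible region, which is why this style of handling suits genetic and swarm algorithms.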
399

Key Drivers for the Successful Outsourcing of IT Services

Alisic, Senadin, Karapistoli, Eirini, Katkic, Adis January 2012 (has links)
Background: Services are without doubt the driving force in today’s economies in many countries. The increased importance of the service sector in industrialized economies, and its productivity rates, is testified by the fact that the current list of Fortune 500 companies contains more service companies and fewer manufacturing companies than in previous decades. Many products today are being transformed into services or have a higher service component than before. In this increasingly important bundling of services with products, outsourcing and offshoring play a key role. Companies have been outsourcing work for many years now, making outsourcing a well-established phenomenon. Outsourcing to foreign countries, referred to as offshoring, has also been fuelled by ICT and globalization, as firms can capitalize on price and cost differentials between countries. Constant improvements in technology and global communications virtually guarantee that the future will bring much more outsourcing of services, and more specifically, outsourcing of IT services. While outsourcing and offshoring strategies play an important role in IT services, we would like to investigate the drivers that affect the successful outcome of an offshore outsourcing engagement. Purpose: The principal aim of the present study is therefore twofold: a) to identify key drivers for the successful outsourcing of IT services, seen from the outsourcing partner’s perspective, and b) to investigate how the outsourcing partner prioritizes these drivers.
400

Utilisation du taux d'erreur binaire pour améliorer la qualité de service dans les réseaux ad hoc / Using bit error rate to improve quality of service in ad hoc networks

Yélémou, Tiguiane 18 December 2012 (has links)
In ad hoc wireless networks, links are error-prone. In this context, routing plays a decisive role in improving transmission performance. In our studies, through a cross-layer approach, we take the reliability of links into account when choosing routes. To this end, we first design two new metrics, one based on the bit error rate (at the physical layer) and the other, more suitable for measurement, on the number of retransmissions (at the MAC layer).
Then, to exploit these metrics when computing routes, we adapt the algorithms underlying the routing protocols. The three families of routing protocols have been addressed: proactive protocols, where each node has a global view of the network through periodic exchanges of topology-control messages; reactive protocols, where, before starting a data transmission, each node must initiate a route discovery process; and hybrid protocols, which mix the two approaches. To test the effectiveness of our enhancements, we use the NS2 simulator enriched with a realistic propagation model and a realistic mobility model. Performance parameters such as delay, packet delivery ratio and routing load are measured in several scenarios, including mobility and multi-communication. The results show a significant improvement over the standard protocols in a quality-of-service context.
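The principle behind a BER-based link metric can be shown in a few lines. This is a sketch of the underlying idea only, not the thesis's exact metric: from a link's bit error rate, derive the probability that an n-bit frame arrives intact, and use its reciprocal (the expected transmission count, as in the well-known ETX metric) as an additive routing cost. The frame size and BER values are illustrative.

```python
# Sketch of a BER-derived link metric: expected transmissions per frame,
# assuming independent bit errors. Not the thesis's exact formulation;
# frame size and BER values below are illustrative assumptions.

def link_cost(ber, frame_bits=8192):
    """Expected number of transmissions for one frame on this link."""
    p_success = (1.0 - ber) ** frame_bits  # whole frame arrives intact
    return 1.0 / p_success

def route_cost(bers, frame_bits=8192):
    """Additive route metric: sum of per-link expected transmissions."""
    return sum(link_cost(b, frame_bits) for b in bers)

# A short route over one poor link can cost more than a longer clean
# route, which is exactly what hop-count routing fails to capture.
lossy = route_cost([1e-4])          # one link with BER 1e-4
clean = route_cost([1e-6, 1e-6])    # two links with BER 1e-6 each
```

Under this metric the two-hop clean route is preferred over the single lossy hop, illustrating why reliability-aware metrics change route selection relative to shortest-path protocols.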
