501 |
Flexible and integrated resource management for IaaS cloud environments based on programmability / Gerenciamento de recursos flexível e integrado para ambientes de nuvem IaaS baseado em programabilidade. Wickboldt, Juliano Araújo, January 2015.
Nuvens de infraestrutura como serviço (IaaS) estão se tornando um ambiente habitual para execução de aplicações modernas da Internet. Muitas plataformas de gerenciamento de nuvem estão disponíveis para aquele que deseja construir uma nuvem de IaaS privada ou pública (e.g., OpenStack, Eucalyptus, OpenNebula). Um aspecto comum do projeto de plataformas atuais diz respeito ao seu modelo de controle caixa-preta. Em geral, as plataformas de gerenciamento de nuvem são distribuídas com um conjunto de estratégias de alocação de recursos embutida em seu núcleo. Dessa forma, os administradores de nuvem têm poucas oportunidades de influenciar a maneira como os recursos são realmente gerenciados (e.g., posicionamento de máquinas virtuais ou seleção caminho de enlaces virtuais). Os administradores poderiam se beneficiar de personalizações em estratégias de gerenciamento de recursos, por exemplo, para atingir os objetivos específicos de cada ambiente ou a fim de permitir a alocação de recursos orientada à aplicação. Além disso, as preocupações acerca do gerenciamento de recursos em nuvens se dividem geralmente em computação, armazenamento e redes. Idealmente, essas três preocupações deveriam ser abordadas no mesmo nível de importância por implementações de plataformas. No entanto, ao contrário do gerenciamento de computação e armazenamento, que têm sido amplamente estudados, o gerenciamento de redes em ambientes de nuvem ainda é bastante incipiente. A falta de flexibilidade e suporte desequilibrado para o gerenciamento de recursos dificulta a adoção de nuvens como um ambiente de execução viável para muitas aplicações modernas da Internet com requisitos rigorosos de elasticidade e qualidade do serviço. Nesta tese, um novo conceito de plataforma de gerenciamento de nuvem é introduzido onde o gerenciamento de recursos flexível é obtido pela adição de programabilidade no núcleo da plataforma. Além disso, uma API simplificada e orientada a objetos é introduzida a fim de permitir que os administradores escrevam e executem programas de gerenciamento de recursos para lidar com todos os tipos de recursos a partir de um único ponto. Uma plataforma é apresentada como uma prova de conceito, incluindo um conjunto de adaptadores para lidar com tecnologias de virtualização e de redes modernas, como redes definidas por software com OpenFlow, Open vSwitches e Libvirt. Dois estudos de caso foram realizados a fim de avaliar a utilização de programas de gerenciamento de recursos para implantação e otimização de aplicações através de uma rede emulada usando contêineres de virtualização Linux e Open vSwitches operando sob o protocolo OpenFlow. Os resultados mostram a viabilidade da abordagem proposta e como os programas de implantação e otimização são capazes de alcançar diferentes objetivos definidos pelo administrador. / Infrastructure as a Service (IaaS) clouds are becoming an increasingly common way to deploy modern Internet applications. Many cloud management platforms are available for users that want to build a private or public IaaS cloud (e.g., OpenStack, Eucalyptus, OpenNebula). A common design aspect of current platforms is their black-box-like controlling nature. In general, cloud management platforms ship with one or a set of resource allocation strategies hard-coded into their core. Thus, cloud administrators have few opportunities to influence how resources are actually managed (e.g., virtual machine placement or virtual link path selection). 
Administrators could benefit from customizations in resource management strategies, for example, to achieve environment specific objectives or to enable application-oriented resource allocation. Furthermore, resource management concerns in clouds are generally divided into computing, storage, and networking. Ideally, these three concerns should be addressed at the same level of importance by platform implementations. However, as opposed to computing and storage management, which have been extensively investigated, network management in cloud environments is rather incipient. The lack of flexibility and unbalanced support for resource management hinders the adoption of clouds as a viable execution environment for many modern Internet applications with strict requirements for elasticity or Quality of Service. In this thesis, a new concept of cloud management platform is introduced where resource management is made flexible by the addition of programmability to the core of the platform. Moreover, a simplified object-oriented API is introduced to enable administrators to write and run resource management programs to handle all kinds of resources from a single point. An implementation is presented as a proof of concept, including a set of drivers to deal with modern virtualization and networking technologies, such as software-defined networking with OpenFlow, Open vSwitches, and Libvirt. Two case studies are conducted to evaluate the use of resource management programs for the deployment and optimization of applications over an emulated network using Linux virtualization containers and Open vSwitches running the OpenFlow protocol. Results show the feasibility of the proposed approach and how deployment and optimization programs are able to achieve different objectives defined by the administrator.
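To make the idea of administrator-written resource management programs more concrete, the sketch below shows what such a program might look like against a simplified object-oriented API. It is a hypothetical illustration only: the platform's actual classes and methods are not given in the abstract, so every identifier here (the cloud object, nodes, create_vm, create_link, and the placement policy) is an assumption.

```python
# Hypothetical sketch only: all identifiers below are invented for illustration and do not
# correspond to the platform's real API, which is not described in the abstract.

def deploy_web_tier(cloud, image, n_instances=3):
    """Toy deployment program: place VMs on lightly loaded hosts and wire them together."""
    vms = []
    for i in range(n_instances):
        # Pick the compute node with the least used memory (an administrator-defined policy).
        host = min(cloud.nodes(role="compute"), key=lambda n: n.used_memory())
        vm = host.create_vm(name=f"web-{i}", image=image, vcpus=2, memory_mb=2048)
        vms.append(vm)

    # Networking is handled from the same program: create virtual links between consecutive
    # VMs, asking the platform for the shortest available path (e.g., over OpenFlow switches).
    for a, b in zip(vms, vms[1:]):
        cloud.create_link(a, b, bandwidth_mbps=100, path_policy="shortest")
    return vms
```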
|
502 |
Performance Metrics Analysis of GamingAnywhere with GPU accelerated NVIDIA CUDA. Sreenibha Reddy, Byreddy, January 2018.
The modern world has opened the gates to many advancements in cloud computing, particularly in the field of cloud gaming. The most recent development in this area is the open-source cloud gaming system called GamingAnywhere. The relationship between the CPU and the GPU is the main focus of this thesis. Graphics Processing Unit (GPU) performance plays a vital role in the playing experience and in the enhancement of GamingAnywhere. This paper concentrates on virtualization of the GPU and suggests that accelerating this unit with NVIDIA CUDA is the key to better performance when using GamingAnywhere. After extensive research, gVirtuS was chosen as the technique for NVIDIA CUDA virtualization. An experimental study is conducted to evaluate the feasibility and performance of GPU solutions by VMware in the cloud gaming scenarios provided by GamingAnywhere. Performance is measured in terms of bitrate, packet loss, jitter, and frame rate. Different game resolutions are considered in our empirical research, and the results show that frame rate and bitrate increase across resolutions when the NVIDIA CUDA-enhanced GPU is used.
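The four metrics listed above can be derived from a simple capture log. The sketch below is illustrative only: the assumed input format (a list of received packets with timestamps and sizes, plus counts of sent packets and rendered frames) is an invention for this example and is not taken from the thesis.

```python
# Illustrative sketch of how the reported metrics could be computed from a capture log.
# The input format is assumed, not taken from the thesis.

def summarize(packets, frames_rendered, duration_s, packets_sent):
    """packets: list of (recv_time_s, size_bytes); returns bitrate, loss, jitter, frame rate."""
    total_bits = sum(size * 8 for _, size in packets)
    bitrate_kbps = total_bits / duration_s / 1000.0

    packet_loss = 1.0 - len(packets) / packets_sent

    # Jitter as the mean absolute variation of inter-arrival times (a common simplification
    # of the RFC 3550 estimator).
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(packets, packets[1:])]
    jitter_ms = sum(abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:])) / max(len(gaps) - 1, 1) * 1000

    frame_rate = frames_rendered / duration_s
    return bitrate_kbps, packet_loss, jitter_ms, frame_rate
```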
|
503 |
Performance Metrics Analysis of GamingAnywhere with GPU accelerated Nvidia CUDA. Sreenibha Reddy, Byreddy, January 2018.
The modern world has opened the gates to many advancements in cloud computing, particularly in the field of cloud gaming. The most recent development in this area is the open-source cloud gaming system called GamingAnywhere. The relationship between the CPU and the GPU is the main focus of this thesis. Graphics Processing Unit (GPU) performance plays a vital role in the playing experience and in the enhancement of GamingAnywhere. This paper concentrates on virtualization of the GPU and suggests that accelerating this unit with NVIDIA CUDA is the key to better performance when using GamingAnywhere. After extensive research, gVirtuS was chosen as the technique for NVIDIA CUDA virtualization. An experimental study is conducted to evaluate the feasibility and performance of GPU solutions by VMware in the cloud gaming scenarios provided by GamingAnywhere. Performance is measured in terms of bitrate, packet loss, jitter, and frame rate. Different game resolutions are considered in our empirical research, and the results show that frame rate and bitrate increase across resolutions when the NVIDIA CUDA-enhanced GPU is used.
|
504 |
Performance Metrics Analysis of GamingAnywhere with GPU accelerated NVIDIA CUDA using gVirtuS. Zaahid, Mohammed, January 2018.
The modern world has opened the gates to many advancements in cloud computing, particularly in the field of cloud gaming. The most recent development in this area is the open-source cloud gaming system called GamingAnywhere. The relationship between the CPU and the GPU is the main focus of this thesis. Graphics Processing Unit (GPU) performance plays a vital role in the playing experience and in the enhancement of GamingAnywhere. This paper concentrates on virtualization of the GPU and suggests that accelerating this unit with NVIDIA CUDA is the key to better performance when using GamingAnywhere. After extensive research, gVirtuS was chosen as the technique for NVIDIA CUDA virtualization. An experimental study is conducted to evaluate the feasibility and performance of GPU solutions by VMware in the cloud gaming scenarios provided by GamingAnywhere. Performance is measured in terms of bitrate, packet loss, jitter, and frame rate. Different game resolutions are considered in our empirical research, and the results show that frame rate and bitrate increase across resolutions when the NVIDIA CUDA-enhanced GPU is used.
|
505 |
Nouveaux paradigmes de contrôle de congestion dans un réseau d'opérateur / New paradigms for congestion control in an operator's network. Sanhaji, Ali, 29 November 2016.
La congestion dans les réseaux est un phénomène qui peut influer sur la qualité de service ressentie par les utilisateurs. L'augmentation continue du trafic sur l'internet rend le phénomène de congestion un problème auquel l'opérateur doit répondre pour satisfaire ses clients. Les solutions historiques à la congestion pour un opérateur, comme le surdimensionnement des liens de son infrastructure, ne sont plus aujourd'hui viables. Avec l'évolution de l'architecture des réseaux et l'arrivée de nouvelles applications sur l'internet, de nouveaux paradigmes de contrôle de congestion sont à envisager pour répondre aux attentes des utilisateurs du réseau de l'opérateur. Dans cette thèse, nous examinons les nouvelles approches proposées pour le contrôle de congestion dans le réseau d'un opérateur. Nous proposons une évaluation de ces approches à travers des simulations, ce qui nous permet d'estimer leur efficacité et leur potentiel à être déployés et opérationnels dans le contexte d'internet, ainsi que de se rendre compte des défis qu'il faut relever pour atteindre cet objectif. Nous proposons également des solutions de contrôle de congestion dans des environnements nouveaux tels que les architectures Software Defined Networking et le cloud déployé sur un ou plusieurs data centers, où la congestion est à surveiller pour maintenir la qualité des services cloud offerts aux clients. Pour appuyer nos propositions d'architectures de contrôle de congestion, nous présentons des plateformes expérimentales qui démontrent le fonctionnement et le potentiel de nos solutions. / Network congestion is a phenomenon that can affect the quality of service experienced by users. The continuous increase of Internet traffic makes congestion an issue that the network operator must address to satisfy its clients. The usual solutions to congestion, such as over-dimensioning the infrastructure, are no longer viable. With the evolution of network architecture and the emergence of new Internet applications, new paradigms for congestion control have to be considered as a response to the expectations of the operator's network users. In this thesis, we examine new approaches to congestion control in an operator's network. We evaluate these approaches through simulations, which allows us to estimate their effectiveness and their potential for deployment and operation over the Internet, and to understand the challenges that must be overcome to achieve this goal. We also propose congestion control solutions for new environments such as Software-Defined Networking architectures and clouds deployed over one or more data centers, where congestion must be monitored to maintain the quality of the cloud services offered to customers. To support our proposed congestion control architectures, we present experimental platforms that demonstrate the operation and potential of our solutions.
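As a concrete illustration of the monitoring side mentioned above, the following sketch flags congested links from periodically collected byte counters. It assumes nothing about a particular controller or the thesis's actual mechanisms: the counter snapshots are taken as given, and all names are invented for this example.

```python
# Minimal sketch of congestion monitoring: compare per-link byte counters (as exposed by an
# SDN controller or switch agent) against link capacity and flag links whose utilization
# crosses a threshold. No specific controller API is implied.

def congested_links(counters_t0, counters_t1, capacities_bps, interval_s, threshold=0.8):
    """counters_*: dict link_id -> cumulative bytes; capacities_bps: dict link_id -> capacity."""
    flagged = {}
    for link, cap in capacities_bps.items():
        delta_bytes = counters_t1.get(link, 0) - counters_t0.get(link, 0)
        utilization = (delta_bytes * 8) / (cap * interval_s)
        if utilization >= threshold:
            flagged[link] = utilization  # candidate for rerouting, marking, or rate limiting
    return flagged
```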
|
506 |
Processus sécurisés de dématérialisation de cartes sans contact / Secure processes of dematerialization of contactless cards. Bouazzouni, Mohamed Amine, 08 November 2017.
Au fil des années, la technologie sans contact NFC s'est imposée dans notre quotidien au travers des différents services proposés. Les cas d'utilisation sont nombreux allant des cartes de fidélité, des cartes de transport, des cartes de paiement sans contact jusqu'aux cartes de contrôle d'accès. Cependant, les premières générations des cartes NFC ont une sécurité minimale reposant sur l'hypothèse de leur non-clonabilité. De multiples vulnérabilités ont été découvertes et leur exploitation a permis des copies frauduleuses. Afin de remédier à ces vulnérabilités, une nouvelle génération de cartes à la sécurité augmentée a vu le jour. Ces cartes permettent une authentification avec un lecteur basée sur des algorithmes de chiffrements symétriques tels qu'AES, DES, et 3DES. Elles sont plus robustes que la première génération mais ont également subi une attaque en reverse-engineering. Pour garantir et améliorer le niveau de sécurité du système de contrôle d'accès, nous proposons, dans le cadre de l'opération neOCampus, la dématérialisation sécurisée de la carte sans contact sur un smartphone muni de la technologie NFC. Cette dématérialisation nous permet d'exploiter la puissance de calcul et la capacité de stockage du smartphone afin de déployer des algorithmes d'authentification plus robustes. Cependant, l'OS du smartphone ne peut être considéré comme un environnement de confiance. Afin de répondre à la problématique du stockage et du traitement sécurisés sur un smartphone, plusieurs solutions ont été proposées : les Secure Elements (SE), les Trusted Platform Module (TPM), les Trusted Execution Environment (TEE) et la virtualisation. Afin de stocker et de traiter de manière sécurisée les données d'authentification, le TEE apparaît comme la solution idéale avec le meilleur compromis sécurité/performances. Cependant, de nombreux smartphones n'embarquent pas encore de TEE. Pour remédier à cette contrainte, nous proposons une architecture basée sur l'utilisation de TEEs déportés sur le Cloud. Le smartphone peut le contacter via une liaison Wi-Fi ou 4G. Pour ce faire, un protocole d'authentification basé sur IBAKE est proposé. En plus de ce scénario nominal, deux autres scenarii complémentaires ont été proposés permettant d'accompagner le développement et la démocratisation des TEE non seulement dans le monde des smartphones mais aussi sur des dispositifs peu onéreux comme le Raspberry Pi 3. Ces architectures déploient le même algorithme d'authentification que le scénario nominal. Nous proposons aussi une architecture hors ligne permettant à un utilisateur de s'authentifier à l'aide d'un jeton de connexion en cas d'absence de réseaux sans fil. Cette solution permet de relâcher la contrainte sur la connectivité du smartphone à son Cloud. Nous procédons à une évaluation de l'architecture de dématérialisation et de l'algorithme d'authentification en termes de performances et de sécurité. Les opérations cryptographiques du protocole d'authentification sont les plus coûteuses. Nous avons alors procédé à leur évaluation en nous intéressant en particulier aux opérations de chiffrement IBE et à la génération de challenges ECC. Nos implémentations ont été évaluées pour l'infrastructure Cloud et l'environnement mobile. Nous avons ensuite procédé à une validation du protocole d'authentification sur les trois architectures sélectionnées à l'aide de l'outil Scyther.
Nous avons montré que, pour les trois scenarii, la clé de session négociée via le protocole d'authentification restait secrète durant tout le protocole. Cette caractéristique nous garantit que les données d'authentification chiffrées avec cette clé resteront secrètes et que la phase d'identification de la personne est protégée tout en préservant l'ergonomie du système existant. / Over the years, Near Field Communication (NFC) technology has become part of our daily lives through a variety of services. There are several use cases for contactless cards: loyalty cards, metro and bus cards, payment cards, and access control cards. However, the first version of these cards has a low security level that rests on the assumption that the cards cannot be cloned. To address this issue, a new version of NFC cards has been developed. It allows authentication with the NFC reader through symmetric encryption algorithms such as AES, DES, or 3DES. These cards are more robust than the previous ones. However, they have also undergone a reverse-engineering attack. We propose, in the context of the neOCampus project, to replace the contactless cards with a smartphone equipped with NFC capabilities. This process, called dematerialization, allows us to take advantage of the computational power and storage capabilities of the smartphone to deploy more complex and robust authentication algorithms. However, the OS of the smartphone cannot be considered a trusted environment for the storage and processing of sensitive data. To address these issues, several solutions have been proposed: Secure Elements (SE), Trusted Platform Modules (TPM), Trusted Execution Environments (TEE), and virtualization. In order to store and process authentication data securely, the TEE appears to be the best trade-off between security and performance. Nevertheless, many smartphones do not yet embed a TEE, and it is necessary to negotiate agreements with TEE manufacturers in order to deploy a secure application on them. To work around these issues, we propose to set up an architecture with a TEE in the Cloud. The smartphone has a secure Cloud that can be reached through a Wi-Fi or 4G connection. The reader also has its own secure Cloud, reachable over an Ethernet link. An authentication protocol based on IBAKE is also proposed. In addition to this nominal scenario, two other scenarios are proposed to follow the development and democratization of the TEE on smartphones and on inexpensive devices such as the Raspberry Pi 3. These alternative architectures deploy the same authentication protocol as the main scenario. We also propose an offline architecture allowing a user to authenticate using a connection token. This solution relaxes the connectivity constraint between the smartphone and its secure Cloud. We evaluate our architecture and the authentication algorithm in terms of performance and security. The cryptographic operations of the authentication protocol are the most expensive ones. We therefore target these operations, in particular IBE encryption and ECC challenge generation. Our implementations have been evaluated for a Cloud infrastructure and a mobile-like environment. We also perform a formal verification of the authentication protocol across the three considered architectures with Scyther.
We showed that, for the three scenarios, the session key negotiated through the authentication protocol remains secret during the entire execution of the protocol. This characteristic guarantees that the authentication data encrypted with this key will remain secret and that this step of the algorithm will be secure, while preserving the ergonomics of the existing system.
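For readers unfamiliar with the ECC challenge operations whose cost is evaluated above, the sketch below shows a generic elliptic-curve challenge-response exchange using the pyca/cryptography package. It is not the thesis's IBAKE-based protocol, only an illustration of the kind of ECC operations involved; the roles named in the comments are assumptions.

```python
# Generic ECC challenge-response sketch, NOT the thesis's IBAKE-based protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Enrollment: the prover (e.g., the secure Cloud acting for the smartphone) holds an EC key pair.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Verifier (e.g., the reader's side) issues a random challenge.
challenge = os.urandom(32)

# Prover signs the challenge.
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Verifier checks the response; verify() raises InvalidSignature on failure.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("challenge accepted")
except InvalidSignature:
    print("challenge rejected")
```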
|
507 |
Unveiling the interplay between timeliness and scalability in cloud monitoring systems / Desvelando a relação mútua entre escalabilidade e oportunidade em sistemas de monitoramento de nuvens computacionais. Rodrigues, Guilherme da Cunha, January 2016.
Computação em nuvem é uma solução adequada para profissionais, empresas, centros de pesquisa e instituições que necessitam de acesso a recursos computacionais sob demanda. Atualmente, nuvens computacionais confiam no gerenciamento de sua estrutura para fornecer recursos computacionais com qualidade de serviço adequada às expectativas de seus clientes, tal qualidade de serviço é estabelecida através de acordos de nível de serviço. Nesse contexto, o monitoramento é uma função crítica de gerenciamento para se prover tal qualidade de serviço. Requisitos de monitoramento em nuvens computacionais são propriedades que um sistema de monitoramento de nuvem precisa reunir para executar suas funções de modo adequado e atualmente existem diversos requisitos definidos pela literatura, tais como: oportunidade, elasticidade e escalabilidade. Entretanto, tais requisitos geralmente possuem influência mútua entre eles, que pode ser positiva ou negativa, e isso impossibilita o desenvolvimento de soluções de monitoramento completas. Dado o cenário descrito acima, essa tese tem como objetivo investigar a influência mútua entre escalabilidade e oportunidade. Especificamente, essa tese propõe um modelo matemático para estimar a influência mútua entre tais requisitos de monitoramento. A metodologia utilizada por essa tese para construir tal modelo matemático baseia-se em parâmetros de monitoramento tais como: topologia de monitoramento, quantidade de dados de monitoramento e frequência de amostragem. Além destes, a largura de banda de rede e o tempo de resposta também são importantes métricas do modelo matemático. A avaliação dos resultados obtidos foi realizada através da comparação entre os resultados do modelo matemático e de uma simulação. As maiores contribuições dessa tese são divididas em dois eixos, estes são denominados: Básico e Chave. As contribuições do eixo básico são: (i) a discussão a respeito da estrutura de monitoramento de nuvem e introdução do conceito de foco de monitoramento (ii) o exame do conceito de requisito de monitoramento e a proposição do conceito de abilidade de monitoramento (iii) a análise dos desafios e tendências a respeito de monitoramento de nuvens computacionais. As contribuições do eixo chave são: (i) a discussão a respeito de oportunidade e escalabilidade incluindo métodos para lidar com a mútua influência entre tais requisitos e a relação desses requisitos com parâmetros de monitoramento (ii) a identificação dos parâmetros de monitoramento que são essenciais na relação entre oportunidade e escalabilidade (iii) a proposição de um modelo matemático baseado em parâmetros de monitoramento que visa estimar a relação mútua entre oportunidade e escalabilidade. / Cloud computing is a suitable solution for professionals, companies, research centres, and institutions that need access to computational resources on demand. Nowadays, clouds have to rely on proper management of their structure to provide customers with such computational resources at an adequate quality of service, which is established by Service Level Agreements (SLAs). In this context, cloud monitoring is a critical management function to achieve it. Cloud monitoring requirements are properties that a cloud monitoring system needs to meet to perform its functions properly, and currently there are several of them, such as timeliness, elasticity, and scalability.
However, such requirements usually influence one another, either positively or negatively, and this has prevented the development of complete cloud monitoring solutions. Given the above, this thesis investigates the mutual influence between timeliness and scalability, and proposes a mathematical model to estimate such mutual influence in order to enhance cloud monitoring systems. The methodology used in this thesis is based on monitoring parameters such as monitoring topologies, the amount of monitoring data, and sampling frequency. In addition, it considers network bandwidth and response time as important metrics. Finally, the evaluation is based on a comparison between the mathematical model results and outcomes obtained via simulation. The main contributions of this thesis are divided into two axes, namely, basic and key. The basic contributions of this thesis are: (i) it discusses the cloud monitoring structure and introduces the concept of cloud monitoring focus; (ii) it examines the concept of cloud monitoring requirement and proposes to divide such requirements into two groups, defined as cloud monitoring requirements and cloud monitoring abilities; (iii) it analyses challenges and trends in cloud monitoring, pointing out research gaps that include the mutual influence between cloud monitoring requirements, which is core to the key contributions. The key contributions of this thesis are: (i) it presents a discussion of timeliness and scalability that includes the methods currently used to cope with the mutual influence between them and the relation between such requirements and monitoring parameters; (ii) it identifies the monitoring parameters that are essential in the relation between timeliness and scalability; (iii) it proposes a mathematical model based on monitoring parameters to estimate the mutual influence between timeliness and scalability.
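The abstract names the parameters of the model without giving its equations, so the following toy sketch only illustrates how those parameters (number of monitored nodes, data per sample, sampling frequency, and network bandwidth) interact to produce the timeliness/scalability trade-off. The formulas are assumptions for illustration, not the thesis's actual model.

```python
# Toy model only: the thesis's actual mathematical model is not reproduced in the abstract.

def monitoring_load_bps(nodes, bytes_per_sample, sampling_hz):
    """Aggregate monitoring traffic arriving at a central collector."""
    return nodes * bytes_per_sample * 8 * sampling_hz

def delivery_delay_s(nodes, bytes_per_sample, sampling_hz, bandwidth_bps):
    """Rough timeliness proxy: time to drain one round of samples; grows without bound
    once the offered load exceeds the available bandwidth."""
    load = monitoring_load_bps(nodes, bytes_per_sample, sampling_hz)
    if load >= bandwidth_bps:
        return float("inf")  # monitoring data piles up faster than it can be delivered
    round_bits = nodes * bytes_per_sample * 8
    return round_bits / (bandwidth_bps - load)

# Scaling out (more nodes) or sampling faster both increase the load and hence the delay,
# which is the mutual influence between scalability and timeliness discussed above.
```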
|
508 |
SDN-based Proactive Defense Mechanism in a Cloud System. January 2015.
Cloud computing is known as a new and powerful computing paradigm. This new generation of network computing delivers both software and hardware as on-demand resources and various services over the Internet. However, security concerns prevent users from adopting cloud-based solutions to fulfill the IT requirements of many business-critical computing tasks. Due to the resource-sharing and multi-tenant nature of cloud-based solutions, cloud security is of particular concern in the Infrastructure as a Service (IaaS) model, and it has been attracting a great deal of research and development effort in the past few years.
Virtualization is the main technology through which cloud computing enables multi-tenancy.
Computing power, storage, and networking are all virtualizable and shared in an IaaS system. This important technology makes abstracted infrastructure and resources available to users as isolated virtual machines (VMs) and virtual networks (VNs). However, it also increases the vulnerabilities and possible attack surfaces in the system, since all users in a cloud share these resources with other users, or even with attackers. A promising protection mechanism is required to ensure strong isolation, mediated sharing, and secure communication between VMs. Technologies for detecting anomalous traffic and protecting normal traffic in VNs are also needed. Therefore, how to secure and protect private traffic in VNs and how to prevent malicious traffic from shared resources are major security research challenges in a cloud system.
This dissertation proposes four novel frameworks to address the challenges mentioned above. The first is a multi-phase distributed vulnerability detection, measurement, and countermeasure selection mechanism based on an attack-graph analytical model. The second is a hybrid intrusion detection and prevention system that protects VNs and VMs using virtual machine introspection (VMI) and software-defined networking (SDN) technologies. The third further improves the previous works by introducing a VM profiler and a VM Security Index (VSI) to keep track of the security status of each VM and suggest the optimal countermeasure to mitigate potential threats. The final work is an SDN-based proactive defense mechanism for a cloud system that uses a reconfiguration model and moving target defense approaches to actively and dynamically change the virtual network configuration of a cloud system. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015
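As a rough illustration of the VM Security Index idea described above, the sketch below scores a VM and picks the cheapest countermeasure expected to bring it back under a threshold. The weights, input fields, and countermeasure list are invented for this example; the dissertation's actual metrics and selection model are not given in the abstract.

```python
# Hypothetical sketch of the VSI idea: all weights, fields, and countermeasures are invented.

COUNTERMEASURES = [
    # (name, deployment cost, expected VSI reduction)
    ("tighten-flow-rules", 1.0, 0.2),   # e.g., push stricter OpenFlow rules via SDN
    ("quarantine-vnet", 3.0, 0.5),      # move the VM to an isolated virtual network
    ("snapshot-and-migrate", 5.0, 0.7), # moving-target-style relocation of the VM
]

def vsi(vm):
    """Weighted security score in [0, 1]; higher means more exposed."""
    return min(1.0, 0.5 * vm["cvss_max"] / 10.0
                    + 0.3 * vm["open_services"] / 10.0
                    + 0.2 * vm["recent_alerts"] / 5.0)

def pick_countermeasure(vm, threshold=0.6):
    """Cheapest action expected to bring the VM back under the threshold."""
    score = vsi(vm)
    if score < threshold:
        return None
    eligible = [c for c in COUNTERMEASURES if score - c[2] < threshold]
    return min(eligible, key=lambda c: c[1]) if eligible else COUNTERMEASURES[-1]

print(pick_countermeasure({"cvss_max": 9.0, "open_services": 6, "recent_alerts": 2}))
```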
|
509 |
RoSe : un framework pour la conception et l'exécution d'applications distribuées dynamiques et hétérogènes / RoSe: A framework for the design and execution of dynamic and heterogeneous distributed applications. Bardin, Jonathan, 02 October 2012.
L'adaptation est aujourd'hui devenue un enjeu majeur en Génie Logiciel. Les ingénieurs sont en effet régulièrement confrontés à des demandes d'évolution qui peuvent prendre de nombreuses formes : mises à jour, nouvelles versions, besoins en nouvelles fonctionnalités, etc. Cette tendance est accrue par l'émergence de nouveaux domaines tels que l'informatique ubiquitaire ou le cloud computing qui exigent des changements dynamiques dans des environnements en constante évolution. Ainsi, dans ces domaines, les ressources sont souvent élastiques, volatiles et hétérogènes. Cette thèse s'intéresse en particulier à la conception et à l'exécution d'applications distribuées composées d'entités hétérogènes et qui nécessitent d'être adaptées durant l'exécution. Notre approche s'appuie sur les modèles à composant orientés service et sur les styles d'architectures SOA et REST. Nous proposons un framework, nommé RoSe, qui permet l'import de ressources distantes dans un framework à composant orienté service et l'export de services locaux. RoSe permet aux développeurs et aux administrateurs de gérer la distribution des applications de manière totalement indépendante et dynamique grâce à un langage de configuration et à une API dite fluent. Le framework lui-même est modulaire et flexible et supporte l'ajout et le retrait de composants durant l'exécution. L'implantation de RoSe est hébergée au sein du projet OW2 Chameleon et est aujourd'hui utilisée dans plusieurs projets industriels et académiques. / Adaptation has now become a major challenge in Software Engineering. Engineers are regularly confronted with requests for changes that can take many forms: updates, new versions, the need for new features, etc. This trend is reinforced by the emergence of new areas such as ubiquitous computing or cloud computing, which require dynamic changes in constantly evolving environments. For instance, in these areas, resources are often elastic, volatile, and heterogeneous. This thesis focuses especially on the design and execution of distributed applications composed of heterogeneous entities that need to be adapted at runtime. Our approach is based on service-oriented component models and on the SOA and REST architectural styles. We propose a framework, named RoSe, which enables the import of remote resources into a service-oriented component framework and the export of local services. RoSe allows developers and administrators to manage the distribution of their applications in a totally independent and dynamic way thanks to a configuration language and a fluent API. The framework itself is modular and flexible and supports the addition and removal of components during execution. The implementation of RoSe is hosted by OW2 in the Chameleon project and is now used in several industrial and academic projects.
|
510 |
Privacy Challenges in Online Targeted Advertising / Protection de la vie privée dans la publicité ciblée en ligne. Tran, Minh-Dung, 13 November 2014.
L'auteur n'a pas fourni de résumé en français. / In modern online advertising, advertisers tend to track Internet users' activities and use these tracking data to personalize ads. Even though this practice, known as targeted advertising, brings economic benefits to advertising companies, it raises serious concerns about potential abuses of users' sensitive data. While such privacy violations, if performed by trackers, are subject to regulation by laws and audits by privacy watchdogs, the consequences of data leakage from these trackers to other entities are much more difficult to detect and control. Protecting user privacy is not easy, since preventing tracking undermines the benefits of targeted advertising and consequently impedes the growth of free content and services on the Internet, which are mainly fostered by advertising revenue. While short-term measures, such as detecting and fixing privacy leakages in current systems, are necessary, there needs to be a long-term approach, such as a privacy-by-design ad model, to protect user privacy by prevention rather than cure. In the first part of this thesis, we study several vulnerabilities in current advertising systems that leak user data from advertising companies to external entities. First, since targeted ads are personalized to each user, we present an attack that exploits these ads on the fly to infer the private user information that was used to select them. Second, we investigate common ad exchange protocols, which allow companies to cooperate in serving ads to users, and show that advertising companies are leaking private user information, such as web browsing histories, to multiple parties participating in the protocols. These web browsing histories are given to these entities at surprisingly low prices, reflecting the fact that user privacy is extremely undervalued by the advertising industry. In the second part of the thesis, we propose a privacy-by-design targeted advertising model which allows personalizing ads to users without the need for tracking. This model is specifically aimed at two newly emerging ad technologies: retargeting advertising and ad exchange. We show that this model provides strong protection for user privacy while still ensuring ad targeting performance and being practically deployable.
|