About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
41

PARNAFOA: um processo de análise de requisitos não-funcionais orientado a aspectos. / PARNAFOA: an aspect-oriented non-functional requirements analysis process.

Bombonatti, Denise Lazzeri Gastaldo, 19 August 2010
The aim of this thesis is to define an aspect-oriented non-functional requirements analysis process named PARNAFOA. The process integrates non-functional requirements treatment methods, based on the NFR Framework, with aspect-oriented methods. Its main result is a use case model that incorporates new functions related to the non-functional requirements. PARNAFOA was applied to five software systems with diverse domains, features and complexities. The evaluation of these applications showed that treating non-functional requirements from the early phases of software development complements the use case model with additional functions or generates design constraints. If these requirements are not considered from the very beginning, introducing those functions later may force changes in already consolidated models, or design activities may be carried out without taking the constraints into account. The applications of PARNAFOA, and the improvements incorporated after their evaluation, made the process more flexible than its initial version. Future applications, involving other types of non-functional requirements, will allow the process to mature.
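The abstract contains no code; as a rough illustration only (not taken from the thesis), the kind of artifact PARNAFOA produces — a use case model that gains a new function from an operationalized non-functional requirement — could be sketched in Python as below. All names (Softgoal, UseCase, incorporate) are hypothetical.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Softgoal:
        """An NFR Framework softgoal, e.g. type='Security', topic='account access'."""
        type: str
        topic: str

    @dataclass
    class UseCase:
        name: str
        derived_from: Optional[Softgoal] = None   # None for the original functional use cases

    @dataclass
    class UseCaseModel:
        use_cases: list = field(default_factory=list)

        def incorporate(self, softgoal: Softgoal, operationalization: str) -> None:
            # an operationalized softgoal becomes an additional function in the model
            self.use_cases.append(UseCase(operationalization, derived_from=softgoal))

    model = UseCaseModel([UseCase("Withdraw cash")])
    model.incorporate(Softgoal("Security", "account access"), "Authenticate customer")
    print([uc.name for uc in model.use_cases])   # ['Withdraw cash', 'Authenticate customer']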
42

Análise de disponibilidade em sistemas de software na Web. / Availability analysis of Web software systems.

Vasconcellos Neto, Oswaldo Cabral de, 24 November 2009
The use of the Internet for e-business service automation has been adopted as a strategy by organizations in several sectors of the economy, reducing costs and improving the relationship with the customer. Availability is an important non-functional requirement to be considered in the development of the software systems that provide this kind of automation. The level of availability of a system may be affected by the system architecture and, in particular, by the software architecture, since architectural decisions must take availability-related aspects into account. In the ATAM (Architecture Tradeoff Analysis Method) architecture evaluation method, this requirement is analyzed by means of availability scenarios. Because availability evaluation is normally a complex task, requiring analysts to identify numerous interdependent items, generating and analyzing availability scenarios is often not trivial. This work aims to elaborate an availability analysis technique for Web-based software systems that supports the systematic generation of the availability scenarios required by the ATAM method. To elaborate the proposal, the work covers methods for the elicitation, representation and analysis of non-functional requirements in a given software architecture, as well as concepts and taxonomies related to dependability. Finally, the technique is exercised on a simplified example of a Web banking software system.
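As a hedged illustration of the scenario structure the abstract refers to (not code from the thesis), the sketch below combines the usual six-part ATAM quality-attribute scenario with the classic steady-state availability estimate A = MTTF / (MTTF + MTTR); all field values are invented.

    from dataclasses import dataclass

    @dataclass
    class AvailabilityScenario:
        """The six parts of an ATAM quality-attribute scenario, here for availability."""
        source: str            # e.g. "hardware fault"
        stimulus: str          # e.g. "database node fails"
        environment: str       # e.g. "normal operation, peak load"
        artifact: str          # e.g. "Web banking back end"
        response: str          # e.g. "requests rerouted to a replica"
        response_measure: str  # e.g. "service restored within 30 s"

    def steady_state_availability(mttf_hours: float, mttr_hours: float) -> float:
        """Classic dependability estimate: A = MTTF / (MTTF + MTTR)."""
        return mttf_hours / (mttf_hours + mttr_hours)

    scenario = AvailabilityScenario(
        source="hardware fault", stimulus="database node fails",
        environment="peak load", artifact="Web banking back end",
        response="requests rerouted to a replica",
        response_measure="99.9% monthly availability")
    print(scenario.response_measure, f"{steady_state_availability(720.0, 0.5):.4f}")  # ... 0.9993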
45

Towards interoperable and knowledge-based electronic health records using archetype methodology

Chen, Rong, January 2009
Doctoral thesis (comprehensive summary), Linköping: Linköpings universitet, 2009. With 5 appended papers.
46

[en] EUNOMIA (ΕΥΝΟΜΙΑ): A requirement engineering based compliance framework for software systems / [pt] EUNOMIA: Um framework de conformidade contínua para sistemas de software baseado na engenharia de requisitos

Engiel, Priscila, 14 December 2018
Laws and regulations affect software development, as they frequently demand changes in software requirements to protect individuals and businesses with respect to security, privacy, governance, sustainability and more. Legal requirements can dictate new requirements or constrain existing ones. The problem of software compliance is how to ensure that the software complies with the norms that legislation imposes. The problem is particularly challenging because it combines difficult steps: 1) analyzing legal documents, 2) extracting requirements from those documents, 3) identifying conflicts with the requirements already implemented in the software, and 4) ensuring that the software remains compliant even as things change. Compliance is a continuous process: the laws, the software and the context within which the software system operates change continuously. Existing work on the compliance problem focuses on only one or two of these concerns: analyzing legal documents, extracting requirements, identifying conflicts, or handling change. This thesis deals with all of them at the same time; the idea is to extract requirements from legal text, compare them with the software requirements, resolve the conflicts that may arise, and continuously cope with changes in the environment, the laws and the requirements. To this end, this work proposes a framework composed of a compliance process and continuous monitoring of environmental changes. The framework deals with different types of laws (security, privacy, transparency, health care) that are represented as explicit norms. The compliance process supports identification, extraction, comparison and conflict resolution, producing a compliant set of requirements. It is based on semantic annotation and on a goal model: semantic annotation helps to extract requirements from the law using patterns, while the goal model supports the comparison between requirements and their representation in a formal, consistent requirements specification. The process is tool supported; some tools (Desiree and NomosT) were reused to support each step, and it was necessary to adapt them to the context of the compliance process by creating guidelines, patterns and heuristics. The continuous monitoring is concerned with the changes that affect software compliance and provides mechanisms to ensure that, even with these changes, the software regains compliance. Compliance monitoring is based on agents and non-functional requirements: the agents are represented in i*, the idea being to show the collaboration among agents needed to ensure continuous compliance, and the specification of how each agent should behave was generated using the Desiree language and Business Process Modeling Notation (BPMN). A catalogue of non-functional requirements is used to help define operationalizations for software awareness. The validation of the framework was done in two parts: first the compliance process, and then the framework as a whole. For the compliance process, effort and correctness were measured by comparing the use of the proposed process with an ad hoc method. For the entire framework, the example of monitoring environmental changes as an automated car crosses the border between Washington and Canada was used; the study shows that context has a strong influence on software requirements, and that non-compliance may incur penalties.
The contribution of this work is the Eunomia framework, which combines a process and a goal-model perspective, with emphasis on monitoring, to help deal with the compliance challenge. The framework equips the requirements engineering team with a systematic, tool-supported method that can be reused to reduce effort and to improve the quality of the requirements specification, helping to produce a specification that remains compliant over time.
47

Generation of Formal Specifications for Avionic Software Systems

Gulati, Pranav, 2 October 2020
Development of software for electronic systems in the aviation industry is strongly regulated by pre-defined standards. The aviation industry incurs significant development costs in ensuring flight safety and showing conformance to these standards. Some safety requirements can be satisfied by performing formal verification, which is seen as a way to reduce the cost of showing conformance of the software with the requirements or formal specifications. The correctness of the formal specifications is therefore critical, and writing them is at least as difficult as developing the software itself [36]. This work proposes an approach to generate formal specifications from example data that illustrates the natural-language requirements and represents the ground truth about the system. It eases the task of the engineer who has to write formal specifications by allowing the engineer to specify example data instead. A relationship model and a marking syntax and semantics are proposed that make the creation of formal specifications goal-oriented. The evaluation of the approach shows that the proposed syntax and semantics capture more information than is strictly needed to generate formal specifications. The relationship model reduces the computational load and produces only formal specifications that are of interest to the engineer.
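The following toy sketch is not the thesis's approach (whose relationship model and marking semantics are far richer); it only illustrates the general idea of deriving a formal-style assertion from example data. The function names, the assertion syntax and the signal names are invented.

    from dataclasses import dataclass

    @dataclass
    class Example:
        inputs: dict          # e.g. {"altitude_ft": 35000.0}
        output: float         # observed value of the output signal

    def infer_bounds_spec(signal: str, examples: list) -> str:
        """Derive a trivial range invariant for one output signal from example data.
        A stand-in for specification generation, not an avionics-grade method."""
        values = [e.output for e in examples]
        lo, hi = min(values), max(values)
        # emitted in a generic assertion style, not any particular specification language
        return f"assert always ({lo} <= {signal} <= {hi})"

    examples = [Example({"altitude_ft": 1000.0}, 0.0),
                Example({"altitude_ft": 35000.0}, 12.5),
                Example({"altitude_ft": 41000.0}, 15.0)]
    print(infer_bounds_spec("cabin_pressure_psi", examples))
    # -> assert always (0.0 <= cabin_pressure_psi <= 15.0)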
48

An Overview of Event-based Facades for Modular Composition and Coordination of Multiple Applications

Malakuti, Somayeh, 18 May 2016
Complex software systems are usually developed as systems of systems (SoSs) in which multiple constituent applications are composed and coordinated to fulfill desired system-level requirements. The constituent applications must be augmented with suitable coordination-specific interfaces through which they can participate in coordinated interactions. Such interfaces, as well as the coordination rules, have a crosscutting nature. Therefore, to increase the reusability of the applications and the comprehensibility of SoSs, suitable mechanisms are required to modularize the coordination rules and interfaces separately from the constituent applications. We introduce a new abstraction named architectural event modules (AEMs), which facilitate defining constituent applications and the desired coordination rules as modules of an SoS. AEMs augment the constituent applications with event-based facades to let them participate in coordinated interactions. We introduce the EventArch language, in which the concept of AEMs is implemented, and illustrate its suitability using a case study.
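EventArch itself is not shown in the abstract; as a rough illustration of an event-based facade in Python rather than EventArch, the sketch below lets a constituent application participate in coordinated interactions without knowing the coordination rules. The event names and classes are hypothetical.

    from typing import Callable

    class EventBus:
        """Minimal publish/subscribe bus standing in for the SoS coordination layer."""
        def __init__(self) -> None:
            self._subscribers = {}          # event name -> list of handler callables
        def subscribe(self, event: str, handler: Callable) -> None:
            self._subscribers.setdefault(event, []).append(handler)
        def publish(self, event: str, payload: dict) -> None:
            for handler in self._subscribers.get(event, []):
                handler(payload)

    class ThermostatApp:
        """A constituent application that knows nothing about the coordination rules."""
        def set_target(self, celsius: float) -> None:
            print(f"thermostat target set to {celsius} °C")

    class ThermostatFacade:
        """Event-based facade: exposes the application to the SoS through events only."""
        def __init__(self, app: ThermostatApp, bus: EventBus) -> None:
            self._app = app
            bus.subscribe("energy.saving.requested", self.on_energy_saving)
        def on_energy_saving(self, payload: dict) -> None:
            self._app.set_target(payload.get("target", 18.0))

    bus = EventBus()
    ThermostatFacade(ThermostatApp(), bus)
    # a coordination rule elsewhere in the SoS publishes the event:
    bus.publish("energy.saving.requested", {"target": 17.0})   # prints: thermostat target set to 17.0 °C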
49

Decentrally Coordinated Execution of Adaptations in Distributed Self-Adaptive Software Systems

Weißbach, Martin; Chrszon, Philipp; Springer, Thomas; Schill, Alexander, 5 July 2021
Software systems in domains like Smart Cities, the Internet of Things or autonomous cars are characterized by a high degree of distribution across several independent computing devices and by the requirement to adjust themselves to varying situations in their operational environment. Self-adaptive software systems are a natural choice for implementing such context-dependent software systems. A multitude of approaches already implement self-adaptive systems, and some even consider distribution aspects. Yet none of the existing solutions supports the coordination of adaptation operations spanning multiple independent nodes, which is necessary to ensure a consistent adaptation even in the presence of network errors or node failures. In this paper, we tackle the challenge of executing adaptations in distributed self-adaptive software systems in a coordinated manner. We present a protocol that enables the self-adaptive software system to execute correlated adaptations on multiple nodes in a transactional manner, ensuring an atomic and consistent transition of the distributed system from its source to the desired target configuration. The protocol is validated to be free of deadlocks for any given adaptation at any point in time using a model-checking approach. The performance of our approach is investigated in experiments that emulate the protocol's execution on real devices for different sizes of distributed applications and adaptation scenarios.
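The paper's protocol is more elaborate (it must tolerate network errors and node failures); purely as a minimal illustration of the transactional, all-or-nothing idea, the sketch below uses a two-phase prepare/commit scheme with hypothetical class and method names.

    class Node:
        """A node that can tentatively prepare an adaptation, then commit or roll it back."""
        def __init__(self, name: str, healthy: bool = True) -> None:
            self.name, self.healthy, self.config = name, healthy, "source"
            self._staged = None
        def prepare(self, target: str) -> bool:
            if not self.healthy:
                return False            # e.g. node unreachable or busy
            self._staged = target
            return True
        def commit(self) -> None:
            self.config = self._staged
        def abort(self) -> None:
            self._staged = None

    def execute_adaptation(nodes: list, target: str) -> bool:
        """All-or-nothing execution of a correlated adaptation across several nodes."""
        # phase 1: every involved node must acknowledge that it can apply its operation
        if all(node.prepare(target) for node in nodes):
            for node in nodes:          # phase 2a: apply the adaptation everywhere
                node.commit()
            return True
        for node in nodes:              # phase 2b: roll back; the system keeps its source configuration
            node.abort()
        return False

    cluster = [Node("edge-1"), Node("edge-2", healthy=False), Node("cloud-1")]
    print(execute_adaptation(cluster, "target"), [n.config for n in cluster])
    # -> False ['source', 'source', 'source']  (no partially adapted system)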
50

Control theory for computing systems: application to big-data cloud services & location privacy protection / Contrôle des systèmes informatiques : application aux services clouds et à la protection de vie privée

Cerf, Sophie, 16 May 2019
This thesis presents an application of control theory to computing systems. It investigates techniques to build and control efficient, dependable and privacy-preserving computing systems. Ad hoc service configuration requires a high level of expertise and could benefit from automation in many ways; a control algorithm can handle bigger and more complex systems, even when they are extremely sensitive to variations in their environment. However, applying control to computing systems raises several challenges, e.g. no laws of physics govern the applications. On the one hand, the mathematical framework provided by control theory can be used to improve the automation and robustness of computing systems, and control theory by construction provides mathematical guarantees that its objectives will be fulfilled. On the other hand, the specific challenges of such use cases make it possible to expand control theory itself. The approach taken in this work is to study two application systems: location privacy and cloud control. The two use cases are complementary in the nature of their technologies and software, their scale, and their end users.
The spread of mobile devices has fostered the broadcasting and collection of users' location data, whether for the user to benefit from a personalized service (e.g. weather forecast or route planning) or for the service provider, or any other third party, to derive useful information from mobility databases (e.g. road usage frequency or the popularity of places). Much information can be retrieved from location data, including highly sensitive personal data. To overcome this privacy breach, Location Privacy Protection Mechanisms (LPPMs) have been developed: algorithms that modify the user's mobility data in order to hide sensitive information. However, these tools are not easily configurable by non-experts, and they are static processes that do not adapt to the user's mobility. We develop two tools, one for already collected databases and one for online usage, that guarantee users objective-driven levels of privacy protection and of service-utility preservation by tuning the LPPMs. First, we present an automated tool able to choose and configure LPPMs to protect already collected databases while ensuring a trade-off between privacy protection and database processing quality. Second, we present the first formulation of the location privacy challenge in control-theoretic terms (plant and controller, disturbance and performance signals) and a PI feedback controller to serve as a proof of concept. In both cases, design, implementation and validation were carried out through experiments using data of real users collected in the field.
The surge in data generation of the last decades, so-called big data, has led to the development of frameworks able to analyze it, such as the well-known MapReduce. Advances in computing practice have also established the cloud paradigm, where low-level resources can be rented to allow the development of higher-level applications without dealing with hardware investment or maintenance, as the premium solution for all kinds of users. Ensuring the performance of MapReduce jobs running on clouds is thus a major concern for large IT companies and their clients. In this work, we develop advanced techniques for controlling job execution time and platform availability by tuning the resource cluster size and performing admission control, in spite of the unpredictable client workload. To deal with the nonlinearities of the MapReduce system, a robust adaptive feedback controller has been designed. To reduce cluster utilization (which incurs massive financial and energy costs), we present a new event-based triggering mechanism combined with an optimal predictive controller. Evaluation is done on a MapReduce benchmark suite running in real time on a large-scale cluster, using industrial job workloads.
