11

Automatizace procesů agilního vývoje / Automation of Processes in Agile Development

Jašek, Tibor January 2016 (has links)
The goal of the master's thesis "Automation of Processes in Agile Development" is to survey agile methodologies with a focus on the development practices of the Kentico company. The thesis describes various tools used to support agile software development, including JIRA Software and Confluence, which are used at Kentico. An important part of the thesis is an analysis of the current company processes and a plan for their optimization and automation. In the implementation part, a web application is built that displays the metrics arising from the analysis and optimization proposal phases. The thesis also discusses the implementation and possible improvements.
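As a rough illustration of the kind of metric extraction described above (not the actual Kentico implementation), the sketch below pulls resolved issues from a JIRA REST search endpoint and computes a simple average cycle-time metric; the instance URL, project key, and credentials are placeholders.

    # Hypothetical sketch: average cycle time of resolved issues via the JIRA REST API.
    # The instance URL, JQL query and credentials are placeholders, not the thesis's setup.
    from datetime import datetime
    import requests

    JIRA_URL = "https://example-jira.local"           # placeholder instance
    JQL = "project = DEMO AND status = Done"          # placeholder query

    def average_cycle_time_days(session: requests.Session) -> float:
        resp = session.get(f"{JIRA_URL}/rest/api/2/search",
                           params={"jql": JQL, "fields": "created,resolutiondate", "maxResults": 100})
        resp.raise_for_status()
        durations = []
        for issue in resp.json().get("issues", []):
            f = issue["fields"]
            if f.get("resolutiondate"):
                created = datetime.strptime(f["created"][:19], "%Y-%m-%dT%H:%M:%S")
                resolved = datetime.strptime(f["resolutiondate"][:19], "%Y-%m-%dT%H:%M:%S")
                durations.append((resolved - created).total_seconds() / 86400)
        return sum(durations) / len(durations) if durations else 0.0

    if __name__ == "__main__":
        s = requests.Session()
        s.auth = ("user", "api-token")                # placeholder credentials
        print(f"Average cycle time: {average_cycle_time_days(s):.1f} days")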
12

Towards more scalability and flexibility for distributed storage systems / Vers un meilleur passage à l'échelle et une plus grande flexibilité pour les systèmes de stockage distribué

Ruty, Guillaume 15 February 2019 (has links)
The exponentially growing demand for storage puts great stress on traditional distributed storage systems. While storage devices' performance has caught up with that of network devices over the last decade, their capacity does not grow as fast as the volume of data that needs to be stored, especially with the rise of cloud and big data applications. Furthermore, the performance balance between storage, network, and compute devices has shifted, and the assumptions on which most distributed storage systems are founded no longer hold. This dissertation explains how several aspects of such storage systems can be modified and rethought to make more efficient use of the resources at their disposal. It presents an original architecture that uses a distributed metadata layer to provide flexible and scalable object-level storage, then proposes a scheduling algorithm that lets a generic storage system handle concurrent client requests more fairly. Finally, it describes how to improve legacy filesystem-level caching for erasure-code-based distributed storage systems, before presenting a few other contributions made in the context of short research projects.
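The abstract does not spell out the proposed scheduling algorithm; purely as an illustration of how a storage front-end can treat concurrent clients more fairly, the sketch below uses per-client queues served round-robin. It is a generic fair-queuing idea under that assumption, not the thesis's algorithm.

    # Generic fair-queuing sketch (round-robin over per-client queues); illustrative only,
    # not the scheduling algorithm proposed in the thesis.
    from collections import OrderedDict, deque

    class FairScheduler:
        def __init__(self):
            self.queues = OrderedDict()   # client_id -> deque of pending requests

        def submit(self, client_id, request):
            self.queues.setdefault(client_id, deque()).append(request)

        def next_request(self):
            """Serve the head request of the least-recently-served client."""
            for client_id, queue in list(self.queues.items()):
                if queue:
                    request = queue.popleft()
                    self.queues.move_to_end(client_id)   # just served -> go to the back of the rotation
                    return client_id, request
            return None

    sched = FairScheduler()
    sched.submit("tenant-a", "read blk-1")
    sched.submit("tenant-a", "read blk-2")
    sched.submit("tenant-b", "write blk-9")
    print(sched.next_request())   # tenant-a is served first
    print(sched.next_request())   # then tenant-b, even though tenant-a has another pending request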
13

RIGEL: um repositório com suporte para desenvolvimento baseado em componentes / RIGEL: a repository with support for component-based development

Pinho, Helder de Sousa 24 February 2006 (has links)
Advisor: Cecilia Mary Fischer Rubira / Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Component-based development (CBD) allows an application to be built by composing software components that have previously been specified, built, and tested, resulting in gains in productivity and in the quality of the produced software. For component reuse to happen, users must be able to search for and retrieve previously specified or implemented components, and a component repository is essential to support this reuse. Interoperability is an important requirement for repositories, but not all tools treat it with the relevance it deserves. The metadata model of a CBD repository must cover component features such as interfaces and the separation between specification and implementation. This work presents Rigel, a repository of reusable software assets with support for component-based development. Rigel offers features that ease the activities performed during the development of component-based systems, such as searching, storing, and retrieving assets, and integration with CVS. The RAS standard was adopted as the asset metadata and packaging format, making it easier to integrate Rigel with other systems. The RAS metadata model was extended to support a conceptual model of components and software architecture. 
This adaptation resulted in the creation of four new RAS profiles to support CBD-related assets: abstract component, concrete component, interface, and architectural configuration. A case study was conducted to show how Rigel supports a component-based development process. We conclude that Rigel's features make component-based development easier. / Master's / Computer Engineering / Master in Computing
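To make the four added asset kinds more concrete, here is a hypothetical data-model sketch of how such assets could be typed in a repository like Rigel; the class and field names are illustrative assumptions and do not reproduce the actual RAS profile extension.

    # Illustrative sketch of the four CBD-related asset kinds described above.
    # Class and field names are hypothetical; they do not reproduce the actual RAS extension.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Asset:
        asset_id: str
        name: str
        version: str

    @dataclass
    class Interface(Asset):
        operations: List[str] = field(default_factory=list)

    @dataclass
    class AbstractComponent(Asset):          # specification only
        provided: List[str] = field(default_factory=list)   # interface ids
        required: List[str] = field(default_factory=list)

    @dataclass
    class ConcreteComponent(Asset):          # implementation of an abstract component
        implements: str = ""                 # abstract component id
        artifact_path: str = ""              # e.g. a packaged binary kept under version control

    @dataclass
    class ArchitecturalConfiguration(Asset): # wiring of concrete components
        components: List[str] = field(default_factory=list)
        connections: List[tuple] = field(default_factory=list)  # (from_component, required_if, to_component, provided_if)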
14

Analysis and Improvement of a Software Production Process based on the Combination of Model Driven Development and Software Product Lines

Echeverría Ochoa, Jorge 10 September 2018 (has links)
Software reuse is a key factor in reducing costs and improving the quality of software product properties such as security, reliability, or performance. Taking this factor into account, the Software Product Line approach emerged for software development; it promises, among other things, to shorten the time spent developing software systems and to significantly reduce development and maintenance costs. 
In addition, Model Driven Development is an approach to software development that proposes the use of models at various levels of abstraction, with model transformations as the main artifacts. Using models as the main artifacts in software development offers many advantages to developers: for instance, model transformations allow converting a source model into a target model, and the higher level of abstraction lets developers focus on the problem to solve rather than on implementation details. Both paradigms aim to optimize production time and the quality of the generated software, and their combination can bring important advantages to the software production process. However, combining Model Driven Development and Software Product Lines to develop software products requires identifying the challenges and needs of the involved stakeholders. The research presented in this dissertation, supported by several empirical studies carried out in industrial environments, aims to increase knowledge in the field and to make a set of proposals for improving the software development process based on the combination of Model Driven Development and Software Product Lines. To achieve this objective, four dimensions have been studied: requirements processing, usability, comprehension (when configuring software products), and error management. Each dimension has been addressed through an empirical study, presented in research papers that form the core of this dissertation. As a result of the work carried out for this dissertation, a set of proposals to improve the software development process based on the combination of Model Driven Development and Software Product Lines has been produced, and seven research papers have been published. Five of these works were presented at relevant conferences in the Software Engineering field: CAiSE Forum'15, CAiSE'16, ESEM'16, ISD'17, and ESEM'17. These research results have been applied in the software development process for the induction hobs of the home appliances division of BSH (under the brands Bosch, Siemens, Gaggenau, Neff, and Balay), and are also being used in the ongoing adoption of this process for the software of the PLC that controls the trains manufactured by Construcciones y Auxiliar de Ferrocarriles. / Echeverría Ochoa, J. (2018). Analysis and Improvement of a Software Production Process based on the Combination of Model Driven Development and Software Product Lines [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/107734
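As a minimal, hypothetical illustration of the model-to-model transformations mentioned in the abstract above (not the BSH tool chain), the sketch below maps a tiny "feature model" of a product configuration into a flat build-configuration model; all names and rules are invented for the example.

    # Minimal model-to-model transformation sketch: a tiny source "feature" model is mapped to a
    # target build-configuration model. Names and rules are hypothetical, not the BSH tool chain.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class FeatureModel:                       # source model
        product: str
        selected_features: List[str]

    @dataclass
    class BuildConfig:                        # target model
        product: str
        compile_flags: Dict[str, bool]

    FEATURE_TO_FLAG = {                       # illustrative transformation rules
        "power_boost": "ENABLE_POWER_BOOST",
        "child_lock": "ENABLE_CHILD_LOCK",
        "timer": "ENABLE_TIMER",
    }

    def transform(src: FeatureModel) -> BuildConfig:
        flags = {flag: (feat in src.selected_features) for feat, flag in FEATURE_TO_FLAG.items()}
        return BuildConfig(product=src.product, compile_flags=flags)

    print(transform(FeatureModel("hob-x", ["power_boost", "timer"])))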
15

Restituição fotogramétrica segundo o padrão da estruturação de dados Geoespaciais Vetoriais no ambiente E-FOTO. / Photogrammetry restitution according to the EDGV on E-FOTO.

Rogério Luís Ribeiro Borba 18 March 2009 (has links)
In October 2007, the Brazilian National Commission of Cartography (CONCAR) released a cartographic standard composed of technical specifications for the Structuring of Vector Digital Geospatial Data (EDGV) for systematic topographic mapping in Brazil. In November 2008, Decree No. 6666 was published, establishing the National Spatial Data Infrastructure and ratifying, for federal institutions, the use of the standards approved by CONCAR. This work accomplishes two goals: (a) it provides a relational schema aligned with the EDGV specifications, and (b) it implements a prototype intended for use in the photogrammetric restitution process at the public institutions responsible for topographic mapping in Brazil. The implementation is carried out in a free software environment, under the GNU/GPL approach. It is also important to emphasize the educational character of the software platform to be implemented, so that it can also be used for the theoretical and practical teaching of digital photogrammetry in education and research institutions.
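The relational schema delivered by the thesis is not reproduced in this record; purely as an illustration of what an EDGV-style relational mapping can look like, the sketch below creates two highly simplified tables for a hypothetical feature class, using SQLite. Table and column names are assumptions.

    # Hypothetical, highly simplified illustration of an EDGV-style relational mapping
    # (the actual schema produced by the thesis is not reproduced here).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE feature_class (               -- e.g. an EDGV category such as hydrography
        id        INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        category  TEXT NOT NULL
    );
    CREATE TABLE feature (                     -- individual restituted features
        id         INTEGER PRIMARY KEY,
        class_id   INTEGER NOT NULL REFERENCES feature_class(id),
        geometry   TEXT NOT NULL,              -- WKT placeholder; a real schema would use a spatial type
        scale      INTEGER
    );
    """)
    conn.execute("INSERT INTO feature_class (name, category) VALUES (?, ?)", ("river", "hydrography"))
    conn.execute("INSERT INTO feature (class_id, geometry, scale) VALUES (?, ?, ?)",
                 (1, "LINESTRING(0 0, 1 1)", 25000))
    print(conn.execute("SELECT f.id, c.name FROM feature f JOIN feature_class c ON f.class_id = c.id").fetchall())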
17

Diseño de un sistema de gestión de la seguridad de la información (SGSI), basada en la norma ISO/IEC 27001:2013, para el proceso de servicio post-venta de un integrador de soluciones en Telecomunicaciones / Design of an information security management system (ISMS), based on ISO/IEC 27001:2013, for the post-sales service process of a telecommunications solutions integrator

Torres León, Martin Renzo 30 April 2018 (has links)
This thesis describes the concepts involved in the design of an Information Security Management System (ISMS) and its deployment for the post-sales service process of a telecommunications integrator. The design followed the procedures and guidelines of the international standard ISO/IEC 27001, in its 2013 version, and the associated security controls. The benefits demonstrated for the post-sales service process under study were: the classification of the main information assets, the determination of the main risks to which those assets are exposed, the proposal of a risk treatment plan (PTR) for the information assets, and the definition of roles and responsibilities within the organizational structure of an ISMS, all adjusted to the security and business requirements defined by the organization. / Thesis
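The thesis's deliverables are management documents rather than code; as a hedged illustration only, a single entry of an asset/risk register in the spirit of ISO/IEC 27001-style risk assessment could be modeled as below. Field names and the 1-5 scoring scale are assumptions, not the thesis's method.

    # Hypothetical sketch of an asset/risk register entry, loosely in the spirit of ISO/IEC 27001
    # risk assessment; field names and the 1-5 scoring scale are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class RiskEntry:
        asset: str            # classified information asset
        threat: str
        likelihood: int       # 1 (rare) .. 5 (almost certain)
        impact: int           # 1 (negligible) .. 5 (severe)
        treatment: str        # mitigate / transfer / accept / avoid
        owner: str            # role responsible within the ISMS structure

        @property
        def level(self) -> int:
            return self.likelihood * self.impact

    r = RiskEntry("customer ticket database", "unauthorized access", 3, 4, "mitigate", "post-sales service manager")
    print(r.level)   # 12 -> would typically fall into a "treat first" band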
18

[en] DEPLOYMENT OF DISTRIBUTED COMPONENT-BASED APPLICATIONS ON CLOUD INFRASTRUCTURES / [pt] IMPLANTAÇÃO DE APLICAÇÕES BASEADAS EM COMPONENTES DISTRIBUÍDOS SOBRE INFRAESTRUTURAS NA NUVEM

EDWARD JOSE PACHECO CONDORI 07 November 2014 (has links)
[en] The deployment of distributed component-based applications comprises a set of activities managed by a Deployment Infrastructure. Current applications are becoming increasingly complex, requiring a multi-platform and dynamic target environment. The planning activity is therefore the most critical step, because it defines the configuration of the execution infrastructure so as to satisfy the requirements of the application's target environment. On the other hand, the cloud service model called Infrastructure as a Service (IaaS) offers on-demand computational resources with dynamic, scalable, and elastic features. In this work we extended the Deployment Infrastructure for SCS components to support private or public clouds as its target environment, through the use of a cloud API and flexible policies that specify a customized target environment. Additionally, we hosted the Deployment Infrastructure itself on the cloud, which allowed us to use on-demand computational resources to instantiate Deployment Infrastructure services, creating an experimental Platform as a Service (PaaS).
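This is not the SCS Deployment Infrastructure itself, but a minimal sketch of the planning idea described above: matching each component's requirements against nodes obtained on demand through a (mocked) IaaS-style API. All names are hypothetical.

    # Minimal sketch of a deployment-planning step: each component's requirements are matched to a
    # node instantiated on demand through a mocked IaaS-style API. All names are hypothetical.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Component:
        name: str
        min_ram_gb: int

    class FakeCloudAPI:
        """Stand-in for an IaaS client; a real one would create virtual machines."""
        def provision(self, ram_gb: int) -> Dict:
            return {"node_id": f"vm-{ram_gb}gb", "ram_gb": ram_gb}

    def plan_deployment(components: List[Component], cloud: FakeCloudAPI) -> Dict[str, Dict]:
        plan = {}
        for comp in components:
            node = cloud.provision(ram_gb=comp.min_ram_gb)   # on-demand resource for this component
            plan[comp.name] = node
        return plan

    print(plan_deployment([Component("frontend", 2), Component("database", 8)], FakeCloudAPI()))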
19

"SemanticAgent, uma plataforma para desenvolvimento de agentes inteligentes" / SemanticAgent, a platform for development of Intelligent Agents capable of processing restricted natural language.

Lucena, Percival Silva de 15 April 2003 (has links)
Intelligent agents is an umbrella term that covers diverse research on the development of autonomous software that uses Artificial Intelligence techniques to satisfy goals established by its users. Building systems based on intelligent agents is a complex task involving aspects such as agent communication, planning, task division, coordination, and the representation and manipulation of knowledge and behaviors, among other activities. Agent platforms provide services that allow developers to build solutions without worrying about every implementation detail. A new model for creating agents, called 'atomic agents', is proposed with the goal of offering flexible knowledge management and behavior implementation for constructing software agents. The Semantic Agent architecture provides a framework for implementing this model, offering a set of tools for creating intelligent agents. A prototype agent platform based on this architecture was developed in Java and allows the creation of applications that can process restricted natural language, manipulate knowledge, and execute useful actions.
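The prototype described above is written in Java; the sketch below, in Python for brevity, only illustrates the general shape of an agent that keeps knowledge and behaviors separate and reacts to restricted natural language. All names are hypothetical; this is not the SemanticAgent API.

    # Illustrative-only sketch of an agent separating a knowledge store from pluggable behaviors.
    # Hypothetical names; this is not the SemanticAgent (Java) platform described in the thesis.
    class AtomicAgent:
        def __init__(self, name: str):
            self.name = name
            self.knowledge = {}            # simple key-value knowledge store
            self.behaviors = {}            # intent keyword -> handler function

        def add_behavior(self, keyword: str, handler):
            self.behaviors[keyword] = handler

        def handle(self, utterance: str) -> str:
            for keyword, handler in self.behaviors.items():
                if keyword in utterance.lower():
                    return handler(self, utterance)
            return "I do not know how to help with that."

    def remember(agent: AtomicAgent, utterance: str) -> str:
        # naive "restricted natural language": e.g. "remember deadline is friday"
        _, key, _, value = utterance.split(maxsplit=3)
        agent.knowledge[key] = value
        return f"Stored {key} = {value}"

    bot = AtomicAgent("demo")
    bot.add_behavior("remember", remember)
    print(bot.handle("remember deadline is friday"))
    print(bot.knowledge)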
20

Outsourcing Network Services via the NBI of the SDN / Externalisation de services réseau via l'interface nord de SDN

Aflatoonian, Amin 19 September 2017 (has links)
Over the past decades, Service Providers (SPs) have gone through several generations of technologies that redefined networks and required new business models. The ongoing network transformation brings opportunities for service innovation while reducing costs and mitigating vendor lock-in. Digitalization and the recent rise of virtualization are changing service management methods: traditional network services are shifting towards new on-demand network services, which allow customers to deploy and manage their services independently and optimally through a well-defined interface opened onto the SP's platform. 
To offer this freedom to its customers, the SP must be able to rely on a dynamic and programmable network control platform. We argue in this thesis that such a platform can be provided by Software-Defined Networking (SDN) technology. We first characterize the perimeter of this class of new services. We identify the weakest management constraints that such services should meet and integrate them into an abstract model structuring their lifecycle. This model involves two loosely coupled views, one specific to the customer and the other to the SP. This double-sided service lifecycle is then refined with a data model detailing each of its steps. The SDN architecture does not support all stages of this lifecycle, so we extend it with an original framework that manages all the identified steps. This framework is organized around a service orchestrator and a resource orchestrator communicating via an internal interface, and its implementation requires encapsulating the SDN controller. The MPLS VPN example serves as a guideline to illustrate our approach, and a proof of concept based on the OpenDaylight controller covering the main parts of the framework is proposed. We then propose to extend the value of this framework by introducing a new control model called BYOC (Bring Your Own Control), which formalizes, according to various modalities, the capability of outsourcing an on-demand service by delegating part of its control to an external third party. An outsourced on-demand service is divided into a customer part and an SP part; the latter exposes to the former APIs that allow requesting the execution of the actions involved in the different steps of the lifecycle. We present an XMPP-based Northbound Interface (NBI) for opening up a secured BYOC-enabled API. The asynchronous nature of this protocol, together with its built-in security functions, eases the outsourcing of control into a multi-tenant SDN framework. We illustrate the feasibility of our approach through the example of a BYOC-based Intrusion Prevention System (IPS) service.
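To make the double-sided lifecycle idea more tangible, here is a hypothetical sketch of a service object whose lifecycle transitions are split between customer-triggered and SP-triggered steps, with a BYOC-style hook that lets an external party take over the SP-side transition. The states and the delegation mechanism are illustrative assumptions, not the thesis's data model.

    # Hypothetical sketch of a double-sided (customer / SP) on-demand service lifecycle with a
    # BYOC-style delegation hook. States and transitions are illustrative, not the thesis's model.
    CUSTOMER_TRANSITIONS = {"requested": "configured", "configured": "activation_asked"}
    SP_TRANSITIONS = {"activation_asked": "active", "active": "retired"}

    class OnDemandService:
        def __init__(self, name: str, delegated_control=None):
            self.name = name
            self.state = "requested"
            self.delegated_control = delegated_control   # BYOC: external third party, if any

        def customer_step(self):
            self.state = CUSTOMER_TRANSITIONS.get(self.state, self.state)

        def sp_step(self):
            if self.delegated_control:                    # control outsourced to a third party
                self.state = self.delegated_control(self.state)
            else:
                self.state = SP_TRANSITIONS.get(self.state, self.state)

    svc = OnDemandService("vpn-42")
    svc.customer_step(); svc.customer_step()              # requested -> configured -> activation_asked
    svc.sp_step()                                         # SP activates the service: -> active
    print(svc.state)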
