  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Um controle de versões refinado e flexível para artefatos de software / Flexible and fine-grained version control for software artifacts

Daniel Carnio Junqueira 07 January 2008 (has links)
Version control activities are considered essential for the maintenance of computer systems. They began in the 1950s and were initially performed by hand. The first version control tools, which appeared in the 1970s, have not evolved significantly since their creation: to this day, version control is generally performed on whole files or even complete modules, using concepts launched more than three decades ago. With the popularization of computer systems came a noticeable increase both in the number of existing systems and in their complexity. In addition, software development environments have changed considerably, and there is demand for systems that give developers increasingly automated control over what is being developed. Some approaches to fine-grained version control of software artifacts have been proposed, but they often do not offer the flexibility required by real software development environments. This work presents a system that provides flexible, fine-grained version control for software artifacts, based on a well-defined model for representing the structure of the files that make up a software project, whether they are program source code, LaTeX documentation, XML files, or others. The system was designed to be integrated with other solutions used in software development environments.
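The thesis's model is not reproduced in this abstract, but the core idea of versioning an artifact's logical structure rather than the whole file can be sketched. The fragment below is a minimal illustration under assumed names (`Node`, `changed_nodes` are hypothetical, not the thesis's API): each structural element gets a content-addressed identifier, so a change is localized to the sub-elements that actually differ.

```python
from dataclasses import dataclass, field
import hashlib

@dataclass
class Node:
    """One structural element of an artifact (e.g. an XML element or a function)."""
    kind: str
    text: str
    children: list = field(default_factory=list)

    def digest(self) -> str:
        # Content-addressed hash over the node and its subtree, so an
        # unchanged subtree keeps the same version identifier.
        h = hashlib.sha256()
        h.update(self.kind.encode())
        h.update(self.text.encode())
        for child in self.children:
            h.update(child.digest().encode())
        return h.hexdigest()

def changed_nodes(old: Node, new: Node, path: str = "/") -> list:
    """Report only the paths of sub-elements whose content actually changed."""
    if old.digest() == new.digest():
        return []  # identical subtree: nothing to version here
    changes = [path] if (old.kind, old.text) != (new.kind, new.text) else []
    for i, (o, n) in enumerate(zip(old.children, new.children)):
        changes += changed_nodes(o, n, f"{path}{i}/")
    return changes
```

With this shape, editing one section of a document produces a change set naming only that section, which is the granularity the abstract argues whole-file tools cannot offer.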
72

Management konfigurace / Configuration management

Vala, Martin January 2016 (has links)
The aim of this master's thesis is to resolve problems based on the requirements of a company. A theoretical part was prepared first, covering configuration management and the terms relevant to the particular problem. After the theoretical introduction, the problem was approached systematically and a solution was proposed, based on an analysis of the current situation with respect to configuration management. The solution was to create a technical-organizational guideline that serves both as a tool for eliminating employee errors and as a general overview. The final guideline can be found in the attachment, and the main part of the thesis describes its development.
73

A mapping approach for configuration management tools to close the gap between two worlds and to regain trust: Or how to convert from docker to legacy tools (and vice versa)

Meissner, Roy, Kastner, Marcus 30 October 2018 (has links)
In this paper we present 'DockerConverter', an approach and a tool for mapping a Docker configuration to various mature systems, and also for reverse engineering any available Docker image in order to increase confidence (or trust) in it. We show why a mapping approach is more promising than constructing a Domain Specific Language, and why we chose a Docker image, rather than the Dockerfile, as the source model. Our overall goal is to enable Semantic Web research projects, and especially Linked Data enterprise services, to be better integrated into enterprise applications and companies.
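DockerConverter itself is not shown in this abstract, but the mapping idea, reading an image's configuration and emitting an equivalent artifact for another tool, can be sketched. The fragment below is a simplified illustration, not the paper's implementation; the field names mimic a small subset of `docker inspect` output, and a real converter would have to handle many more fields (volumes, ports, layers).

```python
def to_shell_script(image_config: dict) -> str:
    """Map a (simplified) Docker image configuration to a plain shell script.

    `image_config` mimics a subset of `docker inspect` output. This is the
    'mapping' direction: the target model here is a legacy provisioning
    script rather than another configuration management tool.
    """
    cfg = image_config.get("Config", {})
    lines = ["#!/bin/sh"]
    for env in cfg.get("Env", []):          # entries look like "NAME=value"
        lines.append(f"export {env}")
    if cfg.get("WorkingDir"):
        lines.append(f"cd {cfg['WorkingDir']}")
    cmd = cfg.get("Entrypoint") or []
    cmd = cmd + (cfg.get("Cmd") or [])      # Entrypoint args precede Cmd
    if cmd:
        lines.append(" ".join(cmd))
    return "\n".join(lines)
```

Going the other way (reverse engineering an image into a Dockerfile-like description) walks the same fields in the opposite direction, which is why the paper prefers the image, where the effective configuration is fully resolved, over the Dockerfile as the source model.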
74

Optimization of Configuration Management Processes

Kristensson, Johan January 2016 (has links)
Configuration management is a process for establishing and maintaining consistency of a product's performance, as well as its functional and physical attributes, with regard to requirements, design and operational information throughout its lifecycle. The way configuration management is implemented in a project has a huge impact on the project's chance of success. It is, however, notoriously difficult to implement well, i.e. in such a way that it increases performance and decreases project risk; what works well in one field may be difficult to implement, or may not work at all, in another. The aim of this thesis is to present a process for optimizing configuration management processes, using a telecom company as a case study. The company is undergoing a major overhaul of its customer relationship management system, has serious problems with the quality of the software produced and with meeting deadlines, and therefore wants to optimize its existing CM processes to help with these problems. Data collected in preparation for the optimization revealed that configuration management tools were not used properly, tasks that could be automated were done manually, and existing processes were not built on sound configuration management principles. The recommended strategy would have been to fully implement a version handling tool and change the processes to take advantage of it. This was deemed too big a change, so instead a series of smaller changes with less impact were implemented, aimed at improving quality control to minimize the number of bugs reaching production. The majority of the changes either replicated the most basic functions of a version handling tool or automated error-prone manual tasks.
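The thesis does not enumerate the concrete changes in this abstract, but "replicating the most basic functions of a version handling tool" can be illustrated. The sketch below (hypothetical item names; a stand-in for the unnamed changes, not the thesis's actual scripts) records a content hash per configuration item against an approved baseline, so a divergence is detected automatically instead of by manual comparison:

```python
import hashlib

def snapshot(contents: dict) -> dict:
    """Record a content hash per configuration item, the bare minimum a
    version handling tool provides. `contents` maps item name to bytes."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in contents.items()}

def detect_changes(baseline: dict, current: dict) -> list:
    """Items whose content differs from the approved baseline, a check
    that would otherwise be an error-prone manual comparison."""
    return sorted(name for name in baseline
                  if current.get(name) != baseline[name])
```

A quality gate built on such a check catches unreviewed modifications before they reach production, which is exactly the goal the smaller changes were aiming at.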
75

Implementace procesu Configuration management / Configuration management process implementation

Šipka, Ladislav January 2010 (has links)
The aim of this thesis is to describe a practical implementation of the Configuration Management process and the subsequent implementation of its supporting tool, the configuration management database, focusing on identifying the particular steps needed to define and implement the process and then to select and implement the supporting tools. As the basis for the thesis I used practical experience from projects focused on defining and implementing this process and its supporting tools, in which I acted as Configuration Manager and thus took part in the entire course of the projects. The thesis concentrates on the Configuration Management process I ran and defined, and describes the sequence and importance of the various activities that lead to successfully establishing the process in practice, including the problems identified and some of their solutions. The result should show how one of the ITIL v2 processes is introduced into practice, leading to the selection and implementation of the configuration management database as the major output of this process.
76

Ontology mapping: a logic-based approach with applications in selected domains

Wong, Alfred Ka Yiu, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008 (has links)
With the advent of the Semantic Web and recent standardization efforts, ontology has quickly become a popular and core semantic technology. Ontology is seen as a solution provider for knowledge-based systems: it facilitates tasks such as knowledge sharing, reuse and intelligent processing by computer agents. A key problem addressed by ontology is semantic interoperability. Interoperability in general is a common problem across domain applications, and semantic interoperability is the hardest variant and an ongoing research problem. It requires systems to exchange knowledge and to have the meaning of that knowledge accurately and automatically interpreted by the receiving systems; the innovation is to allow knowledge to be consumed and used accurately in ways not foreseen by its original creator. While ontology promotes semantic interoperability across systems by unifying their knowledge bases through consensual understanding and common engineering and processing practices, it does not solve the semantic interoperability problem at the global level. As individuals are increasingly empowered with tools, ontologies will eventually be created more easily and rapidly at a near-individual scale, and global semantic interoperability between heterogeneous ontologies created by small groups of individuals will then be required. Ontology mapping is a mechanism for providing semantic bridges between ontologies; it is seen as the solution provider for the global semantic interoperability problem. However, no single ontology mapping solution caters for all problem scenarios, and different applications require different mapping techniques. In this thesis, we analyze the relations between ontology, semantic interoperability and ontology mapping, and promote an ontology-based semantic interoperability solution. We propose a novel ontology mapping approach, OntoMogic, based on first-order logic and model theory. OntoMogic supports approximate mapping and produces structures (approximate entity correspondences) that represent alignment results between concepts. It has been implemented as a coherent system and applied in different application scenarios; we present case studies in the network configuration, security intrusion detection, and IT governance & compliance management domains. The full process from ontology engineering to mapping is demonstrated to promote ontology-based semantic interoperability.
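OntoMogic's model-theoretic machinery is not reproduced here, but the notion of an approximate entity correspondence can be illustrated with a far simpler lexical matcher. In the sketch below the concept names are hypothetical and plain string similarity stands in for the thesis's logic-based scoring; each concept of one ontology is paired with its best candidate in the other, kept only above a confidence threshold:

```python
from difflib import SequenceMatcher

def align(source: list, target: list, threshold: float = 0.6) -> dict:
    """Approximate concept alignment between two ontologies' concept lists.

    Returns {source_concept: (target_concept, score)} for matches whose
    similarity clears the threshold. A logic-based approach such as
    OntoMogic would score candidates semantically instead of lexically.
    """
    correspondences = {}
    for s in source:
        best, score = None, 0.0
        for t in target:
            sim = SequenceMatcher(None, s.lower(), t.lower()).ratio()
            if sim > score:
                best, score = t, sim
        if best is not None and score >= threshold:
            correspondences[s] = (best, round(score, 2))
    return correspondences
```

The returned pairs play the role of the "approximate entity correspondence" structures: an alignment annotated with a confidence score rather than a hard equivalence.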
77

Agile Prototyping : A combination of different approaches into one main process

Abu Baker, Mohamed January 2009 (has links)
Software prototyping is considered one of the most important tools used by software engineers today to understand the customer's requirements and to develop software products that are efficient, reliable, and economically acceptable. Engineers can choose any of the available prototyping approaches, based on the software they intend to develop and how fast they would like to proceed during development. Generally speaking, all prototyping approaches aim to help engineers understand the customer's true needs; to examine different software solutions, quality aspects, verification activities, and other factors that might affect the quality of the software under development; and to avoid potential development risks. A combination of several prototyping approaches and brainstorming techniques, fulfilling the aim of the knowledge extraction approach, resulted in a prototyping approach in which engineers develop one, and only one, throwaway prototype to extract more knowledge than expected, improving the quality of the software under development by spending more time studying it from different points of view. The knowledge extraction approach was then applied to the developed prototyping approach itself, treating the developed model as a software prototype in order to gain more knowledge from it. This activity produced several points of view and improvements that were implemented in the model, and as a result Agile Prototyping (AP) was developed. AP integrates further development approaches into the first prototyping model, such as agile methods, documentation, software configuration management, and fractional factorial design. The main aim of developing one, and only one, prototype, helping engineers gain more knowledge while reducing development effort, time, and cost, was accomplished; however, developing software products of satisfying quality is still done by developing an evolutionary prototype and building throwaway prototypes on top of it.
79

Fluid Web e componentes de conteudo digital : da visão centrada em documentos para a visão centrada em conteudo / Fluid Web and digital content components : from the document-centric view to the content-centric view

Santanchè, André, 1968- 08 October 2006 (has links)
Advisor: Claudia Bauer Medeiros / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação / The Web is evolving from a space for publication/consumption of documents into an environment for collaborative work, where digital content can travel and be replicated, adapted, decomposed, fused and transformed. We call this the Fluid Web perspective. This view requires a thorough revision of the typical document-oriented approach that permeates content management on the Web. This thesis presents our solution for the Fluid Web, which allows moving from a document-oriented to a content-oriented perspective, where "content" can be any digital object. The solution is based on two axes: a self-descriptive unit that encapsulates any kind of content artifact, the Digital Content Component (DCC); and a Fluid Web infrastructure that provides management and deployment of DCCs through the Web, whose goal is to support collaboration on the Web. Designed to be reused and adapted, DCCs encapsulate data and software using a single structure, thus allowing homogeneous composition and processing of any digital content, be it executable or not. These properties are exploited by our Fluid Web infrastructure, which supports multilevel DCC annotation and discovery mechanisms, configuration management and version control. Our work extensively explores Semantic Web standards and taxonomic ontologies, which serve as a semantic bridge, unifying DCC management vocabularies and improving DCC description/indexing/discovery. DCCs and the infrastructure have been implemented and are illustrated by means of practical examples for scientific applications. The main contributions of this thesis are: the model of the Digital Content Component; the design of the DCC-based Fluid Web infrastructure, with support for repository-based storage, distributed sharing, version control and distributed configuration management; an algorithm for digital content discovery that exploits the semantics associated with DCCs; and a practical validation of the main concepts of this research through the implementation of prototypes.
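The DCC model in the thesis is far richer than can be shown here, but its central idea, a self-descriptive unit wrapping either data or software behind a single structure, can be sketched. In the fragment below the field names are hypothetical, not the thesis's schema; taxonomy terms attached to each component let discovery treat executable and non-executable content uniformly:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalContentComponent:
    """A self-descriptive unit: a payload plus the metadata that describes it."""
    name: str
    payload: object                             # raw data, a document, or a callable
    taxonomy: set = field(default_factory=set)  # e.g. {"science/biology"}
    version: int = 1

    def is_executable(self) -> bool:
        # Data and software share one structure; only the payload differs.
        return callable(self.payload)

def discover(components, term: str):
    """Find components annotated with a taxonomy term or any narrower term.

    The taxonomy acts as the 'semantic bridge': a query for a broad term
    also reaches components filed under its sub-terms.
    """
    return [c for c in components
            if any(t == term or t.startswith(term + "/") for t in c.taxonomy)]
```

Because both kinds of component answer the same discovery query, a composition mechanism can wire a data DCC into a software DCC without special-casing either, which is the homogeneity the abstract emphasizes.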
80

Medição e estimação de esforços de atividades de gerencia de configuração de software / Measurement and estimation of effort of software configuration management activities

Lima, Ewerton Rodrigues de 27 July 2006 (has links)
Advisors: Mario Jino, Jacques Wainer / Master's dissertation (professional) - Universidade Estadual de Campinas, Instituto de Computação / This work deals with the estimation and measurement of the effort necessary to apply Software Configuration Management (SCM) to a particular organization's projects, aiming to: 1) make the SCM team able to report the monthly effort of analysts performing SCM activities, providing visibility of the demand generated by the projects; 2) estimate the effort necessary to apply SCM to new projects, making it possible to adjust the work force to the new demand. The approach for effort measurement consisted of two steps: 1) identification of the SCM activities for which it makes sense to record effort; 2) development of a tool to record all activities performed by the SCM team. Data acquired by the tool were used to track, analyze and report the monthly effort of each analyst and to identify the factors that affect the effort demanded by the projects, enabling the definition of parameters and of an estimation method. To systematize the method's usage, another tool was developed; it identifies a new project's characteristics and their relationship with the influencing factors, and estimates the number of hours required monthly to apply SCM, based on effort data from finished projects. The estimation method was applied to projects already under way, for which the monthly effort consumed is known and whose data were not used for tool calibration. The results obtained are satisfactory, and both tools are currently in use in the organization.
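The dissertation's actual parameters and influencing factors are not given in this abstract, so the sketch below only illustrates the general shape of such an estimator: fit monthly SCM hours against one hypothetical influencing factor (team size is an assumption, not a factor named by the work) using ordinary least squares over finished-project data, then apply the fit to a new project.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b over finished-project data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def estimate_scm_hours(team_size, history):
    """history: [(team_size, monthly_scm_hours), ...] from finished projects.

    A calibrated model maps a new project's characteristic onto an
    expected monthly SCM workload, which is what lets the organization
    adjust its work force ahead of the demand.
    """
    a, b = fit_line([h[0] for h in history], [h[1] for h in history])
    return a * team_size + b
```

Validating against projects whose consumed effort is already known, as the dissertation does, amounts to holding those projects out of the calibration data and comparing the estimate with the recorded hours.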
