21 |
Um modelo de versões apoiado em objetos compostos para utilização em instâncias e esquemas de bases de dados orientadas a objetos / Versioning model for schemas and composite objects in object-oriented database systems
Camolesi Junior, Luiz (01 November 1996)
Data stored in design databases are obtained not only through the continual addition of new data, but also through refinements and successive changes to information already available. Many studies address this aspect, proposing concepts and mechanisms of Version Control that can be incorporated into Object-Oriented Database Models. Some of these works, surveyed here, focus on Version Control over the stored data (the Extensional Database), i.e. the information collected from the real world and used by applications, aiming to recognize and control the coexistence of multiple versions of data about the same subject. Others focus on version control over the Data Schema (the Intensional Database), seeking mechanisms that allow data about the same subject, structured in different ways, to be recovered. With the main goal of establishing a core set of concepts and mechanisms able to serve the most varied applications, this work presents a Version Model based on the concept of Composite Objects that provides homogeneous support for Version Control in both the Extensional and the Intensional Databases. In this model, the Extensional data is partitioned into Composite Objects, and the parts of each Object are interpreted under exactly one of the several possible schema versions used to instantiate objects of that kind; each Version in the Extensional Database thus has a direct and unique relationship with the Intensional Database Version used in its instantiation. Additionally, the work defines a Version Meta-Model whose specifications can guide research on future Version Models, supporting the design of simple or sophisticated Version Models for specific or general applications, and providing a basis for mechanisms to classify and compare Version Models.
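As a toy illustration of that one-to-one correlation (invented structures, not the thesis' actual model), each instance version can carry a reference to the single schema version used to instantiate it:

```python
# Sketch only: every ObjectVersion is bound to exactly one SchemaVersion,
# so extensional and intensional versioning stay correlated.
from dataclasses import dataclass

@dataclass(frozen=True)
class SchemaVersion:
    version: int
    attributes: tuple  # attribute names defined by this schema version

@dataclass
class ObjectVersion:
    version: int
    schema: SchemaVersion  # the single schema version used in its instantiation
    data: dict

    def __post_init__(self):
        # the instance must conform to the schema version that created it
        assert set(self.data) == set(self.schema.attributes), "schema mismatch"

s1 = SchemaVersion(1, ("name", "cpu"))
s2 = SchemaVersion(2, ("name", "cpu", "ram"))  # the schema evolves
v1 = ObjectVersion(1, s1, {"name": "ctrl-board", "cpu": "arm"})
v2 = ObjectVersion(2, s2, {"name": "ctrl-board", "cpu": "arm", "ram": "4GB"})
```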
22 |
Efficient Algorithms for Comparing, Storing, and Sharing Large Collections of Phylogenetic Trees
Matthews, Suzanne (May 2012)
Evolutionary relationships between a group of organisms are commonly summarized in a phylogenetic (or evolutionary) tree. The goal of phylogenetic inference is to find the tree structure that best represents the relationships between a group of organisms, given a set of observations (e.g. molecular sequences). However, popular heuristics for inferring phylogenies output tens to hundreds of thousands of equally weighted candidate trees. Biologists summarize these trees into a single structure called the consensus tree, under the central assumption that the information discarded has less value than the information retained. But what if this assumption is not true?
In this dissertation, we demonstrate the value of retaining and studying tree collections. We also conduct an extensive literature survey that highlights the rapid growth in the number of trees produced by phylogenetic analysis; high-performance algorithms are needed to accommodate this increasing production of data. We created several efficient algorithms that allow biologists to easily compare, store, and share collections of tens to hundreds of thousands of phylogenetic trees. Universal hashing is central to all of these approaches, allowing us to quickly identify the shared evolutionary relationships contained in tree collections. Our algorithms MrsRF and Phlash are the fastest in the field for comparing large collections of trees. Our algorithm TreeZip is the most efficient way to store large tree collections. Lastly, we developed Noria, a novel version control system that allows biologists to seamlessly manage and share their phylogenetic analyses.
Our work has far-reaching implications for both the biological and computer science communities. We tested our algorithms on four large biological datasets, each consisting of 20,000 to 150,000 trees over 150 to 525 taxa. Our experimental results on these datasets indicate the long-term applicability of our algorithms to modern phylogenetic analysis, and underscore their ability to help scientists easily exchange and analyze their large tree collections. In addition to contributing to the reproducibility of phylogenetic analysis, our work enables the creation of test beds for improving phylogenetic heuristics and applications. Lastly, our data structures and algorithms can be applied to managing other tree-like data (e.g. XML).
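As a rough illustration of the core idea, the sketch below (a minimal Python toy, not the authors' MrsRF, Phlash, or TreeZip code) compares two trees by the bipartitions their internal edges induce; the real systems hash such bipartitions with universal hash functions so that shared relationships can be found across thousands of trees at once:

```python
def bipartitions(tree, taxa):
    """Collect the non-trivial splits induced by each internal edge.

    `tree` is a nested-tuple topology, e.g. ((("A", "B"), "C"), ("D", "E")).
    Each split is canonicalized as the frozenset on the side not holding taxa[0].
    """
    splits = set()

    def leaves(node):
        if isinstance(node, str):
            return frozenset([node])
        side = frozenset().union(*(leaves(child) for child in node))
        if 1 < len(side) < len(taxa) - 1:  # skip trivial one-leaf splits
            splits.add(side if taxa[0] not in side else frozenset(taxa) - side)
        return side

    leaves(tree)
    return splits

def robinson_foulds(t1, t2, taxa):
    """RF distance: number of splits present in one tree but not the other."""
    s1, s2 = bipartitions(t1, taxa), bipartitions(t2, taxa)
    return len(s1 ^ s2)

taxa = ["A", "B", "C", "D", "E"]
t1 = ((("A", "B"), "C"), ("D", "E"))
t2 = ((("A", "C"), "B"), ("D", "E"))
print(robinson_foulds(t1, t2, taxa))  # -> 2: each tree has one split the other lacks
```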
23 |
Estudo da distribuição de uma base de dados apoiada no modelo de representação de objetos / Distribution modeling in the object representation model
João Eduardo Ferreira (23 October 1991)
One of the most important characteristics of distributed database systems is that data must be permanently available to all users at the same time, a situation that sharply accentuates the conflicts arising when users compete for the same data. Due to its semantic characteristics, a database management system based on the Object Representation Model (MRO) offers the support needed to meet the distribution requirements of computer-aided project development environments. In this work, a logical and functional model for the distribution of MRO-based databases is presented. Distribution is characterized by the availability of the data: each item (object) in a copy database has a link of a specific type to the original database. Five link types were defined: read-only (r-), isolated (is), snapshot (fl), mutually exclusive (me), and independent (in). This arrangement allows both the copy and the original database to evolve in parallel, within the limits imposed by the type of link between them; after some time, the copy and the original databases may undergo an integration process, which is likewise governed by these link types.
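A hypothetical sketch of these link types; the enforcement rules in the comments are illustrative guesses based on the link names, not the thesis' definitions:

```python
from enum import Enum

class LinkType(Enum):
    READ_ONLY = "r-"           # copy can be read but never changed
    ISOLATED = "is"            # copy evolves on its own; no integration planned
    SNAPSHOT = "fl"            # copy reflects the original at copy time
    MUTUALLY_EXCLUSIVE = "me"  # only one side may change before integration
    INDEPENDENT = "in"         # both sides may change; integration reconciles

class CopiedObject:
    def __init__(self, oid, value, link):
        self.oid, self.value, self.link = oid, value, link
        self.dirty = False     # has the copy diverged from the original?

    def update(self, new_value):
        if self.link in (LinkType.READ_ONLY, LinkType.SNAPSHOT):
            raise PermissionError(f"link type {self.link.value} forbids updates")
        self.value, self.dirty = new_value, True

obj = CopiedObject("o1", 10, LinkType.INDEPENDENT)
obj.update(11)  # allowed: independent copies may evolve until integration
```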
25 |
Prediction of Code Lifetime
Nordfors, Per (January 2017)
There are several previous studies in which machine learning algorithms are used to predict how fault-prone a piece of code is. This thesis takes a slightly different approach by attempting to predict how long a piece of code will remain unmodified after being written (its "lifetime"). This is based on the hypothesis that frequently modified code is more likely to contain weaknesses, which may make lifetime predictions useful for code evaluation purposes. The predictions are made with machine learning algorithms trained on open-source code examples from GitHub. Two algorithms are used: the multilayer perceptron and the support vector machine. A piece of code is described by three groups of features: code contents, code properties obtained from static code analysis, and metadata from the version control system Git. A series of experiments shows that the support vector machine is the best-performing algorithm and that all three feature groups are useful for predicting lifetime. Both the multilayer perceptron and the support vector machine outperform a baseline that always predicts the mean lifetime of the training set, indicating that lifetime can, to some extent, be predicted from information extracted from the code. However, prediction performance is shown to be highly dataset-dependent, with large error magnitudes.
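A minimal sketch of this experimental setup using scikit-learn and synthetic stand-in features; the thesis' actual feature extraction and data are not reproduced here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Stand-ins for the three feature groups: code contents, static-analysis
# metrics, and Git metadata.
X = rng.normal(size=(500, 6))
y = np.abs(X @ rng.normal(size=6) + rng.normal(scale=0.5, size=500))  # "lifetime"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
svr.fit(X_tr, y_tr)

baseline = np.full_like(y_te, y_tr.mean())  # always predicts the mean lifetime
print("SVR MAE:     ", mean_absolute_error(y_te, svr.predict(X_te)))
print("Baseline MAE:", mean_absolute_error(y_te, baseline))
```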
26 |
Analytický nástroj pro získávání statistik ze sytémů správy verzí / Analytical tool for information extraction from version control systems
Chromický, Václav (January 2013)
This thesis discusses the extraction of information from version control systems. Its goal is to describe the implementation of a software application that facilitates this kind of extraction, focusing on the version control system Git. The theoretical part identifies and analyses the data stored in repositories, and evaluates the tools available on the market against specific criteria. The practical part specifies the development requirements, describes the resulting software application, and provides a how-to manual for extending the application and implementing one's own metrics. The application is written in CoffeeScript and runs on Node.js; it ships with several example metrics. The output is a graphical user interface with interactive graphs, served by a built-in HTTP server; results can also be exported to a machine-readable file.
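For flavor, a minimal Python sketch of the kind of metric such a tool computes; the thesis' own implementation is in CoffeeScript, so everything below is an illustrative stand-in:

```python
# Commits per author, read straight from `git log`.
import subprocess
from collections import Counter

def commits_per_author(repo_path):
    """Return a Counter mapping author name -> number of commits."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

print(commits_per_author(".").most_common(5))
```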
27 |
Exploring methods for dependency management in multi-repositories: Design science research at Saab Training and Simulation
Persson, Oskar; Svensson, Samuel (January 2021)
Dependency problems are to developers what sneezing is to people with pollen allergies in spring: an everyday problem. This is especially true when working in multi-repositories, where dependency problems arise as a byproduct of letting smaller teams work on different components of a project, with everything under version control.
Nearly all developers use version control systems such as Git, Mercurial, or Subversion. While version control systems have helped developers for nearly 40 years and are constantly being updated, some functionality is still missing, for example a good way to manage dependencies so that developers can download a project without resolving dependency problems manually. The solutions version control systems offer for dependency management (e.g., Git's submodules or Mercurial's subrepositories) do not give developers a fail-safe way to download or build a project that contains dependency problems.
In this study, a case study was conducted at Saab Training and Simulation to explore methods for dependency management, and to discuss and highlight some of the problems that emerge when working with dependencies in multi-repositories. An argument can be made that the functionality of current dependency management systems, both package managers and the version control systems' own solutions, is not up to date with how dependencies are used in development today.
This paper introduces a novel approach to dependency management that describes dependencies dynamically, by providing a way to declare the usages of a repository (such as hardware simulation or the main project), and discusses the functionality required to support such a system.
By reopening the dialogue about dependency management and describing the problems that arise in such environments, the goal is to inspire further research in these areas.
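A hypothetical sketch of the usage-aware manifest this suggests; the format and field names below are invented for illustration, not taken from the thesis:

```python
import json

MANIFEST = json.loads("""
{
  "repo": "trainer-core",
  "usages": {
    "main":                {"deps": ["radar-lib@2.1", "ui-kit@0.9"]},
    "hardware-simulation": {"deps": ["radar-sim@1.4"]}
  }
}
""")

def resolve(manifest, usage):
    """Return the dependency list for the requested usage of this repo."""
    try:
        return manifest["usages"][usage]["deps"]
    except KeyError:
        raise KeyError(f"unknown usage {usage!r} for repo {manifest['repo']}")

print(resolve(MANIFEST, "hardware-simulation"))  # ['radar-sim@1.4']
```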
28 |
Distributed collaboration on RDF datasets using Git
Arndt, Natanael; Radtke, Norman; Martin, Michael (23 June 2017)
Collaboration is one of the most important topics in the evolution of the World Wide Web, and thus also for the Web of Data. In scenarios of distributed collaboration on datasets it is necessary to support multiple versions of a dataset existing simultaneously, as well as to support merging diverged datasets. In this paper we present an approach that combines SPARQL 1.1 with the version control system Git, creating a commit for every change applied to an RDF dataset containing multiple named graphs. Further, the operations provided by Git are used to distribute the commits among collaborators and to merge diverged versions of the dataset. We show the advantages of (public) Git repositories for RDF datasets and how this enables a way to collaborate on RDF data and to consume it. With SPARQL 1.1 and Git in combination, users are given several opportunities to participate in the evolution of RDF data.
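A minimal sketch of the commit-per-update workflow, assuming rdflib for the SPARQL 1.1 Update and a plain Git work tree; this is an illustration, not the authors' implementation:

```python
import os
import subprocess
from rdflib import Dataset

REPO, DATA = ".", "dataset.nq"

def apply_and_commit(update_query, message):
    ds = Dataset()
    if os.path.exists(DATA):
        ds.parse(DATA, format="nquads")
    ds.update(update_query)  # SPARQL 1.1 Update against the named graphs
    ds.serialize(destination=DATA, format="nquads")
    subprocess.run(["git", "-C", REPO, "add", DATA], check=True)
    subprocess.run(["git", "-C", REPO, "commit", "-m", message], check=True)

apply_and_commit(
    'INSERT DATA { GRAPH <http://example.org/g1> '
    '{ <http://example.org/s> <http://example.org/p> "o" } }',
    "insert one triple into graph g1",
)
```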
29 |
Jazykové verze webu / Language Version of Web
Laga, Ondřej (January 2008)
This thesis addresses the problem of multilingual web applications. The document describes some general approaches to designing such applications, but it is primarily aimed at the VUT information system and its extension with translation administration. The text describes the structure of this system and the tools used during its development; in particular, it defines the requirements that programmers and translators place on the system, describes and evaluates the new language-version solution, and considers possible future extensions in its conclusion.
30 |
Securing Data in a Cloud Environment: Access Control, Encryption, and Immutability / Säkerhetshantering av data som överförs genom molnbaserade tjänster: åtkomstkontroll, kryptering och omutlighet
Al Khateeb, Ahmad; Summaq, Abdulrazzaq (January 2023)
The amount of data and the range of new technologies used by society-critical organizations are increasing dramatically. In parallel, data breaches, cyber-attacks, and their devastating consequences are on the rise, as is the number of individuals and organizations that are potential targets of such attacks. This places higher demands on security, both in protecting data against cyber-attacks and in controlling access to the data that authenticated users want to reach. The paper focuses on studying secure data practices in a GitLab-based cloud environment. The objective is to answer questions such as how to guarantee data security and protect data from unauthorized access and modification. The work behind this thesis explores techniques for access control, data encryption, and data immutability. The study is followed by an implementation project that fetches signed changes (commits) from GitLab, verifies user identity and access permissions, manages data access, and displays the results. The results of the thesis demonstrate the effectiveness of the implemented security measures in protecting data and controlling access.
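A minimal sketch of the signature-verification step, using GitLab's REST commit and signature endpoints; the host, project ID, and token below are placeholders:

```python
import requests

GITLAB = "https://gitlab.example.com/api/v4"
PROJECT_ID = 42  # hypothetical project
HEADERS = {"PRIVATE-TOKEN": "<personal-access-token>"}

def recent_commits(n=5):
    r = requests.get(f"{GITLAB}/projects/{PROJECT_ID}/repository/commits",
                     headers=HEADERS, params={"per_page": n})
    r.raise_for_status()
    return r.json()

def is_signed(sha):
    """True if GitLab reports a verified signature for this commit (404 = unsigned)."""
    r = requests.get(
        f"{GITLAB}/projects/{PROJECT_ID}/repository/commits/{sha}/signature",
        headers=HEADERS)
    return r.status_code == 200 and r.json().get("verification_status") == "verified"

for c in recent_commits():
    print(c["short_id"], c["title"], "signed" if is_signed(c["id"]) else "UNSIGNED")
```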