971 |
Extensão de um SGBD para incluir o gerenciamento da informação temporal. / Extension of a DBMS to include the management of temporal information. Rodrigo Katsumoto Sakai, 09 August 2007 (has links)
O fator temporal é uma variável natural da maioria dos sistemas de informação, pois no mundo real os eventos ocorrem de maneira dinâmica, modificando continuamente os valores dos seus objetos no decorrer do tempo. Muitos desses sistemas precisam registrar essa modificação e atribuir os instantes de tempo em que cada informação foi válida no sistema. Este trabalho reúne as características relacionadas aos Bancos de Dados Temporais e Bancos de Dados Objeto-Relacionais. O objetivo primordial é propor uma forma de implementar alguns aspectos temporais, desenvolvendo um módulo que faça parte das características e funcionalidades internas de um SGBD. O módulo temporal contempla principalmente a parte de restrições de integridade temporal que é utilizada para manter a consistência da informação temporal armazenada. Para isso, é proposto um novo tipo de dado que melhor representa as marcas temporais dos objetos. Uma parte importante para a implementação desse projeto é a utilização de um SGBD objeto-relacional que possui algumas características orientadas a objetos que permitem a extensão de seus recursos, tornando-o capaz de gerenciar alguns aspectos temporais. O módulo temporal desenvolvido torna esses aspectos temporais transparentes para o usuário. Por conseqüência, esses usuários são capazes de utilizar os recursos temporais com maior naturalidade. / The temporal factor is a natural variable of most information systems, since real-world events occur dynamically, continuously modifying the values of their objects over time. Many of these systems need to record such modifications and register the instants of time during which each piece of information was valid in the system. This work brings together characteristics of Temporal Databases and Object-Relational Databases.
The main objective is to propose a way to implement certain temporal aspects by developing a module that becomes part of the internal features and functionality of a DBMS. The temporal module focuses chiefly on temporal integrity constraints, which are used to keep the stored temporal information consistent. To this end, a new data type is proposed that better represents the timestamps of objects. An important part of the implementation is the use of an object-relational DBMS whose object-oriented characteristics allow its resources to be extended, making it capable of managing some temporal aspects. The developed temporal module makes these temporal aspects transparent to the user; as a consequence, users can work with the temporal resources more naturally.
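The abstract does not spell out the proposed data type, but the kind of temporal integrity constraint it describes — each object version carrying a validity period, with no two versions of the same object valid at the same instant — can be sketched roughly as follows (all names here are illustrative, not taken from the dissertation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Period:
    """Closed-open validity interval [start, end) over integer time points."""
    start: int
    end: int

    def overlaps(self, other: "Period") -> bool:
        # Two closed-open intervals overlap iff each starts before the other ends.
        return self.start < other.end and other.start < self.end

def valid_time_consistent(versions):
    """Temporal integrity check: no two versions of the same object key
    may have overlapping validity periods (one value per key per instant)."""
    seen = {}
    for key, period in versions:
        for existing in seen.setdefault(key, []):
            if period.overlaps(existing):
                return False
        seen[key].append(period)
    return True
```

Consecutive, non-overlapping versions such as `Period(0, 5)` followed by `Period(5, 9)` for the same key are accepted; any overlap is rejected.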
|
972 |
Projeto evolutivo de bases de dados : uma abordagem iterativa e incremental usando modularização de bases de dados / Evolutionary database design : an iterative and incremental approach using database modularization. Guedes, Gustavo Bartz, 1983- 02 November 2014 (has links)
Orientadores: Gisele Busichia Baioco, Regina Lúcia de Oliveira Moraes / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Tecnologia / Made available in DSpace on 2018-08-24T15:26:05Z (GMT). No. of bitstreams: 1
Guedes_GustavoBartz_M.pdf: 5989312 bytes, checksum: 0e3053f8f1adcbcf13039b8caeb8a87e (MD5)
Previous issue date: 2014 / Resumo: Sistemas de software evoluem ao longo do tempo devido a novos requisitos ou a alterações nos já existentes. As mudanças são ainda mais presentes nos métodos de desenvolvimento de software iterativos e incrementais, como os métodos ágeis, que pressupõem a entrega contínua de módulos operacionais de software. Os métodos ágeis, como o Scrum e a Programação Extrema, são baseados em aspectos gerenciais do projeto e em técnicas de codificação do sistema. Entretanto, mudanças nos requisitos provavelmente terão reflexo no esquema da base de dados, que deverá ser alterado para suportá-los. Quando o sistema se encontra em produção, alterações no esquema da base de dados são onerosas, pois é necessário manter a semântica dos dados em relação à aplicação. Portanto, este trabalho de mestrado apresenta o processo evolutivo de modularização de bases de dados, uma abordagem para projetar a base de dados de modo iterativo e incremental. A modularização é executada no projeto conceitual e amplia a capacidade de abstração do esquema de dados gerado facilitando as evoluções futuras. Por fim, foi desenvolvida uma ferramenta que automatiza o processo evolutivo de modularização de bases de dados, chamada de Evolutio DB Designer. Essa ferramenta permite modularizar o esquema da base de dados e gerar automaticamente o esquema relacional a partir dos módulos de bases de dados / Abstract: Software systems evolve over time due to new requirements or changes to existing ones. The need for constant change is even more present in iterative and incremental software development methods, such as those based on the agile methodology, which demand continuous delivery of operational software modules. Agile development methods, like Scrum and Extreme Programming, are based on management aspects of the project and on techniques for software coding.
However, changes in the requirements will probably affect the database schema, which will have to be modified to accommodate them. In a production system, changes to the database schema are costly, because the data semantics must be maintained from the application's perspective. Therefore, the present work presents the evolutionary database modularization design process, an approach for the iterative and incremental design of the database. The modularization process is executed during conceptual design, improving the abstraction capacity of the generated data schema and resulting in graceful schema evolution. In addition, a tool that automates the evolutionary database modularization design process, called Evolutio DB Designer, was developed. It allows the modular design of the database schema and automatically generates the relational schema from the database modules / Mestrado / Tecnologia e Inovação / Mestre em Tecnologia
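As a very loose illustration of the idea of generating a relational schema module by module — so that a requirement change stays local to one module — one might sketch the following (the entity/attribute structure is invented here; the dissertation's modularization process is considerably richer):

```python
def modules_to_ddl(modules):
    """Loose sketch: each database module groups entities, and the
    relational schema is generated module by module, so regenerating
    one module leaves the tables of the others untouched.
    (Structure invented for illustration only.)"""
    ddl = []
    for module, entities in modules.items():
        for entity, columns in entities.items():
            cols = ", ".join(f"{c} TEXT" for c in columns)
            ddl.append(f"CREATE TABLE {module}_{entity} ({cols});")
    return ddl
```

For example, a hypothetical `{"sales": {"invoice": ["id", "total"]}}` module map would yield a single `CREATE TABLE sales_invoice (...)` statement, independent of any other module.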
|
973 |
Especificação e implementação do banco de dados do projeto e-phenology / Specification and implementation of the database of the e-phenology project. Mariano, Greice Cristina, 1986- 08 September 2013 (has links)
Orientadores: Ricardo da Silva Torres, Leonor Patricia Cerdeira Morellato / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-23T23:56:37Z (GMT). No. of bitstreams: 1
Mariano_GreiceCristina_M.pdf: 1816826 bytes, checksum: cc663b657f48189f61cec85c7549130d (MD5)
Previous issue date: 2013 / Resumo: As mudanças ambientais tornaram-se uma questão importante na agenda global. Um exemplo representativo desses problemas surge no contexto dos estudos de fenologia. Recentemente, fenologia tem ganho importância como o indicador mais simples e confiável dos efeitos das mudanças climáticas sobre plantas e animais. A escassez ou falta de informações e sistemas de monitoramento em regiões tropicais, em particular, na América do Sul, vêm estimulando diversos centros de pesquisa a desenvolverem trabalhos visando preencher esta lacuna. Um exemplo é o Projeto e-phenology, que é multidisciplinar e combina pesquisas em Ciência da Computação e Fenologia. O principal objetivo do projeto é atacar os problemas práticos e teóricos envolvidos no uso de novas tecnologias para realizar a observação remota da fenologia de plantas e integrar estas informações com os dados de campo. Neste contexto, este trabalho apresenta a especificação e implementação de um banco de dados para gerenciar as informações que devem ser manipuladas pelo Projeto e-phenology. A proposta apresentada permite a integração de dados de fenologia coletados a partir de observações no campo, com dados climáticos obtidos de sensores de clima e dados de imagens obtidas por câmeras digitais. Tanto a modelagem quanto a implementação do banco de dados tiveram como base os dados dos estudos de fenologia de plantas realizados pelos biólogos e ecólogos do grupo do Laboratório de Fenologia da UNESP de Rio Claro / Abstract: Environmental changes have become an important issue on the global agenda. A representative example of these problems arises in the context of phenology studies. Recently, phenology has gained importance as the simplest and most reliable indicator of the effects of climate change on plants and animals.
The shortage or lack of information and monitoring systems in tropical regions, in particular in South America, has encouraged many research centers to work to fill this gap. One example is the e-phenology project, a multidisciplinary project that combines research in Computer Science and Phenology. The project's main goal is to attack the practical and theoretical problems involved in using new technologies to monitor plant phenology remotely and to integrate the resulting data with on-the-ground observations. In this context, this work presents the specification and implementation of a database to manage the information handled by the e-phenology project. The proposal allows the integration of phenology data collected from field observations with climate data obtained from climate sensors and image data obtained from digital cameras. Both the modeling and the implementation of the database were based on the plant phenology studies conducted by biologists and ecologists of the Phenology Laboratory at UNESP, Rio Claro / Mestrado / Ciência da Computação / Mestra em Ciência da Computação
|
974 |
Objek-georiënteerde en rolgebaseerde verspreide inligtingsekerheid in 'n oop transaksieverwerking omgewing / Object-oriented and role-based distributed information security in an open transaction-processing environment. Van der Merwe, Jacobus 07 October 2014 (has links)
M.Sc. (Computer Science) / Information is a valuable resource in any organisation, and more and more organisations are realising this and want efficient means to protect it against disclosure, modification or destruction. Although relatively efficient security methods have been available almost as long as information databases, they all come at an additional cost. This cost involves not only money but also system performance and the management of information security. Any new information security model must therefore also provide better management of information security. In this dissertation we present a model that provides information security and aims to lower the technical skill required to manage it. In any business organisation we can describe each employee's duties; put another way, each employee has a specific business role in the organisation. In organisations with many employees there are typically many employees with more or less the same duties, which means that employees can be grouped according to their business roles. We use an employee's role as a description of his/her duties in the organisation. Each role needs resources to perform its duties. In terms of computer systems, each role needs computing resources such as printers. Most roles need access to data files in the organisation's database, but it is not desirable to give all roles access to all data files; roles clearly have specific privileges and restrictions in terms of information resources. Information security can be achieved by identifying the business roles in an organisation, giving these roles only the privileges needed to fulfil their business function, and then assigning these roles to people (users of the organisation's computer system). This is called role-based security.
People's business functions are related; for example, clerks and clerk-managers are related in the sense that a clerk-manager is a manager of clerks. Business roles are related in the same way. For an information security manager to assign roles to users, it is important to see the relationships between roles. In this dissertation we present these relationships using a lattice graph which we call a role lattice. The main advantage of this is that it eases information security management...
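The role lattice described above — senior roles dominating, and thereby inheriting the privileges of, junior roles — can be sketched as a partial order with transitive privilege lookup. Role and privilege names below are illustrative, following the clerk/clerk-manager example from the abstract:

```python
class RoleLattice:
    """Sketch of a role lattice: a senior role inherits the privileges of
    every role it dominates, directly or transitively."""
    def __init__(self):
        self._juniors = {}   # role -> set of directly dominated roles
        self._perms = {}     # role -> set of directly granted privileges

    def add_role(self, role, juniors=(), perms=()):
        self._juniors[role] = set(juniors)
        self._perms[role] = set(perms)

    def privileges(self, role):
        """All privileges of `role`, including those inherited transitively."""
        seen, stack, result = set(), [role], set()
        while stack:
            r = stack.pop()
            if r in seen:
                continue
            seen.add(r)
            result |= self._perms.get(r, set())
            stack.extend(self._juniors.get(r, ()))
        return result

    def may(self, role, privilege):
        return privilege in self.privileges(role)
```

With `clerk` granted `read_ledger` and `clerk_manager` dominating `clerk` with an extra `approve_ledger` privilege, the manager role holds both privileges while the clerk holds only its own.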
|
975 |
Web-based access to online database vendors. Ledwaba, Lesiba Stephen 12 September 2012 (has links)
M.Inf. / This research investigated the role played by Web-based interfacing in improving online searching. A comparative analysis was undertaken to investigate end-user searching in both conventional online systems and Web-based services. The results of the analysis pointed to the need for further improvements in Web interfacing. The study identified areas in which online searching poses problems and suggested features that need to be incorporated into further developments of Web interfaces to online systems.
|
976 |
La versification de Raymond Queneau, approche statistique à partir d'une base de données / Raymond Queneau's versification, a statistical approach, making use of a database. Bories, Anne-Sophie 26 March 2013 (has links)
Nous proposons une approche statistique de la versification de Raymond Queneau. Au cœur de notre travail se trouve une base de données MySQL, qui rassemble des informations descriptives à propos de la versification des 15.996 vers publiés par Queneau de son vivant. Jusqu’ici, les bases de données consacrées à la métrique ont exploré les vers réguliers, laissant de côté le vers libre et les questions spécifiques qu’il pose. Notre base envisage conjointement ces deux catégories de vers. Nous en tirons des statistiques, des représentations graphiques, et une approche globale du texte. La versification de Raymond Queneau a été peu étudiée. Il s’agit d’un corpus hétérogène, pour lequel la distinction entre vers libres et vers réguliers n’est pas toujours opérante. Au sein de ces formes variées, nous avons cherché des traits fixes, des motifs récurrents, des tendances, des routines. Nous proposons une typologie des vers queniens, décrivons la parenté du vers libre quenien avec le vers classique, modélisons des structures de la poésie de Queneau et étudions les significations liées à ses choix métriques. Il ressort de nos résultats que la versification de Queneau est porteuse de signification. Queneau y manifeste son refus des conventions, et son choix systématique d’une troisième voie réconciliant conservatisme et innovation. Ce travail ouvre des perspectives pour les bases de données consacrées à la versification. De nouvelles bases de données sont à développer, pour d’autres corpus, qui enrichiront les champs de la stylistique et de la poétique. / We present a statistical approach to Raymond Queneau’s versification. At the centre of the study is a MySQL database, which compiles descriptive data on the versification of the 15,996 lines of poetry published by Queneau during his lifetime. Until now databases dedicated to metrics have focussed on strict verse, leaving aside free verse and the specific issues it raises.
Our database explores both categories together, providing the source for statistics, graphs and a comprehensive approach to the text. Raymond Queneau’s versification has received little study. It is a heterogeneous corpus, in which the distinction between strict and free verse does not apply consistently. Within these diverse forms, this study endeavours to find fixed features, recurring patterns, trends and routines. The exploration has resulted in a typology of Queneau’s verse, a description of how his free verse is related to classical verse, a model of the structures of his poetry, and a study of the meanings behind his metrical choices. Our results show that Queneau’s versification conveys meaning: through it he expresses his rejection of conventions and his systematic choice of a third path reconciling conservatism and innovation. This approach also opens up new perspectives for databases dedicated to versification: similar databases can be developed for other corpora, enriching both stylistics and poetics.
|
977 |
A conceptual object-oriented model to support educators in an outcomes-based environment. Harmse, Rudi Gerhard January 2001 (has links)
The introduction of outcomes-based education (OBE) in South Africa has led to a new learner-centred approach with an emphasis on the outcomes that learners need to achieve. With this learner-centred focus has come a greater need for record keeping: it is now necessary to track each learner’s progress towards the attainment of the learning outcomes. This progress is tracked in relation to assessment standards that are defined for every learning outcome; these assessment standards define the results expected of learners at certain stages in their development. The new OBE system has emphasised accountability, expressed in a requirement to keep evidence justifying the assessment results given. The large number of learners and the increased managerial demands of OBE cause problems for educators, who may find themselves unable to keep track of learners’ progress under such conditions. This dissertation investigates the structure of the new OBE system as well as its assessment and evidence requirements. From this, the features required of a support system for educators in an OBE environment are determined, and the supporting processes needed to implement these features, as well as the storage requirements of such a system, are identified. In addition to OBE, the fields of Computer Integrated Learning Environments (CILEs) and Intelligent Tutoring Systems (ITSs) are investigated, and useful details identified there are added to the requirements for an OBE support system. The dissertation then presents an object-oriented conceptual model of the items that need to be stored in order to allow the features of an OBE support system to be implemented. The relationships between these items are also indicated in this model.
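As a rough sketch of the kind of storage items such a conceptual model identifies — outcomes tracked through assessment standards, with evidence kept to justify each result — one might write the following (class and field names are invented for illustration; the dissertation's model is far more detailed):

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentStandard:
    """The result expected of learners for one outcome at one stage."""
    outcome: str
    stage: str
    description: str

@dataclass
class AssessmentRecord:
    """One assessment result, kept together with the evidence
    that justifies it (the accountability requirement)."""
    standard: AssessmentStandard
    result: str
    evidence: list = field(default_factory=list)

@dataclass
class Learner:
    name: str
    records: list = field(default_factory=list)

    def progress(self, outcome: str):
        """All results recorded so far towards one learning outcome."""
        return [r for r in self.records if r.standard.outcome == outcome]
```

A learner's progress towards an outcome is then simply the list of assessment records attached to that outcome's standards, each carrying its own evidence.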
|
978 |
Experimental Database Export/Import for InPUT. Karlsson, Stefan January 2013 (has links)
The Intelligent Parameter Utilization Tool (InPUT) is a format and API for the cross-language description of experiments, which makes it possible to define experiments and their contexts at an abstract level in the form of XML- and archive-based descriptors. By using experimental descriptors, programs can be reconfigured without having to be recoded and recompiled, and the experimental results of third parties can be reproduced independently of the programming language and algorithm implementation. Previously, InPUT has supported the export and import of experimental descriptors to/from XML documents, archive files and LaTeX tables. The overall aim of this project was to develop an SQL database design that allows for the export, import, querying, updating and deletion of experimental descriptors, to implement the design as an extension of the Java implementation of InPUT (InPUTj), and to verify the general applicability of the created implementation by modeling real-world use cases. The use cases covered everything from simple database transactions involving simple descriptors to complex database transactions involving complex descriptors. In addition, it was investigated whether queries and updates of descriptors are executed more rapidly if the descriptors are stored in databases in accordance with the created SQL schema and the queries and updates are handled by the DBMS PostgreSQL, or if the descriptors are stored directly in files and the queries and updates are handled by the default XML-processing engine of InPUTj (JDOM). The results of the test cases indicate that the former usually allows for faster execution of queries, while the latter usually allows for faster execution of updates. Using database-stored descriptors instead of file-based descriptors offers many advantages, such as making it significantly easier and less costly to manage, analyze and exchange large amounts of experimental data.
However, database-stored descriptors complement file-based descriptors rather than replace them. The goals of the project were achieved, and the different types of database transactions involving descriptors can now be handled via a simple API provided by a Java facade class.
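InPUT's actual SQL schema is not reproduced in the abstract; as a loose illustration of why database-stored descriptors speed up queries — the DBMS can answer a question over many descriptors in one statement, instead of parsing each XML file separately — consider this sketch with a hypothetical one-table schema (SQLite standing in for PostgreSQL):

```python
import sqlite3

# Hypothetical minimal schema: one row per descriptor parameter.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE descriptor (
    experiment TEXT, param TEXT, value TEXT,
    PRIMARY KEY (experiment, param))""")

params = [("exp1", "popSize", "40"), ("exp1", "maxIter", "1000"),
          ("exp2", "popSize", "80")]
conn.executemany("INSERT INTO descriptor VALUES (?, ?, ?)", params)

# One declarative query over every stored descriptor at once -- the kind
# of cross-descriptor question that favoured database storage in the tests.
rows = conn.execute(
    "SELECT experiment FROM descriptor "
    "WHERE param = ? AND CAST(value AS INTEGER) > ?",
    ("popSize", 50)).fetchall()
print(rows)  # [('exp2',)]
```

A file-based equivalent would have to open and parse each descriptor document in turn, which is consistent with the query-speed result reported above.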
|
979 |
Standardizing our perinatal language to facilitate data sharing. Massey, Kiran Angelina 05 1900 (has links)
Our ultimate goal as obstetric and neonatal care providers is to improve care for mothers and their babies. Continuous quality improvement (CQI) involves iterative cycles of practice change and audit of ongoing clinical care identifying practices that are associated with good outcomes. A vital prerequisite to this evidence based medicine is data collection.
In Canada, much of the country is covered by separate, fragmented silos known as regional reproductive care databases or perinatal health programs. A more centralized system built on collaborative efforts is required. Moving in this direction would serve many purposes: efficiency, economy in a setting of limited resources and shrinking budgets and, lastly, interaction among data collection agencies. This interaction may facilitate the translation and transfer of knowledge to care-givers and patients. There are, however, many barriers to such collaborative efforts, including privacy, ownership and the standardization of both digital technologies and semantics.
After thoroughly examining the current perinatal data collection among Perinatal Health Programs (PHPs) and in the Canadian Perinatal Network (CPN) database, it was evident that there is little standardization of definitions. This is one of the most important barriers to data sharing.
To communicate effectively and share data, researchers and clinicians alike must construct a common perinatal language. Communicative tools and programs such as SNOMED CT® offer a potential solution, but still require much work because they are in their infancy. A standardized perinatal language would not only lay the definitional foundation in women’s health and obstetrics but also serve as a major contribution towards a universal electronic health record. / Medicine, Faculty of / Obstetrics and Gynaecology, Department of / Graduate
|
980 |
Stream databases. Pawluk, Przemyslaw January 2006 (links)
One of the most important issues in contemporary database applications has become the processing of large amounts of data. A second problem is the changed character of the data processed by such systems: the data are perceived as large continuous streams of elements rather than finite sets of elements. This thesis presents a new class of database systems that handles data stream processing, and discusses the issues in and benefits of using these systems in the context of telecommunication systems. / This thesis presents stream databases, which are able to process data streams; the advantages and disadvantages of those systems are also presented.
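The defining trait mentioned above — continuous streams rather than finite sets — is typically handled with windowed, incremental operators that emit results while data is still arriving. A minimal sketch of a tumbling-window aggregation (illustrative only; think of summing per-interval call counts in a telecom setting):

```python
def tumbling_sums(stream, size):
    """Tumbling-window aggregation: emit one aggregate per block of `size`
    elements, so results appear incrementally while the stream is still
    flowing; an incomplete trailing window is simply not emitted yet."""
    acc, n = 0, 0
    for x in stream:
        acc += x
        n += 1
        if n == size:
            yield acc
            acc, n = 0, 0
```

For the input prefix `[1, 2, 3, 4, 5, 6, 7]` with `size=3`, the generator yields `6` and then `15`, with the trailing `7` held back until its window fills — exactly the behaviour a finite-set query cannot provide over an unbounded stream.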
|