711

Modelagem de processo de extração de conhecimento em banco de dados para sistemas de suporte à decisão. / Modeling of knowledge discovery in databases for decision systems.

Shiba, Sonia Kaoru 26 June 2008 (has links)
This work presents a model of the knowledge-discovery-in-databases process in which the information for data analysis comes from transactional databases and a data warehouse. Data mining focused on generating descriptive models using classification techniques based on Bayes' theorem and a direct method for extracting classification rules, defining a methodology for building learning models. A knowledge-extraction process was implemented to generate learning models for decision support, applying data-mining techniques for descriptive models and classification-rule generation. The work explored transforming the learning models into knowledge bases stored in a relational database, accessible through an expert system for classifying new records or for viewing the results in spreadsheets. In the scenario described here, the organization of the pre-processing procedures allowed additional attributes to be extracted or data to be transformed iteratively, without implementing new data-extraction programs. All essential pre-processing activities were defined, along with the sequence in which they must be performed, and the procedures can be repeated without losing the units coded for the data-extraction process. An iterative and quantifiable knowledge-extraction process model, in terms of its stages and procedures, was configured with a final product in view: the design of a knowledge base for customer-retention actions and rules for targeted actions on customer segments.
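As a rough illustration of the Bayes'-theorem-based descriptive classification the abstract mentions, here is a minimal sketch (not the author's implementation; the customer attributes and records are hypothetical):

```python
# Minimal naive Bayes sketch for classifying customer records, in the spirit of
# the descriptive models described above. Attributes and data are hypothetical.
from collections import Counter, defaultdict

def train_naive_bayes(records, labels):
    """Estimate class counts and per-class attribute-value counts."""
    priors = Counter(labels)
    cond = defaultdict(Counter)  # (attribute, class) -> value counts
    for rec, cls in zip(records, labels):
        for attr, value in rec.items():
            cond[(attr, cls)][value] += 1
    return priors, cond, len(labels)

def classify(rec, priors, cond, total):
    """Pick the class with the highest (Laplace-smoothed) posterior score."""
    best_cls, best_score = None, float("-inf")
    for cls, cls_count in priors.items():
        score = cls_count / total
        for attr, value in rec.items():
            counts = cond[(attr, cls)]
            score *= (counts[value] + 1) / (cls_count + len(counts) + 1)
        if score > best_score:
            best_cls, best_score = cls, score
    return best_cls

# Hypothetical customer records for a retention scenario.
records = [{"plan": "basic", "usage": "low"}, {"plan": "premium", "usage": "high"},
           {"plan": "basic", "usage": "high"}, {"plan": "premium", "usage": "low"}]
labels = ["churn", "stay", "stay", "churn"]
model = train_naive_bayes(records, labels)
print(classify({"plan": "basic", "usage": "low"}, *model))  # -> "churn"
```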
712

Modelagem de dados para planejamento e gestão operacional de transportes. / Data modeling for transportation planning and operations management.

Giacaglia, Marcelo Eduardo 03 February 1999 (has links)
The integration of urban transportation system management activities at all levels within a single entity is highly desirable. Integration across entities is also important when their plans or service areas overlap, as typically happens in large urban agglomerations. For the plans of these entities to be mutually consistent, information must be shared, and this sharing is best achieved by integrating their databases. Such integration, however, has been blocked by technical and even political factors tied to the necessary autonomy of each entity. In this thesis, after analysing several attempted solutions to the problem, the critical aspects of modelling and building database systems to support integrated transportation planning and operations management, both within a single entity and across entities, are identified. For each aspect, the deficiencies of existing systems are indicated and solutions or guidelines are proposed. The critical aspects identified and addressed are: the ability to support the integration of the different operational management activities (planning, programming, and monitoring) within a single entity; the ability to support the integration of different decision levels within a single entity; the content, structure, and temporal stability of databases used for strategic and/or tactical planning at regional and/or local levels; the ability to provide communication across entities; and stability in the face of institutional changes.
713

Updating XML Views of Relational Data

Mulchandani, Mukesh K 29 April 2003 (has links)
XML has emerged as the standard data format for Internet-based business applications. In many business settings, a relational database management system (RDBMS) serves as the storage manager for data from XML documents. In such a system, once the XML data is shredded and loaded into the storage system, XML queries posed against these (now virtual) XML documents are processed by translating them, as much as possible, into SQL queries against the underlying relational storage. Clearly, in order to support full database functionality over XML data, we must allow users not only to query but also to update XML documents. While the XML query language XQuery is being standardized by the W3C, the current language proposal includes no syntax for updating XML documents. In this thesis, we develop techniques for translating XML updates on XML views of relational data into SQL updates on the underlying relations. These techniques build on earlier work on translating updates on object-based views of relational data into SQL updates on the underlying relations (Keller, 1991). The system has been implemented as part of the XML management system Rainbow, developed at Worcester Polytechnic Institute (WPI), using XQuery as the XML query language and Oracle as the back-end relational store. Experimental studies show that the incremental XML updates supported by our system are a better choice than a complete reload of the XML documents under a variety of system settings.
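A minimal sketch of the view-update idea described above (not the Rainbow implementation): an element in the XML view is mapped back to the relational row and column it was shredded from, and the view update becomes a SQL UPDATE. The table, column, and path names are hypothetical.

```python
# Hypothetical mapping from XML view paths to the relational columns they were
# shredded from; a real system would derive this from the view definition.
VIEW_MAPPING = {
    "/customers/customer/name":  ("customer", "name"),
    "/customers/customer/email": ("customer", "email"),
}

def translate_view_update(xml_path, key_value, new_value):
    """Turn an update on one XML view node into a parameterized SQL UPDATE."""
    table, column = VIEW_MAPPING[xml_path]
    sql = f"UPDATE {table} SET {column} = :new_value WHERE id = :key"
    params = {"new_value": new_value, "key": key_value}
    return sql, params

# e.g. "replace the <email> of customer 42 in the XML view"
sql, params = translate_view_update("/customers/customer/email", 42, "a@b.org")
print(sql, params)
```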
714

The natural history and management of vestibular schwannomas

Martin, Thomas Peter Cutlack January 2012 (has links)
Over the past decade (2000-), the management of vestibular schwannomas has been in a state of flux. The increasing availability of magnetic resonance imaging has allowed clinicians to monitor tumour progression, and it has increasingly been recognised that, once diagnosed, a significant proportion of lesions do not continue to grow. As a result, a number of neurotological centres have advocated conservative management as appropriate for small to medium-sized tumours. Birmingham has been one of these centres, and this thesis presents data gathered over the past fifteen years reflecting this change in management, drawing upon the Birmingham Vestibular Schwannoma Database maintained by the author. The thesis addresses issues pertinent to conservative management (growth rates among observed tumours, risk factors for growth, and the evolution of hearing under observation) and proposes a radiological surveillance protocol. More broadly, the thesis examines other themes important in the management of patients with vestibular schwannomas: the role of functional surgery and the possibility of rehabilitation in single-sided deafness. A number of chapters from the thesis have been published in peer-reviewed journals and are presented here in updated or amended form.
715

A construção de tipos de pessoas vistas a partir de bancos de dados: o caso da adolescência vulnerável / The making up of kinds of people seen through databases: the case of vulnerable adolescence

Lima, Juliana Meirelles de 17 September 2015 (has links)
This dissertation investigates the discursive dimension of scientific databases. We understand these tools as discursive practices rather than mere repositories of scientific literature. While the way articles are added to databases broadens researchers' access to information, it also restricts which articles are available and the order in which they are accessed; as search tools, databases therefore have an impact on scientific practice. Following the argumentation of Hacking (1999, 2001, 2007), we investigate how this discursive dimension makes up kinds of people, and more specifically the role databases play in shaping the concept of the vulnerable adolescent. We trace the history of the use and crystallization of the notion of vulnerability as an organizing concept across areas of knowledge and, more specifically, in Psychology. To this end, we gathered quantitative information from the CAPES journal portal using software developed specifically for this purpose. We then surveyed publications in the PsycINFO database, discursively analysing the versions of vulnerability and vulnerable adolescence performed in publications from subfields of Psychology. Finally, we examined publications in the SciELO and BVS databases indexed in the Psychology Qualis, studying the association between vulnerability and adolescence in Brazilian scientific publications.
716

Evaluation des requêtes hybrides basées sur la coordination des services / Evaluation of hybrid queries based on service coordination

Cuevas Vicenttin, Victor 08 July 2011 (has links)
Recent trends in information technology have produced a massive proliferation of data carried over different kinds of networks, produced either on demand or as streams, generated and accessed by a variety of devices, and often involving mobility. This thesis presents an approach for evaluating hybrid queries, which integrate the various aspects involved in querying continuous, mobile, and hidden data in dynamic environments. Our approach represents a hybrid query as a service coordination comprising data and computation services. A service coordination is specified by a query workflow and additional operator workflows. The query workflow represents an expression built with the operators of our data model; it is constructed from a query written in our proposed SQL-like query language, HSQL, by a rewriting algorithm based on known results from database theory. Operator workflows compose computation services to evaluate a particular operator. HYPATIA, a service-based hybrid query processor, implements and validates our approach.
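As a rough illustration of composing data and computation services into a query workflow (a sketch of the general idea only, not HYPATIA or HSQL; the services and attributes are hypothetical):

```python
# Sketch: operators as composable steps over record streams produced by services.
def select(stream, predicate):
    """Filter a stream of records (dicts) with a predicate."""
    return (rec for rec in stream if predicate(rec))

def join(left, right, key):
    """Hash-join two finite streams on a common attribute."""
    index = {}
    for rec in right:
        index.setdefault(rec[key], []).append(rec)
    for l in left:
        for r in index.get(l[key], []):
            yield {**l, **r}

# Two hypothetical data services: mobile position readings and static fleet data.
positions = [{"vehicle": "v1", "speed": 42}, {"vehicle": "v2", "speed": 17}]
fleet     = [{"vehicle": "v1", "line": "A"}, {"vehicle": "v2", "line": "B"}]

# Query workflow: join the two services, then keep only the fast vehicles.
workflow = select(join(positions, fleet, key="vehicle"), lambda r: r["speed"] > 30)
print(list(workflow))  # -> [{'vehicle': 'v1', 'speed': 42, 'line': 'A'}]
```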
717

Análise visual de dados relacionais: uma abordagem interativa suportada por teoria dos grafos / Visual analysis of relational databases: an interactive approach supported by graph theory

Lima, Daniel Mário de 18 December 2013 (has links)
Relational databases are rigidly structured data sources characterized by complex relationships among a set of relations (tables). Making sense of such relationships is challenging because users must consider multiple relations, understand their integrity constraints, interpret dozens of attributes, and write complex SQL queries for each desired exploration. In this scenario, we introduce a twofold methodology: we use a hierarchical graph representation to model the database relationships efficiently and, on top of it, we design a visualization technique for rapid relational exploration. Our results demonstrate that database exploration is greatly simplified, as the user can visually browse the data with little or no knowledge of its underlying structure, dispensing with the need for complex SQL queries. We believe these findings offer a novel paradigm for relational data comprehension.
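A minimal sketch of the first step described above, building a graph of tables connected by their foreign-key relationships, here read from a SQLite catalog (the thesis does not prescribe SQLite; the database file is hypothetical):

```python
# Sketch: build an adjacency map of tables linked by foreign keys (SQLite catalog).
import sqlite3

def schema_graph(db_path):
    """Return {table: set(referenced_tables)} using SQLite's catalog pragmas."""
    conn = sqlite3.connect(db_path)
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    graph = {t: set() for t in tables}
    for t in tables:
        # Each row of foreign_key_list describes one FK; column 2 is the target table.
        for fk in conn.execute(f"PRAGMA foreign_key_list('{t}')"):
            graph[t].add(fk[2])
    conn.close()
    return graph

# e.g. graph = schema_graph("sales.db"); a visualization layer could then lay this
# graph out hierarchically and let the user browse it without writing SQL.
print(schema_graph(":memory:"))  # empty database -> {}
```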
718

Query and mining in large graph databases.

Zhu, Yuanyuan January 2013 (has links)
Graphs have a powerful ability to model complex structural relationships among data objects and are widely used in a variety of applications. As these application domains develop, graph databases have become large and continue to grow rapidly in size, bringing new challenges in graph query and mining. We mainly investigate three problems: how to find a correspondence between the nodes of two large graphs so that substructures in one graph are mapped to similar substructures in the other; how to retrieve, for a query graph, similar graphs from a database consisting of a large number of graphs; and how to extract subgraph features to build an automated classification model for a graph database whose graphs belong to different classes. For the first problem, we propose a two-step approach that can efficiently match two large graphs of thousands of nodes with high matching quality: in the first stage, an anchor-selection/expansion scheme heuristically constructs a good initial matching; in the second stage, a new refinement approach improves the initial matching, and we prove the optimality of the refinement algorithm. For the second problem, we introduce a graph distance measure based on the maximum common subgraph (MCS) of two graphs, which captures both their common and their differing structures. Since computing the MCS of two graphs is NP-complete, we propose a fast algorithm for answering top-k graph similarity queries that significantly reduces the number of MCS computations: unqualified graphs are pruned using three lower bounds, the first two derived from the structures of the two graphs and the third from the triangle inequality of the distance measure. Three index schemes with different trade-offs between pruning power and construction cost are designed to assist query processing. For the third problem, we identify two main issues with the widely used discriminative score for feature selection and introduce a diversified discriminative score that accounts for diversity as well as discriminativity. We analyze the properties of this score from several perspectives, show that it makes positive and negative graphs more separable, and propose new feature-selection algorithms based on it that achieve high classification accuracy.
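A sketch of the top-k pruning idea described for the second problem. The thesis defines its own MCS-based distance and three lower bounds; here one common form, dist = 1 - |MCS| / max(|g1|, |g2|), and a single cheap bound based on shared edge labels stand in for them, purely for illustration.

```python
# Sketch of lower-bound pruning for a top-k graph similarity query.
# Graphs are represented as hypothetical lists of edge labels.
from collections import Counter

def cheap_lower_bound(edges1, edges2):
    # Shared edge labels upper-bound the MCS size, so this value cannot
    # exceed the assumed MCS-based distance 1 - |MCS| / max(|g1|, |g2|).
    shared = sum((Counter(edges1) & Counter(edges2)).values())
    return 1.0 - shared / max(len(edges1), len(edges2))

def top_k_similar(query, database, k, exact_mcs_distance):
    """database: {graph_id: edge_labels}; exact_mcs_distance is the costly call."""
    best = []  # (distance, graph_id), kept sorted, at most k entries
    for gid, edges in database.items():
        if len(best) == k and cheap_lower_bound(query, edges) >= best[-1][0]:
            continue  # the bound already rules this graph out: skip the MCS computation
        best.append((exact_mcs_distance(query, edges), gid))
        best.sort()
        best = best[:k]
    return best
```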
719

Um Framework para construção de aplicações OO sobre SGBD relacional / Object-oriented application design in a relational database

Molz, Kurt Werner January 1999 (has links)
The object-oriented paradigm is becoming the preferred approach for building systems in database environments, while relational technology remains the one widely adopted for managing corporate data: relational databases have become the standard storage for on-line transaction processing (OLTP) applications. These trends motivate the construction of object-oriented applications that access relational databases. Object-oriented concepts such as inheritance allow more adequate modeling and better implementation of applications based on object-oriented database systems, but the results of an object-oriented design can also be applied to classical database systems. This work presents the use of design patterns in the construction of a framework architecture that supports mapping an OO application to a relational DBMS. The architecture follows the gateway approach to object persistence: a software layer placed between the database management system and the object-oriented application, whose purpose is to support an OO application programming model. Its main characteristic is the clear separation between the classes that deal with the database and the classes that deal with the application's problem domain. This division of responsibilities allows the database-related classes to be replaced, enabling the application to migrate between different databases. The work also presents ways of mapping object-oriented schemas to relational schemas, always from the OO model to the relational model. It is important to note that the proposed architecture does not prevent legacy structured applications from accessing the mapped relational database, since the approach was chosen precisely to allow new OO applications to access existing relational databases. As the implementation follows the gateway approach, the relevant object-oriented concepts are presented together with how they are supported in the architecture, that is, what the gateway must implement.
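A minimal sketch of the gateway separation described above (not the framework's actual classes; the table and attributes are hypothetical): the domain class knows nothing about SQL, and the gateway class can be swapped to target a different database.

```python
# Sketch: domain class kept free of persistence code; a gateway class maps it
# to a relational table and can be replaced to target another database.
import sqlite3
from dataclasses import dataclass

@dataclass
class Customer:              # domain class: no SQL, no connection handling
    id: int
    name: str

class CustomerGateway:       # database class: all SQL lives here
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS customer (id INTEGER PRIMARY KEY, name TEXT)")

    def save(self, customer: Customer) -> None:
        self.conn.execute("INSERT OR REPLACE INTO customer (id, name) VALUES (?, ?)",
                          (customer.id, customer.name))

    def find(self, customer_id: int) -> Customer:
        row = self.conn.execute("SELECT id, name FROM customer WHERE id = ?",
                                (customer_id,)).fetchone()
        return Customer(*row)

gateway = CustomerGateway(sqlite3.connect(":memory:"))
gateway.save(Customer(1, "Ada"))
print(gateway.find(1))  # Customer(id=1, name='Ada')
```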
720

Extensão de um SGBD para incluir o gerenciamento da informação temporal. / Extension of a DBMS to include the management of temporal information.

Sakai, Rodrigo Katsumoto 09 August 2007 (has links)
Time is a natural variable in most information systems: in the real world, events occur dynamically and continuously change the values of the system's objects over time. Many of these systems need to record these changes and attach to each piece of information the instants of time during which it was valid. This work brings together characteristics of temporal databases and object-relational databases. The main objective is to propose a way to implement some temporal aspects by developing a module that becomes part of the internal features and functionality of a DBMS. The temporal module mainly covers temporal integrity constraints, which are used to keep the stored temporal information consistent. To that end, a new data type is proposed that better represents object timestamps. An important part of the implementation is the use of an object-relational DBMS whose object-oriented features allow its resources to be extended, making it capable of managing some temporal aspects. The temporal module makes these aspects transparent to the user; as a consequence, users can work with temporal features more naturally.
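A small self-contained sketch of the kind of temporal integrity constraint mentioned above, enforcing that validity periods recorded for the same object never overlap (the record layout is hypothetical and not the module's actual data type):

```python
# Sketch: a valid-time value type plus one temporal integrity check, namely that
# no two versions of the same object may have overlapping validity periods.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ValidTime:                      # simplified stand-in for a temporal data type
    start: date
    end: date                         # exclusive upper bound

    def overlaps(self, other: "ValidTime") -> bool:
        return self.start < other.end and other.start < self.end

def insert_version(history, key, value, valid: ValidTime):
    """Reject an insert that would violate the temporal integrity constraint."""
    for existing_valid, _ in history.get(key, []):
        if valid.overlaps(existing_valid):
            raise ValueError(f"overlapping validity period for {key!r}")
    history.setdefault(key, []).append((valid, value))

history = {}
insert_version(history, "price:widget", 10.0, ValidTime(date(2007, 1, 1), date(2007, 7, 1)))
insert_version(history, "price:widget", 12.0, ValidTime(date(2007, 7, 1), date(2008, 1, 1)))
# insert_version(history, "price:widget", 11.0, ValidTime(date(2007, 6, 1), date(2007, 8, 1)))
# -> would raise ValueError: the new period overlaps an existing one
```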
