101

Vers des cubes matriciels supportant l’analyse spatiale à la volée dans un contexte décisionnel / Towards raster cubes supporting on-the-fly spatial analysis in a decision-support context

Plante, Mathieu 20 April 2018 (has links)
Since the advent of SOLAP, the problem of producing spatial analyses on the fly has remained unsolved. Previous work has turned to visual analysis and precomputation in order to obtain results in under 10 seconds. Integrating raster data into SOLAP cubes holds unexplored potential for on-the-fly processing of spatial analyses. This research explores the benefits of, and the considerations involved in, exploiting raster cubes to produce on-the-fly spatial analyses in a decision-support context. It contributes to the theoretical framework for integrating raster data into cubes, notably by adding the notion of a raster coverage to the cube so as to better support raster spatial analyses. It identifies causes of the excessive resource consumption incurred when processing these analyses and proposes optimization avenues based on exploiting geometric raster dimensions.
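The idea of attaching a raster coverage to a cube so that spatial analyses run on the fly can be illustrated with a toy sketch. Everything here (the raster, the zone grid, the `rollup_mean` operator) is an illustrative assumption, not the thesis's actual model:

```python
# Minimal sketch of an on-the-fly spatial roll-up over a raster coverage:
# each raster cell carries a measure value, a parallel zone grid assigns it
# to a spatial dimension member, and the aggregate is computed at query time.

def rollup_mean(raster, zones):
    """Aggregate raster cell values per zone (mean), computed on the fly."""
    sums, counts = {}, {}
    for row_vals, row_zones in zip(raster, zones):
        for value, zone in zip(row_vals, row_zones):
            sums[zone] = sums.get(zone, 0.0) + value
            counts[zone] = counts.get(zone, 0) + 1
    return {z: sums[z] / counts[z] for z in sums}

# A 2x3 raster measure (e.g., elevation) and its zone raster
raster = [[10.0, 20.0, 30.0],
          [40.0, 50.0, 60.0]]
zones  = [["A", "A", "B"],
          ["A", "B", "B"]]

result = rollup_mean(raster, zones)
print(result)  # {'A': 23.33..., 'B': 46.66...}
```

Because nothing is precomputed, the cost grows with the number of raster cells, which is exactly the resource-consumption issue the thesis investigates.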
102

Implementace BI ve velkoobchodu se surovinami / The implementation of Business Intelligence in a raw materials trading company

Hanák, Ondřej January 2012 (has links)
This diploma thesis focuses on the implementation of Business Intelligence (BI) in a raw materials trading company that trades brown coal. First, a Balanced Scorecard analysis of the company is carried out; its results are then used for the BI implementation. The first chapter describes the goals, methods, and structure of the thesis. The second chapter surveys theses with similar topics. The following chapters form the theoretical part: the third chapter describes Business Intelligence and the fourth the Balanced Scorecard. The fifth chapter is theoretical-practical and describes the company and the brown coal market. The practical part begins with chapter six, which applies the Balanced Scorecard to the company; chapter seven builds on its outputs with the implementation of the BI. The eighth chapter describes the BI reports, and the ninth contains the conclusion and an evaluation of how well the thesis's goals were met. The main contribution of this thesis is its demonstration of the design and implementation of BI in a raw materials trading company.
103

Interactive visualization of financial data : Development of a visual data mining tool

Saltin, Joakim January 2012 (has links)
In this project, a prototype visual data mining tool was developed that lets users interactively investigate large multi-dimensional datasets with 2D visualization techniques and so-called drill-down, roll-up, and slicing operations. The project covered all steps of development, from writing specifications and designing the program to implementing and evaluating it. Drawing on ideas from data warehousing, custom methods for storing pre-computed aggregations of data (commonly referred to as materialized views) and retrieving data from them were developed and implemented to achieve higher performance on large datasets. View materialization enables the program to fetch or calculate a view from other views, which can yield significant performance gains when view sizes are much smaller than the underlying raw dataset. The choice of which views to materialize was automated using a well-known algorithm, the greedy algorithm for view materialization, which selects the fraction of all possible views that is likely (but not guaranteed) to yield the best performance gain. Materialized views showed good potential to increase performance on large datasets, with an average speedup (compared to on-the-fly queries) between 20 and 70 on a test dataset of 500,000 rows. The end result was a program combining flexibility with good performance, which was also reflected in good scores in a user-acceptance test with participants from the company where the project was carried out.
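The greedy view-selection step mentioned above can be sketched as follows. The view lattice, the sizes, and the `answers` relation are made-up assumptions in the spirit of the classic greedy algorithm, not the tool's actual code:

```python
# A minimal sketch of greedy view selection (in the spirit of Harinarayan,
# Rajaraman & Ullman): repeatedly materialize the view with the largest
# estimated benefit, where benefit = total query-cost reduction it brings.

def greedy_select(sizes, answers, base, k):
    """Pick up to k views to materialize, maximizing estimated benefit.

    sizes:   view -> row count
    answers: view -> set of views whose queries it can answer
    base:    the raw dataset view (always available)
    """
    chosen = [base]
    # cost[w] = size of the cheapest materialized view that answers w
    cost = {w: sizes[base] for w in sizes}
    for _ in range(k):
        best, best_benefit = None, 0
        for v in sizes:
            if v in chosen:
                continue
            benefit = sum(max(cost[w] - sizes[v], 0) for w in answers[v])
            if benefit > best_benefit:
                best, best_benefit = v, benefit
        if best is None:
            break
        chosen.append(best)
        for w in answers[best]:
            cost[w] = min(cost[w], sizes[best])
    return chosen[1:]

sizes = {"base": 500_000, "by_day": 10_000, "by_month": 400, "by_year": 40}
answers = {
    "base": {"base", "by_day", "by_month", "by_year"},
    "by_day": {"by_day", "by_month", "by_year"},
    "by_month": {"by_month", "by_year"},
    "by_year": {"by_year"},
}
chosen = greedy_select(sizes, answers, "base", 2)
print(chosen)  # ['by_day', 'by_month']
```

As the thesis notes, the greedy choice is likely but not guaranteed to be optimal; the algorithm does carry a known constant-factor guarantee on the benefit it achieves.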
104

Χρήση της OLAP τεχνικής στην οπτικοποίηση κανόνων Data mining / Visualization of Data mining rules using OLAP

Γκίζα, Ειρήνη 27 August 2008 (has links)
Data mining is an emerging knowledge-discovery process of extracting previously unknown, actionable information from very large scientific and commercial databases. A data mining process usually extracts rules by processing high-dimensional categorical and/or numerical data (more than 4 attributes). Classification, clustering, and association are the best-known and most widely used data mining tasks. However, the user often has to analyze hundreds of extracted rules in order to grasp valuable knowledge, and may not be familiar with machine learning techniques; the analysis of such rules by means of visual tools has therefore evolved rapidly in recent years. Visual data mining attempts to take advantage of humans' ability to perceive pattern and structure in visual form, and end users trust a result more when they fully understand it — which is precisely the purpose of visualization techniques. Many techniques for visualizing raw data have been proposed in the literature, and in recent years researchers have also focused on visualizing data mining results (knowledge visualization). Many tools exist to visualize data mining rules, but few of them can handle more than a few dozen rules effectively. In this thesis, beyond surveying the rule-visualization techniques for association, classification, and clustering presented by the research community over the last twenty years, we propose a new visualization technique for data mining rules based on On-Line Analytical Processing (OLAP). More specifically, the proposed technique uses the standard two-dimensional cross-tabulation table of most OLAP models, together with the notion of hierarchy, in order to visualize even a large number of data mining rules from all three of the aforementioned techniques. We also present experimental results demonstrating that the proposed technique is useful for analyzing and understanding the extracted rules.
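The cross-tabulation idea can be made concrete with a toy sketch: lay association rules out on a 2D OLAP-style grid, antecedents as rows, consequents as columns, confidence as the cell value. The rules below are made-up examples, not the thesis's data:

```python
# A minimal sketch of a cross-tab layout for association rules: each rule
# (antecedent, consequent, confidence) becomes one cell of a 2D table,
# letting many rules be scanned at a glance.

rules = [
    ("bread", "butter", 0.80),
    ("bread", "milk",   0.65),
    ("beer",  "chips",  0.72),
]

def cross_tab(rules):
    rows = sorted({a for a, _, _ in rules})
    cols = sorted({c for _, c, _ in rules})
    table = {r: {c: None for c in cols} for r in rows}   # None = no rule
    for antecedent, consequent, confidence in rules:
        table[antecedent][consequent] = confidence
    return table

table = cross_tab(rules)
print(table["bread"]["butter"])  # 0.8
```

Attaching a concept hierarchy to the row and column items (e.g., grouping "butter" and "milk" under "dairy") would then allow OLAP-style drill-down and roll-up over the rule table, which is the direction the thesis pursues.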
105

Entrepôts de données NoSQL orientés colonnes dans un environnement cloud / Columnar NoSQL data warehouses in the cloud environment.

Dehdouh, Khaled 05 November 2015 (has links)
The work presented in this thesis proposes approaches for building data warehouses using the columnar NoSQL model. The interest in NoSQL models is motivated on the one hand by the advent of big data and, on the other, by the inability of the relational model, usually used to implement data warehouses, to scale to very large volumes. Indeed, NoSQL models have become standards for storing and managing massive data. They were originally designed to build databases whose storage model is the key/value model; other models then appeared to account for the variability of the data: column-oriented, document-oriented, and graph-oriented. We chose the column-oriented NoSQL model for building massive data warehouses because it is the best suited to decision-support queries, which are defined over a set of columns (measures and dimensions) of the warehouse. However, the columnar NoSQL model offers no online analytical processing (OLAP) operators for exploiting the data warehouse. This thesis presents new solutions for the logical and physical modeling of columnar NoSQL data warehouses. We propose an approach for building data cubes that takes the characteristics of the columnar storage environment into account, and we define aggregation operators for creating OLAP cubes: C-CUBE (Columnar-CUBE), which builds column-stored OLAP cubes in a relational environment using the invisible join; MC-CUBE (MapReduce Columnar-CUBE), which builds column-stored OLAP cubes in a distributed environment, exploiting the invisible join and the MapReduce paradigm to parallelize processing; and CN-CUBE (Columnar NoSQL-CUBE), which handles the case where facts and dimensions are grouped in the same table, generating cubes from a warehouse denormalized according to a logical model we propose. We studied the performance of NoSQL dimensional data models and of our OLAP operators, and we propose a star join index suited to columnar NoSQL data warehouses, named C-SJI (Columnar-Star Join Index); to evaluate this proposal, we define a cost model measuring the impact of the index. We also propose a logical model named FLM (Flat Logical Model) for implementing columnar NoSQL data warehouses and enabling better support by this family of NoSQL DBMSs. To validate our contributions, we developed a software platform, CG-CDW (Cube Generation for Columnar Data Warehouses), which generates OLAP cubes from columnar data warehouses, and a columnar NoSQL decision-support benchmark, CNSSB (Columnar NoSQL Star Schema Benchmark), based on the Star Schema Benchmark (SSB); several tests demonstrated the effectiveness of the aggregation operators we propose.
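The MapReduce-over-columns idea behind an operator like MC-CUBE can be sketched in miniature: a map phase emits (dimension-key, measure) pairs from aligned column arrays, and a reduce phase sums per key. The column names and values are illustrative assumptions, not CNSSB data, and real MapReduce would shard the map and reduce phases across nodes:

```python
# A minimal single-process MapReduce-style sketch of building cube cells
# from column-stored data: map emits (dimension-key, measure), reduce sums.

from itertools import groupby

# Column storage: one list per column, aligned by row position.
region = ["EU", "US", "EU", "US", "EU"]
year   = [2014, 2014, 2015, 2015, 2015]
sales  = [10.0, 20.0, 30.0, 40.0, 50.0]

def map_phase(region, year, sales):
    return [((r, y), s) for r, y, s in zip(region, year, sales)]

def reduce_phase(pairs):
    pairs.sort(key=lambda kv: kv[0])          # shuffle/sort stand-in
    return {key: sum(v for _, v in group)
            for key, group in groupby(pairs, key=lambda kv: kv[0])}

cube = reduce_phase(map_phase(region, year, sales))
print(cube[("EU", 2015)])  # 80.0
```

Note how only the columns named in the query (here `region`, `year`, `sales`) ever need to be read, which is the property that makes column stores attractive for decision-support queries.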
106

Aplikace BI v systému pro zvyšování kvalifikace / Application of BI in system for skill increasing

Laušman, Jakub January 2010 (has links)
This diploma thesis presents an analysis and proposal of a pilot BI solution for Svoboda & Partner CZ s.r.o., a company engaged in automotive expert assessment. The fundamental objective is the pilot BI solution itself; partial objectives cover an analysis of the company and its needs arising from its business strategy, and a market survey of applications available to support the visualization of transactional reporting. These objectives are achieved by analyzing the company's business model in chapter seven and its needs and problems in chapter eight. Before that analysis, chapter six surveys available applications offering dynamically generated graphs to aid the visualization of transactional reporting; chapters nine through eleven then contain the main analysis and the preparation of the pilot BI and transactional reporting for the business. The thesis's contribution to the company is a complete analysis and delivery of all documents needed to decide whether or not to implement the proposed BI solution. Should the company decide positively, the BI solution is expected to broaden the ways of looking at corporate data and to enable more precise and more operational management, better performance, higher profits, and more efficient decision-making based on better data.
107

SynopSys: Foundations for Multidimensional Graph Analytics

Rudolf, Michael, Voigt, Hannes, Bornhövd, Christof, Lehner, Wolfgang 02 February 2023 (has links)
The past few years have seen a tremendous increase in often irregularly structured data that can be represented most naturally and efficiently in the form of graphs. Making sense of incessantly growing graphs is not only a key requirement in applications like social media analysis or fraud detection but also a necessity in many traditional enterprise scenarios. Thus, a flexible approach for multidimensional analysis of graph data is needed. Whereas many existing technologies require up-front modelling of analytical scenarios and are difficult to adapt to changes, our approach allows for ad-hoc analytical queries of graph data. Extending our previous work on graph summarization, in this position paper we lay the foundation for large graph analytics to enable business intelligence on graph-structured data.
108

The Planning OLAP Model

Jaecksch, Bernhard, Lehner, Wolfgang 26 January 2023 (has links)
A wealth of multidimensional OLAP models has been suggested in the past, tackling various problems of modeling multidimensional data. However, all of these models focus on navigational and query operators for grouping, selection, and aggregation. We argue that planning functionality is, next to reporting and analysis, an important part of OLAP in many businesses and as such should be represented as part of a multidimensional model. Navigational operators are not enough for planning; instead, new factual data is created or existing data is changed. To our knowledge, we are the first to suggest a multidimensional model with support for planning. Because the main data entities of a typical multidimensional model are used by both planning and reporting, we concentrate on extending an existing model, adding a set of novel operators that support an extensive set of typical planning functions.
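The distinction between navigational and planning operators can be illustrated with a toy sketch of a planning-style operator that *creates* new fact cells rather than only reading existing ones, e.g. copying one year's actuals into a plan version scaled by a growth factor. The cell layout and the operator itself are illustrative assumptions, not the paper's actual operator set:

```python
# A minimal sketch of a planning operator over a cube stored as a dict of
# cells: derive new "plan" cells from existing "actual" cells.

def plan_copy(cells, src_year, dst_year, factor):
    """cells: {(version, year, product): value}; returns cells plus a plan."""
    new_cells = dict(cells)
    for (version, year, product), value in cells.items():
        if version == "actual" and year == src_year:
            new_cells[("plan", dst_year, product)] = value * factor
    return new_cells

cells = {("actual", 2011, "widget"): 100.0,
         ("actual", 2011, "gadget"): 250.0}
planned = plan_copy(cells, 2011, 2012, 1.10)
print(round(planned[("plan", 2012, "widget")], 2))  # 110.0
```

A grouping or slicing operator would leave `cells` unchanged; the defining feature here is that the operator's output contains cells that did not exist before, which is what the authors argue multidimensional models should be able to express.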
109

Recomendação semântica de documentos de texto mediante a personalização de agregações OLAP. / Semantic recommendation of text documents through personalizing OLAP aggregation

Berbel, Talita dos Reis Lopes 23 March 2015 (has links)
With the rapid growth of unstructured data such as text documents, it becomes increasingly interesting and necessary to extract information from them to support decision making in business intelligence systems. Recommendations can be used in the OLAP process because they give users a tailored experience when exploring data, and combining recommendation with the personalisation of decision-makers' queries makes the recommendations increasingly relevant. The main contribution of this work is an effective solution for the semantic recommendation of documents through the personalisation of OLAP aggregation queries in a data warehousing environment. In order to aggregate and recommend documents, we propose the use of semantic similarity: a domain ontology and a statistical frequency measure are used to assess the similarity between documents. The similarity threshold used in the recommendation process is adjustable, and this personalisation gives the user an interactive way to improve the relevance of the results. The proposed case study is based on articles from PubMed and its domain ontology, used to build a prototype on real data. The experimental results are presented and discussed, showing that good recommendations and aggregations are possible with the suggested approach; the results are evaluated using the measures of precision, recall, and F1-measure.
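The evaluation measures cited above are standard and easy to state precisely. The recommended and relevant document IDs below are made-up examples, not the PubMed study's data:

```python
# A minimal sketch of precision, recall, and F1 for a recommendation run:
# precision = fraction of recommended documents that are relevant,
# recall = fraction of relevant documents that were recommended,
# F1 = their harmonic mean.

def precision_recall_f1(recommended, relevant):
    recommended, relevant = set(recommended), set(relevant)
    hits = len(recommended & relevant)
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(["d1", "d2", "d3", "d4"], ["d1", "d3", "d5"])
print(p, r, f1)  # 0.5 0.666... 0.571...
```

Raising the adjustable similarity threshold described in the abstract would typically shrink the recommended set, trading recall for precision; F1 summarizes that trade-off in a single number.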
110

Designing Conventional, Spatial, and Temporal Data Warehouses: Concepts and Methodological Framework

Malinowski Gajda, Elzbieta 02 October 2006 (has links)
Decision support systems are interactive, computer-based information systems that provide data and analysis tools to assist managers at different levels of an organization in the process of decision making. Data warehouses (DWs) have been developed and deployed as an integral part of decision support systems. A data warehouse is a database that stores the high volume of historical data required for analytical purposes. This data is extracted from operational databases, transformed into a coherent whole, and loaded into a DW during the extraction-transformation-loading (ETL) process. DW data can be dynamically manipulated using on-line analytical processing (OLAP) systems. DW and OLAP systems rely on a multidimensional model that includes measures, dimensions, and hierarchies. Measures are usually numeric additive values used for the quantitative evaluation of different aspects of an organization; dimensions provide different analysis perspectives, while hierarchies allow measures to be analyzed at different levels of detail. Nevertheless, designers as well as users currently find it difficult to specify the multidimensional elements required for analysis. One reason is the lack of conceptual models for DW and OLAP system design that would allow data requirements to be expressed at an abstract level without considering implementation details. Another problem is that many kinds of complex hierarchies arising in real-world situations are not addressed by current DW and OLAP systems. In order to help designers build conceptual models for decision-support systems and to help users better understand the data to be analyzed, in this thesis we propose the MultiDimER model: a conceptual model for representing multidimensional data for DW and OLAP applications.
Our model is mainly based on existing ER constructs — for example, entity types, attributes, and relationship types with their usual semantics — allowing the common concepts of dimensions, hierarchies, and measures to be represented. It also includes a conceptual classification of the different kinds of hierarchies existing in real-world situations and proposes graphical notations for them. Moreover, users of DW and OLAP systems increasingly demand the inclusion of spatial data, whose visualization reveals patterns that are difficult to discover otherwise; this advantage of using spatial data in the analysis process is widely recognized. However, although DWs typically include a spatial or location dimension, this dimension is usually represented in an alphanumeric format, and there is still no systematic study of the inclusion and management of hierarchies and measures represented using spatial data. To satisfy the growing requirements of decision-making users, we extend the MultiDimER model by allowing spatial data in the different elements composing the multidimensional model. The novelty of our contribution lies in the fact that multidimensional models are seldom used for representing spatial data; to succeed with our proposal, we applied research achievements in the field of spatial databases to the specific features of a multidimensional model. The spatial extension of a multidimensional model raises several issues addressed in this thesis, such as the influence of different topological relationships between the spatial objects forming a hierarchy on the procedures required for measure aggregation, the aggregation of spatial measures, and the inclusion of spatial measures without the presence of spatial dimensions, among others.
Moreover, one of the important characteristics of multidimensional models is the presence of a time dimension for keeping track of changes in measures. However, this dimension cannot be used to model changes in the other dimensions, so usual multidimensional models are asymmetric in the way they represent changes in measures and in dimensions. Further, there is still no analysis indicating which of the concepts developed for providing temporal support in conventional databases can be applied to, and are useful for, the different elements composing a multidimensional model. In order to handle temporal changes to all elements of a multidimensional model in a uniform manner, we introduce a temporal extension of the MultiDimER model. This extension is based on research in the area of temporal databases, which has been used successfully to model time-varying information for several decades. We propose the inclusion of different temporal types, such as valid time and transaction time, which are obtained from source systems, in addition to the DW loading time generated in the DW. We use this temporal support for a conceptual representation of time-varying dimensions, hierarchies, and measures. We also refer to specific constraints that should be imposed on time-varying hierarchies and to the problem of handling multiple time granularities between source systems and DWs. Furthermore, the design of DWs is not an easy task: it requires considering all phases from requirements specification to final implementation, including the ETL process, and it must take into account that the inclusion of different data items in a DW depends both on users' needs and on data availability in source systems. Currently, however, designers must rely on their experience, due to the lack of a methodological framework that considers these aspects.
In order to assist developers during the DW design process, we propose a methodology for the design of conventional, spatial, and temporal DWs. We cover the different phases of requirements specification and conceptual, logical, and physical modeling. We include three different methods for requirements specification, depending on whether users, operational data sources, or both are the driving force in the process of requirements gathering, and we show how each method leads to the creation of a conceptual multidimensional model. We also present the logical and physical design phases, covering DW structures and the ETL process. To ensure the correctness of the proposed conceptual models — with conventional data, with spatial data, and with time-varying data — we formally define their syntax and semantics. To assess the usability of our conceptual model, including its representation of different kinds of hierarchies as well as its spatial and temporal support, we present real-world examples. And so that the proposed conceptual solutions can be implemented, we include their logical representations using relational and object-relational databases.
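The core mechanism behind the hierarchies this model classifies — analyzing an additive measure at coarser levels of detail — can be shown with a toy roll-up. The hierarchy and sales figures are illustrative assumptions, not the thesis's examples:

```python
# A minimal sketch of rolling an additive measure up one level of a
# dimension hierarchy (city -> country): each child's value is added into
# its parent's total.

hierarchy = {                      # child member -> parent member
    "Brussels": "Belgium", "Antwerp": "Belgium",
    "Paris": "France",
}
sales_by_city = {"Brussels": 120.0, "Antwerp": 80.0, "Paris": 200.0}

def roll_up(measures, parent_of):
    """Aggregate an additive measure one level up the hierarchy."""
    out = {}
    for member, value in measures.items():
        parent = parent_of[member]
        out[parent] = out.get(parent, 0.0) + value
    return out

totals = roll_up(sales_by_city, hierarchy)
print(totals)  # {'Belgium': 200.0, 'France': 200.0}
```

The complex hierarchy kinds the thesis classifies (e.g., non-strict hierarchies, where a child has several parents) break the simple one-parent assumption in `parent_of` above, which is precisely why aggregation procedures must be defined per hierarchy kind.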
