211
Processing reporting function views in a data warehouse environment. Lehner, Wolfgang; Hummer, W.; Schlesinger, L. 02 June 2022
Reporting functions are a novel technique for formulating sequence-oriented queries in SQL. They extend the classical way of grouping and applying aggregation functions with an additional column-based ordering, partitioning, and windowing mechanism. The application area of reporting functions ranges from simple ranking queries (TOP(n) analyses) through cumulative analyses (year-to-date) to sliding-window queries. We discuss the problem of deriving reporting function queries from materialized reporting function views, one of the most important issues in processing queries efficiently in a data warehouse environment. Two different derivation algorithms, including their relational mappings, are introduced and compared in a test scenario.
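The partitioning, ordering, and windowing mechanism described in the abstract can be illustrated with a standard SQL window ("reporting") function. A minimal sketch using Python's built-in sqlite3 module (requires SQLite 3.25 or later for window-function support; the sales table and its numbers are invented for illustration):

```python
import sqlite3

# Illustrative sales table (invented data).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, month INTEGER, revenue INTEGER)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("east", 1, 100), ("east", 2, 150), ("east", 3, 120),
     ("west", 1, 200), ("west", 2, 180), ("west", 3, 220)],
)

# Cumulative (year-to-date) revenue per region: PARTITION BY supplies the
# grouping, ORDER BY the sequence; the default frame yields a running sum.
ytd = con.execute(
    """SELECT region, month,
              SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS ytd
       FROM sales
       ORDER BY region, month"""
).fetchall()
print(ytd)  # running totals per region
```

A TOP(n) ranking query would use `RANK() OVER (PARTITION BY ... ORDER BY ...)` in the same way, and a sliding window would add an explicit `ROWS BETWEEN ... PRECEDING AND CURRENT ROW` frame.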
212
On solving the view selection problem in distributed data warehouse architectures. Lehner, Wolfgang; Bauer, Andreas. 02 June 2022
The use of materialized views in a data warehouse installation is a common technique for speeding up queries, most of them aggregation queries. The problems that come along with materialized aggregate views have triggered a huge variety of proposals, such as picking the optimal set of aggregation combinations, transparently rewriting user queries to take advantage of the summary data, or synchronizing pre-computed summary data as soon as the base data change. This paper focuses on the problem of view selection in the context of distributed data warehouse architectures. While much research has been done on the view selection problem in the central case, we are not aware of any other work discussing view selection in distributed data warehouse systems. The paper proposes an extension of the concept of an aggregation lattice to capture the distributed semantics. Moreover, we extend a greedy selection algorithm with an adequate cost model for the distributed case. In a performance study, we finally compare our findings with the approach of applying a selection algorithm locally to each node in a distributed warehouse environment.
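The greedy, lattice-based selection the abstract builds on can be sketched as follows. The attribute sets, row counts, and benefit function here are illustrative assumptions in the spirit of the classic central-case algorithm, not the paper's distributed cost model:

```python
# Greedy view selection on a small aggregation lattice (illustrative sketch).
# A materialized view can answer a query iff its grouping attributes are a
# superset of the query's grouping attributes; the cost of answering a query
# is taken to be the row count of the view it is answered from.
views = {
    frozenset({"product", "store"}): 6_000_000,  # base cuboid, always kept
    frozenset({"product"}): 200_000,
    frozenset({"store"}): 1_000,
    frozenset(): 1,
}

def answer_cost(query, materialized):
    """Cost of answering `query`: rows of the smallest materialized superset."""
    return min(views[m] for m in materialized if m >= query)

def greedy_select(k):
    """Materialize k views beyond the base cuboid, maximizing total benefit."""
    chosen = {frozenset({"product", "store"})}
    for _ in range(k):
        def benefit(v):
            # Savings summed over all lattice nodes that v can answer.
            return sum(max(answer_cost(w, chosen) - views[v], 0)
                       for w in views if w <= v)
        best = max((v for v in views if v not in chosen), key=benefit)
        chosen.add(best)
    return chosen

print(greedy_select(1))
```

In the distributed setting the paper addresses, the cost model would additionally account for which node holds each view and for network transfer, but the greedy skeleton stays the same.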
213
Optimistic Coarse-Grained Cache Semantics for Data Marts. Lehner, Wolfgang; Thiele, Maik; Albrecht, Jens. 15 June 2022
Data marts and caching are two closely related concepts in the domain of multi-dimensional data. Both store pre-computed data to provide fast response times for complex OLAP queries, and for both it must be guaranteed that every query can be processed completely. However, they differ greatly in their update behaviour, which we exploit to build a specific data mart extended by cache semantics. In this paper, we introduce a novel cache exploitation concept for data marts, coarse-grained caching, in which the containedness check for a multi-dimensional query is done by comparing expected and actual cardinalities. To this end, we subdivide the multi-dimensional data into coarse partitions, so-called cublets, which allow completeness criteria for incoming queries to be specified. We show that during query processing the completeness check incurs no additional cost.
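The cardinality-based containedness check can be sketched as follows: a query is answerable entirely from the cache iff, for every partition it touches, the cached row count equals the expected count recorded in the metadata. The cublet keys and counts below are invented for illustration:

```python
# Illustrative containedness check via cardinalities (invented data).
# The cache is complete for a query iff every cublet the query touches
# holds exactly as many rows as the metadata says it should.

# Expected cardinality per cublet, e.g. recorded in data-mart metadata.
expected = {("2023", "east"): 120, ("2023", "west"): 90, ("2024", "east"): 75}

# Actual cached row counts; ("2024", "east") was only partially loaded.
cached = {("2023", "east"): 120, ("2023", "west"): 90, ("2024", "east"): 40}

def is_complete(query_cublets):
    """True iff every touched cublet is fully present in the cache."""
    return all(cached.get(c, 0) == expected[c] for c in query_cublets)

print(is_complete([("2023", "east"), ("2023", "west")]))  # True
print(is_complete([("2024", "east")]))                    # False: 40 != 75
```

Because the actual counts are already produced as a by-product of loading the partitions, comparing them with the expected counts adds no extra query-processing work, which is the point the abstract makes.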
214
Building a real data warehouse for market research. Lehner, Wolfgang; Albrecht, J.; Teschke, M.; Kirsche, T. 08 April 2022
This paper reflects the results of the evaluation phase of building a data production system for the retail research division of GfK, Europe's largest market research company. The application-specific requirements, such as end-user needs or data volume, differ considerably from the data warehouses discussed in the literature, making this a "real" data warehouse. In a case study, these requirements are compared with state-of-the-art solutions offered by leading software vendors; each of the common architectures (MOLAP, ROLAP, HOLAP) was represented by a product. The result of this comparison is that all systems have to be tailored massively to GfK's needs, especially to cope with metadata management and the maintenance of aggregations.
216
Freight warehouse to architecture school: a representation of ideas in hardline, sketch, and text. Corwin, Scott O. January 1994
The Freight Warehouse Architecture Studio is adjacent to Virginia Commonwealth University in Richmond. Although designed as an adaptive reuse, it is a direct result of two things: a reading of Eisenman's Koizumi Project and working in the office for a few weeks immediately preceding commencement of the studio. The reading was the onset of the theory necessary for the study, and the experience in the office offered the opportunity to establish the direction for the project.
The question of culture, understanding, and reading yields the question of reconciling personal history and community history: how an architect intervenes in a location fraught with tradition. As a result, there is "a condition of a space evolving from within, not an insertion, from without.... So what is interesting about this space is we set up the mechanism of interplay, but we did not know what was going to happen. In other words, I am not saying it is a beautiful design.... In a sense it is mediated because the hand of design is taken away..." / Master of Architecture
217
Design, Implementation and Analysis of a Description Model for Complex Archaeological Objects / Elaboration, mise en œuvre et analyse d'un modèle de description d'objets archéologiques complexes. Ozturk, Aybuke. 09 July 2018
Ceramics are among the most important archaeological materials for helping to reconstruct past civilizations. Information about complex ceramic objects is composed of textual, numerical and multimedia data, which raises several research challenges addressed in this thesis. From a technical perspective, ceramic databases have different file formats, access protocols and query languages. From a data perspective, ceramic data are heterogeneous and experts have different ways of representing and storing data. There is no standardized content and terminology, especially in terms of the description of ceramics. Moreover, data navigation and observation are difficult. Data integration is also difficult due to the presence of various dimensions from distant databases, which describe the same categories of objects in different ways.
Therefore, the research project presented in this thesis aims to provide archaeologists and archaeological scientists with tools for enriching their knowledge by combining different information on ceramics. We divide our work into two complementary parts: (1) modeling of complex archaeological data and (2) clustering analysis of complex archaeological data. The first part of this thesis is dedicated to the design of a complex archaeological database model for the storage of ceramic data; this database also feeds a data warehouse for online analytical processing (OLAP). The second part is dedicated to an in-depth clustering (categorization) analysis of ceramic objects. To this end, we propose a fuzzy approach, in which a ceramic object may belong to more than one cluster (category). Such a fuzzy approach is well suited to collaborating with experts, opening new discussions based on the clustering results.
We contribute to fuzzy clustering in three sub-tasks: (i) a novel fuzzy clustering initialization method that keeps the complexity of the approach linear; (ii) an innovative quality index that allows the optimal number of clusters to be found; and (iii) a multiple clustering analysis approach that builds smart links between visual, textual and numerical data, helping to combine all types of ceramic information. Moreover, the methods we propose could also be adapted to other application domains such as economics or medicine.
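The fuzzy idea, where a ceramic object belongs to every cluster to some degree, can be illustrated with the standard fuzzy c-means membership formula. This is a generic sketch; the thesis's own initialization method and quality index are not reproduced here:

```python
import math

# Standard fuzzy c-means membership computation: every object receives a
# degree of membership in every cluster, and the degrees sum to 1, so an
# object can belong to more than one category at once.
def memberships(point, centers, m=2.0):
    """Membership of `point` in each cluster center (fuzzifier m > 1)."""
    dists = [math.dist(point, c) for c in centers]
    if any(d == 0.0 for d in dists):  # point coincides with a center
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((di / dj) ** exp for dj in dists) for di in dists]

# Two invented cluster centers in a 2-D feature space.
centers = [(0.0, 0.0), (10.0, 0.0)]
u = memberships((2.0, 0.0), centers)  # mostly cluster 0, partly cluster 1
print(u)
```

Hard k-means would assign the point entirely to the nearer center; the fuzzy degrees instead give experts a graded result to discuss, which is the collaboration benefit the abstract describes.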
218
Coping with an organisation's information overload using intelligent agents. Kordaris, Ioannis. 26 August 2014
Information overload is a fundamental problem for any organisation. The accumulation of large volumes of information in information systems causes users stress and strain, impairing their ability to make decisions. The effect of information overload on organisations is therefore destructive, and it must be addressed. There are several ways of coping with information overload, such as decision support systems, information filtering systems, data warehouses, and other techniques from data mining and artificial intelligence, such as intelligent agents.
Intelligent agents are applications at the boundary of artificial intelligence that act autonomously, collecting information, training themselves, and communicating with the user and with one another. Multi-agent systems are often implemented to solve an organisational problem. Their goal is to ease users' decision making by recommending information based on their preferences.
The purpose of this thesis is to analyse intelligent agents in depth as an effective method for coping with information overload, to propose experimental recommendation agents, and to examine successful implementations. Specifically, an intelligent tutoring system for enhancing e-Learning/e-Teaching is presented, an agent system for Flickr is proposed, and the recommendation system of Last.fm and the recommendation algorithm of Amazon are examined.
Finally, an experimental study of an intelligent recommendation agent that successfully addressed the perceived information overload of users of a hypothetical online store is analysed. The results of the experiment show the effect of perceived information overload and information load on choice quality, choice confidence, and the perceived interaction between the online store and the user, and demonstrate the decisive contribution of intelligent agents to coping with information overload.
219
Metode i postupci ubrzavanja operacija i upita u velikim sistemima baza i skladišta podataka (Big Data sistemi) / The methods and procedures for accelerating operations and queries in large database systems and data warehouses (Big Data systems). Ivković, Jovan. 29 September 2016
<p>The research topic of this doctoral thesis is the possibility of establishing a model of a Big Data system, with a corresponding software-hardware architecture, to support sensor networks and IoT devices. The developed model is based on energy-efficient, heterogeneous, massively parallelized SoC hardware platforms, supported by a software application architecture (such as OpenCL) for unified operation.<br />In addition to the current hardware, software and network computing technologies and architectures intended to run the subcomponents of the modelled system, the thesis gives a historical overview of their development, emphasizing the tendency of computing paradigms to move cyclically through eras of centralization and decentralization of computing. The thesis presents technologies and methods for accelerating operations in databases and data warehouses, and investigates how Big Data information systems can be better prepared to meet the needs of the newly announced IT revolution in general-purpose computing, so-called ubiquitous computing, and the Internet of Things (IoT).</p>
220
Vers une organisation globale durable de l'approvisionnement des ménages : bilans économiques et environnementaux de différentes chaînes de distribution classiques et émergentes depuis l'entrepôt du fournisseur jusqu'au domicile du ménage / Towards a global sustainable organisation of household supply. Ayadi, Abdessalem. 26 September 2014
Urban logistics, and the last mile in particular, is a major concern for cities today. To address this concern, the introductory chapter establishes a history of the urban logistics problem, allowing a better understanding of its development over the years and showing that it is essential to study the distribution chain in its entirety in order to solve the problem. We faced a subject made daunting by its complexity and by the absence of complete, reliable data. In addition, recent years have seen a multiplication of logistics schemes, both for the delivery of stores from suppliers' warehouses and for supplying customers from retail outlets.
We therefore set out to identify all existing and emerging logistics organisations in France and elsewhere (two one-year research stays in England and Switzerland). To do this, the second chapter determines the parameters that differentiate the modes of organisation upstream (from the supplier's warehouse to the retail outlet) and downstream (from the retail outlet to the customer's home). To date, however, there is no complete economic and environmental assessment for arbitrating between the different forms of traditional and distance retailing that takes into account the characteristics of the product families (non-food, dry, fresh, frozen) and the diversity of their delivery modes.
Faced with these constraints, this research relied on field surveys, which provided the opportunity to establish numerous contacts with actors of large-scale retailing and thus to collect first-hand, previously unpublished technical and economic data. In addition to resolving this empirical deadlock in the third chapter, the thesis also resolves methodological deadlocks in reconstructing and evaluating logistics costs and emissions (for upstream storage warehouses and transit platforms, and for downstream retail outlets and shared platforms) and vehicle costs and emissions (articulated and rigid trucks upstream; light commercial vehicles, private cars, public transport, two-wheelers, and walking downstream).
Finally, this work has led to the construction of a database and the development of a decision-support tool from which, in the fourth chapter, the economic and environmental appraisal of the entire chain, from the supplier's warehouse to the household, is derived. This tool should prove very useful for public policy and for the future strategies of large retailers and their logistics providers in favouring economical and sustainable modes of organisation, and even for the final customer in estimating the costs and emissions of a purchase across the different traditional and distance retailing alternatives.