171

Datový sklad se zaměřením na optimalizaci ETL procesu / Data Warehouse with a Focus on the ETL Process Optimization

Veselý, Ivan January 2011
This thesis focuses on building a data warehouse and the process of its implementation. It covers an introduction to data warehousing, the implementation of a data warehouse, and the process of its population.
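The extract-transform-load cycle the abstract refers to can be sketched minimally as three stages; the field and table names below ("product", "qty", "fact_sales") are invented for illustration, not taken from the thesis:

```python
# Minimal, hypothetical ETL sketch: extract raw operational rows,
# transform (clean, conform, derive), load into a warehouse fact table.

def extract(source_rows):
    """Pull raw records from an operational source (here, a list of dicts)."""
    return list(source_rows)

def transform(rows):
    """Clean and conform rows: normalize keys, derive a total, drop invalid rows."""
    out = []
    for r in rows:
        if r.get("qty") is None or r.get("price") is None:
            continue  # reject incomplete records
        out.append({
            "product": r["product"].strip().upper(),  # conform the dimension value
            "total": r["qty"] * r["price"],           # derived measure
        })
    return out

def load(fact_table, rows):
    """Append conformed rows to the fact table (an incremental load)."""
    fact_table.extend(rows)
    return fact_table

source = [
    {"product": " widget ", "qty": 3, "price": 2.0},
    {"product": "gadget", "qty": None, "price": 5.0},  # rejected by transform
]
fact_sales = load([], transform(extract(source)))
```

In a real warehouse each stage would read from and write to databases; the shape of the pipeline stays the same.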
172

Vytváření OLAP modelů reportů na základě metadat / OLAP Reports Model Creating Based on Metadata

Franek, Zdenko January 2010
Building reports traditionally requires knowledge of the database schema and of the query language used to extract the report's data. In reporting services for database systems and Business Intelligence systems, there is an initiative to separate the role of the database specialist from that of the report author. One solution is a metadata interlayer between the database schema and the report, called the report model. Its use in the reporting process is currently unsupported or only very limited. The aim of this thesis is to propose ways of using the report model in the report-building process, with an emphasis on OLAP analysis.
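A rough illustration of such a metadata interlayer: the report author picks business-friendly field names, and the model maps them onto the physical schema so no SQL knowledge is needed. All names here (`REPORT_MODEL`, `fact_sales`, `dim_customer`) are invented examples, not the thesis's actual model:

```python
# Hypothetical report model: business field name -> physical column.
REPORT_MODEL = {
    "Customer": "dim_customer.name",
    "Revenue": "fact_sales.amount",
}

def build_query(fields):
    """Translate report-model field names into a SQL projection."""
    cols = ", ".join(f'{REPORT_MODEL[f]} AS "{f}"' for f in fields)
    return (f"SELECT {cols} "
            "FROM fact_sales JOIN dim_customer USING (customer_id)")

query = build_query(["Customer", "Revenue"])
```

The join path is fixed here for brevity; a full report model would also encode relationships between tables so joins can be derived rather than hard-coded.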
173

Smart Cube Predictions for Online Analytic Query Processing in Data Warehouses

Belcin, Andrei 01 April 2021
A data warehouse (DW) is a transformation of many sources of transactional data integrated into a single, non-volatile, time-variant collection that can provide decision support to managerial roles within an organization. For this application, the database server needs to process multiple users' queries by joining various datasets and loading the result into main memory to begin calculations. In current systems, this process reacts to users' input and can be undesirably slow. Previous studies showed that personalizing to a single user's query patterns and loading the resulting smaller subset into main memory significantly shortened the query response time. The LPCDA framework developed in this research handles multiple users' query demands, where the query patterns are subject to change (so-called concept drift) and noise. To this end, the LPCDA framework detects changes in user behaviour and dynamically adapts the personalized smart cube definition for the group of users. Numerous data marts (DMs), as components of the DW, are subject to intense aggregations to assist analytics at the request of automated systems and human users' queries. Consequently, there is a growing need to properly manage the supply of data into the main memory closest to the CPU that computes the query, in order to reduce the response time from the moment a query arrives at the DW server. This thesis therefore proposes an end-to-end adaptive learning ensemble for resource allocation of cuboids within a DM, so that a relevant smart cube is constructed before it is needed, adopting the just-in-time inventory management strategy applied in other real-world scenarios. The algorithms comprising the ensemble draw on predictive methodologies from Bayesian statistics, data mining, and machine learning, and reflect changes in the data-generating process using a number of change detection algorithms.
Therefore, given different operational constraints and data-specific considerations, the ensemble can, to an effective degree, determine which cuboids in the lattice of a DM to pre-construct into a smart cube ahead of users submitting their queries, thereby delivering a quicker response than static schema views or no action at all.
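The concept drift the abstract mentions can be detected with a cumulative test such as Page-Hinkley. The sketch below is a generic illustration of that idea with arbitrary thresholds, not the LPCDA framework itself:

```python
class PageHinkley:
    """Minimal Page-Hinkley drift detector over a stream of observations
    (e.g. per-interval query rates). delta and threshold are illustrative."""

    def __init__(self, delta=0.05, threshold=5.0):
        self.delta = delta          # tolerated magnitude of change
        self.threshold = threshold  # alarm level
        self.n = 0
        self.mean = 0.0
        self.cum = 0.0              # cumulative deviation from the mean
        self.cum_min = 0.0          # smallest cumulative deviation seen

    def update(self, x):
        """Consume one observation; return True when drift is signalled."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.threshold

detector = PageHinkley()
stable = [detector.update(1.0) for _ in range(30)]   # steady query rate
shifted = [detector.update(5.0) for _ in range(5)]   # rate jumps: drift
```

A drift signal would then trigger re-learning of the smart cube definition rather than serving stale precomputations.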
174

Analýza registra zmlúv / Analysis of the Register of Contracts

Kuciaková, Andrea January 2019
The thesis describes the acquisition and analysis of data from the public register of contracts and other defined sources. The introduction surveys existing scraping tools and their use, as well as Business Intelligence and data warehouses. The next section is devoted to identifying the source data. Subsequently, the procedure and design of the solution are described, on the basis of which the data warehouse was designed. The implementation part covers the ETL processes and the creation of the final reports.
175

Flexible Data Extraction for Analysis using Multidimensional Databases and OLAP Cubes / Flexibelt extraherande av data för analys med multidimensionella databaser och OLAP-kuber

Jernberg, Robert, Hultgren, Tobias January 2013
Bright is a company that provides customer and employee satisfaction surveys and uses this information to provide feedback to its customers. Data from the surveys are stored in a relational database, and information is generated both by querying the database directly and by analysing extracted data. As the amount of data grows, generating this information takes increasingly more time. Extracting the data requires significant manual work and is in practice avoided. As this is not an uncommon issue, there is a substantial theoretical framework around the area. The aim of this degree project is to explore methods for achieving flexible and efficient data analysis on large amounts of data. This was implemented using a multidimensional database designed for analysis as well as an OnLine Analytical Processing (OLAP) cube built using Microsoft's SQL Server Analysis Services (SSAS). The cube was designed with the possibility to extract data on an individual level through PivotTables in Excel. Analysis of the implemented prototype showed that it consistently delivers correct results several times as efficiently as the current solution, while also making new types of analysis possible and convenient. It is concluded that the use of an OLAP cube was a good choice for the issue at hand, and that SSAS provided the features necessary for a functional prototype. Finally, recommendations on possible further developments were discussed.
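What an OLAP cube precomputes can be shown in miniature: every cuboid in the lattice over a set of dimensions is a group-by aggregate of a measure. The dimension and measure names below are invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

def cuboids(rows, dims, measure):
    """Compute every cuboid in the lattice over `dims` by summing `measure`.
    A tiny stand-in for what an OLAP cube materializes up front."""
    result = {}
    for r in range(len(dims) + 1):
        for group in combinations(dims, r):      # one cuboid per dim subset
            agg = defaultdict(float)
            for row in rows:
                key = tuple(row[d] for d in group)
                agg[key] += row[measure]
            result[group] = dict(agg)
    return result

facts = [
    {"region": "EU", "year": 2012, "score": 4.0},
    {"region": "EU", "year": 2013, "score": 3.0},
    {"region": "US", "year": 2012, "score": 5.0},
]
lattice = cuboids(facts, ("region", "year"), "score")
```

With the full lattice materialized, a PivotTable-style query at any granularity becomes a dictionary lookup instead of a scan over the fact table.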
176

El modelo GOLD: un modelo conceptual orientado a objetos para el diseño de aplicaciones OLAP / The GOLD model: an object-oriented conceptual model for the design of OLAP applications

Trujillo, Juan 21 June 2001
No description available.
177

Webový portál skladových zásob / Web Portal of the Goods Store

Obrátil, Tomáš January 2008
This master's thesis presents a web portal for the goods store of ZZM spol. s r.o., which should improve the availability of goods and services to ZZM's customers and provide a good way to evaluate the receipt and turnover of goods in the separate regions managed by ZZM. The project makes use of PHP and MySQL. The application includes technology for authentication, security, and sessions for tracking behaviour and obtaining information about customers. To support decision making, aggregated data will be presented using OLAP technology. The web portal will be an independent application communicating with ZZM's current internal system via the PDK communication protocol, version 6.
178

Metody pro podporu rozhodování v prostředí lékařské aplikace / Decision Support Methods in a Medical Application

Mrázek, Petr January 2009
The diploma thesis deals with extending an existing medical application with means for decision support. The first part of the work focuses on the general problems of data warehouses, OLAP, and data mining. The second part covers the design and implementation of the extension in the form of an application that enables OLAP analysis of the gathered medical data.
179

OLAP REPORTING APPLICATION USING OFFICE WEB COMPONENTS

Kasi Reddy, Swathi Reddy 13 September 2007
No description available.
180

A Comparison of Leading Database Storage Engines in Support of Online Analytical Processing in an Open Source Environment

Tocci, Gabriel 01 May 2013
Online Analytical Processing (OLAP) has become the de facto data analysis technology used in modern decision support systems. It has experienced tremendous growth and is among the top priorities for enterprises. Open source systems have become an effective alternative to proprietary systems in terms of cost and function. The purpose of the study was to investigate the performance of two leading database storage engines in an open source OLAP environment. Despite recent upgrades in performance features for the InnoDB database engine, the MyISAM database engine is shown to outperform the InnoDB database engine under a standard benchmark. This result was demonstrated in tests that included concurrent user sessions as well as asynchronous user sessions, using data sets ranging from 6GB to 12GB. Although MyISAM outperformed InnoDB in all tests performed, InnoDB provides ACID-compliant transaction technologies that are beneficial in a hybrid OLAP/OLTP system.
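A storage-engine comparison of this kind boils down to timing the same workload against each backend. The harness below is a hypothetical sketch of that shape: it times arbitrary Python callables rather than real MySQL engines:

```python
import statistics
import time

def benchmark(run_query, workload, repeats=3):
    """Run the whole workload `repeats` times against one backend and
    return the mean wall-clock time per pass."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        for q in workload:
            run_query(q)  # stand-in for executing SQL on a storage engine
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

# Compare two toy "engines" on an identical workload.
def engine_a(n):
    return sum(range(n))               # lightweight backend

def engine_b(n):
    return sum(i * i for i in range(n))  # heavier backend

workload = [10_000] * 20
mean_a = benchmark(engine_a, workload)
mean_b = benchmark(engine_b, workload)
```

A real benchmark would additionally control caching, warm-up, and concurrency, since those dominate engine comparisons at the 6GB-12GB scale the study used.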
