231

Vytváření OLAP modelů reportů na základě metadat / OLAP Reports Model Creating Based on Metadata

Franek, Zdenko January 2010
An important part of a report creator's knowledge is knowledge of the database schema and of the database query language from which the data for the report are extracted. In reporting services for database systems and Business Intelligence systems, there is an initiative to separate the role of the database specialist from the role of the report author. One solution is to use a metadata interlayer between the database schema and the report. This interlayer is called the report model. Its use in the reporting process is currently unsupported or only very limited. The aim of this thesis is to propose how the report model can be used in the process of building reports, with an emphasis on OLAP analysis.
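A loose illustration of such a metadata interlayer (not taken from the thesis; all entity, table, and column names are hypothetical): the report author works with business names, and the model translates them into the physical schema and query language.

    from dataclasses import dataclass

    @dataclass
    class ModelField:
        label: str    # business name shown to the report author
        table: str    # physical table behind the field
        column: str   # physical column

    # Hypothetical report model mapping business names onto the schema.
    MEASURES   = {"Revenue": (ModelField("Revenue", "fact_sales", "amount"), "SUM")}
    DIMENSIONS = {"Region": ModelField("Region", "dim_customer", "region")}

    def build_report_query(measure, dimension):
        (m, agg), d = MEASURES[measure], DIMENSIONS[dimension]
        return (f"SELECT {d.column}, {agg}({m.column}) "
                f"FROM {m.table} JOIN {d.table} USING (customer_id) "
                f"GROUP BY {d.column}")

    print(build_report_query("Revenue", "Region"))

The author never sees the generated SQL; changing the physical schema only requires updating the model, not the reports.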
232

Smart Cube Predictions for Online Analytic Query Processing in Data Warehouses

Belcin, Andrei 01 April 2021
A data warehouse (DW) is a transformation of many sources of transactional data integrated into a single collection that is non-volatile and time-variant and that can provide decision support to managerial roles within an organization. For this application, the database server needs to process multiple users' queries by joining various datasets and loading the result into main memory to begin calculations. In current systems this process is reactive to users' input and can be undesirably slow. Previous studies showed that personalizing the schema to a single user's query patterns and loading the resulting smaller subset into main memory significantly shortened the query response time. The LPCDA framework developed in this research handles multiple users' query demands, where the query patterns are subject to change (so-called concept drift) and noise. To this end, the LPCDA framework detects changes in user behaviour and dynamically adapts the personalized smart cube definition for the group of users. Numerous data marts (DMs), as components of the DW, are subject to intense aggregations to support analytics requested by automated systems and human users. Consequently, there is a growing need to properly manage the supply of data into the main memory closest to the CPU that computes the query, in order to reduce the response time from the moment a query arrives at the DW server. This thesis therefore proposes an end-to-end adaptive learning ensemble for resource allocation of cuboids within a DM, so that a relevant smart cube is constructed before it is needed, adopting the just-in-time inventory management strategy applied in other real-world scenarios. The algorithms comprising the ensemble draw on predictive methodologies from Bayesian statistics, data mining, and machine learning, and reflect changes in the data-generating process using a number of change detection algorithms. Therefore, given different operational constraints and data-specific considerations, the ensemble can, to an effective degree, determine which cuboids in the lattice of a DM to pre-construct into a smart cube ahead of users submitting their queries, thereby providing a quicker response than static schema views or no action at all.
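A much-simplified sketch of the idea (not the LPCDA framework itself, which combines Bayesian, data-mining, and machine-learning predictors with dedicated change-detection algorithms): track cuboid request frequencies, flag drift naively, and greedily pick the cuboids that fit a memory budget. All names, sizes, and thresholds below are assumptions.

    from collections import Counter, deque

    class CuboidPredictor:
        """Toy stand-in for the ensemble: decide which cuboids of a data mart
        to pre-aggregate into the smart cube before queries arrive."""

        def __init__(self, window=1000, budget_mb=400):
            self.history = deque(maxlen=window)   # recent cuboid requests
            self.budget_mb = budget_mb            # main-memory budget

        def observe(self, cuboid):
            self.history.append(cuboid)

        def drift_detected(self, recent=100):
            # Naive change detection: compare the last `recent` requests
            # against the rest of the window.
            if len(self.history) < 2 * recent:
                return False
            old = Counter(list(self.history)[:-recent])
            new = Counter(list(self.history)[-recent:])
            overlap = sum((new & old).values()) / recent
            return overlap < 0.5                  # assumed threshold

        def select_smart_cube(self, sizes_mb):
            # Greedily keep the most frequently requested cuboids that fit.
            chosen, used = [], 0
            for cuboid, _ in Counter(self.history).most_common():
                size = sizes_mb.get(cuboid, 0)
                if used + size <= self.budget_mb:
                    chosen.append(cuboid)
                    used += size
            return chosen

    p = CuboidPredictor()
    for q in ["region_month", "region_month", "product_day", "region_month"]:
        p.observe(q)
    print(p.select_smart_cube({"region_month": 120, "product_day": 200, "full_detail": 900}))
    # -> ['region_month', 'product_day'] fit within the 400 MB budget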
233

Analyse von Logistikdaten: Neue Erkenntnisse - mit alten Techniken gewinnen / Analysing logistics data: gaining new insights with old techniques

Schulze, Frank January 2014
Motivation • Why should we analyse data? Beyond Excel • How can we analyse data? Case studies • What insights have we gained? • Floor space requirements of assembly • Labour requirements in logistics • (Auto-)correlation
234

Flexible Data Extraction for Analysis using Multidimensional Databases and OLAP Cubes / Flexibelt extraherande av data för analys med multidimensionella databaser och OLAP-kuber

Jernberg, Robert, Hultgren, Tobias January 2013
Bright is a company that provides customer and employee satisfaction surveys, and uses this information to provide feedback to its customers. Data from the surveys are stored in a relational database, and information is generated both by directly querying the database and by doing analysis on extracted data. As the amount of data grows, generating this information takes more and more time. Extracting the data requires significant manual work and is in practice avoided. As this is not an uncommon issue, there is a substantial theoretical framework around the area. The aim of this degree project is to explore the different methods for achieving flexible and efficient data analysis on large amounts of data. This was implemented using a multidimensional database designed for analysis as well as an OnLine Analytical Processing (OLAP) cube built using Microsoft's SQL Server Analysis Services (SSAS). The cube was designed with the possibility to extract data on an individual level through PivotTables in Excel. The implemented prototype was analyzed, showing that it consistently delivers correct results several times more efficiently than the current solution, while also making new types of analysis possible and convenient. It is concluded that the use of an OLAP cube was a good choice for the issue at hand, and that the use of SSAS provided the necessary features for a functional prototype. Finally, recommendations on possible further developments were discussed. / Bright är ett företag som tillhandahåller undersökningar för kund- och medarbetarnöjdhet, och använder den informationen för att ge återkoppling till sina kunder. Data från undersökningarna sparas i en relationsdatabas och information genereras både genom att direkt fråga databasen såväl som att göra manuell analys på extraherad data. När mängden data ökar så ökar även tiden som krävs för att generera informationen. För att extrahera data krävs en betydande mängd manuellt arbete och i praktiken undviks det. Då detta inte är ett ovanligt problem finns det ett gediget teoretiskt ramverk kring området. Målet med detta examensarbete är att utforska de olika metoderna för att uppnå flexibel och effektiv dataanalys på stora mängder data. Det implementerades genom att använda en multidimensionell databas designad för analys samt en OnLine Analytical Processing (OLAP)-kub byggd med Microsoft SQL Server Analysis Services (SSAS). Kuben designades med möjligheten att extrahera data på en individuell nivå med PivotTables i Excel. Den implementerade prototypen analyserades vilket visade att prototypen konsekvent levererar korrekta resultat flerfaldigt så effektivt som den nuvarande lösningen såväl som att göra nya typer av analys möjliga och lättanvända. Slutsatsen dras att användandet av en OLAP-kub var ett bra val för det aktuella problemet, samt att valet att använda SSAS tillhandahöll de nödvändiga funktionaliteterna för en funktionell prototyp. Slutligen diskuterades rekommendationer av möjliga framtida utvecklingar.
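The aggregate-then-drill-down pattern that the cube supports can be illustrated outside SSAS; a minimal pandas sketch with a hypothetical survey table (not Bright's actual schema):

    import pandas as pd

    # Hypothetical survey fact table: one row per answered question.
    facts = pd.DataFrame({
        "year":       [2012, 2012, 2013, 2013, 2013],
        "department": ["Sales", "Support", "Sales", "Sales", "Support"],
        "respondent": ["r1", "r2", "r3", "r4", "r5"],
        "score":      [4, 3, 5, 2, 4],
    })

    # Cube-like aggregation: average score per (year, department) cell.
    cube = facts.pivot_table(values="score", index="year",
                             columns="department", aggfunc="mean")
    print(cube)

    # Drill down to the individual level for one cell, as the Excel
    # PivotTables on top of the SSAS cube allow.
    print(facts[(facts.year == 2013) & (facts.department == "Sales")])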
235

Modeling strategies using predictive analytics : Forecasting future sales and churn management / Strategier för modellering med prediktiv analys

Aronsson, Henrik January 2015
This project was carried out for a company named Attollo, a consulting firm specialized in Business Intelligence and Corporate Performance Management. The project aims to explore a new area for Attollo, predictive analytics, which is then applied to Klarna, a client of Attollo. Attollo has a partnership with IBM, which sells services for predictive analytics. The tool this project was carried out with is a software product from IBM: SPSS Modeler. Five examples illustrate what the predictive work carried out at Klarna consisted of and how it was done. From these examples, the functionality of the different predictive models is described. The result of this project demonstrates how predictive models can be created using predictive analytics. The conclusion is that predictive analytics enables companies to understand their customers better and hence make better decisions. / Detta projekt har utförts tillsammans med ett företag som heter Attollo, en konsultfirma som är specialiserad inom Business Intelligence & Corporate Performance Management. Projektet grundar sig på att Attollo ville utforska ett nytt område, prediktiv analys, som sedan applicerades på Klarna, en kund till Attollo. Attollo har ett partnerskap med IBM, som säljer tjänster för prediktiv analys. Verktyget som detta projekt utförts med är en mjukvara från IBM: SPSS Modeler. Fem olika exempel beskriver det prediktiva arbetet som utfördes vid Klarna. Från dessa exempel beskrivs också de olika prediktiva modellernas funktionalitet. Resultatet av detta projekt visar hur man genom prediktiv analys kan skapa prediktiva modeller. Slutsatsen är att prediktiv analys ger företag större möjlighet att förstå sina kunder och därav kunna göra bättre beslut.
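As a rough stand-in for the kind of churn model built in SPSS Modeler, a minimal scikit-learn sketch on synthetic data (the features and their relationship to churn are invented, not Klarna's):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic customer features, e.g. purchase frequency, days since last
    # purchase, support tickets; churn is made to depend mostly on inactivity.
    X = rng.normal(size=(1000, 3))
    y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))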
236

Combining Big Data And Traditional Business Intelligence – A Framework For A Hybrid Data-Driven Decision Support System

Dotye, Lungisa January 2021
Since the emergence of big data, traditional business intelligence systems have been unable to meet most of the information demands in many data-driven organisations. Nowadays, big data analytics is perceived to be the solution to the challenges related to information processing of big data and decision-making in most data-driven organisations. Despite the promised benefits of big data, organisations find it difficult to prove and realise the value of the investment required to develop and maintain big data analytics. The reality of big data is more complex than many organisations' perceptions of it. Most organisations have failed to implement big data analytics successfully, and some organisations that have implemented these systems are struggling to attain the promised value of big data. Organisations have realised that it is impractical to migrate the entire traditional business intelligence (BI) system into big data analytics and that there is a need to integrate these two types of systems. Therefore, the purpose of this study was to investigate a framework for creating a hybrid data-driven decision support system that combines components from traditional business intelligence and big data analytics systems. The study employed an interpretive qualitative research methodology to investigate research participants' understanding of the concepts related to big data, the data-driven organisation, business intelligence, and related data analytics ideas. Semi-structured interviews were held to collect research data, and thematic analysis was used to interpret the participants' feedback based on their background knowledge and experience. The application of organisational information processing theory (OIPT) and the fit viability model (FVM) guided the interpretation of the study outcomes and the development of the proposed framework. The findings suggested that data-driven organisations collect data from different sources and process these data to transform them into information used as the basis of all their business decisions. Executive and senior management roles in the adoption of a data-driven decision-making culture are key to the success of the organisation. BI and big data analytics are tools and software systems used to assist a data-driven organisation in transforming data into information and knowledge. The challenges that organisations reported experiencing when trying to integrate BI and big data analytics were used to guide the development of the framework for creating a hybrid data-driven decision support system. The framework is divided into these elements: business motivation, information requirements, supporting mechanisms, data attributes, supporting processes and hybrid data-driven decision support system architecture. The proposed framework is intended to assist data-driven organisations in assessing the components of both business intelligence and big data analytics systems and to make case-by-case decisions on which components can be used to satisfy the specific data requirements of an organisation. The study thereby contributes to the existing literature on integrating business intelligence and big data analytics systems. / Dissertation (MIT (Information Systems))--University of Pretoria, 2021. / Informatics / MIT (Information Systems) / Unrestricted
237

Metody pro podporu rozhodování v prostředí lékařské aplikace / Decision Support Methods in a Medical Application

Mrázek, Petr January 2009
The diploma thesis deals with extending an existing medical application with means for decision support. The first part of the work is focused on the general problems of data warehouses, OLAP and data mining. The second part covers the design and implementation of the extension itself, in the form of an application that enables OLAP analysis to be performed on the gathered medical data.
238

Collocation of Data in a Multi-temperate Logical Data Warehouse

Martin, Bryan January 2019
No description available.
239

Multi-objective scheduling for real-time data warehouses

Thiele, Maik, Bader, Andreas, Lehner, Wolfgang 19 January 2023
The issue of write-read contention is one of the most prevalent problems when deploying real-time data warehouses. With increasing load, updates are increasingly delayed and previously fast queries tend to be slowed down considerably. However, depending on the user requirements, we can improve the response time or the data quality by scheduling the queries and updates appropriately. If both criteria are to be considered simultaneously, we are faced with a so-called multi-objective optimization problem. We transformed this problem into a knapsack problem with additional inequalities and solved it efficiently. Based on our solution, we developed a scheduling approach that provides the optimal schedule with regard to the user requirements at any given point in time. We evaluated our approach in an extensive experimental study, comparing it with the optimal scheduling policy for each individual optimization objective.
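A toy version of the trade-off, selecting pending queries and updates under a time budget with a weighted benefit (brute force for clarity; the paper's knapsack formulation with additional inequalities is solved efficiently, and all weights and costs below are invented):

    from itertools import combinations

    def schedule(tasks, time_budget, w_qos=0.5, w_qod=0.5):
        """Pick the subset of pending queries/updates that maximises the
        weighted benefit of response time (qos) and data freshness (qod)."""
        best, best_value = (), 0.0
        for r in range(len(tasks) + 1):
            for subset in combinations(tasks, r):
                if sum(t["cost"] for t in subset) > time_budget:
                    continue
                value = sum(w_qos * t["qos"] + w_qod * t["qod"] for t in subset)
                if value > best_value:
                    best, best_value = subset, value
        return [t["name"] for t in best]

    pending = [
        {"name": "query_sales",  "cost": 3, "qos": 5, "qod": 0},
        {"name": "query_stock",  "cost": 2, "qos": 3, "qod": 0},
        {"name": "update_sales", "cost": 4, "qos": 0, "qod": 6},
    ]
    print(schedule(pending, time_budget=6))
    # -> ['query_stock', 'update_sales']: the update wins over the slower query

Shifting w_qos towards 1 reproduces a pure response-time policy, while shifting w_qod towards 1 favours freshness, mirroring the single-objective policies used as baselines in the evaluation.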
240

On the problem of generating common predecessors

Lehner, Wolfgang, Hümmer, Wolfgang, Schlesinger, Lutz, Bauer, Andreas J. 10 January 2023
Using common subexpressions to speed up a set of queries is a well-known and long-studied problem. However, due to the isolation requirement, operating a database in the classic transactional way offers few opportunities to exploit the benefits of simultaneously computing a set of queries. By contrast, many applications can be identified in the context of data warehousing, e.g. optimizing the incremental maintenance process of multiple dependent materialized views or the generation of application-specific data marts. In this paper we discuss whether it is always advisable to generate the most complete common predecessor for a given set of queries, or whether to restrict the predecessor to a subset of all possible base tables. As we will see, this question cannot be answered without knowledge of the cardinality of the queries after aggregation. However, if we can rely on this information, we can come up with an optimal predecessor for a common set of queries.
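The dependence on post-aggregation cardinality can be made concrete with a toy cost estimate (the cost model and row counts are invented, not taken from the paper):

    def use_common_predecessor(base_rows, card_shared):
        """Scan the base table once and derive both queries from the shared
        predecessor, versus scanning the base table once per query."""
        separate = 2 * base_rows                # each query scans the base table
        combined = base_rows + 2 * card_shared  # one scan plus two roll-ups
        return combined < separate

    # A 50k-row shared predecessor over 10M base rows clearly pays off;
    # one that barely aggregates (9M rows) does not.
    print(use_common_predecessor(10_000_000, 50_000))     # True
    print(use_common_predecessor(10_000_000, 9_000_000))  # False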
