321

Flexible Data Extraction for Analysis using Multidimensional Databases and OLAP Cubes / Flexibelt extraherande av data för analys med multidimensionella databaser och OLAP-kuber

Jernberg, Robert, Hultgren, Tobias January 2013
Bright is a company that provides customer and employee satisfaction surveys and uses this information to provide feedback to its customers. Data from the surveys are stored in a relational database, and information is generated both by querying the database directly and by analysing extracted data. As the amount of data grows, generating this information takes increasingly more time, and extracting the data requires significant manual work that is in practice avoided. As this is not an uncommon issue, there is a substantial body of theory around the area. The aim of this degree project is to explore different methods for achieving flexible and efficient analysis of large amounts of data. This was implemented using a multidimensional database designed for analysis together with an OnLine Analytical Processing (OLAP) cube built with Microsoft's SQL Server Analysis Services (SSAS). The cube was designed so that data can be extracted at the individual level through PivotTables in Excel. Analysis of the implemented prototype showed that it consistently delivers correct results several times more efficiently than the current solution, while also making new types of analysis possible and convenient. It is concluded that an OLAP cube was a good choice for the issue at hand, and that SSAS provided the features necessary for a functional prototype. Finally, recommendations for possible further development are discussed.
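
For readers without SSAS or Excel at hand, the kind of slice-and-dice aggregation such a survey cube exposes through PivotTables can be roughly sketched in Python; the dataset, column names and measures below are invented for illustration and are not the thesis's actual schema or cube:

```python
# A minimal sketch, assuming a hypothetical survey fact table: pandas'
# pivot_table mimics the dimensional aggregation an OLAP cube offers.
import pandas as pd

facts = pd.DataFrame({
    "customer": ["Acme", "Acme", "Beta", "Beta", "Acme", "Beta"],
    "year":     [2012, 2013, 2012, 2013, 2013, 2013],
    "category": ["service", "service", "product", "service", "product", "product"],
    "score":    [4, 5, 3, 4, 5, 2],
})

# "Cube" view: average score by customer and category (rows) and year (columns);
# margins=True adds aggregate totals, similar to a cube's "All" member.
cube = pd.pivot_table(
    facts,
    values="score",
    index=["customer", "category"],
    columns="year",
    aggfunc="mean",
    margins=True,
)
print(cube)
```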
322

Business Value of the "Data Warehouse Appliance" Technology / Affärsvärde med tekniken "Data Warehouse Appliance"

Undén, Saga, Westerlund, Eric January 2012
The recent increase in the amount of stored company data and the growing interest in data analysis have resulted in new requirements on data warehousing solutions. This has led to the development of Data Warehouse Appliances, whose business value this research project aims to investigate. The result is intended to support companies that are considering an investment and to give them an understanding of the technology's benefits. The research project was conducted in two parts: vendors of the appliance technology were interviewed, as well as their customers. The results from the vendor interviews, together with a literature study, provided a knowledge base for the analysis of the interviews with the user companies. The results clearly indicate that there is value in the technology for larger companies. The research shows that although the main benefits advocated by the vendors match those perceived by the user companies, there are other aspects which the users value even more, for example a reduced amount of administrative tasks and support from a single source. The research also reveals that the benefits the customers estimated at the time of purchase were not the ones they valued most in hindsight.
323

Modeling strategies using predictive analytics: Forecasting future sales and churn management / Strategier för modellering med prediktiv analys

Aronsson, Henrik January 2015
This project was carried out for Attollo, a consulting firm specialised in Business Intelligence and Corporate Performance Management. The project explores a new area for Attollo, predictive analytics, which is then applied to Klarna, a client of Attollo. Attollo has a partnership with IBM, which sells services for predictive analytics, and the project was carried out with an IBM software tool, SPSS Modeler. Five examples describe the predictive work that was carried out at Klarna, and the functionality of the different predictive models is explained through these examples. The result of the project demonstrates how predictive models can be created using predictive analytics. The conclusion is that predictive analytics enables companies to understand their customers better and hence make better decisions.
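
The thesis works in IBM SPSS Modeler, so the following is only an analogous sketch with an open-source stand-in (scikit-learn); the features, synthetic data and model choice are assumptions made for illustration, not the models built at Klarna:

```python
# Illustrative churn-prediction sketch on invented data -- not SPSS Modeler
# and not the thesis's actual models or features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical customer features: months as customer, purchases last quarter,
# support tickets opened.
X = np.column_stack([
    rng.integers(1, 60, n),
    rng.poisson(3, n),
    rng.poisson(1, n),
])
# Synthetic churn label: fewer purchases and more tickets -> higher churn risk.
churn_prob = 1 / (1 + np.exp(0.8 * X[:, 1] - 0.9 * X[:, 2] + 0.5))
y = (rng.random(n) < churn_prob).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
# Scores like these would feed the kind of churn-management decisions
# the project describes.
```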
324

Combining Big Data And Traditional Business Intelligence – A Framework For A Hybrid Data-Driven Decision Support System

Dotye, Lungisa January 2021
Since the emergence of big data, traditional business intelligence systems have been unable to meet most of the information demands in many data-driven organisations. Nowadays, big data analytics is perceived to be the solution to the challenges related to information processing of big data and decision-making in most data-driven organisations. Irrespective of the promised benefits of big data, organisations find it difficult to prove and realise the value of the investment required to develop and maintain big data analytics. The reality of big data is more complex than many organisations' perceptions of it. Most organisations have failed to implement big data analytics successfully, and some organisations that have implemented these systems are struggling to attain the promised value of big data. Organisations have realised that it is impractical to migrate an entire traditional business intelligence (BI) system into big data analytics, and there is a need to integrate these two types of systems. The purpose of this study was therefore to investigate a framework for creating a hybrid data-driven decision support system that combines components from traditional business intelligence and big data analytics systems. The study employed an interpretive qualitative research methodology to investigate research participants' understanding of concepts related to big data, data-driven organisations, business intelligence and other data analytics perceptions. Semi-structured interviews were held to collect research data, and thematic analysis was used to interpret the participants' feedback based on their background knowledge and experience. The organisational information processing theory (OIPT) and the fit-viability model (FVM) guided the interpretation of the study outcomes and the development of the proposed framework. The findings suggest that data-driven organisations collect data from different sources and process these data to transform them into information, which serves as the basis of all their business decisions. The roles of executive and senior management in adopting a data-driven decision-making culture are key to the success of the organisation. BI and big data analytics are tools and software systems used to assist a data-driven organisation in transforming data into information and knowledge. The challenges that organisations experience when trying to integrate BI and big data analytics guided the development of the framework for creating a hybrid data-driven decision support system. The framework comprises the following elements: business motivation, information requirements, supporting mechanisms, data attributes, supporting processes and the hybrid data-driven decision support system architecture. The proposed framework is intended to assist data-driven organisations in assessing the components of both business intelligence and big data analytics systems and making a case-by-case decision on which components satisfy the specific data requirements of the organisation. The study thus contributes to the existing literature on integrating business intelligence and big data analytics systems. / Dissertation (MIT (Information Systems))--University of Pretoria, 2021.
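
As a purely illustrative aside (not part of the dissertation's framework), the case-by-case assessment idea can be pictured as a small decision helper; the requirement attributes, thresholds and rule below are invented:

```python
# Toy illustration of choosing, per information requirement, between a
# traditional BI component and a big data analytics component.
from dataclasses import dataclass

@dataclass
class InformationRequirement:
    name: str
    structured: bool        # is the data already structured/relational?
    volume_tb: float        # rough data volume in terabytes
    near_real_time: bool    # latency requirement

def suggested_component(req: InformationRequirement) -> str:
    """Crude rule of thumb in the spirit of a fit/viability check."""
    if req.structured and req.volume_tb < 1 and not req.near_real_time:
        return "traditional BI (warehouse + reporting)"
    return "big data analytics (distributed storage + processing)"

requirements = [
    InformationRequirement("monthly financial reporting", True, 0.2, False),
    InformationRequirement("clickstream behaviour analysis", False, 40.0, True),
]
for r in requirements:
    print(f"{r.name}: {suggested_component(r)}")
```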
325

Zobrazování datové kostky / Data Cube Visualization

Dittrich, Petr January 2009
The topic of this master's thesis is the design and implementation of the prototype application TOPZ, which demonstrates a data warehouse. The theoretical background of data storage is discussed first, and the areas of possible improvement of the data warehouse are identified. The requirements specification and the design of the demonstration application are described in the following part. Performance testing of the data warehouse is discussed in the last chapter.
326

Metody pro podporu rozhodování v prostředí lékařské aplikace / Decision Support Methods in a Medical Application

Mrázek, Petr January 2009
The diploma thesis deals with extending an existing medical application with means for decision support. The first part of the work is focused on the general problems of data warehouses, OLAP and data mining. The second part covers the design and implementation of the extension itself, an application that enables OLAP analysis of the gathered medical data.
327

Collocation of Data in a Multi-temperate Logical Data Warehouse

Martin, Bryan January 2019
No description available.
328

Efficiently synchronizing multidimensional schema data

Schlesinger, Lutz, Bauer, Andreas J., Lehner, Wolfgang, Ediberidze, G., Gutzmann, M. 13 December 2022
Most existing concepts in data warehousing provide a central database system storing gathered raw data and redundantly computed materialized views. While in current system architectures client tools send queries to a central data warehouse system and are only used to graphically present the results, the steady rise in the power of personal computers and the expansion of network bandwidth make it possible to store replicated parts of the data warehouse at the client, thus saving network bandwidth and utilizing local computing power. Within such a scenario a - potentially mobile - client does not need to be connected to a central server while performing local analyses. Although this scenario seems attractive, several problems arise when introducing such an architecture: for example, schema data could be changed or new fact data could become available. This paper focuses on the first problem and presents ideas on how changed schema data can be detected and efficiently synchronized between client and server, exploiting the special needs and requirements of data warehousing.
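
One simple way to picture such change detection (a sketch under assumed schema structures, not the synchronization technique the paper develops) is to compare content fingerprints of each dimension's definition between the server and a client replica:

```python
# Simplified illustration: detect which dimension-schema elements changed on
# the server by comparing content fingerprints, and resynchronize only those.
# The schema contents are invented for the example.
import hashlib
import json

def fingerprint(schema_element: dict) -> str:
    """Stable hash of a schema element's definition."""
    payload = json.dumps(schema_element, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

server_schema = {
    "Customer": {"levels": ["Country", "Region", "City", "Customer"]},
    "Time":     {"levels": ["Year", "Quarter", "Month", "Day"]},
    "Product":  {"levels": ["Category", "Group", "Article", "Variant"]},  # changed
}
client_fingerprints = {   # what the client last synchronized
    "Customer": fingerprint({"levels": ["Country", "Region", "City", "Customer"]}),
    "Time":     fingerprint({"levels": ["Year", "Quarter", "Month", "Day"]}),
    "Product":  fingerprint({"levels": ["Category", "Group", "Article"]}),
}

# Only elements whose fingerprint differs need to be sent to the client.
changed = [name for name, definition in server_schema.items()
           if fingerprint(definition) != client_fingerprints.get(name)]
print("dimensions to resynchronize:", changed)
```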
329

Multi-objective scheduling for real-time data warehouses

Thiele, Maik, Bader, Andreas, Lehner, Wolfgang 19 January 2023
The issue of write-read contention is one of the most prevalent problems when deploying real-time data warehouses. With increasing load, updates are increasingly delayed and previously fast queries tend to be slowed down considerably. However, depending on the user requirements, we can improve the response time or the data quality by scheduling the queries and updates appropriately. If both criteria are to be considered simultaneously, we are faced with a so-called multi-objective optimization problem. We transformed this problem into a knapsack problem with additional inequalities and solved it efficiently. Based on our solution, we developed a scheduling approach that provides the optimal schedule with regard to the user requirements at any given point in time. We evaluated our scheduling in an extensive experimental study, where we compared our approach with the optimal scheduling policies for each individual optimization objective.
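
To make the knapsack formulation concrete, here is a toy 0/1 knapsack scheduler; the job data, costs and weights are invented, and it omits the additional inequalities of the authors' actual model:

```python
# Toy sketch, not the authors' scheduler: pick the jobs (queries vs. updates)
# for the next time slot that maximize a user-weighted mix of response-time
# benefit and data-freshness benefit, subject to a processing-capacity budget.
# Classic 0/1 knapsack dynamic program.

def schedule(jobs, capacity, w_qos=0.5, w_qod=0.5):
    """jobs: list of (name, cost, qos_benefit, qod_benefit); capacity: int budget."""
    value = [w_qos * q + w_qod * d for _, _, q, d in jobs]
    # dp[c] = (best total value, chosen job indices) using budget c
    dp = [(0.0, [])] * (capacity + 1)
    for i, (_, cost, _, _) in enumerate(jobs):
        for c in range(capacity, cost - 1, -1):
            cand = dp[c - cost][0] + value[i]
            if cand > dp[c][0]:
                dp[c] = (cand, dp[c - cost][1] + [i])
    return [jobs[i][0] for i in dp[capacity][1]]

pending = [
    ("query: daily sales report",  2, 8.0, 0.0),
    ("query: ad-hoc drill-down",   1, 5.0, 0.0),
    ("update: load new orders",    3, 0.0, 9.0),
    ("update: refresh dimensions", 2, 0.0, 4.0),
]
# A freshness-oriented user weights data quality higher than response time.
print(schedule(pending, capacity=5, w_qos=0.3, w_qod=0.7))
```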
330

Performance Comparison of Property Map Indexing and Bitmap Indexing for Data Warehousing

Gupta, Ashima January 2002
No description available.
