361

Dynamic cubing for hierarchical multidimensional data space

Ahmed, Usman 18 February 2013 (has links) (PDF)
Data warehouses have been used in many applications for a long time. Traditionally, new data is loaded into these warehouses through offline bulk updates, which means the latest data is not always available for analysis. This is not acceptable in many modern applications (such as intelligent buildings and smart grids) that require up-to-date data for decision making. These applications call for real-time, atomic integration of incoming facts into the data warehouse. Moreover, the data defining the analysis dimensions, stored in the warehouse's dimension tables, must also be updated in real time whenever it changes. In this thesis, such real-time data warehouses are called dynamic data warehouses. We propose a data model for dynamic data warehouses and introduce the Hierarchical Hybrid Multidimensional Data Space (HHMDS), which consists of both ordered and non-ordered hierarchical dimensions. The axes of the data space are non-ordered, which allows them to evolve dynamically without any need for reordering. We define a data grouping structure, the Minimum Bounding Space (MBS), that supports efficient partitioning of the data in the space. Various operators, relations and metrics are defined for optimizing these partitions, and analogies between classical OLAP concepts and the HHMDS are drawn. We propose efficient algorithms to store summarized or detailed data, in the form of MBS, in a tree structure called the DyTree, and detail algorithms for answering OLAP queries over it. The nodes of the DyTree, each holding an MBS with its associated aggregated measure values, represent materialized sections of cuboids; the tree as a whole is a partially materialized, indexed data cube maintained through online atomic incremental updates. We propose a methodology for experimentally evaluating partial data cubing techniques and develop a prototype implementing it.
The prototype lets us experimentally evaluate and simulate the structure and performance of the DyTree against other solutions. An extensive study conducted with this prototype shows that the DyTree is an efficient and effective partial data cubing solution for a dynamic data warehousing environment.
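The MBS grouping idea in the abstract above can be sketched informally. Everything below (the field names, the set-based bounds over non-ordered dimension members, the `extend` update) is a hypothetical simplification for illustration, not the thesis's actual definitions or algorithms:

```python
from dataclasses import dataclass

# Hypothetical sketch of a Minimum Bounding Space (MBS): because the axes of
# the data space are non-ordered, an MBS is modelled here as a set of member
# values per dimension rather than a numeric interval.
@dataclass
class MBS:
    bounds: dict            # dimension name -> set of member values it covers
    measure: float = 0.0    # aggregated measure value for this partition

    def contains(self, fact):
        # A fact lies inside the MBS if, on every bounded dimension,
        # its member value belongs to the bound set.
        return all(fact[d] in members for d, members in self.bounds.items())

    def extend(self, fact, value):
        # Online atomic incremental update: grow the bounds to cover the
        # new fact and fold its measure into the aggregate.
        for d in self.bounds:
            self.bounds[d].add(fact[d])
        self.measure += value

# Toy usage: two facts integrated one at a time, as in an online update.
mbs = MBS(bounds={"city": {"Lyon"}, "product": {"A"}}, measure=10.0)
mbs.extend({"city": "Paris", "product": "A"}, 5.0)
print(mbs.contains({"city": "Paris", "product": "A"}))  # True
print(mbs.measure)                                       # 15.0
```

A DyTree node would pair such an MBS with child pointers; the optimization of partitions via the thesis's operators and metrics is beyond this sketch.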
362

A Dementia Care Mapping (DCM) data warehouse as a resource for improving the quality of dementia care : exploring requirements for secondary use of DCM data using a user-driven approach and discussing their implications for a data warehouse

Khalid, Shehla January 2016 (has links)
The secondary use of Dementia Care Mapping (DCM) data, if that data were held in a data warehouse, could contribute to global efforts to monitor and improve the quality of dementia care. This qualitative study identifies requirements for the secondary use of DCM data within a data warehouse using a user-driven approach. The thesis critically analyses various technical methodologies and then argues for, and demonstrates the applicability of, a modified grounded theory as a user-driven methodology for data warehouse design. Interviews were conducted with 29 DCM researchers, trainers and practitioners in three phases; 19 were held face to face and the rest over Skype or telephone, each lasting 45-60 minutes. The interview data was systematically analysed using open, axial and selective coding techniques and constant comparison methods. The study data highlighted benchmarking, mapper support and research as three potential secondary uses of DCM data within a data warehouse. DCM researchers raised concerns about the quality and security of DCM data for secondary use, which led to requirements for additional provenance, ethical and contextual data to be included in the warehouse alongside the DCM data in order to support its secondary use for research. The study data was also used to identify three main factors, namely the individual mapper, the organization and electronic data management, that can influence the quality and availability of DCM data for secondary uses. The study closes with recommendations for designing a future DCM data warehouse.
363

Mineração de dados em múltiplas tabelas fato de um data warehouse / Data mining in multiple fact tables of a data warehouse

Ribeiro, Marcela Xavier 19 May 2004 (has links)
Financiadora de Estudos e Projetos / The progress of information technology has allowed huge amounts of data to be stored. These data, when submitted to a knowledge discovery process, can yield interesting results. Data warehouses are repositories of high-quality data. A practice increasingly adopted in large companies is the joint use of data warehousing and data mining technologies, so that the knowledge discovery process benefits from the high quality of the warehouse data. When a data warehouse holds information about more than one subject, it also has more than one fact table. The joint analysis of multiple fact tables can reveal interesting knowledge, for instance the relationship between purchases and sales in a company. This research presents a technique to mine data from multiple fact tables of a data warehouse, which constitutes a new kind of association rule mining.
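A rule that spans two fact tables, of the kind mentioned in the abstract above, can be illustrated with a toy example. The tables, attribute values and the specific rule ("products purchased in high volume also sell in high volume") are all hypothetical, chosen only to show how support and confidence would be computed over the join on a shared dimension:

```python
# Two fact tables sharing the "product" dimension key.
purchases = [("p1", "high"), ("p2", "low"), ("p3", "high"), ("p4", "high")]
sales     = [("p1", "high"), ("p2", "low"), ("p3", "low"),  ("p4", "high")]

# Join on the shared product key: product -> (purchase volume, sales volume).
sales_by_product = dict(sales)
joined = {p: (pv, sales_by_product[p]) for p, pv in purchases}

# Rule: purchase_volume = high  ->  sales_volume = high
antecedent = [p for p, (pv, _) in joined.items() if pv == "high"]
both       = [p for p in antecedent if joined[p][1] == "high"]

support    = len(both) / len(joined)       # rule holds for this share of products
confidence = len(both) / len(antecedent)   # share of antecedent where rule holds
print(support, round(confidence, 3))       # 0.5 0.667
```

In a real warehouse the join would run over the dimension tables rather than in-memory dictionaries, but the support/confidence arithmetic is the same.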
364

Sales Information Provider / Försäljningsdatahämtning

Karlsson, Mathias January 2005 (has links)
This report investigates the possibility of loading large amounts of data into a database and performing aggregations, in order to then deliver a set of data in a convenient way to a client that will process it. The work spans from the database to an API that can be implemented in any application wishing to retrieve the information, and involves intelligent retrieval of data for visualization. It is one of two degree projects that together form the basis of a visualization of sales data for the sporting goods chain Stadium AB. Stadium AB currently has about 80 stores, which means a large sales volume per week. The idea is that this project, together with the parallel degree project, will help Stadium AB when purchasing products for coming seasons. The parallel project visualizes the quantity of products sold at a given point in time, which lets Stadium see at which times they have too few products in store and when they have too many. This project supplies the visualization application with the information it needs. Sales Data Provider, as the application is called, is built on a data warehouse solution. It contains pre-computed sales data at different levels, making it easy to drill down the hierarchy and see how different products are selling. As the transport mechanism from the data warehouse to the client it uses Web Services with XML as the medium, allowing the data warehouse and the client to be physically separated. It also contains a logical client that handles all calls to the Web Service and exposes an API that the visualization application can use. The client contains both the logic for fetching data from the Web Service and for exposing the data through an object model.
365

Data Warehouse Testing : An Exploratory Study

Khan, M.Shahan Ali, ElMadi, Ahmad January 2011 (has links)
Context. The use of data warehouses, a specialized class of information systems, by organizations all over the globe has recently increased dramatically. A Data Warehouse (DW) serves organizations for various important purposes such as reporting and strategic decision making. Maintaining the quality of such systems is a difficult task, as DWs are much more complex than ordinary operational software applications; conventional methods of software testing therefore cannot be applied to DW systems. Objectives. The objectives of this thesis were to investigate the current state of the art in DW testing, to explore the various DW testing tools and techniques and the challenges in DW testing, and to identify improvement opportunities for the DW testing process. Methods. This study consists of an exploratory and a confirmatory part. In the exploratory part, a Systematic Literature Review (SLR) followed by the Snowball Sampling Technique (SST), a case study at a Swedish government organization, and interviews were conducted. For the SLR, a number of article sources were used, including Compendex, Inspec, IEEE Xplore, the ACM Digital Library, SpringerLink, ScienceDirect and Scopus. References in selected studies and citation databases were used for performing backward and forward SST, respectively. 44 primary studies were identified as a result of the SLR and SST. For the case study, interviews with 6 practitioners were conducted; the case study was followed by 9 additional interviews with practitioners from different organizations in Sweden and other countries. The exploratory phase was followed by a confirmatory phase, in which the challenges identified during the exploratory phase were validated through 3 further interviews with industry practitioners. Results. In this study we identified various challenges faced by industry practitioners, as well as various tools and testing techniques used for testing DW systems.
47 challenges were found, along with a number of testing tools and techniques. The challenges were classified and improvement suggestions were made to reduce their impact. Only 8 of the challenges were found to be common to both the industry and the literature studies. Conclusions. Most of the identified challenges relate to test data creation and to the need for tools for various purposes of DW testing. The rising trend of DW systems requires a standardized testing approach and tools that can save time by automating the testing process. While tools for testing operational software are available commercially as well as from the open source community, there is a lack of such tools for DW testing. A number of challenges also relate to management activities, such as lack of communication and difficulties in estimating DW testing budgets. We also identified the need for a comprehensive framework for testing data warehouse systems and for tools that help automate the testing tasks. Moreover, the impact of management factors on the quality of DW systems should be measured.
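One kind of DW test that lends itself to the automation the study calls for is source-to-target reconciliation. The check below is a minimal, hypothetical sketch (the function name, row format and tolerance are my own, not from the thesis): it compares row counts and a measure total between a source extract and the loaded fact table.

```python
# Hypothetical automated reconciliation check for a DW load:
# compare row counts and the total of one measure column between
# the source extract and the loaded fact table.
def reconcile(source_rows, fact_rows, measure_key):
    errors = []
    if len(source_rows) != len(fact_rows):
        errors.append(f"row count {len(source_rows)} != {len(fact_rows)}")
    src_total = sum(r[measure_key] for r in source_rows)
    dw_total = sum(r[measure_key] for r in fact_rows)
    if abs(src_total - dw_total) > 1e-9:
        errors.append(f"measure total {src_total} != {dw_total}")
    return errors  # empty list means the load passed both checks

source = [{"amount": 10.0}, {"amount": 5.5}]
loaded = [{"amount": 10.0}, {"amount": 5.5}]
print(reconcile(source, loaded, "amount"))  # []
```

In practice the two row sets would come from database queries on either side of the ETL process, and many more checks (keys, nulls, slowly changing dimensions) would be layered on top.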
366

Applications of data mining algorithms to analysis of medical data.

Matyja, Dariusz January 2007 (has links)
Medical datasets have reached enormous sizes. This data may contain valuable information that awaits extraction, encapsulated in patterns and regularities hidden in the data; such knowledge may prove priceless for future medical decision making. The data analysed here comes from the Polish National Breast Cancer Prevention Program run in Poland in 2006. The aim of this master's thesis is to evaluate the analytical data from the Program to see whether the domain is amenable to data mining. The next step is to evaluate several data mining methods with respect to their applicability to the given data, to show which of the techniques are particularly usable for this dataset. Finally, the research aims at extracting some tangible medical knowledge from the set. The research uses a data warehouse to store the data, which is loaded via an ETL process. The performance of the data mining models is measured with lift charts and confusion (classification) matrices, and the medical knowledge is extracted based on the indications of the majority of the models. The experiments are conducted in Microsoft SQL Server 2005. The analyses showed that the Program did not deliver good-quality data: many missing values and various discrepancies make it especially difficult to build good models and draw medical conclusions. It is very hard to decide unequivocally which method is particularly suitable for the given data, so it is advisable to test a set of methods prior to their application in real systems. The data mining models were not unanimous about patterns in the data; the extracted medical knowledge is therefore uncertain and requires verification by medical professionals. However, most of the models strongly associated the patient's age, tissue type, hormonal therapies and family history of the disease with the malignancy of cancers.
The next step of the research is to present the findings to medical professionals for verification. In the future, the outcomes may form a good foundation for the development of a Medical Decision Support System.
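The two evaluation tools named in the abstract, confusion matrices and lift, reduce to simple arithmetic. The sketch below uses made-up labels for a hypothetical binary malignancy classifier; it is not the thesis's data or models, only an illustration of how the metrics are computed:

```python
# Confusion matrix for a binary classifier: counts of true/false
# positives and negatives over paired actual/predicted labels.
def confusion_matrix(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)
    return tp, fp, fn, tn

# Lift of the positive predictions: model precision divided by the
# base rate of positives (lift > 1 means better than random targeting).
def lift(actual, predicted):
    tp, fp, _, _ = confusion_matrix(actual, predicted)
    precision = tp / (tp + fp)
    base_rate = sum(actual) / len(actual)
    return precision / base_rate

actual    = [1, 1, 0, 0, 1, 0, 0, 0]   # 1 = malignant (hypothetical)
predicted = [1, 0, 0, 0, 1, 1, 0, 0]
print(confusion_matrix(actual, predicted))  # (2, 1, 1, 4)
```

A lift chart plots this lift statistic as the share of cases targeted by the model grows; here only the single-point value is shown.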
367

Prototyp för dynamiskt beslutsstöd

Lundstedt, Mattias, Norell, Axel January 2014 (has links)
The company Nethouse was commissioned to specify requirements for, develop and implement a business system for the Swedish national association of master chimney sweeps, Sveriges Skorstensfejaremästares Riksförbund (SSR). SSR's member companies carry out chimney sweeping on behalf of Sweden's municipalities and depend on data collected in connection with their operations. In the newly developed system, called Ritz, the information is gathered in a central database and made available to several stakeholders using new technology and more modern solutions. The system is entirely web based and runs as a cloud service, accessible either through a web page or as a mobile application. Access to data is based, at the company level, on data "stamped" in the database, and role-based access control is used to restrict company users to their own company's data. This thesis project aimed to develop a prototype of a business intelligence solution providing dynamic access to the data stored in Ritz. Nethouse requested a BI prototype that would demonstrate to Ritz stakeholders the possibilities and advantages of implementing such a solution. Since integration and maintainability are important factors for Nethouse, one requirement was that the prototype be developed with Microsoft software, like the rest of Ritz. The prototype was completed by building a central data warehouse following Ralph Kimball's methodology and implementing an OLAP cube in Microsoft SSAS. Data was transferred from the sources to the data warehouse through an ETL process developed in Microsoft SSIS. The resulting cube is primarily designed to answer the kinds of questions that county administrative boards put to chimney sweeping companies for supervisory purposes, and it supports queries against the two central business processes, sweeping (sotning) and fire safety inspection (brandskyddskontroll). These queries can be filtered on several dimensions, such as time, performer, status and inspection outcome.
The prototype also restricts access to the information each user is entitled to see by linking users and objects to geographic divisions called districts. This dynamic security solution is well placed to handle future changes in users' permissions. The chosen solution preserves the dynamic nature of the system, since the decision support service can be accessed from the many clients that support connections to Microsoft's multidimensional BI solutions, including Excel and SQL Server Reporting Services.
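The district-based row filtering described in the abstract above can be sketched in a few lines. The user names, district names and fact rows below are invented for illustration; the prototype itself implements this kind of dynamic security inside SSAS, not in application code:

```python
# Hypothetical sketch of district-based access control: users are linked
# to geographic districts, and a query only returns facts belonging to
# districts the user is entitled to see.
user_districts = {"inspector_a": {"north", "east"}, "inspector_b": {"south"}}

facts = [
    {"district": "north", "process": "sotning",             "count": 12},
    {"district": "south", "process": "brandskyddskontroll", "count": 7},
    {"district": "east",  "process": "sotning",             "count": 3},
]

def visible_facts(user, rows):
    # Unknown users get an empty entitlement set, hence no rows.
    allowed = user_districts.get(user, set())
    return [r for r in rows if r["district"] in allowed]

print(sum(r["count"] for r in visible_facts("inspector_a", facts)))  # 15
```

Keeping the user-to-district mapping in data, rather than hard-coding it, is what makes the scheme easy to update when a user's permissions change.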
368

Implementace Business Intelligence řešení nad daty z provozu parkoviště / Business Intelligence implementation on top of parking lot traffic data

Machat, Sebastian January 2017 (has links)
In a world of increasing car sales and a limited number of available parking spaces, it is hard to imagine parking space management issues fading away anytime soon. Similarly, Business Intelligence holds its position as one of the continuing trends in corporate IT environments. After parking space navigation systems started to appear in public parking lots, it became clear that their outputs could potentially be used to gain new information about how drivers behave and how the parking space is being used. This thesis therefore covers the acquisition of data from a parking space navigation system and its initial analytical use with BI tools. A theoretical introduction to Business Intelligence and BI project management is followed by detailed information about parking space navigation systems and the parking lot covered in the practical part of this text. This is followed by details of how the data collection procedures and the processing of navigation system data should be handled to allow the data warehouse to be built and populated. On top of this DWH, the analytic layer is designed and implemented, followed by reports providing the information the parking space owner requested. Since the data warehouse contains a lot of additional data with potential uses in analytic processing, e.g. with data mining tools, the last part of the thesis formulates a single research question about the available data, which is then confirmed or rejected using statistical analysis.
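The kind of initial analytical use the abstract describes starts from raw sensor events. The sketch below is purely illustrative (the event format and figures are invented, not from the thesis): it aggregates per-space occupancy events from a navigation system into an hourly occupancy rate, the sort of measure that would then be loaded into the DWH.

```python
from collections import defaultdict

# Hypothetical raw events from a parking navigation system:
# (hour of day, space id, is the space occupied?).
events = [
    (8, "A1", True), (8, "A2", True),
    (9, "A1", False), (9, "A2", True),
]
total_spaces = 2

# Roll the per-space observations up to occupied spaces per hour.
occupied_per_hour = defaultdict(int)
for hour, _space, occupied in events:
    occupied_per_hour[hour] += occupied

# Hourly occupancy rate = occupied spaces / total spaces.
rate = {h: n / total_spaces for h, n in occupied_per_hour.items()}
print(rate)  # {8: 1.0, 9: 0.5}
```
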
369

Implementace Business Intelligence ve firmě ČON s.r.o. / Implementation of Business Intelligence in ČON s.r.o. company

Hofman, Lukáš January 2008 (has links)
This thesis deals with Business Intelligence and the Balanced Scorecard and their use in České odborné nakladatelství, s.r.o., a relatively small company of 29 employees that publishes magazines about business, gastronomy, textiles, foods, hotels and related fields. The first part describes BI and BSC theoretically and provides a methodological basis for the following chapters. It gives the reader a definition and structure of BI, including an approach to data warehousing and the fundamentals of building an OLAP database. It furthermore explains what the BSC is, which perspectives it uses and what a strategic map is. The theoretical chapters draw conclusions about the use of BI and BSC and the links between them. The practical, hands-on chapters continue with the actual application of BI and BSC in the aforementioned ČON, s.r.o. They describe the goals and ambitions of the company, which are broken down into individual strategic goals and metrics, organized into the BSC perspectives and portrayed in a strategic map. After analysing the company's requirements, a BI framework is created, data sources are identified, and a data warehouse is designed and put into operation. A multidimensional OLAP cube is then created with MS SQL Server 2008. Finally, the measured values are explained and interpreted (reporting) in MS Excel 2007, drawing on the available sources of ČON, s.r.o., and compared against the company's strategic goals and metrics. The results of this thesis are based on the actual needs of the company and are ready to be implemented and used in practice.
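The multidimensional roll-ups an OLAP cube provides can be mimicked with plain dictionaries. The sketch below is a deliberately tiny stand-in (the magazine names and revenues are invented); the thesis builds the real cube in MS SQL Server 2008 Analysis Services:

```python
from collections import defaultdict

# Hypothetical fact rows: (magazine subject, year, revenue).
facts = [
    ("gastronomy", 2007, 100), ("gastronomy", 2008, 120),
    ("textile",    2007, 80),  ("textile",    2008, 90),
]

def roll_up(rows, key):
    # key selects the dimension(s) to group by; the measure (revenue,
    # at index 2) is summed within each group, as a cube roll-up would.
    agg = defaultdict(int)
    for row in rows:
        agg[key(row)] += row[2]
    return dict(agg)

by_subject = roll_up(facts, key=lambda r: r[0])
by_year    = roll_up(facts, key=lambda r: r[1])
print(by_subject)  # {'gastronomy': 220, 'textile': 170}
print(by_year)     # {2007: 180, 2008: 210}
```

A real cube pre-computes many such aggregates across combinations of dimensions, so that reporting tools like Excel can slice them interactively.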
370

Možnosti BI při řešení úloh manažerského účetnictví / Possibilities for application of Business Intelligence in area of managerial accounting

Hlavička, Ondřej January 2011 (has links)
The thesis focuses on the relationship between managerial accounting and Business Intelligence systems. Its main goal is to describe situations in which BI solutions can effectively support specific managerial accounting tasks; for each such situation, multiple options are discussed. The thesis is divided into three main parts. The first is a theoretical introduction to performance measurement and managerial accounting (chapter 2). The second introduces the key components and architecture of a Business Intelligence solution (chapter 3). Variants of BI support for selected managerial accounting tasks are presented in the third part (chapter 4).
