111

Strategic Alignment in Data Warehouses: Two Case Studies

Bhansali, Neera, nbhansali@yahoo.com January 2007 (has links)
This research investigates the role of strategic alignment in the success of data warehouse implementation. Data warehouse technology is inherently complex and requires significant capital investment and development time, yet many organizations fail to realize its full benefits. While failure to realize benefits has been attributed to numerous causes, ranging from technical to organizational, the underlying strategic alignment issues have not been studied. This research confirms, through two case studies, that successful adoption of a data warehouse depends on its alignment with the business plans and strategy. The research found that the factors critical to aligning data warehouses with business strategy and plans are (a) joint responsibility between data warehouse and business managers, (b) alignment between the data warehouse plan and the business plan, (c) business user satisfaction, (d) flexibility in data warehouse planning, and (e) technical integration of the data warehouse. In the case studies, the impact of strategic alignment was visible at both the implementation and use levels. The key findings from the case studies are that (a) senior management commitment and involvement are necessary for initiating the data warehouse project; the awareness and involvement of data warehouse managers in corporate strategies, and a high level of joint responsibility between business and data warehouse managers, are critical to strategic alignment and successful adoption of the data warehouse; (b) communication of the strategic direction between business and data warehouse managers is important for strategic alignment; significant knowledge sharing among stakeholders and frequent communication between data warehouse managers and users facilitate better understanding of the data warehouse and its successful adoption; (c) user participation in the data warehouse project, perceived usefulness of the data warehouse, ease of use, and data quality (accuracy, consistency, reliability, and timeliness) were significant factors in strategic alignment of the data warehouse; (d) technology selection based on its ability to address business and user requirements, together with the skills and responsiveness of the data warehousing team, led to better alignment of the data warehouse with business plans and strategies; and (e) the flexibility to respond to changes in business needs and flexibility in data warehouse planning are critical to strategic alignment and successful adoption. Alignment is seen as a process requiring continuous adaptation and coordination of plans and goals. This research provides a pathway for facilitating successful adoption of a data warehouse. The model developed in this research allows data warehouse professionals to ensure that their projects, when implemented, achieve the strategic goals and business objectives of the organization.
112

Agile vs Hyper Agile: A Study of Agility in Data Modeling Methods

Svensson, Martin January 2012 (has links)
In the development of most types of computer systems, data models are used to structure the storage and use of data, and several different data modeling methods are available for this purpose. In collaboration with a company, a case study was conducted to examine how the agility of two of these methods affects the development of a Data Warehouse (DW). The two data modeling methods examined are Data Vaulting and Hyper Agility, and the work focused on the differences between them in terms of the amount of ETL code that must be written, the functionality of the data transformations, the ability to update the system structure, and the total cost of developing the DW solution. Within the case study, a literature review was combined with material from six interviews, where the respondents were consultants as well as company representatives. The results of the case study show that each method's agility has a large impact on the code that is developed: the higher the method's agility, the less code, time, and other resources are required. However, increased agility also brings greater complexity and a potential risk of a failed development project.
113

Data Warehouse Products Evaluation and Selection Decision

Cheng, Wang-chang 22 June 2012 (has links)
With the rapid expansion of information technology and the urgent demand for decision support systems, data warehousing has, in only a few years, moved from pure theory to practical technology. More and more enterprises are investing in data warehouse systems to support business processes and decisions. A data warehouse system transforms an enterprise's large volumes of data into useful resources and information without affecting the current historical data. Because the data warehouse serves an enterprise's varied demands and plays the role of a decision support system, enterprises pay attention to it and invest actively. This paper identifies a set of evaluation criteria from the literature, consults experts to decide vendor levels, designs a questionnaire, and derives criterion weights. Finally, the paper evaluates data warehouse systems in a real case using ELECTRE I. The results of this study not only contribute to the understanding of the functionality of a data warehouse system but also provide a practical guideline for selecting one.
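The abstract above applies ELECTRE I, an outranking method that compares alternatives through a concordance index (how much weighted evidence supports "a is at least as good as b") and a discordance index (the strongest single objection). A minimal sketch of that core idea, in our own simplified formulation with illustrative thresholds (the thesis derives its weights from a questionnaire, which is not reproduced here):

```python
# Simplified ELECTRE I sketch: not the paper's exact procedure or data;
# thresholds c_thresh/d_thresh and all scores below are illustrative.

def electre1(scores, weights, c_thresh=0.6, d_thresh=0.4):
    """scores[a][k]: performance of alternative a on criterion k (higher = better).
    Returns the set of outranking pairs (a, b): 'a is at least as good as b'."""
    n, m = len(scores), len(weights)
    total_w = sum(weights)
    # per-criterion ranges, used to normalise the discordance index
    ranges = [max(s[k] for s in scores) - min(s[k] for s in scores) or 1.0
              for k in range(m)]
    outranks = set()
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # concordance: share of total weight on criteria where a >= b
            c = sum(w for k, w in enumerate(weights)
                    if scores[a][k] >= scores[b][k]) / total_w
            # discordance: worst normalised amount by which b beats a
            d = max((scores[b][k] - scores[a][k]) / ranges[k] for k in range(m))
            if c >= c_thresh and d <= d_thresh:
                outranks.add((a, b))
    return outranks
```

With the two thresholds tuned, the surviving pairs form an outranking graph from which a preferred subset of candidate systems is chosen.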
114

A Count-Based Partition Approach to the Design of the Range-Based Bitmap Indexes for Data Warehouses

Lin, Chien-Hsiu 29 July 2004 (has links)
Data warehouses contain data consolidated from several operational databases and provide historical, summarized data, which is more appropriate for analysis than detailed, individual records. On-Line Analytical Processing (OLAP) provides advanced analysis tools to extract information from data stored in a data warehouse. Fast response time is essential for on-line decision support, and a bitmap index can reach this goal in read-mostly environments. When data has high cardinality, we prefer to use the Range-Based Index (RBI), which divides the attribute values into several partitions, with one bitmap vector representing each range. With RBI, however, the number of records assigned to different ranges can be highly unbalanced, resulting in different numbers of disk accesses for different queries. Wu et al. proposed an algorithm for RBI, DBEC, which takes the data distribution into consideration, but the DBEC strategy cannot guarantee a partition result with the given number of bitmap vectors, PN. Moreover, data records with the same value may be partitioned into different bitmap vectors, which incurs long disk I/O times. Therefore, we propose the IPDF, CP, and CP* strategies for constructing dynamic range-based indexes for the case in which data has high cardinality and is not uniformly distributed. The IPDF strategy decides each partition according to the Probability Density Function (p.d.f.). The CP strategy sorts the data and partitions it into PN groups, one for every w continuous records. The CP* strategy is an improved version of CP that adjusts the cutting points so that data records with the same value are assigned to the same partition. On the other hand, we can take the history of users' queries into consideration: based on the greedy approach, we propose the GreedyExt and GreedyRange strategies. The GreedyExt strategy answers exact queries and the GreedyRange strategy answers range queries; the two strategies decide the set of queries used to construct the bitmap vectors so that the average response time of answering queries can be reduced. Moreover, a bitmap index consists of a set of bitmap vectors, and the size of the bitmap index can be much larger than the capacity of the disk. We propose the FZ strategy to compress each bitmap vector, reducing storage space while providing efficient bitwise operations without decompressing the bitmap vectors. Finally, our performance analysis shows that the CP* strategy can outperform the CP strategy in terms of the number of disk accesses. Our simulation shows that the ranges divided by the IPDF and CP* strategies are more uniform than those divided by the DBEC strategy, that the GreedyExt and GreedyRange strategies provide fast response times in most situations, and that the FZ strategy reduces storage space more than the WAH strategy.
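As a rough illustration of the count-based idea behind the CP and CP* strategies described above (our own sketch; the function names and the simple equal-count cut rule are assumptions, not the thesis's exact algorithm):

```python
# Hypothetical sketch of count-based partitioning: sort the attribute
# values, cut them into PN equal-count groups (CP), then shift each cut
# point so equal values never straddle a partition boundary (CP*).

def cp_partition(values, pn):
    """CP: sort and cut into pn groups of (roughly) w = n/pn records each."""
    s = sorted(values)
    w = len(s) // pn                      # records per partition
    cuts = [i * w for i in range(1, pn)]  # tentative cutting points
    return s, cuts

def cp_star_partition(values, pn):
    """CP*: move each cut forward so records with the same value
    always land in the same partition."""
    s, cuts = cp_partition(values, pn)
    adjusted = []
    for c in cuts:
        while c < len(s) and s[c] == s[c - 1]:
            c += 1                        # push the cut past the run of equal values
        adjusted.append(c)
    return s, adjusted
```

The CP* adjustment is what lets one range query touch one bitmap vector per value instead of several, which is the disk I/O saving the abstract reports.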
115

A Recursive Relative Prefix Sum Approach to Range Queries in Data Warehouses

Wu, Fa-Jung 07 July 2002 (has links)
Data warehouses contain data consolidated from several operational databases and provide historical, summarized data, which is more appropriate for analysis than detailed, individual records. On-Line Analytical Processing (OLAP) provides advanced analysis tools to extract information from data stored in a data warehouse; it is designed to provide aggregate information that can be used to analyze the contents of databases and data warehouses. A range query applies an aggregation operation over all selected cells of an OLAP data cube, where the selection is specified by providing ranges of values for numeric dimensions. Range sum queries are very useful in finding trends and in discovering relationships between attributes in the database. One method, the prefix sum method, guarantees that any range sum query on a data cube can be answered in constant time by precomputing some auxiliary information; however, it is hampered by its update cost. Today's interactive data analysis applications, which provide current or "near current" information, require fast response times and reasonable update times. Since the size of a data cube is exponential in the number of its dimensions, rebuilding the entire data cube is very costly and unrealistic. To cope with this dynamic data cube problem, several strategies have been proposed. They all use specific data structures, which require extra storage, to answer range sum queries quickly. For example, the double relative prefix sum method makes use of three components, a block prefix array, a relative overlay array, and a relative prefix array, to store auxiliary information. Although the double relative prefix sum method improves the update cost, it increases the query time. In this thesis, we present a method, called the recursive relative prefix sum method, which provides a compromise between query and update cost. In the recursive relative prefix sum method with k levels, we use a relative prefix array and k relative overlay arrays. Our performance study shows that the update cost of our method is always less than that of the prefix sum method and, in most cases, less than that of the relative prefix sum method. Moreover, in most cases the query cost of our method is less than that of the double relative prefix sum method. Compared with the dynamic data cube method, our method has lower storage cost and shorter query time. Consequently, the recursive relative prefix sum method has a reasonable response time for ad hoc range queries on the data cube while greatly reducing the update cost. In some applications, however, updates in some regions may happen more frequently than in others. We also provide a solution, called the weighted relative prefix sum method, for this situation; it too provides a compromise between range sum query cost and update cost when the update probabilities of different regions are considered.
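For intuition, here is a two-dimensional instance of the basic prefix sum method the thesis builds on (our own sketch; the thesis's recursive relative variant adds relative overlay arrays precisely to reduce the update cost this plain version suffers from):

```python
# Plain prefix sum on a 2-D data cube: precompute P[i][j] = sum of all
# cells in the rectangle (0,0)..(i-1,j-1); any range sum then costs
# exactly four lookups, independent of the range size.

def build_prefix(cube):
    n, m = len(cube), len(cube[0])
    P = [[0] * (m + 1) for _ in range(n + 1)]   # 1-based, zero-padded border
    for i in range(n):
        for j in range(m):
            # inclusion-exclusion over the three neighbouring prefixes
            P[i + 1][j + 1] = cube[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]
    return P

def range_sum(P, r1, c1, r2, c2):
    """Sum over cells (r1..r2, c1..c2), inclusive, in O(1) time."""
    return P[r2 + 1][c2 + 1] - P[r1][c2 + 1] - P[r2 + 1][c1] + P[r1][c1]
```

The update problem the abstract targets is visible here: changing a single cell of the cube invalidates every prefix entry below and to the right of it, which is what the relative and recursive variants are designed to avoid.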
116

An Investigation of Information Security in Data Warehouses: A Literature and Field Study

Crnic, Enes January 2010 (has links)
Ever harder competition has made it all the more important that decision-makers in a company make fast and correct decisions. To improve and streamline decision-making while gaining advantages over market competitors, decision-makers can use a data warehouse. Through enormous amounts of data collected from a large number of different systems, the data warehouse can generate great benefits for a company, but only provided that it is protected in an appropriate way. The purpose of this study is to investigate which protective measures are suitable for achieving and maintaining a secure data warehouse. To answer this question, a literature review and two interviews with companies that use data warehouses were conducted. The result of the theoretical investigation shows that four administrative and five logical protective measures are suitable for achieving and maintaining good information security in a data warehouse. The empirical investigation confirms this, though with some exceptions.
117

Metadata-Driven Transformation Between Data Models

Åhlfeldt, Fredrik January 2000 (has links)
Today, various techniques are used to move information from a database to a data warehouse. Existing transformation techniques rely on an application to handle this. This thesis project creates and examines a method that instead performs the transformation within a database. The transformation is metadata-driven, since metadata is the information about data that is required for a transformation to be possible. The work is therefore based on a metadata study covering the representation and structure of metadata. The goal is to arrive at a transformation method that is as general as possible; the method transforms data from a normalized database structure into a denormalized data warehouse structure.
118

How Do Companies Use Data Warehouses in Their Operations?

Persson, Johan January 2000 (has links)
Companies have long used various types of decision support systems to make the right decisions. During the 1990s, a type of decision support system called the data warehouse was developed. The data warehouse has evolved into a decision support system that helps end users analyze data using various tools with user-friendly interfaces.

This report examines how companies use data warehouses in their operations. Five companies were studied as the basis for the report. The companies in the study reported that they used their data warehouses primarily to produce statistics and carry out follow-ups, and they also showed a number of company-specific adaptations of their data warehouses.

The companies also showed a wide spread of data warehouse usage within their organizations. The number of users ranged from 20 to 200, and the study showed that the hierarchical spread of usage through the organization was generally well developed.
119

Data Warehouse: An Outlook of Current Usage of External Data

Olsson, Marcus January 2002 (has links)
A data warehouse is a data collection that integrates large amounts of data from several sources, with the aim of supporting the decision-making process in a company. Data can be acquired from internal sources within the organization as well as from external sources outside it.

The comprehensive aim of this dissertation is to examine the current usage of external data and its sources for integration into DWs, in order to give DW users the best possible foundation for decision-making. To investigate this problem, we conducted an interview study with DW developers.

Based on the interview study, the result shows that it is relatively common to integrate external data into DWs. The study also identifies the different types of external data that are integrated and the external sources from which data is commonly acquired. In addition, opportunities and pitfalls of integrating external data are highlighted.
120

Problems Concerning External Data Incorporation in Data Warehouses

Niklasson, Markus January 2004 (has links)
Data warehouses (DWs) have become one of the largest investments for organisations in recent years, and incorporating external data into a DW can give organisations huge possibilities. Organisations that successfully incorporate external data into a DW have an advantage over those that do not, but there are problems with incorporating data acquired from outside the organisation, and there is a lack of research aimed at these problems. The comprehensive aim of this dissertation is to characterise and categorise problems with incorporating external data. The available literature was surveyed to find problems, and an interview study was conducted to validate the problems found in the literature. Respondents from five well-known organisations in Sweden participated, and the result is a list of problems backed up by both literature and empirical findings.
